The best books I read or listened to in 2023

Below is my annual summary of some of the best books I read in the past year. There are a few themes that weave their way throughout: the brain, running, history, and biographies. Here I also try to rank them loosely – the first ones are my top choices. Enjoy! 

The Strange Order of Things by Antonio Damasio

I heard Damasio talk a few times and was not really impressed; however, I decided to give his book a try since his thinking seemed similar to that in Mark Solms’ book “Hidden Spring,” and I’m glad I did. He is a truly gifted writer. This book was pure brilliance – perhaps the best book I have read in the last three years. He describes the fundamentally homeostatic role that consciousness (a sense of self) plays, and then develops the idea of civilization and culture as a natural progression of that homeostatic process. He argues that culture is a further manifestation of homeostasis and is fundamental to the maintenance of civilization, humanity’s most powerful invention.

The Man from the Future: The Visionary Life of John von Neumann by Ananyo Bhattacharya

This is a fascinating book on John von Neumann – perhaps the flat-out smartest (in terms of raw horsepower) and most influential thinker on the planet during his time. It helped me appreciate all his contributions, and feel pure awe at how he was both overwhelmingly smart and quick as well as uniquely creative.

What’s Our Problem? by Tim Urban

I love his “Wait But Why” blog. This was a fun, irreverent, but very insightful sizing up of our uniquely turbulent social/political situation today! He frames dialogue in terms of high-level vs. low-level thinking rather than right vs. wrong, which is useful. The second half of the book lays out in great detail his view of Social Justice Fundamentalism as a movement that started with good intentions but has gone off the rails, as any movement can when the line is crossed into “the ends justify the means.” Good food for thought and perspective. I don’t know enough in this realm to have a well-thought-out perspective, but am happy to take it all in!

Draft No. 4 by John McPhee

This is a unique book on the writing process of John McPhee, the longtime New Yorker writer. He shares anecdotes, stories, and advice. I loved the writing and the insights into his thought process on how to get it just right. I couldn’t put it down.

Today We Die a Little by Richard Askwith

Great book about the life of the Czech runner Emil Zatopek. At his peak, he was truly a beast, winning gold in the 5K, 10K, and marathon (his first marathon, entered on a whim) at the 1952 Olympics. He also pioneered, in his uniquely hardcore manner, the concept of interval training, doing up to 100 repeat 400m intervals in a session! Crazy! I love it! While I liked the running descriptions, the depiction of the wider post-WWII context was truly eye opening.

The Idea of the Brain by Matthew Cobb

A brilliant and densely packed history of our understanding of the brain – overall, a great perspective piece. It starts to fall apart a bit near the end as it reaches more recent history and his own biases enter in more prominently. I also didn’t appreciate his dismissal of fMRI in parts (but that’s my bias!), as some of the complaints were slightly unfair. Later he talks up fMRI, so he redeemed himself somewhat 🙂.

Rethinking Consciousness by Michael S. A. Graziano (A)

Reading popular books by prominent scientists on consciousness is a secret (or not so secret) hobby of mine. I like Graziano’s thinking, and while there are a few areas where his construct is not so airtight, I think he’s onto something: the “hard problem” disappears with his attention schema construct of our sense of self – nested external and internal world models that we attend to.

The Future of Seeing by Dan Sodickson (A)

I was asked to review this by a publisher, and hopefully it will be coming out soon! Dan is a luminary in the field of MRI, having won the ISMRM Gold Medal for co-inventing parallel imaging approaches. He’s a brilliant physicist and radiologist. Now I know he’s also a great writer of popular books that transmit his infectious enthusiasm and deep insights. This one is about the future of imaging – with a heavy emphasis on medical imaging. It was packed with information and an inspiring read! I actually listened to it: I uploaded the PDF to my Speechify app and listened while driving to and from the National Senior Games in Pittsburgh.

Shakespeare by Bill Bryson (A)

Bill Bryson is a super entertaining, engaging, and deeply scholarly writer who exudes irreverence with every sentence. I’m personally fascinated by Shakespeare as I feel he was a super genius who single-handedly influenced the English language and illuminated the human condition in a once-in-a-millennium way (I know... I’m not really sticking my neck out here with this opinion). This book lays out what is known and speculated about him in a way only Bryson can pull off.

Embrace the Suck by Brent Gleeson (A)

This is by a former Navy SEAL and is all about developing resilience. Good stuff. I listened to it on a long drive. Practical, inspiring, solid advice and engaging stories.

Talent by Tyler Cowen

This was recommended to me by Adam Thomas and it’s all about recognizing talent in the context of hiring or pretty much anything. I interview many people, and am always trying to figure out the best things to ask or look for to really get at whether they would be great for the job. This delivered some solid, actionable advice. 

Never Finished by David Goggins (A)

Listened to this audiobook on my runs. It’s a follow-up to his first book, and while good, it didn’t have the same “punch” as the first one. Goggins is both inspiring and a curiosity. I’m not sure I resonate with what motivates him (much of it has to do with deep anger), but hearing about his hardcore exploits is fun and inspiring.

Indistractable by Nir Eyal

We all suffer from distraction and have challenges in controlling our attention. I figured I would give it a read as it had good reviews and promised advice on helping kids become less distracted – something I’m always looking for as well. Overall, a good book with solid usable advice. Insightful but nothing fundamentally new. 

Feeling and Knowing by Antonio Damasio (A)

As much as I loved Damasio’s book that I read earlier, and as much as I wanted to like this, I found this one too vague and a bit flat.  Nothing really new. Perhaps it was the audio format. I had a feeling he was contracted to do this and just whipped something out quickly. Had a hard time paying attention to this one. 

Modern Training and Physiology for Middle and Long-Distance Runners by John Davis

Solid, timeless advice and a few good insights, plus unique tidbits I never knew – including that the famous writer/thinker Joseph Campbell briefly held the Columbia University school record in the half mile!

The Slummer: Quarters Till Death by Geoffrey Simpson

Amateur writing and a strange dystopian setting with undeveloped characters, but the visceral descriptions of running kept me engaged. As a runner, I could relate.

Beyond Illusions by Brad Barton

Brad ran a mind-boggling age-group world record of 4:19 at age 53, so I was interested. Some slightly interesting descriptions of his process, but a below-average book overall.

Dr. Sean Marrett: A Life of Joyful Engagement

Peter Bandettini (1), Bruce Fischl (2,4), Richard Hoge (3), Albert Gjedde (5), and Alan Evans (3)

(1) National Institute of Mental Health, Bethesda, MD

(2) Martinos Center, Massachusetts General Hospital, Boston, MA

(3) McGill University, Montreal, Quebec, CA

(4) Department of Radiology, Harvard Medical School, Boston, MA

(5) University of Copenhagen, Copenhagen, DK

Dr. Sean Marrett, a treasured staff scientist in the NIH intramural program’s Functional MRI Facility (FMRIF) for over 20 years, passed away on December 12, 2023, at the age of 62, after a 16-month battle with mesothelioma. The cruel irony is that it was discovered weeks before his planned retirement. Sean radiated “joie de vivre” more strongly than anyone we have ever known, and he touched so many within the NIH IRP and the international brain imaging community.

After receiving his Bachelor’s degree in Electrical Engineering from McGill University in 1983, Sean took the position of system manager and programmer under Dr. Alan Evans in the McConnell Brain Imaging Center (BIC) at the Montreal Neurological Institute. It was immediately obvious that Sean was a brilliant mind with a passion for the scientific side of BIC life. He dove into many aspects of the work there, most notably with Drs. Keith Worsley, Evans, and Peter Neelin in their legendary ‘Bunker’ office meetings on the spatial statistics of activation studies with PET and fMRI. Sean later received his Ph.D. in Neuroscience from McGill University, supervised by Dr. Albert Gjedde, on oxidative metabolism in the brain. Albert was a close friend of Sean’s, harboring great respect and affection for him. According to Albert, Sean in many ways inspired the development of PET imaging in Denmark, beginning with the transfer of an early device from Montreal to Copenhagen, and a second device in Aarhus that Sean came to work with and used to develop novel imaging methods for mapping oxygen metabolism and blood flow.

He carried out his post-doc at the Massachusetts General Hospital (MGH) NMR Center (now the Martinos Center) from 1997 to 2000, training under Drs. Bruce Rosen, Roger Tootell, and Anders Dale during their seminal fMRI-based research on the human visual system. In his time at MGH, Sean contributed to many projects probing the retinotopic and frequency-tuning characteristics of human early visual cortex. He also made significant contributions to the software that would go on to become the FreeSurfer suite of neuroimaging analysis tools. His postdoctoral research years illustrated Sean’s seminal strengths: he combined wide-ranging curiosity and a deep, broad knowledge of human neuroimaging and neurophysiology with technical skills that enabled him to enhance the research of the many scientists who were fortunate enough to interact with him.

In 2000, he joined the nascent, jointly funded NINDS/NIMH FMRIF as a staff scientist. Sean’s influence has permeated the NIH brain imaging community. He forged the positive, open, and helpful culture that now defines the FMRIF. Over the course of 20 years, thousands have been helped by Sean. As the de facto FMRIF manager, the multitude of tasks he performed included balancing the budget, creating the computational and stimulus infrastructure, and troubleshooting innumerable issues as they came up – all while setting the tone and policy of how the FMRIF is run. He successfully navigated the siting and installation of five of our scanners, including our two recent 7T scanners. Lastly, he collaborated with many groups across several NIH institutes – helping them get the best data possible.

His career spanned the beginnings of functional brain mapping. Nearly every brain imaging scientist from back in the day knew and loved Sean. In Montreal, he was a force of nature, whether engaging in intense scientific debates or carousing with other BICers at the Thompson House student bar. At MGH, his contributions helped surface-based analysis become ubiquitous in the study of human cortical function. He was also part of OHBM history as one of the driving forces behind the OHBM Hackathons, and he embodied the field’s energy going forward. At national and international meetings, Sean would greet new and old colleagues with his radiant smile and good cheer – as if each were the primary person he was looking forward to meeting – always knowing what they were working on, and deeply curious for any updates, personal or professional. His knowledge of the literature was encyclopedic and up to date. This deep grasp of salient information went beyond the literature: he developed a reputation for a preternatural awareness of what was going on throughout the NMR Center, Clinical Center, IRP, NIH, and the brain mapping community worldwide. Of all the people we have known, he was among the most intensely curious about literally everything.

In the past decade, his focus of choice was scanner hardware and pulse sequences, and all the possible combinations of resolution, sensitivity, and contrast that the latest in each could produce. This was an area initially well outside his training, but in time he mastered it. When the scanner was open, chances were that he would be there testing a sequence. His favorite meetings, aside from OHBM, were the annual ISMRM (International Society for Magnetic Resonance in Medicine) and RSNA (Radiological Society of North America) meetings, where he would talk shop with as many people as he could. In particular, he loved to make his annual day trip to RSNA to take in the latest technology and to engage deeply with the MRI-related vendors. He knew them all on a first-name basis, and he was so engaging and obviously caring that almost everyone considered him a good friend.

Sean’s defining traits were his unassuming openness and genuine interest in others as well as a deep empathy for them. Regarding his work and the people he worked with, he really cared. He had boundless energy to personally engage with everyone he met. He would also remind us that we were surrounded by amazing technology and brilliant people. What more could one want? Throughout his personal and professional life, his constant and radiant smile was that of a kid in a candy store. He was a deeply devoted husband and a very proud father to his two sons. He was also a proud Canadian – or more precisely – a proud “Québécois.” He was equally ready to delve into an intense science discussion or to share a laugh or a good story. He was the first to march unhesitatingly into the ocean that he loved, so he could play in the waves, no matter how cold. During his life, and in particular, during his last year, he traveled widely. Each new location was a source of wonder, joy and excitement.  It was clear that he embraced this world with every ounce of his being. 

He cherished social gatherings and celebrations, and no matter how trivial or inconsequential they may have been, he always mentioned afterwards, “That was so fun! Just wonderful!” Perhaps the most appropriate summary of his life would be his dancing. Anyone who knew Sean also knew, as truth itself, that whenever there was a dance floor, he was ALWAYS out there, drenched in sweat, radiating joy, fully in the moment, dancing as if that was all that ever mattered – and indeed, he was right. 

Sean attending to the delivery of the National Institute of Mental Health Functional MRI Facility’s Siemens Terra 7T, March 30, 2022.

Link to Photos of Sean: https://photos.app.goo.gl/1t8P85LdpzxxG4T29

The Challenge of BWAS: Unknown Unknowns in Feature Space and Variance

The paper by Marek et al. (Reproducible brain-wide association studies require thousands of individuals, Nature, 602, 7902, pp. 654-660, 2022) came out recently and caused a bit of a stir in the field, for a couple of reasons. First, the title, while an accurate description of the findings of the paper, is bold and lacks just enough qualifiers to quell immediate questions: “Does this imply that fMRI or other measures used in BWAS are lacking intrinsic sensitivity?” “Is this a general statement about all studies now and into the future?” “Is fMRI doomed to require thousands of individuals for all studies?” The answer to all of these questions is “no,” as becomes clear on reading the paper.

Second, I think the reaction of many on reading the title was a sigh and a thought that this is yet another paper in the same vein as the dead salmon study, the double-dipping paper, or the cluster failure paper – a cautionary statement about fMRI that is then wildly spun by the popular media to imply a more damning impact than brain imaging experts would gather. Again, it’s not that kind of paper, although there was a bit of hyperbole in places. The Nature News article titled “Can brain scans reveal behavior? Bombshell study says not yet” discusses it in an overall reasonable manner, but the need for an attention-grabbing title was unfortunate. The study was not a bombshell. The Marek study was a clear, even-handed, well-done (clearly a huge amount of work!) description of a specific type of comparison in fMRI and MRI performed in a specific way. While my reaction to the Marek paper was mild surprise that the reported correlation values were a bit lower than expected, I was more curious than anything, and thankful that such a study was performed to clarify precisely where the field stood – again, for a specific type of study performed in a specific manner.

I was asked by several groups to comment on it. First, I discussed my thoughts with Nature News. At the time of that discussion, I was still not certain what I thought of the paper, and was suggesting that there may be sources of error and low power that could be improved upon, such as population selection, the choice of resting state as the measure, time series noise, or even spatial normalization pipelines that might be smearing out much of the useful information. I aimed to emphasize in that discussion that the Marek paper is emphatically NOT a statement about the intrinsic sensitivity of fMRI – which is sensitive enough to reliably detect activation in single subjects, and even in single runs or with single events. It is more a statement on the challenges of extracting subtle differences between populations having different behaviors. While I feel that there is quite a bit that can be done to push the necessary numbers down (as a field, we are really just getting started), I can’t rule out the possibility that people may just be too different in how their brains manifest differences in behavior – thus confounding attempts to capture population effects. It’s a genuinely interesting question for future study.

I was also asked to write something for an upcoming collection of opinions on the Marek paper to be published in Aperture Neuro – a new publishing platform associated with the Organization for Human Brain Mapping. I finally submitted it a few weeks ago.

In the meantime, four of the authors (Scott Marek, Brenden Tervo-Clemmens, Damian Fair, and Nico Dosenbach) graciously agreed to be interviewed by me on the OHBM Neurosalience Podcast. This episode can be reached here. During this truly outstanding conversation, the authors further clarified the methods and impact of the paper. I pushed them on all the things that could be improved methodologically to bring these numbers down, but was swayed just a bit further that one implication of these results may be that the variability of people, as we currently sort them based on their behavior, really might be larger than we fully appreciate. It should be emphasized that the authors’ main message was overall extremely positive on the potential impact and importance of these large-N studies, as well as the many other ways that fMRI can be used with small N or even individual subjects to assess activity or changes in activity with interventions.

I was lastly asked to write a commentary for Cell Press’s new flagship medical and translational journal, Med, which I just submitted yesterday and am adding to this blog post, below. However, before you read that, I wanted to leave you with a thought experiment that might help illustrate the challenge – at least as I see it:

It’s been shown that fMRI can track changes in brain activity or connectivity with specific interventions. Let’s say, after a month of an intervention, we clearly see a change. This is not unreasonable and has been reported often. We repeat this for 100 or 1000 subjects. In each subject, we can track a change! Now, here’s the problem. If we repurposed this study as a BWAS study by grouping all subjects together before and after the intervention and compared the groups, the implication (as I understand Marek et al.) is that we would likely not see a reliable effect come through, and those effects that we did see from this BWAS-style approach would lack the richness of the individual changes that we are able to see longitudinally in every one of the subjects. The implication is that each subject’s brain changed in a way that was reliably measured with fMRI, but each brain changed in a way that was just different enough that, when grouped, the effects mostly disappeared. Again, this is just a hypothetical thought experiment. I would love to see such a study done, as it would shed light on specifically what it is about BWAS studies that results in effect sizes lower than intuition suggests.
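This thought experiment can be sketched numerically. The toy simulation below (all parameters hypothetical, chosen only for illustration) gives every subject a large, reliably measurable change, but in an idiosyncratic direction in feature space; averaging the change maps across subjects, BWAS-style, washes most of the effect out.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 100, 50   # hypothetical: 50 imaging features per subject

# Each subject has a large, reliable intervention effect, but pointing in an
# idiosyncratic direction in feature space.
directions = rng.standard_normal((n_subjects, n_features))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
effect_magnitude = 20.0            # within-subject change, in scan-noise SD units
true_changes = effect_magnitude * directions

# Pre- and post-intervention scans, each with independent measurement noise.
baseline = rng.standard_normal((n_subjects, n_features))
pre = baseline + rng.standard_normal((n_subjects, n_features))
post = baseline + true_changes + rng.standard_normal((n_subjects, n_features))

# Longitudinal view: each subject's change clearly exceeds the noise floor.
per_subject = np.linalg.norm(post - pre, axis=1)
noise_floor = np.sqrt(2 * n_features)   # expected norm of a pure-noise difference

# BWAS-style view: average the change maps across subjects before looking.
group_change = np.linalg.norm((post - pre).mean(axis=0))

print(f"mean within-subject change: {per_subject.mean():.1f}")
print(f"noise-only baseline:        {noise_floor:.1f}")
print(f"group-averaged change:      {group_change:.1f}")
```

With these assumed numbers, every individual’s change is far above the noise floor, yet most of the effect cancels once the idiosyncratic directions are averaged together – exactly the dissociation the thought experiment describes.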

Either way, here is the paper that I just submitted to Med. I would like to thank my coauthors, Javier Gonzalez-Castillo, Dan Handwerker, Paul Taylor, Gang Chen, and Adam Thomas, for all their insights and help in writing it. One last note: since this paper was a commentary, I was limited to 3000 words and 15 references. Otherwise it would have been much longer, with many more relevant references.


The challenge of BWAS: Unknown Unknowns in Feature Space and Variance

Peter A. Bandettini (1,2), Javier Gonzalez-Castillo (1), Dan Handwerker (1), Paul Taylor (3), Gang Chen (3), Adam Thomas (4)

(1) Section on Functional Imaging Methods

(2) Functional MRI Core Facility

(3) Scientific and Statistical Computing Core Facility

(4) Data Science and Sharing Team

National Institute of Mental Health

Bethesda, MD 20817

Abstract:

The recent paper by Marek et al. (Reproducible brain-wide association studies require thousands of individuals, Nature, 602, 7902, pp. 654-660, 2022) has shown that capturing brain-behavioral phenotype associations using brain measures of cortical thickness, resting state connectivity, and task fMRI requires thousands of individuals. For those outside the field of human brain mapping, and even for some within it, these results are potentially misunderstood to imply that MRI or fMRI lacks sensitivity or specificity. This commentary expands on what was touched upon in the Marek et al. paper and focuses a bit more on fMRI. First, it is argued that fMRI is exquisitely sensitive to brain activity, and to modulations in brain activity, in individual subjects. The advancement of fMRI over the years is described, including examples of its sensitivity in robustly mapping activity and connectivity in individuals. Second, the potential underlying – yet still unknown – factors that may determine the need for thousands of subjects, as described in the Marek paper, are discussed. These factors may include variation in individuals’ anatomy or function that is not accounted for in the processing pipeline, sub-optimal choice of the features in the data used to differentiate individuals, or the sobering reality that the mapping between behavior (including behavioral differences) and brain features, while readily tracked within individuals, may truly vary across individuals enough to confound and limit the power of group comparison approaches – even with fully optimized pipelines and feature extraction approaches. True human variability is a potentially rich area of future research – that of more fully understanding how individuals expressing similar behavior vary in anatomy and function. A final source of variance may be inaccurate grouping of the populations to be compared.
Behavior is highly complex, and it is possible that alternative grouping schemes based on insights into brain-behavior relationships may stratify differences more readily. Alternatively, allowing self-sorting of data may inform dimensions of behavior that have not been fully appreciated. Potential ways forward to explore and correct for the unknown unknowns in feature space and unwanted variance are finally discussed.

The Emergence and Growth of fMRI:

Human behavior originates in the brain, and differences in human behavior also have brain correlates. The daunting task of neuroscience is to trace differences and similarities in behavior, over time scales of milliseconds to decades, back to the brain, which is organized across temporal scales of milliseconds to years and spatial scales of microns to centimeters. Capturing the salient features across these scales that determine behavior is perhaps the defining challenge of human neuroscience. Insights derived from this effort shape our understanding of brain organization and may provide clinical utility in diagnosis and treatment. Advances in this effort are fundamentally driven by more powerful tools coupled with more sophisticated questions, experiments, models, and analyses.

When functional MRI (fMRI) emerged, it was embraced because activation-induced signal changes are robust and repeatable. Blood oxygen level dependent (BOLD) contrast allows non-invasive mapping of neuronal activity changes in the human brain with high consistency and fidelity on the scales of seconds and millimeters. Because it could be implemented on the already vast number of clinical MRI scanners in the world, its growth was explosive. The activation-induced hemodynamic response, while limited in many ways, has become a widely used and effective tool for indirectly mapping human brain activation. It is indirect because it relies on the spatially localized and consistent relationship between brain activation and hemodynamic changes that result in an increase in flow, volume, and oxygenation. Increases in flow are measured with techniques such as arterial spin labeling (ASL), volume with techniques such as vascular space occupancy (VASO) imaging, and blood oxygenation with T2*- or T2-weighted contrast (i.e., BOLD contrast). BOLD contrast is far and away the most common of the three because of its ease of implementation and its highest functional contrast.

Early on, richly featured and high-fidelity motor and sensory activation maps were produced, followed quickly by maps of cognitive processes and more subtle activation. Then resting state fMRI emerged in the late 1990s, demonstrating that temporally correlated spontaneous fluctuations in the BOLD signal organize themselves into coarse networks across hundreds of nodes. The study of the functional significance of these networks rapidly followed, accompanied by revelations that these networks dynamically reconfigure over time and are modulated in association with specific tasks, brain states, or measures of performance (1).

Functional MRI has flourished over three decades in large part because of its success in creating detailed and informative maps of brain activation in individuals in single scanning sessions. At typical resolutions, the functional contrast-to-noise ratio of fMRI is about 5:1, depending on many factors. This robustness has enabled fMRI to delineate, at the individual level, activity changes associated with vanishingly subtle variations in stimuli or task, learning, attention, and adaptation, to name a few. Additionally, in quasi-real time, fMRI has successfully provided neurofeedback to individuals, leading to changes in connectivity and, in some cases, behavior (2). Clinically, fMRI is increasingly used for presurgical mapping of individuals (3). There is no doubt that the method itself is sufficiently robust and sensitive to be applied to individual subjects to map detailed organization patterns as well as subtle changes with interventions.

Functional MRI has been taken further. Voxel-wise patterns of activity within regions of activation in individuals have been shown to delineate subtle variations in task or stimuli. This pattern-effect mapping, known as representational similarity analysis (4), has shown continued success and growth. Because each pattern is subject- and even session-specific, it currently defies multi-subject averaging; however, approaches such as hyperalignment (5) show promise even at this level of detail.

Over time, fMRI signal has been shown to be stable, repeatable, and sensitive enough to reveal induced differences in activity as an individual brain learns, adapts, and engages. Functional MRI can consistently delineate functional activation in individual brains – going so far as to be able to allow approximate reconstruction of the original stimuli, from activation patterns associated with movie viewing or sentence reading(6,7). All these approaches rely on within-individual contrasts, thus sidestepping the less tractable problem of variance across subjects. 

For “central tendency” mapping, combining data across subjects reveals what generalizes from individuals to a population. The “central tendency” effects and derived time courses are more stable, but inevitably minimize or remove the more subtle effects that population subsets may reveal. These approaches are negatively impacted by variation in structure and function that may be unaccounted for or that defies current best practices in spatial normalization and alignment.

Over the past three decades, since fMRI and structural MRI have been able to provide individualized information, the desire has been to go beyond central tendency mapping to reveal individual differences in activation, connectivity, and function. With standard clinical MRI scans of the brain, lesions, tumors, and vascular or gross structural abnormalities have been straightforward for a trained radiologist to identify; however, psychiatric and most behavioral differences have brain correlates that are much too subtle for standard clinical MRI approaches. An effort has been made over at least the past two decades to pool and average functional and/or structural images towards the creation of reproducible and clinically useful biomarkers. No one doubts that differences between individuals or truly homogeneous groups reside in the brain; however, whether they can be seen robustly – or at all – at the specific temporal and spatial niche offered by structural and functional MRI remains an open question. It remains open because the brain is organized across a wide range of temporal and spatial scales, and the causal physical mechanisms that lead to trait or state differences are not currently understood. At this stage, neuroscientists and clinicians are using fMRI to determine whether any signatures related to behavioral or state differences can be robustly seen at all. It may well be that distinct brain differences across many scales can lead to similar trait differences, or the differences may reside at a spatial or temporal scale – or even magnitude – outside of what fMRI or MRI can capture. This remains to be fully determined.

The Challenge of the Marek Paper:

The recent paper by Marek et al. (8) has argued that behavioral phenotype variations associated with variations in cortical thickness, activation, and resting state connectivity as measured with MRI – which they termed brain-wide associations (BWAS) – are reproducible only after thousands of individuals are considered. The authors suggest that the unfortunate reality is that the effect sizes are so small that reproducible studies require about two thousand subjects, and would benefit only somewhat from further reductions in time series noise and from multivariate analysis approaches. It is good news that we can get an effect at all, but for many invested in fMRI studies with this goal, this may be cause for despair and confusion. How is it that we can map individual brains so robustly, efficiently, and precisely, yet require so many subjects to derive any meaningful result when looking for differences in this readily mapped functional and structural information?
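As a rough sanity check on the “thousands of subjects” figure, a standard Fisher-z power calculation shows what an effect size in the range Marek et al. report (univariate correlations around r = 0.1) demands. The alpha values below are illustrative assumptions rather than the paper’s exact thresholds; the stricter one stands in for whole-brain multiple-comparison correction.

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect a correlation r via the Fisher z transform."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

# A single test at r = 0.1 already needs hundreds of subjects...
print(n_for_correlation(0.1))
# ...and an illustrative whole-brain-corrected threshold pushes N into the thousands.
print(n_for_correlation(0.1, alpha=1e-6))
```

This back-of-the-envelope arithmetic reproduces the order of magnitude of the paper’s claim without invoking anything fMRI-specific: the required N scales roughly as 1/atanh(r)², so halving the effect size roughly quadruples the required sample.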

While single subjects can produce robust activation and connectivity maps, the differences in activation or structure as they relate to differences in traits across individuals are either so subtle and/or so variable that thousands of subjects are required to see emerging (i.e., “central tendency”) effects – and these may be just the most robust effects. Put another way, if the unwanted variability across subjects were vanishingly small, then the results of Marek et al. would suggest that the BWAS-related differences in measured activation, structure, or connectivity are about three orders of magnitude smaller than the main effect commonly seen in individual maps (1 subject required for an activation map vs. 1000 subjects required for a reliable difference). Given the changes much more readily observed while tracking individuals longitudinally as they change state, this small-difference explanation seems highly unlikely. Therefore, the need for thousands of subjects is more likely explained predominantly by the unwanted and unaccounted-for variance in trait-relevant or processing-pipeline-related structural, activation, or connectivity patterns.

The problem or challenge, as it exists, is not primarily with the sensitivity or specificity of fMRI or structural MRI. Rather, it likely resides in the uncharacterized and tremendously large variation in observed brain-behavior relationships across individuals. The underlying brain structure-function relationships, as measured with fMRI or MRI, that differ for, say, a depressed individual may be numerous, subtle, and idiosyncratic. The study of BWAS is an attempt to determine the most common brain-based causes from a turbulent sea of possible causes across individuals. The Marek et al. study has shown that this challenge is more profound than most of us may have imagined – at least on the temporal and spatial scales that our tools give us access to. It may also be true that the effects we do eventually see after studying groups of thousands of subjects are but a small fraction of the dispersed effects unique to each individual – and that those we are able to observe are not necessarily the most influential on the trait in question; they are simply, by definition, the most commonly observed. 

Marek et al. have done a service to the field by pointing out concerns for a type of fMRI study that has widespread interest but, so far, relatively few reported studies. Their work may be interpreted to suggest that, given the formidable number of subjects needed, BWAS-style studies are not a practically tenable use of fMRI. This conclusion should be tempered by an alternate view. Large databases of deeply characterized subjects may be queried in many different ways, potentially increasing their utility into the future. The authors also point out that the effect sizes shown are at least comparable to those of large-database genome-wide association studies (GWAS). Improvement is still likely. It's important that the field of fMRI do due diligence in ensuring that it has minimized the irrelevant variance across subjects as it manifests through our techniques for determining function and for pooling multi-subject data. 

The unknown unknowns in feature space and variance:

Is there something we are missing – hidden sources of irrelevant variance, inaccurate choices in feature space, or mischaracterization and therefore mis-grouping of behavioral phenotypes – that is suppressing the more informative features and thus reducing effect size? The tables below describe the "unknown unknowns" in understanding BWAS power and possible approaches to address them. Table 1 lists potential unknown confounders that may be reducing BWAS power, along with some considerations on how to understand and address them. Much more could be said about any of these, and indeed work is already taking place worldwide on all these topics. Table 2 lists other considerations that are not necessarily unknowns but are areas of active research that should also be considered when designing BWAS, or perhaps any fMRI study.

Table 1: Potential Confounders that are not fully understood nor addressed:

Resting State fMRI: What really is resting state fMRI – aperiodic bursts of synchronized activity? How much is conscious? How much is arousal? How much is breathing? How does it vary with brain state, prior tasks, time of day, etc.? How deeply can we truly interpret correlated time-series signals, given that the correlation depends on signal phase, shape, and underlying noise – all of which could change, implying a change in connectivity where there is none, or vice versa? As easy as resting state is to implement in the scanner, without more precise ways of dissecting and interpreting the most informative aspects of this signal, other approaches might be more powerful. At the very least, external measures that help inform the analysis of resting state (e.g., eye tracking or alertness measures) are needed.  
Spatial Normalization: Individual brain anatomy varies as a function of spatial scale, and transforming brains to a normalized, standardized space may remove informative features. Nonlinear warping and registration approaches have advanced over the years yet remain far from perfect. One source of imperfection is anatomical: when aligning brains with strongly varying sulcal and gyral patterns, diffeomorphic warp fields have errors in some areas. On a coarser scale, brains have regionally differing gyral and sulcal patterns as well as different functional/structural relationships. Echo planar images have additional warping due to field inhomogeneities.  
Parcellation: If a standard parcellation template is applied to a cohort of normalized brains, the mismatch between the true functional delineation of each parcel in each subject's brain and the applied parcellation may be profound, causing extreme mixing of signal between adjacent parcels. It may also result in misidentified parcels: a subject's region X is, in reality, mostly in region Y, so it gets binned and compared with the wrong information, either washing out real effects or pointing to false ones. Effects from small parcels may be entirely washed out. Additionally, typical parcels are likely substantially larger than the most informative cortical units. A true difference may reside in a small sub-component at the border of one parcel whose signal mixes with that of adjacent parcels, thus eliminating the effect. Such a useful feature, if it existed, would be invisible in the analysis described in Marek et al. The variation between functionally derived individual-subject parcellation maps should be further explored. Misalignment, misregistration, and mis-parcellation may be substantial sources of unwanted variance.  
Processing Pipelines: The Marek paper used well-controlled pipelines; however, each pipeline has many steps, well beyond the scope of this perspective piece, that, if varied, would perhaps result in different conclusions. Pipeline comparisons have shown how sensitive results are to processing steps; what is missing is ground truth. Every pipeline likely has shortcomings. Quality-control metrics for each time series, combined with efficient visual inspection of the data, are fundamental to developing more automated methods for identifying and reducing variance in population-level studies.    
Population Sorting: Psychosis and intelligence, used here to sort the populations being compared, are likely oversimplifications of highly multidimensional behavioral phenotypes that may have no one-to-one correspondence in the brain. If they are all pooled together for comparison, interesting and perhaps strong differences may be washed out. More precise and nuanced pooling of populations, or even data-driven population sorting (while carefully avoiding circularity, of course), would perhaps improve these results significantly. Behavioral phenotypes and brain measures are high dimensional. As these manifolds are better understood, it's likely that stronger associations will be obtained with greater efficiency.   
Anna Karenina effect: This effect was first suggested by Finn et al. (9) and is based on the first line of the famous novel by Tolstoy: "Happy families are all alike; every unhappy family is unhappy in its own way." It may be that the neuronal correlates of disorders are substantially more variable than the central tendencies of normal populations, reducing the effect size when attempting to discern a single network or set of networks associated with a disorder. This effect may play a role in the distributions of phenotypes even within typical non-pathologic ranges – such as intelligence.
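The parcellation concern in Table 1 can be made concrete with a toy simulation (all sizes and effect magnitudes invented for illustration): a group difference confined to a few voxels at a parcel border is strongly diluted when the whole mismatched parcel is averaged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vox, n_info = 100, 100, 5   # toy sizes, purely illustrative

# Two groups of voxel-level measurements; the true group difference
# exists in only 5 of the 100 voxels in the parcel.
ctrl = rng.standard_normal((n_subj, n_vox))
pat = rng.standard_normal((n_subj, n_vox))
pat[:, :n_info] += 1.0                # per-voxel effect size d = 1

def cohens_d(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

# Effect size if we knew and averaged over the informative subregion,
# vs. averaging over the whole (mismatched) parcel.
d_subregion = cohens_d(pat[:, :n_info].mean(axis=1),
                       ctrl[:, :n_info].mean(axis=1))
d_parcel = cohens_d(pat.mean(axis=1), ctrl.mean(axis=1))
print(round(d_subregion, 2), round(d_parcel, 2))
```

In this toy case the parcel average shrinks the measurable effect severalfold; in real data, where parcel boundaries, subject anatomy, and informative subregions all vary, the dilution could be far worse.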

Table 2: Other Avenues to Improvement

Dynamic resting state fMRI: What really is resting state fMRI? Is it aperiodic bursts of synchronized activity transformed through the hemodynamic response into low-frequency fluctuations? How much arises from conscious experience (10)? How much is arousal? How much is breathing? How does it vary with brain state, prior tasks, time of day, etc.? Characterizing these dynamics directly, rather than collapsing the time series into a single static correlation, may expose more informative and more stable features. At the very least, external measures that help inform the analysis of resting state (e.g., eye tracking or alertness measures) are needed.  
Naturalistic Stimuli: Engaging subjects in passive or minimally demanding yet time-locked tasks has been shown to produce more stable connectivity maps and opens up new options for analyses. For instance, movie watching or story listening allows model-driven or cross-subject correlation analysis and helps to tease apart informative elements of ongoing brain activity (11,12). Time-locked continuous engagement in a task may also be optimized to differentiate behavioral phenotypes – used as a "stress test" in much the way cardiac stress tests are used to identify latent pathology. Continuously engaging tasks also control for vigilance changes over time – which have been shown to be a confound.   
Task fMRI: Like the movies mentioned above, a well-chosen set of tasks may serve to better stratify effects across individuals and populations. Specific tasks could be optimized to produce a large range of fMRI responses depending on the question and associated behavioral measures. The field of fMRI has developed a massive array of tasks able to selectively activate a wide range of networks. With more precise control over activation magnitude and location, as well as precise monitoring of task performance with each response, selective dissection of differences might improve.   
Spatial Resolution: Differences may reveal themselves more clearly at the layer or column level – a resolution fMRI is capable of imaging; however, here the problem of spatial normalization and registration becomes even more problematic and remains unsolved by any automated process. To illustrate, an early fMRI paper demonstrated clear differences in ocular dominance column distribution in patients with amblyopia. If these data were put through the pipelines used in the Marek paper, the results would likely fall well below any statistical threshold or measure of replicability, as the useful features are much finer than the spatial error inherent to spatial normalization – not to mention that ocular dominance columns are quasi-random, thus defying any current normalization scheme. We need to improve our ability to identify and use such features in a principled manner before we can make conclusive statements on the effect size derivable with fMRI.  
Time Series Variance: In these data, physiological noise dominates over the better-understood thermal noise. Methods for reducing time-series variance were mentioned in Marek et al. Novel acquisition approaches such as multi-echo fMRI may help, along with external measures of breathing, vigilance, and other contributors to variance. Even with these measurements, robust ways of using them to eliminate this variance – or perhaps to associate it with phenotype – require substantial further development. It should be emphasized that if the field fully succeeds in eliminating physiological noise from the data, then rather than having a ceiling temporal signal-to-noise ratio (SNR) of about 100/1, the temporal SNR would be limited only by the intrinsic image SNR determined by the scanning parameters and the RF coil – allowing perhaps an order-of-magnitude improvement in temporal SNR. 
Other fMRI and MRI Features: Correlation is but one feature of the fMRI time series. Other features, such as entropy, network-configuration dwell time, the sequence of network configurations over time, mutual information, and even standard deviation, may prove more robust and informative. The activation-elicited fMRI signal itself can be further decomposed into features such as latencies, undershoots, transients, NMR phase, and much more. Perhaps all of these contain independent information that may be leveraged in multivariate analyses to increase power. Structural features such as gyrification, fractal dimension, and global T1, T2, etc., may also be more informative than gray-matter thickness.
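The temporal-SNR ceiling mentioned in the Time Series Variance entry above can be sketched with the widely cited Krüger–Glover noise model, in which physiological noise scales with signal strength so that temporal SNR saturates at 1/λ no matter how high the image SNR. The value λ = 0.01 below is illustrative, chosen to match the ~100/1 ceiling mentioned in the table:

```python
import math

def tsnr(snr0, lam=0.01):
    """Kruger-Glover model: physiological noise scales with signal
    (fraction lam), so temporal SNR saturates at 1/lam."""
    return snr0 / math.sqrt(1 + (lam * snr0) ** 2)

for snr0 in (50, 200, 1000, 5000):
    print(snr0, round(tsnr(snr0), 1))   # approaches, never exceeds, 100
print(round(tsnr(5000, lam=0.0), 1))    # physiological noise removed
```

With λ driven to zero, temporal SNR tracks image SNR directly, which is the order-of-magnitude headroom referred to in the table.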

In summary, Marek et al. provide a sobering snapshot of the state of BWAS using MRI and fMRI. The study of brain-wide associations (13), like that of genome-wide associations (14), does have promise; however, the work of objectively identifying and extracting the most meaningful features, and of identifying and removing the confounding variance from the signal in time and space, has barely begun. We are at an early stage in this promising research. The Marek et al. study has performed a profound service by clarifying, quantifying, and highlighting the challenge. 

The study of individuals, and how they change with time, natural disease progression, or interventions, will continue. In fact, large longitudinal population studies, in which each participant is directly compared with themselves at an earlier time and then compared across the cohort, will likely yield deep insights into brain differences and similarities (15). These studies are difficult but worth pursuing, as they avoid many of the potential pitfalls of BWAS related to between-subject variability, as described in Marek et al.

Individual or small-N fMRI will also continue, as insights into healthy brain organization and function are still being derived at an increasingly rapid rate as the field develops methods to extract more subtle information from the data. Individual fMRI for presurgical mapping, real-time feedback, and neuromodulation guidance likewise continues, with extremely promising progress. 

Evolving fMRI from central-tendency mapping to identifying differences in individuals has proven to be deeply challenging. As the field continues working to address this challenge, it will likely uncover unique sources of variance residing in every step of acquisition and analysis, as well as yet-uncovered structure in idiosyncratic brain-behavior relationships. The fMRI signal is intrinsically strong, reproducible, and robust, as has been shown over the past 30 years. To use it to compare individuals, we need to delve much more deeply into how individuals and their brains vary, so that we can identify and minimize the still-unknown nuisance variance and maximally use the still-unknown informative variance. Once we can do this, the effect sizes and replicability promise to reach a useful level with fewer required subjects. In the process, new principles of brain organization will likely be derived. Perhaps before the field rushes ahead to collect more two-thousand-subject cohorts, it should explore, understand, and minimize the unknown unknowns in the feature space and the variance among individuals.

References

1.         Newbold DJ, Laumann TO, Hoyt CR, Hampton JM, Montez DF, Raut RV, et al. Plasticity and Spontaneous Activity Pulses in Disused Human Brain Circuits. Neuron. 2020;1–10.

2.         Ramot M, Kimmich S, Gonzalez-Castillo J, Roopchansingh V, Popal H, White E, et al. Direct modulation of aberrant brain network connectivity through real-time NeuroFeedback. Elife. 2017;6:e28974.

3.         Silva MA, See AP, Essayed WI, Golby AJ, Tie Y. Challenges and techniques for presurgical brain mapping with functional MRI. NeuroImage Clin. 2018 Jan 1;17:794–803.

4.         Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis – connecting the branches of systems neuroscience. Front Syst Neurosci. 2008 Nov;2:4.

5.         Haxby JV, Guntupalli JS, Nastase SA, Feilong M. Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies. Elife. 2020;9:e56601.

6.         Pereira F, Lou B, Pritchett B, Ritter S, Gershman SJ, Kanwisher N, et al. Toward a universal decoder of linguistic meaning from brain activation. Nat Commun. 2018 Mar 6;9(1):963.

7.         Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL. Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Curr Biol. 2011 Oct 11;21(19):1641–6.

8.         Marek S, Tervo-Clemmens B, Calabro FJ, Montez DF, Kay BP, Hatoum AS, et al. Reproducible brain-wide association studies require thousands of individuals. Nature. 2022 Mar;603(7902):654–60.

9.         Finn ES, Glerean E, Khojandi AY, Nielson D, Molfese PJ, Handwerker DA, et al. Idiosynchrony: From shared responses to individual differences during naturalistic neuroimaging. NeuroImage. 2020 Jul;215:116828.

10.       Gonzalez-Castillo J, Kam JWY, Hoy CW, Bandettini PA. How to Interpret Resting-State fMRI: Ask Your Participants. J Neurosci. 2021 Feb 10;41(6):1130–41.

11.       Hasson U, Nir Y, Levy I, Fuhrmann G, Malach R. Intersubject Synchronization of Cortical Activity during Natural Vision. Science. 2004 Mar;303(5664):1634–40.

12.       Finn ES. Is it time to put rest to rest? Trends Cogn Sci. 2021 Dec 1;25(12):1021–32.

13.       Sui J, Jiang R, Bustillo J, Calhoun V. Neuroimaging-based Individualized Prediction of Cognition and Behavior for Mental Disorders and Health: Methods and Promises. Biol Psychiatry. 2020 Dec 1;88(11):818–28.

14.       Visscher PM, Wray NR, Zhang Q, Sklar P, McCarthy MI, Brown MA, et al. 10 Years of GWAS Discovery: Biology, Function, and Translation. Am J Hum Genet. 2017 Jul 6;101(1):5–22.

15.       Douaud G, Lee S, Alfaro-Almagro F, Arthofer C, Wang C, McCarthy P, et al. SARS-CoV-2 is associated with changes in brain structure in UK Biobank. Nature. 2022 Apr;604(7907):697–707.

The New Age of Virtual Conferences

For decades, the scientific community has witnessed a growing trend toward online collaboration, publishing, and communication. The next natural step, begun over the past decade, has been the emergence of virtual lectures, workshops, and conferences. My first virtual workshop took place back in about 2011, when I was asked to co-moderate a virtual session of about 10 talks on MRI methods and neurophysiology. It was put on jointly by the International Society for Magnetic Resonance in Medicine (ISMRM) and the Organization for Human Brain Mapping (OHBM) and was considered an innovative experiment at the time. I recall running it from a hotel room with spotty internet in Los Angeles, as I was simultaneously participating in an in-person workshop at UCLA. It went smoothly: the slides displayed well, speakers came through clearly, and, at the end of each talk, participants could ask questions by text, which I read to the presenter. It was easy, perhaps a bit awkward and new, but it definitely worked and was clearly useful.

Since then, the virtual trend has picked up momentum. In the past couple of years, most talks that I attended at the NIH were streamed simultaneously using Webex. More recently, innovative use of Twitter has enabled virtual conferences consisting entirely of Twitter feeds. An example is #BrainTC, which was started in 2017 and now takes place annually.

Building on the idea started with #BrainTC, Aina Puce spearheaded OHBMEquinoX, or OHBMx. This "conference" took place on the spring equinox, with sequential tweets from speakers and presenters around the world. It started in Asia and Australia and worked its way around with the sun on this first day of spring, when the sun is directly above the equator and the entire planet has essentially the same number of hours of daylight.

Recently, conferences with live-streamed talks have been assembled in record time and with little cost overhead, providing a virtual conference experience to audiences numbering in the thousands at extremely low or even no registration cost. An outstanding recent example of a successful online conference is neuromatch.io. An insightful blog post summarized the logistics of putting it on.

Today, the pandemic has thrown in-person conference planning, at least for the spring and summer of 2020, into chaos. The two societies in which I am most invested, ISMRM and OHBM, have responded differently to the disruption of their meetings. ISMRM has chosen to delay its meeting to August – hopefully enough time for the current situation to return to normal; however, given the uncertainty of the timeline, even this delayed in-person meeting may have to be cancelled. OHBM has chosen to make this year's conference virtual and is currently scrambling to organize it, aiming for the same start date in June that it had originally planned.

What we will see in June with OHBM will be a spectacular, ambitious, and extremely educational experiment. While we catch up on the science, most of us will also be making our first foray into a multi-day, highly attended, highly multi-faceted conference that was organized, essentially, in a couple of months.

Virtual conferences, now catalyzed by COVID-19 constraints, are here to stay. These are the very early days. Formats and capabilities of virtual conferences will be evolving for quite some time. Now is the time to experiment with everything, embracing all the available online technology as it evolves. Below is an incomplete list of the advantages, disadvantages, and challenges of virtual conferences, as I see them. 

What are the advantages of a virtual conference? 

1.         Low meeting cost. There is no overhead cost to rent a venue. Certainly, there are some costs in hosting websites; however, these are a fraction of the price of renting conference halls.

2.         No travel costs. Attendees incur no travel costs, time, or energy, and there is a corresponding reduction in carbon emissions from international travel. Virtual conferences are more inclusive of those who cannot afford to travel, potentially opening access to a much more diverse audience – with corresponding benefits to everyone.

3.         Flexibility. Because there is no huge venue cost, the meeting can last as long or as short as necessary, and can take place for 2 hours a day or for several hours interspersed throughout the day to accommodate other time zones. It can last the usual 4 or 5 days or be extended over three weeks if necessary. There will likely be many discussions on optimal virtual conference timing and spacing. We are in the very early days here.

4.         Ease of access to information within the conference. With, hopefully, a well-designed website, session attendance is a click away. Poster viewing and discussion, once the logistics are fully worked out, might be efficient and quick. Ideally, the poster "browsing" experience will be preserved. Information on poster topics, speakers, and perhaps many other metrics will be cross-referenced and categorized so that it's easy to plan a detailed schedule. One might even explore a conference long after it is completed, selecting the most viewed talks and posters, something like searching articles using citations as a metric. Viewers might also be able to rate each talk or poster they see, adding usable information to search.

5.         Ease of preparation and presentation. You can present from your home and prepare up to the last minute.

6.         Direct archival. It should be trivial to archive the talks and posters for future viewing, so that anyone who doesn't need real-time interaction, or who misses the live feed, can participate any time in the future at their convenience. This is a huge advantage – one that is certainly possible for in-person conferences as well, but has not yet been achieved in a way that quite represents the conference itself. With a virtual conference, there can be a one-to-one "snapshot" preserving precisely the information contained in the conference, as it's already online and available.

What are the disadvantages of a virtual conference?

1.         Socialization. To me, the biggest disadvantage is the lack of directly experiencing all the people. Science is a fundamentally human pursuit. We are all human, and what we communicate by our presence at a conference is much more than the science. It's us, our story, our lives and context. I've made many good friends at conferences and look forward to seeing them and catching up every year. We have a shared sense of community that only comes from discussing something in front of a poster or over a beer or dinner. This is the juice of science. At our core, we are all doing what we can to figure stuff out and create interesting things. At conferences we get a chance to share this with others in real time, gauge their reactions, and get their feedback in ways far more meaningful than those provided virtually. One can also look at it in terms of information: there is so much information transferred during in-person meetings that simply cannot be conveyed with virtual meetings. These interactions are what make the conference experience real, enjoyable, and memorable, all of which feeds into the science.

2.         Audience experience. Related to the above is the experience of being part of a massive collective audience. There is nothing like being in a packed auditorium of 2000 people as a leader of the field presents their latest work or unique perspective. I recall the moment I saw the first preliminary fMRI results presented by Tom Brady at ISMRM. My jaw dropped and I looked at Eric Wong, sitting next to me, in amazement. After the meeting, a group of scientists huddled in a circle outside the doors, talking excitedly about the results. FMRI was launched into the world, and everyone felt it and shared that experience. These are the experiences that are burnt into people's memories and fuel their excitement.

3.         No room for randomness. Randomness could be built into a virtual conference; however, at an in-person conference, one of the joys is to experience first-hand the serendipity – the bit of randomness: chance meetings of colleagues, or passing by a poster you didn't anticipate. This randomness is everywhere at a conference venue and is perhaps more important than we realize. There may yet be clever ways to engineer a degree of randomness into the virtual conference experience.

4.         No travel. At least to me, one of the perks of science is the travel. Physically traveling to another lab, city, country, or continent is a deeply immersive experience that enriches our lives and perspectives. While it can turn into a chore at times, it is almost always worth it. The education and perspective that a scientist gains about our world community is immense and important.

5.         Distraction. Going to a conference is a commitment. The problem I always have when a conference is in my own city is that, as much as I try to fully commit, I am only half there. The other half is attending to work, family, and the many other mundane and important things that rise up and demand my attention for no other reason than that I am still home and dealing with work. Going to a conference separates one from that life, as much as can be done in this connected world. Staying in a hotel or Airbnb is a mixed bag – sometimes delightful and sometimes uncomfortable. However, once at the conference, you are there. You assess your new surroundings, adapt, and figure out a slew of minor logistics. You immerse yourself in the conference experience, which is, on some level, rejuvenating – a break from the daily grind. A virtual conference is experienced from your home or office and can be filled with the distractions of your regular routine pulling you back. The information might be coming at you, but chances are you are multi-tasking and interrupted. The engagement level during virtual sessions – and, importantly, after the sessions are over – is lower. Once you leave the virtual conference, you are immediately surrounded by your regular routine. This lack of time away from work and home life is also, I think, a lost chance to ruminate on and discuss new ideas outside the regular context.

What are the challenges?

1.         Posters. Posters are the bread and butter of "real" conferences. I'm perhaps a bit old school in that I think electronic posters presented at "real" conferences are absolutely awful. There's no way to efficiently "scan" electronic posters as you walk by a lineup of computer screens; you have to know what you're looking for and commit fully to looking at it. There's a visceral efficiency and pleasure in walking up and down the aisles of posters: scanning, pausing, reading enough to get the gist, or stopping for extended periods to dig in. Poster sessions are full of randomness and serendipity – we find interesting posters we were not even looking for – and here we see colleagues and have opportunities to chat and discuss. Getting posters right will likely be one of the biggest challenges for virtual conferences. I might suggest creating a virtual poster hall with full, multi-panel posters as the key element of information. Even the difference between clicking on a title and scrolling through actual posters in full multi-panel glory will make a massive difference in the experience. These poster halls, with some thought, can be constructed for the attendee to search and browse. Poster presentations can be live, with the presenter present to give an overview and take questions. This will require massive parallel streaming, but it can be done. An alternative is to have the poster up along with a pre-recorded 3-minute audio presentation, followed by a question-and-answer section, with the presenter available live to answer text questions as they arise and the discussion text preserved with the poster for later viewing.

2.         Perspective. Keeping the navigational overhead low and the whole-meeting perspective high. With large meetings, there is of course a massive amount of information transferred – more than any one individual can take in. Meetings like SfN, with 30K people, are overwhelming, and OHBM and ISMRM, with 3K to 7K people, are approaching this level. The key to making these meetings useful is giving the attendee a means to gain perspective and develop a strategy for delving in. Simple-to-follow schedules with enough information but not too much, customized schedule-creation searches based on a wide range of keywords, and flags for overlap are necessary. The room for innovation and flexibility is likely higher at virtual conferences than at in-person conferences, as there are fewer constraints on temporal overlap. 

3.         Engagement. Fully engaging the listener is always a challenge; with a virtual conference it's even more so. Sitting at a computer screen and listening to a talk can get tedious quickly. Ways to creatively engage the listener – real-time feedback, questions to the audience, etc. – might be useful to try. Conveying the size or relative interests of the audience with clever graphics might also help create this crowd experience.

4.         Socializing. Neuromatch.io included a socializing aspect in their conference. There might be separate rooms for specific scientific themes with free discussion, perhaps led by a moderator. There might also simply be rooms for completely theme-less socializing or discussion about any aspect of the meeting. Nothing will compare to real meetings in this regard, but the ease of accessing information about a virtual meeting could potentially be exploited to enrich these social gatherings.

5.         Randomness. As I mentioned above, randomness and serendipity play a large role in making a meeting successful and worth attending. Defining a schedule and sticking to it is certainly one way of attacking a meeting, but others might want to randomly sample, browse, and run into people. It might be possible to build this into the meeting scheduling tool, but designing opportunities for serendipity into the website experience itself should be given careful thought. An attendee could set aside a time to view random talks or posters, or to meet random people matched on a range of keywords.

6.         Scalability. It would be useful to have virtual conferences constructed of scalable elements – poster sessions, keynotes, discussions, proffered talks – that could become standardized to increase ease of access and familiarity across conferences of different sizes, from 20 to 200,000 attendees. Virtual meeting sizes will likely vary more widely than, and generally exceed, those of “real” meetings.

7.         Costs vs. Charges? This will, of course, be determined on its own in a bottom-up manner based on regular economic principles; however, in these early days, it’s useful for meeting organizers to work through a set of principles about what to charge, or whether to make a profit at all. If the web elements of virtual meetings are open access, many of the costs could disappear. However, for regular meetings of established societies, there will always be a need to support the administration that maintains the infrastructure.

Beyond Either-Or:

Once the unique advantages of virtual conferences are realized, I imagine that even as in-person conferences start up again, there will remain a virtual component, allowing a much higher number and wider range of participants. These conferences will perhaps simultaneously offer something to everyone – going well beyond simply keeping talks and posters archived for access, as is current practice.

While I have helped organize meetings for almost three decades, I have not yet been part of organizing a virtual meeting, so in this area, I don’t have much experience. I am certain that most thoughts expressed here have been thought through and discussed many times already. I welcome any discussion on points that I might have wrong or aspects I may have missed.

Virtual conferences are certainly going to be popping up at an increasing rate, throwing open a relatively unexplored space for creativity within the new constraints and opportunities of this venue. I am very much looking forward to seeing them evolve and grow – and to helping as best I can in the process.

We Don’t Need no Backprop

Companion post to: “Example Based Hebbian Learning may be sufficient to support Human Intelligence” on bioRxiv.

This dude learned in one example to do a backflip.

With the tremendous success of deep networks trained using backpropagation, it is natural to think that the brain might learn in a similar way. My guess is that backprop is actually much better at producing intelligence than the brain, and that brain learning is supported by much simpler mechanisms. We don’t go from zero to super smart in hours, even for narrow tasks, as AlphaZero does. We spend most of our first 20 years slowly layering into our brains the distilled intelligence of human history, and now and then we might have a unique new idea. Backprop generates new intelligence very efficiently: it can discover and manipulate the high-dimensional manifolds or state spaces that describe games like Go, and it finds optimal mappings from input to output through these spaces with amazing speed. So what might the brain do if not backprop?


Twenty-Six Controversies and Challenges in fMRI

• Neurovascular coupling • Draining veins • Linearity • Pre-undershoot • Post-undershoot • Long duration • Mental chronometry • Negative signal changes • Resting state source • Dead fish activation • Voodoo correlations • Global signal regression • Motion artifacts • The decoding signal • Non-neuronal BOLD • Relationship to other measures • Contrast: spin-echo vs gradient-echo • Contrast: SEEP • Contrast: diffusion changes • Contrast: neuronal currents • Contrast: NMR phase imaging • Lie detection • Correlation ≠ connection • Clustering conundrum • Reproducibility • Dynamic connectivity changes


This will be a chapter in my upcoming book “Functional MRI” in the MIT Press Essential Knowledge Series 


Functional MRI is unique in that, in spite of being almost 30 years old as a method, it continues to progress in the sophistication of its acquisition, hardware, and processing, and in our understanding of the signal itself. There has been no plateau in any of these areas. In fact, from the literature one gets the impression that this advancement is accelerating. Every new advance widens the range where the method might have an impact, allowing new questions about the brain to be addressed.

In spite of its success – perhaps as a result of its success – it has had its share of controversies coincident with methods advancements, new observations, and novel applications. Some controversies have been more contentious than others. Over the years, I’ve been following these controversies and have at times performed research to resolve them or at least better understand them. A good controversy can help to move the field forward as it can focus and motivate groups of people to work on the issue itself, shedding a broader light on the field as these are overcome.

While a few of the controversies or issues of contention have been fully resolved, most remain to some degree unresolved. Understanding fMRI through its controversies allows a deeper appreciation for how the field advances as a whole – and how science really works – the false starts, the corrections, and the various claims made by those with potentially useful pulse sequences, processing methods, or applications. Below is the list of twenty-six major controversies in fMRI – in approximately chronological order.

#1: The Neurovascular coupling debate.

Since the late 1800’s, the general consensus, hypothesized by Roy and Sherrington in 1890 (1), was that activation-induced cerebral blood flow changes were driven by local changes in metabolic demand. In 1986, a publication by Fox et al. (2) challenged that view, demonstrating that with activation, localized blood flow seemingly increased beyond oxidative metabolic demand, suggesting an “uncoupling” of the hemodynamic response from metabolic demand during activation. Many, including Louis Sokoloff, a well-established neurophysiologist at the National Institutes of Health, strongly debated the results. Fox nicely describes this period in history from his perspective in “The coupling controversy” (3).

I remember well, in the early days of fMRI, Dr. Sokoloff standing up from the audience to debate Fox on several occasions, arguing that the flow response should match the metabolic need and there should be no change in oxygenation. He argued that what we are seeing in fMRI is something other than an oxygenation change.

In the pre-fMRI days, I recall not knowing what direction the signal should go. When I first laid eyes on the impactful video presented by Tom Brady during his plenary lecture on the future of MRI at the Society for Magnetic Resonance (SMR) Meeting in August of 1991, it was not clear from these time-series movies of subtracted images in which direction he had performed the subtraction. Was it task minus rest or rest minus task? Did the signal go up or down with activation? I also remember very well analyzing my first fMRI experiments expecting to see a decrease in BOLD signal – as Ogawa, in an earlier paper (4), had hypothesized that metabolic rate increases would lead to a decrease in blood oxygenation and thus a darkening of the BOLD signal during brain activation. Instead, all I saw were signal increases. It was Fox’s work that helped me understand why the BOLD signal should increase with activation: flow goes up, oxygen delivery exceeds metabolic need, and blood oxygenation increases.

While models of neurovascular coupling have improved, we still do not understand the precise need for flow increases. First, we had the “watering the whole garden to feed one thirsty flower” hypothesis, which suggested that flow increases matched the metabolic need of one area, but since vascular control was coarse, the abundant oxygenated blood was delivered to a wider area than needed, causing the increase in oxygenation. We also had the “diffusion limited” model, which hypothesized that in order to deliver enough oxygen to the neurons furthest from the oxygen-supplying vessels, an overabundance of oxygen was needed at the vessel itself, since oxygen concentration falls off approximately exponentially as oxygen diffuses from the vessel to the furthest cell. This theory has fallen somewhat from favor, as the increases in CMRO2 – or the degree to which the diffusion of oxygen from blood to tissue is limited – that it implies tend to be higher than physiologic measures. The alternative to the metabolic need hypothesis involves neurogenic “feed-forward” hypotheses – which still don’t get at the “why” of the flow response.

Currently, this is where the field stands. We know that the flow response is robust and consistent. We know that in active areas of healthy brains, oxygenation always increases; however, we just don’t understand specifically why it’s necessary. Is it a neurogenic, metabolic, or some other mechanism satisfying some evolutionarily critical need that extends beyond the simple need for more oxygen? We are still figuring that out. Nevertheless, whatever the reason for this increase in flow, it is fundamentally important, as the BOLD response is stunningly consistent.

#2: The Draining Vein Effect

The question “what about the draining veins?” was, I think, first posited at an early fMRI conference by Kamil Ugurbil of the University of Minnesota. There and for the next several years he alerted the community to the observation that, especially at low field, draining veins are a problem: they smear and distort the fMRI signal change such that it’s hard to know with a high degree of certainty where the underlying neuronal activation is. In the “20 years of fMRI: the science and the stories” special issue of NeuroImage, Ravi Menon writes a thorough narrative of the “great brain vs. vein” debate (5). When the first fMRI papers were published, only one, by Ogawa et al. (6), was at high field – 4 Tesla – and relatively high resolution. In Ogawa’s paper, it was shown that there was a differentiation between veins (very hot spots) and capillaries (more diffuse, weaker activation in grey matter). Ravi followed this up with another paper (7) using multi-echo imaging, showing that blood in veins had an intrinsically shorter T2* decay than gray matter at 4T and appeared as dark dots in T2*-weighted structural images, yet as bright functional hot spots in the 4T functional images. Because the low SNR and CNR at 1.5T allowed only the strongest BOLD effects to be seen, and because models suggested that at low field strengths large vessels contributed the most to the signal, the field worried that all fMRI was looking at was veins – at least at 1.5T.

The problem of large vein effects is prevalent using standard gradient-echo EPI – even at high fields. Simply put, the BOLD signal is directly proportional to the venous blood volume contribution in each voxel. If the venous blood volume is high – as when a vein fills a voxel – then the potential for large BOLD changes is high if there is a blood oxygenation change in the area. At high field, indeed, there is not much intravascular signal left in T2*-weighted gradient-echo sequences; however, the extravascular effect of large vessels still exists. Spin-echo sequences (sensitive to small compartments) are still sensitive to the susceptibility effects around intravascular red blood cells within large vessels – even at 7T, where the intravascular contribution is reduced. Even with some vein sensitivity, promising high-resolution orientation column results have been produced at 7T using gradient-echo and spin-echo sequences (8). The use of arterial spin labeling has potential as a method insensitive to large veins, although its temporal efficiency, intrinsic sensitivity, and brain coverage limitations blunt its utility. Vascular Space Occupancy (VASO), a method sensitive to blood volume changes, has been shown to be exquisitely sensitive to blood volume changes in small vessels and capillaries. Preliminary results have shown clear layer-dependent activation using VASO where other approaches have provided less clear delineation (9).

Methods have arisen to identify and mask large vein effects – including thresholding based on percent signal change (large veins tend to fill voxels and thus exhibit a larger fractional signal change), as well as on temporal fluctuations (large veins are pulsatile, thus exhibiting more magnitude and phase noise). While these seem to be workable solutions, they have not been picked up and used extensively. With regard to using the phase variations as a regressor to eliminate pulsatile blood and tissue, perhaps the primary reason this has not been adopted is that standard scanners do not readily produce phase images, so users do not have easy access to this information.

The draining vein issue is least problematic at voxel sizes larger than 2 mm, as at these resolutions it is mostly the region of activation – defined as a >1 cm “blob” – that is used and interpreted. Other than enhancing the magnitude of activation, vein effects do not distort these “blobs” and thus are typically of no concern for low-resolution studies. In fact, the presence of veins helps to amplify the signal in these cases. Spatial smoothing and multi-subject averaging – still commonly practiced – also ameliorate vein effects, as each subject has a spatially variant macroscopic venous structure, so vein effects tend to average out.

The draining vein problem is most significant where the details of high-resolution fMRI maps need to be interpreted for understanding small and tortuous activation patterns in the context of layer- and column-level mapping. So far, no fully effective method works at this resolution, since the goal is not simply to mask the voxels containing veins – there may still be useful capillary, and therefore neuronal, effects within those voxels. We need to eliminate vein effects more effectively on the acquisition side.

#3: The linearity of the BOLD response

The BOLD response is complex and not fully understood. In the early days of fMRI, it was found that BOLD contrast behaved linearly in some regimes and nonlinearly in others. The hemodynamic response tends to overestimate activation at very brief (<3 seconds) stimulus durations (10) or at very low stimulus intensities (11). With stimuli of 2 seconds or less in duration, the response was much larger than predicted by a linear system. The reasons for these nonlinearities are still not fully understood; however, for interpreting transient or weak activations relative to longer-duration activation, a clear understanding of the nonlinearity of neurovascular coupling across all activation intensities and durations needs to be established.
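To make the linearity test concrete, here is a minimal sketch in Python/NumPy of what a strictly linear convolution model predicts for boxcar stimuli of different durations. The gamma-variate HRF is an illustrative stand-in (not any particular canonical HRF); the point is that linearity predicts small responses to brief stimuli, whereas measured responses to ~2 s stimuli exceed this prediction:

```python
import numpy as np

dt = 0.1  # seconds per sample

def hrf(t):
    # illustrative gamma-variate hemodynamic response (an assumption, not the canonical HRF)
    t = np.clip(t, 0.0, None)
    return (t ** 6) * np.exp(-t) / 720.0

h = hrf(np.arange(0, 30, dt))

def linear_peak(duration_s):
    # peak response that a strictly linear system predicts for a boxcar stimulus
    stim = np.ones(int(duration_s / dt))
    return np.convolve(stim, h).max() * dt

for d in (0.5, 2, 4, 16):
    print(d, round(linear_peak(d), 3))
# the linear prediction grows with duration; empirically, brief (<2 s) stimuli
# produce responses larger than this scaled-down prediction
```

Comparing measured short-stimulus responses against this kind of linear prediction is how the nonlinearity is quantified.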

#4: The pre-undershoot

The fMRI pre-undershoot was first observed in the late 90’s. With activation, it was sometimes observed that the fMRI signal, in the first 0.5 second of stimulation, first deflected slightly downwards before it increased (12, 13). Only a few groups were able to replicate this finding in fMRI; however, it appears to be ubiquitous in optical imaging work on animal models. The hypothesized mechanism is that before the flow response has a chance to start, a more rapid increase in oxidative metabolic rate causes the blood to become transiently less oxygenated. This transient effect is then washed away by the large increase in flow that follows.

Animal studies have demonstrated that this pre-undershoot could be enhanced by decreasing blood pressure. Of the groups that have seen the effect in humans, a handful claim that it is more closely localized to the locations of “true” neuronal activation. These studies were all in the distant past (>15 years ago) and since then, very few papers have come out revisiting the study of the elusive pre-undershoot. It certainly may be that it exists, but the precise characteristics of the stimuli and physiologic state of the subject may be critically important to produce it.

While simulations of the hemodynamic response can readily reproduce this effect (14), the ability to robustly reproduce and modulate this effect experimentally in healthy humans has proven elusive. Until these experiments are possible, this effect remains incompletely understood and not fully characterized.

#5: The post-undershoot

In contrast to the pre-undershoot, the post-undershoot is ubiquitous and has been studied extensively (15). Similar to the pre-undershoot, its origins are still widely debated. The basic observation is that following brain activation, the BOLD signal decreases and passes below baseline for up to 40 seconds. The hypothesized reasons for this include: 1) a perseveration of elevated blood volume, causing the amount of deoxyhemoglobin to remain elevated even though oxygenation and flow are back to baseline levels; 2) a perseveration of an elevated oxidative metabolic rate, causing blood oxygenation to decrease below baseline levels after flow and total blood volume have returned to baseline states; 3) a post-stimulus decrease in flow below baseline levels, since a decrease in flow with steady-state blood volume and oxidative metabolic rate would cause a decrease in blood oxygenation. Papers have been published arguing for, and showing evidence supporting, each of these three hypotheses, so the mechanism of the post-undershoot, as common as it is, remains unresolved. It has also been suggested that if it is due to a decrease in flow, this decrease might indicate a refractory period during which neuronal inhibition is taking place (16). For now, the physiologic underpinnings of the post-undershoot remain a mystery.

#6: Long duration stimulation effects

This is a controversy that has long been resolved (17). In the early days of fMRI, investigators were performing basic tests to determine whether the response was a reliable indicator of neuronal activity, and one of these was to determine whether, with long-duration steady-state neuronal activation, the BOLD response remains elevated. A study by Kruger et al. suggested that the BOLD response habituated after 5 minutes (18). A counter-study showed that with a flashing checkerboard on for 25 minutes, the BOLD response and flow response (measured simultaneously) remained elevated (19). It was later concluded that the stimulus in the first study was producing some degree of attentional and neuronal habituation, not a long-duration change in the relationship between the level of BOLD and the level of neuronal activity. Therefore, it is now accepted that as long as neurons are firing, and as long as the brain is in a normal physiologic state, the fMRI signal will remain elevated for the entire duration of activation.

#7: Mental Chronometry with fMRI.

A major topic of study over the entire history of fMRI has been how much temporal information can be extracted from the fMRI signal. The fMRI response is relatively slow, taking about two seconds to start to increase and, with sustained activation, about 10 seconds to reach a steady-state “on” condition. On cessation, it takes a bit longer to return to baseline – about 10 to 12 seconds – and has a long post-stimulus undershoot lasting up to 40 seconds. In addition to this slow response, it has been shown that the spatial variation in response delay across the brain is up to four seconds, owing to spatial variations in the brain vasculature.

Given this sluggishness and spatial variability of the hemodynamic response, it may initially seem that there wouldn’t be any hope of sub-second temporal resolution. However, the situation is more promising than one would expect. Basic simulations demonstrate that, assuming no spatial variability in the hemodynamic response, and given a typical BOLD response magnitude of 5% and a typical temporal standard deviation of about 1%, after 11 runs of 5 minutes each, a relative delay of 50 to 100 ms could be discerned between one area and the next. However, the spatial variation in the hemodynamic response is plus or minus 2 seconds depending on where one is looking in the brain, and depends mostly on what aspect of the underlying vasculature is captured by each voxel. Large veins tend to have longer delays.
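A minimal sketch of this kind of simulation in Python/NumPy (the block design, gamma-variate HRF, and shift-grid estimator are all illustrative assumptions, not any particular published procedure): average 11 noisy 5-minute runs, then estimate the relative delay by sliding noiseless reference responses over a fine grid of shifts:

```python
import numpy as np

rng = np.random.default_rng(0)
TR = 0.5                                   # s, assumed sampling interval
t = np.arange(0, 300, TR)                  # one 5-minute run
stim = ((t % 30) < 15).astype(float)       # assumed 15 s on / 15 s off block design

def hrf(tt):
    # illustrative gamma-variate HRF (an assumption)
    tt = np.clip(tt, 0.0, None)
    return (tt ** 6) * np.exp(-tt) / 720.0

def bold(delay_s):
    h = hrf(np.arange(0, 30, TR) - delay_s)
    r = np.convolve(stim, h)[: t.size]
    return 5.0 * r / r.max()               # ~5% peak signal change

true_delay = 0.1                           # a 100 ms relative delay
shifts = np.arange(-1.0, 1.0, 0.01)        # 10 ms search grid
tm = np.stack([bold(s) for s in shifts])   # mean-centered, unit-norm templates
tm -= tm.mean(axis=1, keepdims=True)
tm /= np.linalg.norm(tm, axis=1, keepdims=True)

estimates = []
for _ in range(20):
    # average 11 runs, each with 1% temporal noise, then find the best-matching shift
    y = np.mean([bold(true_delay) + rng.normal(0.0, 1.0, t.size)
                 for _ in range(11)], axis=0)
    estimates.append(shifts[int(np.argmax(tm @ (y - y.mean())))])

print(np.mean(estimates))                  # estimates cluster near the true 0.1 s delay
```

With these (assumed) numbers, the delay estimates scatter by only tens of milliseconds, consistent with the 50–100 ms figure quoted above.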

Several approaches have attempted to bypass or calibrate the hemodynamic response. One way of bypassing the slow, variable hemodynamic response is to modulate the timing of the experiment, so that relative onset delays can be observed along with task timing delays. This exploits the fact that, in each voxel, the hemodynamic response is extremely well behaved and repeatable. Using this approach, relative delays of 500 ms down to 50 ms have been discerned (20-23). While these measures are not absolute, they are useful in discriminating which areas show delay modulations with specific tasks. This approach has been applied – to 500 ms accuracy – to uncover the underlying dynamics and relative timings of specific regions involved with word rotation and word recognition (23). Multivariate decoding approaches have been able to robustly resolve sub-second (and sub-TR) relative delays in the hemodynamic response (24). By slowing down the paradigm itself, Formisano et al. (25) have been able to resolve the absolute timing of mental operations down to the order of a second. The fastest brain activation on-off rate that has been resolved was recently published by Lewis et al. (26) and is in the range of 0.75 Hz. While this is not mental chronometry in the strict sense of the term, it does indicate an upper limit at which high-speed changes in neuronal activation may be extracted.

#8: Negative signal changes.

Negative BOLD signal changes were mostly ignored for the first several years of fMRI, as researchers did not know precisely how to interpret them. After several years, with a growing number of negative signal change observations, the issue arose in the field, spawning several hypotheses to explain them. One hypothesis invoked a “steal” effect, whereby active regions received an increase in blood flow at the expense of adjacent areas, which would hypothetically experience a decrease in flow. If flow decreases in adjacent areas, those areas would exhibit a decrease in BOLD signal without actually being “deactivated.” Another hypothesis was that these were areas more active during rest, becoming “deactivated” during a task as neuronal activity was re-allocated to other regions of the brain. A third was that they represented regions actively inhibited by the task. While in normal healthy subjects the evidence for steal effects is scant, the other hypotheses are clear possibilities. In fact, the default mode network was first reported as a network that showed consistent deactivation during most cognitive tasks (27). This network deactivation was also seen in the PET literature (28). A convincing demonstration of neuronal suppression associated with negative BOLD changes was carried out by Shmuel et al. (29), showing simultaneously decreased neuronal spiking and decreased fMRI signal in a robustly deactivated ring of visual cortex surrounding activation to a ring annulus. These observations point to the idea that the entire brain is tonically active and can be inhibited by activity in other areas through several mechanisms. This inhibition is manifest as a decrease in BOLD.

#9: Sources of resting state signal fluctuations

Since the discovery that the resting state signal shows temporal correlations across functionally related regions of the brain, there has been an effort to determine its precise origin as well as its evolutionary purpose. The predominant frequency of these fluctuations is in the range of 0.1 Hz, which was eye-opening to the neuroscience community since, previously, most had not considered that neuronally meaningful fluctuations in brain activity occurred on such a slow time scale. The most popular model for the source of resting state fluctuations is that spontaneously activated regions induce fluctuations in the signal. As measured with EEG, MEG, or ECoG, these spontaneous spikes or local field potentials occur across a wide range of frequencies. When this rapidly changing signal is convolved with a hemodynamic response, the resulting fluctuations approximate the power spectrum of a typical BOLD time series. Direct measures using implanted electrodes combined with BOLD imaging show a temporal correspondence of BOLD fluctuations with spiking activity (30). Recent work with simultaneous calcium imaging – a more direct measure of neuronal activation – has also shown a close correspondence, both spatially and temporally, with BOLD fluctuations (31), strongly suggesting that these spatially and temporally correlated fluctuations are in fact neuronal. These studies are only the tip of a very large iceberg of converging evidence that resting state fluctuations are related to ongoing, synchronized, spontaneous neuronal activity.
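The convolution argument can be sketched in a few lines of Python/NumPy (the white-noise “neural” signal and gamma-variate HRF are illustrative assumptions): a broadband signal passed through an HRF comes out with most of its power below ~0.1 Hz, like a resting state BOLD time series:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1                                   # s
t = np.arange(0, 600, dt)                  # 10 minutes of simulated "rest"

# broadband stand-in for spontaneous spiking / local field potential activity
neural = rng.normal(size=t.size)

def hrf(tt):
    # illustrative gamma-variate HRF (an assumption)
    tt = np.clip(tt, 0.0, None)
    return (tt ** 6) * np.exp(-tt) / 720.0

h = hrf(np.arange(0, 30, dt))
bold = np.convolve(neural, h)[: t.size]    # hemodynamically filtered "BOLD" signal

freqs = np.fft.rfftfreq(t.size, dt)
p_bold = np.abs(np.fft.rfft(bold)) ** 2

frac_low = p_bold[freqs < 0.1].sum() / p_bold.sum()
print(frac_low)                            # most of the power falls below ~0.1 Hz
```

The HRF acts as a low-pass filter, which is why even fast, broadband neural activity produces the slow ~0.1 Hz fluctuations characteristic of resting state BOLD.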

While the basic neurovascular mechanisms behind resting state fluctuations may be understood to some degree, the mystery of the origins and purpose of these fluctuations remains. To complicate the question further, it appears that there are different types of correlated resting state fluctuations. Some are related to the task being performed. Some may be related to brain “state” or vigilance. Some are completely insensitive to task or vigilance state. It has been hypothesized that the spatially broad, global fluctuations may relate more closely to changes in vigilance or arousal. Some are perhaps specific to a subject, and relatively stable across task, brain state, or vigilance state – reflecting characteristics of an individual’s brain that may change only very slowly over time or with disease. A recent study suggests that resting state networks, as compared in extremely large data sets of many subjects, reveal clear correlations with demographics, lifestyle, and behavior (32).

Regarding the source or purpose of the fluctuations, some models simply state that they are an epiphenomenon of a network at a state of criticality – ready to go into action. The networks have to be spontaneously active to be able to transition easily into engagement. The areas that are typically most engaged together resultantly fluctuate together during “rest.” In a sense, resting state may be the brain continually priming itself, in a readiness state, for future engagement. A separate question is whether or not there are central “regulators” of resting state fluctuations: do the fluctuations arise from individual nodes of circuits simply firing on their own, or is there a central hub that sends out spontaneous signals telling these networks to fire? There is also growing evidence suggesting that this activity represents more than just subconscious priming. The default mode network, for instance, has been shown to be central to cognitive activity such as rumination and self-directed attention.

Work is ongoing to determine whether specific circuits have signature frequency profiles that might help to differentiate them, and to determine what modulates resting state fluctuations. So far, it’s clear that brain activation tasks, vigilance state, lifestyle, demographics, disease, and immediately preceding task performance can all have an effect. There are currently no definitive conclusions as to the deep origin and purpose of resting state fluctuations.

#10: Dead Fish (false positive) Activation.

In about 2006, a poster by Craig Bennett was presented at the Organization for Human Brain Mapping, mostly as a joke (I think) – but with the intent of illustrating the shaky statistical ground that fMRI was standing on with regard to the multiple comparisons problem. The study was picked up by the popular media and went a bit viral. It showed BOLD activation in a dead salmon’s brain, clearly suggesting that fMRI based on BOLD contrast has some problems if it shows activation where there should be none. In fact, it was a clear illustration of the false positives that can happen by chance if the appropriate statistical tests are not used. The basic problem was that the statistical maps were not corrected for multiple comparisons. It’s known that, purely by chance, if enough comparisons are made (in this case, one per voxel), some voxels will appear to have significantly changed in signal intensity. Bonferroni corrections and false discovery rate corrections are available in all current statistical packages. Bonferroni is likely too conservative, as voxels are not fully independent; false discovery rate is perhaps closer to the appropriate test. When these are used, false activations are minimized; however, they can still occur for other reasons – for example, the edges of the brain and skull can amplify any small motion or system instability into false positives. While this poster made a good point and was perhaps among the most cited fMRI works in the popular literature and blogs, it failed to convey a more nuanced and important message: no matter what statistical test is used, the signal and the noise are not fully understood, so all tests are approximations of truth, subject to errors.
That said, a well-designed study with clear criteria and models of activation, as well as appropriately conservative statistical tests, will minimize this false positive effect. In fact, it is likely that we are missing much of what is really going on by using over-simplified models of what to expect of the fMRI signal.
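The scale of the problem is easy to demonstrate numerically. In this Python/NumPy sketch (the voxel count is an illustrative assumption), tens of thousands of purely null voxels still yield dozens of “active” voxels at an uncorrected p < 0.001, while a Bonferroni-corrected threshold yields essentially none:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50_000                     # assumed voxel count; no true activation anywhere
z = rng.normal(size=n_voxels)         # null z-statistic at every voxel

# uncorrected one-tailed p < 0.001 corresponds to z > 3.09
uncorrected = int(np.sum(z > 3.09))
print(uncorrected)                    # roughly 0.001 * 50,000 ≈ 50 false positives

# Bonferroni: per-voxel alpha of 0.05 / 50,000 = 1e-6, i.e. z > ~4.75
bonferroni = int(np.sum(z > 4.75))
print(bonferroni)                     # almost always zero
```

This is exactly the dead-salmon situation: uncorrected voxelwise thresholds guarantee spurious “activation” somewhere in any sufficiently large image.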

A recent paper by Gonzalez-Castillo et al. (33) showed that with a more open model of what to expect from brain activation, and 9 hours of averaging, nearly all grey matter becomes “active” in some manner. Does this mean that there is no null hypothesis? Likely not, but the true nature of the signal is still not fully understood, and both false positives and false negatives permeate the literature.

#11: Voodoo correlations and double dipping.

In about 2009, Vul et al. published a paper that caused a small commotion in the field, in that it identified a clear error in neuroimaging data analysis. It listed the papers – some quite high profile – that used this erroneous procedure, which resulted in elevated correlations. The basic problem identified was that studies were performing circular analysis rather than purely selective analysis (34). A circular analysis is a form of selective analysis in which biases involved with the selection are not taken into account. Analysis is “selective” when a subset of data is first selected before performing secondary analysis on the selected data. This is otherwise known as “double dipping.” Because data always contain noise, the selected subset will never be determined by true effects only. Even if the data contain no true effects at all, the selected data will show the tendencies they were selected for.

So, a simple solution to this problem is to analyze the selected regions using independent data (not the data that were used to select the regions). Then effects, and not noise, will replicate, and the results will reflect actual effects without bias due to the influence of noise on the selection.
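Here is a toy simulation of the bias in Python/NumPy (the subject count, voxel count, and ROI size are illustrative assumptions): with pure-noise data, voxels selected for high correlation with behavior show a large “effect” when re-measured on the same data, but roughly zero on independent data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_voxels = 20, 5_000               # assumed sizes
behavior = rng.normal(size=n_subj)

# pure-noise "brain data": there is NO true brain-behavior relationship
data_select = rng.normal(size=(n_subj, n_voxels))   # used to pick the ROI
data_indep = rng.normal(size=(n_subj, n_voxels))    # independent replication data

def corr_with_behavior(data):
    # Pearson r between behavior and each voxel's values across subjects
    bc = behavior - behavior.mean()
    dc = data - data.mean(axis=0)
    return (bc @ dc) / (np.linalg.norm(bc) * np.linalg.norm(dc, axis=0))

r_sel = corr_with_behavior(data_select)
roi = np.argsort(np.abs(r_sel))[-10:]      # select the 10 "best" voxels

circular = np.abs(r_sel[roi]).mean()                             # double-dipped
independent = np.abs(corr_with_behavior(data_indep)[roi]).mean() # unbiased

print(circular, independent)   # circular looks impressive; independent is near zero
```

The double-dipped estimate is large purely because the selection capitalized on noise; the independent-data estimate reveals that nothing real is there.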

This was an example of an adroit group of statisticians helping to correct a problem as it was forming in fMRI. Since this paper, the number of papers published with this erroneous approach has sharply diminished.

#12: Global signal regression for time series cleanup

The global signal is obtained simply by averaging the MRI signal intensity of every voxel in the brain at each time point. For resting state correlation analysis, global variations of the fMRI signal are often considered nuisance effects and are commonly removed (35, 36) by regressing the global signal against the fMRI time series. However, removal of the global signal has been shown to artifactually cause anticorrelated resting state networks in functional connectivity analyses (37). Before this was known, papers were published showing large anticorrelated networks in the brain, interpreted as large networks that were actively inhibited by the activation of another network. If global regression was not performed, these so-called “anti-correlated” networks simply showed minimal correlation – positive or negative – with the spontaneously active network. Studies have shown that removing the global signal not only induces negative correlations but also distorts positive correlations – leading to errors in interpretation (38). Since then, the field has mostly moved away from global signal regression, although some groups still use it, as it does clean up artifactual signal to some degree.
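The mechanism by which global signal regression manufactures anticorrelations can be shown in a few lines. In this toy simulation (all signals and noise levels are assumed, not real data), two regions share a network signal and a third carries only the global fluctuation; before regression the third region correlates positively with the others, and after regressing out the global mean it becomes strongly anticorrelated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy time series: a network signal shared by regions a and b, a global
# fluctuation shared by everything, and independent measurement noise.
n = 600
global_sig = rng.normal(size=n)
network = rng.normal(size=n)
a = network + global_sig + 0.5 * rng.normal(size=n)
b = network + global_sig + 0.5 * rng.normal(size=n)
c = global_sig + 0.5 * rng.normal(size=n)   # not part of the network

def regress_out(ts, nuisance):
    """Remove the least-squares projection of ts onto nuisance."""
    beta = ts @ nuisance / (nuisance @ nuisance)
    return ts - beta * nuisance

gs = (a + b + c) / 3.0                      # estimated "global signal"
a_r = regress_out(a, gs)
c_r = regress_out(c, gs)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

before = corr(a, c)    # positive: both contain the global fluctuation
after = corr(a_r, c_r) # strongly negative: induced by the regression

print(before, after)
```

Region c never “opposed” the network; the anticorrelation is purely an artifact of removing a regressor that every region shares.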

The global signal has also been studied directly. Work has shown that it is a direct measure of vigilance as assessed by EEG (39). Monitoring the global signal may be an effective way to ensure that, when resting state data are collected, subjects are in a similar vigilance state – which can have a strong influence on brain connectivity (40). In general, the neural correlates of global signal fluctuations are still not fully understood; however, the field appears to have reached a consensus that, as a pre-processing step, the global signal should not be removed. Simply removing the global signal will not only induce temporal artifacts that look like interesting effects but will also remove potentially useful and neuronally relevant signal.

#13: Motion Artifacts

Functional MRI is extremely sensitive to motion, particularly in voxels that have a large difference in signal intensity relative to adjacent voxels. Typical areas that manifest motion effects are edges, sinuses, and ear canals, where susceptibility dropout is common. In these areas, even motion of a fraction of a voxel can induce a large fractional signal change, leading to incorrect results. Motion can be categorized as task-correlated, slow, or pulsatile. Work has been performed over the past 27 years to develop acquisition and post-processing methods that avoid or eliminate motion-induced signal changes. In spite of this effort, motion is still a major challenge today. The most difficult kind of motion to eliminate is task-correlated motion, which occurs when a subject tenses up or moves during a task or strains to see a display. Other types of motion include slow settling of the head during a scan, rotation of the head, swallowing, pulsation of blood and CSF, and breathing-induced motion-like effects.

Typical correction for motion is carried out using motion regressors that are produced by most image registration software. An ad hoc method for dealing with motion – one that can be quite effective – is visual inspection of the functional images and manual selection of time series signals that clearly contain motion effects, which can then be regressed out or “orthogonalized.” Other approaches include image registration and time series “scrubbing.” Scrubbing involves automated detection of “outlier” images and their elimination from analysis. Other ways of working around motion have included paradigm designs with brief tasks, such that any motion from the task itself appears as a rapid change whereas the hemodynamic response is slow, allowing the signals to be separated by their temporal signatures.

In recent years, an effort has been made to proactively reduce motion effects by tracking optical sensors positioned on the head, and then feeding the position information back to the imaging gradients such that the gradients themselves are slightly adjusted to maintain a constant head position through changing the location or orientation of the imaging volume. The primary company selling such capability is KinetiCor.

A more direct strategy for dealing with motion is the implementation of more effective methods to keep the head rigid and motionless. This approach has included bite bars and plastic moldable head casts. These have some effectiveness in some subjects but run the risk of being uncomfortable – resulting in abbreviated scanning times or worse, more motion due to active repositioning during the scan due to discomfort of the subject.

Aside from the problem of head motion, motion of the abdomen, without any concomitant head motion, can have an effect on the MRI signal. With each breath, the lungs fill with air, altering the susceptibility difference between the chest cavity and the outside air. This alteration affects the main magnetic field in a way that can extend all the way into the brain, leading to breathing-induced image distortions and signal dropout. The problem grows at higher fields, where the effects of susceptibility differences between tissues are enhanced. Possible solutions include direct measurement of the magnetic field in proximity to the head using a “field camera,” such as the one sold by the Swiss company Skope, and then either using these dynamically measured field perturbations as regressors in post processing or feeding the signal to the gradients and shims prior to data collection in an attempt to compensate.

In resting state fMRI, motion is even more difficult to identify and remove, as slow motion or breathing artifacts may have temporal signatures similar to those of intrinsic fluctuations. Also, if there are systematic differences in intrinsic motion between specific groups, such as children or those with Attention Deficit Disorder (ADD), then interpretation of group differences in resting state fMRI results is particularly problematic, as the degree of motion can vary with the degree to which individuals suffer from these disorders.

In high resolution studies, specifically those looking at small structures at the base of the brain, motion is a major confound, as the base of the brain physically moves with each cardiac cycle. Solutions have included cardiac gating and simple averaging. Gating is promising; however, the signal changes associated with the inevitably varying TR – which varies with cardiac cycle length – need to be accounted for by post-processing approaches that are so far imperfect.

A novel approach to motion elimination has been the use of multi-echo EPI, which allows the user to differentiate BOLD effects from non-BOLD effects based on how well the signal fits a T2* change model.
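The logic of this TE-dependence separation can be sketched in a few lines: a BOLD fluctuation produces a percent signal change that grows linearly with echo time (ΔS/S ≈ −TE·ΔR2*), whereas a motion or spin-history fluctuation changes S0 and is roughly constant across echoes. The toy classifier below (illustrative, with assumed echo times) fits both models and keeps whichever fits better.

```python
import numpy as np

TES = np.array([15.0, 30.0, 45.0])   # assumed echo times in ms

def classify_fluctuation(pct_change, tes=TES):
    """Label a fluctuation BOLD-like or not from its TE dependence.

    BOLD model:     dS/S = -TE * dR2*  -> percent change proportional to TE.
    non-BOLD model: dS/S = dS0/S0      -> percent change constant across TE.
    Returns the label of the model with the smaller squared residual.
    """
    y = np.asarray(pct_change, dtype=float)
    k = (y @ tes) / (tes @ tes)            # slope of proportional model
    resid_bold = np.sum((y - k * tes) ** 2)
    resid_s0 = np.sum((y - y.mean()) ** 2) # residual of constant model
    return "BOLD" if resid_bold < resid_s0 else "non-BOLD"

print(classify_fluctuation([0.5, 1.0, 1.5]))  # scales with TE -> "BOLD"
print(classify_fluctuation([1.0, 1.0, 1.0]))  # flat across TE -> "non-BOLD"
```

Real multi-echo pipelines (e.g. ICA-based component sorting) are far more elaborate, but this is the core physical contrast they exploit.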

Another motion artifact arises from through-plane movement. If a slightly different head position causes a slightly different slice to be excited (with an RF pulse), then some protons will not experience that RF pulse and will have magnetization that is no longer in equilibrium (equilibrium being achieved by a constant TR). If this happens, even if the slice position is corrected, some residual non-equilibrium signal will remain, taking a few TRs to return to equilibrium. Methods have been put forward to correct for this by modeling the T1 effects of the tissue; however, these can also eliminate the BOLD signal changes, so the problem remains.

Lastly, apparent motion can also be caused by scanner drift. As the scanner runs, the gradients, gradient amplifiers, and RF coils can heat up, causing a drift in the magnetic field as well as in the resonance frequency of the RF coils, and thus a slow shifting of the image location and degradation of image reconstruction quality. Most vendors have implemented software to account for this, but it is not an ideal solution. It would be better to have a scanner that does not have this instability to begin with.

In general, motion is still a problem in the field and one which every user still struggles with, however it is becoming better managed as we gain greater understanding of the sources of motion, the spatial and temporal signatures of motion artifacts, and the temporal and spatial properties of the BOLD signal itself.

There’s a payoff waiting once complete elimination of motion and non-neuronal physiologic fluctuations is achieved. The temporal signal-to-noise ratio of an fMRI time series is currently no higher than about 120/1, as physiologic noise sets an upper limit. If this noise source were eliminated, the temporal signal-to-noise ratio would be limited only by coil sensitivity and field strength, perhaps allowing fMRI time series SNR values to approach 1000/1. This improvement would transform the field.
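This ceiling behavior follows from the standard physiologic noise model (Krüger and Glover), in which physiologic noise scales with the signal: tSNR = SNR0 / sqrt(1 + λ²·SNR0²), saturating at 1/λ. A short sketch, taking the quoted ~120:1 ceiling as 1/λ (an assumption for illustration):

```python
import math

lam = 1 / 120.0   # physiologic noise scaling implied by a ~120:1 ceiling

def tsnr(snr0, lam=lam):
    """Kruger & Glover model: temporal SNR saturates at 1/lambda."""
    return snr0 / math.sqrt(1 + (lam * snr0) ** 2)

# Raising raw (thermal) SNR barely helps once physiologic noise dominates...
print(round(tsnr(200), 1), round(tsnr(1000), 1))   # ~102.9, ~119.1
# ...but with physiologic noise removed (lam = 0), tSNR tracks raw SNR.
print(tsnr(1000, lam=0.0))                          # 1000.0
```

Quintupling raw SNR from 200 to 1000 gains almost nothing today; removing the physiologic term is what would unlock the full coil-and-field-strength-limited SNR.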

#14: Basis of the decoding signal

Processing approaches have advanced beyond univariate analysis, which compares all signal changes against a temporal model under the assumption that there is no spatial relationship between voxels or even blobs of activation. Instead, fine-grained voxel-wise patterns of activation in fMRI have been shown to carry previously unappreciated information regarding the specific brain activity associated with a region. Using multivariate models that compare the voxel-wise pattern of activity associated with each stimulus, such fine discriminations as visual angle, face identity, and object category have been differentiated (41). One model to explain these voxel-specific signal change patterns hypothesizes that while no voxel is small enough to capture the unique pool of neurons selectively activated by a specific stimulus, the relative population of neurons active in each voxel modulates its signal; considered together with the array of other uniquely activated voxels, this makes a pattern that, while having limited meaningful macroscopic topographic information, conveys information as to what activity the functional area is performing. By this proposed mechanism, multi-voxel signal changes from sub-voxel activations, taken as a pattern, convey useful and unique functional information. Another proposed mechanism is that subtle macroscopic changes remain that carry the information.
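A minimal illustration of such multivariate decoding, using simulated data and a Haxby-style correlation classifier (all values here are assumptions for the sketch): two conditions evoke the same mean response in a region, so a univariate mean-signal analysis sees nothing, yet the fine-grained voxel pattern classifies held-out trials well above chance.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two conditions with identical regional mean response but different
# fine-grained voxel patterns (modeling biased sub-voxel neural populations).
n_voxels, n_trials = 50, 40
pat_a = 0.5 * rng.normal(size=n_voxels)
pat_b = 0.5 * rng.normal(size=n_voxels)
pat_a -= pat_a.mean()   # zero regional mean: invisible to a
pat_b -= pat_b.mean()   # univariate (mean-signal) analysis

trials_a = pat_a + rng.normal(size=(n_trials, n_voxels))
trials_b = pat_b + rng.normal(size=(n_trials, n_voxels))

# Split-half correlation classifier: correlate each held-out trial with the
# mean training pattern of each condition (training and test independent).
templ_a = trials_a[:20].mean(axis=0)
templ_b = trials_b[:20].mean(axis=0)

def classify(trial):
    r_a = np.corrcoef(trial, templ_a)[0, 1]
    r_b = np.corrcoef(trial, templ_b)[0, 1]
    return "A" if r_a > r_b else "B"

hits = sum(classify(t) == "A" for t in trials_a[20:]) + \
       sum(classify(t) == "B" for t in trials_b[20:])
accuracy = hits / 40
print(accuracy)   # well above the 0.5 chance level
```

Note the split into training and test halves – the same independence requirement discussed under “double dipping” above applies to decoding analyses too.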

One study attempted to answer the question of whether the source of pattern-effect mapping is multi-voxel or sub-voxel by testing the performance of decoding as the activation maps were spatially smoothed (42). The hypothesis was that if the information were sub-voxel and varied from voxel to voxel, then the performance of the algorithm would diminish with spatial smoothing; if the information were distributed at a macroscopic level, smoothing would improve detection. This study showed that both voxel-wise and multi-voxel information contributed to decoding success in different areas. Thus, while the decoding signal is robust, its origins are, as with many other fMRI signals, complicated and varied.

#15: Signal change but no neuronal activity?

Several years ago, Sirotin and Das reported that they observed a hemodynamic response where no neuronal activity was present. They simultaneously recorded hemodynamic and electrical signals in an animal model during repeated and expected periodic stimuli. When the stimulus was removed at a time the animal was expecting it, there was no neuronal firing, but a hemodynamic response remained (43). Following this controversial observation, some papers disputed the claim, suggesting that very subtle electrical activity was in fact still present. The hemodynamic response is known to consistently over-estimate neuronal activity for very low-level or very brief stimuli, and this study appears to be an example of just such an effect, despite what the authors claimed. While there was no clear conclusion to this controversy, the study was not replicated. In general, a claim such as this is extremely difficult to make, as it is nearly impossible to show that something doesn’t exist when one is simply unable to detect it.

#16: Curious relationships to other measures

Over the years the hemodynamic response magnitude has been compared with other neuronal measures such as EEG, PET, and MEG. These measures show comparable effects, yet at times puzzling discrepancies have been reported. One example is a paper (44) comparing the dependence of BOLD and MEG on the flashing rate of a visual checkerboard. At high stimulus frequencies, BOLD and MEG showed similar monotonic relationships. At low stimulus frequencies, the difference between BOLD and MEG was profound – the BOLD signal was much stronger than the MEG signal. The reasons for this discrepancy are still not understood, yet studies like these are important for revealing differences where it would otherwise be easy to assume that the two methods measure very similar effects.

#17: Contrast mechanisms: spin-echo vs gradient-echo.

Spin-echo sequences are known to be sensitive to susceptibility effects from small compartments, and gradient-echo sequences are known to be sensitive to susceptibility effects from compartments of all sizes (45). From this observation it has been incorrectly inferred that spin-echo sequences are sensitive to capillaries rather than large draining veins. The mistake in this inference is that the blood within draining veins is not taken into account. Within draining veins are small compartments – red blood cells. So, while spin-echo sequences may be less sensitive to large-vessel extravascular effects, they are still sensitive to the intravascular signal in vessels where the blood signal is present.

There have been methods proposed for removing the intravascular signal, such as the application of diffusion gradients that null out any rapidly moving spins. However, spin-echo sequences have lower BOLD contrast than gradient-echo by at least a factor of 2 to 4, and with the addition of diffusion weighting, the contrast almost completely disappears.

Another misunderstanding is that spin-echo EPI is truly a spin-echo. EPI takes time to form an image, having a long “readout” window of at least 20 ms, whereas a true spin-echo exists only at the moment the echo forms. At all other times, the signal is being refocused by gradient-echoes – as in T2*-sensitive gradient-echo imaging. Therefore, it is nearly impossible to obtain pure spin-echo contrast with EPI sequences; most spin-echo EPI contrast is actually T2*-weighted. “Pure” spin-echo contrast, in which the readout window is isolated to just the echo, is only obtainable with multi-shot sequences. However, even at high field strengths, spin-echo sequences are considerably less sensitive to BOLD contrast than gradient-echo sequences.

There is hope that “pure” spin-echo sequences at very high field might be effective in eliminating large vessel effects, as at high field blood T2 rapidly shortens, and therefore the intravascular signal contributes minimally to the functional contrast. Spin-echo sequences have been used at 7T to visualize extremely fine functional activation structures such as ocular dominance and orientation columns (46, 47).

For these reasons, spin-echo sequences have not caught on except for a small handful of fMRI studies at very high field. If high field is used and a short readout window is employed, then spin-echo sequences may be one of the sequences of choice – along with the blood-volume-sensitive sequence VASO – for spatial localization of activation.

#18: Contrast mechanisms: SEEP contrast.

Over the years, after tens of thousands of observations of the fMRI response and the testing of various pulse sequence parameters, some investigators have claimed that the signal does not always behave in a BOLD-like manner. One interesting example appeared about 15 years ago, when Stroman et al. (48), investigating the spinal cord, failed to find a clear BOLD signal. Using a T2-weighted sequence, they claimed to see a signal change that showed no TE dependence and was therefore not T2-based. Rather, they claimed it was based on proton density changes – but not perfusion. It was also, curiously, most prevalent in the spinal cord.

The origin of this signal has not been completely untangled and SEEP contrast has disappeared from the current literature. Those investigating spinal cord activation have been quite successful using standard BOLD contrast.

#19: Contrast Mechanisms: Activation-Induced Diffusion Changes.

Denis Le Bihan is a pioneer in fMRI and diffusion imaging. In the early 1990’s, he was an originator of diffusion tensor mapping (49). Earlier, he had advanced the concept of intra-voxel incoherent motion (IVIM) (50, 51). The idea behind IVIM is that with extremely low levels of diffusion weighting, a pulse sequence may become selectively sensitized to randomly, slowly flowing blood rather than to free water diffusion. The pseudo-random capillary network supports blood flow patterns that may resemble, imperfectly, rapid diffusion, and thus could be imaged using gradients sensitized to this high apparent diffusion rate of random flow rather than to pure diffusion. This concept, advanced in the late 1980’s, excited the imaging community because it suggested that if MRI were sensitive to capillary perfusion, it could be sensitive to activation-induced changes in perfusion. This contrast mechanism, while theoretically sound, was never clearly demonstrated in practice, as relative blood volume is only about 2%, and sensitizing diffusion weighting to capillary perfusion also, unfortunately, sensitizes the sequence to CSF pulsation and motion in the brain.

Le Bihan emerged several years later with yet another potential functional contrast, sensitive not to hemodynamics but, hypothetically, to activation-induced cell swelling. He claimed that diffusion weighting revealed measurable decreases in the diffusion coefficient in the brain upon activation. The proposed mechanism is that active neurons swell, increasing the intracellular water content. High levels of diffusion weighting are used to detect the resulting subtle shifts in water distribution: a shift of water from extracellular space, which has a slightly higher diffusion coefficient, to intracellular space, which has a slightly lower diffusion coefficient, would cause an increase in signal in a highly diffusion-weighted sequence.
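A toy two-compartment calculation shows why the predicted effect is both real and small. All numbers below are illustrative assumptions (literature-range diffusivities and fractions), not Le Bihan’s actual values: shifting 2% of the water from the faster extracellular pool to the slower intracellular pool raises the heavily diffusion-weighted signal by only about 1%.

```python
import math

# Assumed illustrative diffusivities (mm^2/s) -- not measured values.
D_INTRA, D_EXTRA = 0.7e-3, 1.2e-3

def dw_signal(b, f_intra):
    """Two-compartment diffusion-weighted signal (normalized to S(b=0)=1)."""
    return (f_intra * math.exp(-b * D_INTRA)
            + (1 - f_intra) * math.exp(-b * D_EXTRA))

b = 1800.0                            # heavy diffusion weighting, s/mm^2
rest = dw_signal(b, f_intra=0.80)     # baseline intracellular fraction
active = dw_signal(b, f_intra=0.82)   # swelling shifts 2% of water inward
pct_change = 100 * (active - rest) / rest
print(round(pct_change, 2))           # ~1.3% signal increase
```

An effect of this size must be found amid physiologic noise and the vascular signal changes that accompany activation, which is consistent with the mixed replication record described below.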

While Le Bihan has published these findings (52, 53), the idea has not achieved wide acceptance. First, the effect itself is relatively small and noisy, if present at all. Second, papers have been published demonstrating that the mechanism behind this signal is vascular rather than neuronal (54). Lastly, and perhaps most importantly, many groups, including my own, have tried this approach and come up with null results. If the technique gives noisy and inconsistent results, it is not likely to compete with BOLD, regardless of how selective it is to neuronal effects. Of course, it’s always worthwhile to work on advancing such methodology!

#20: Contrast mechanisms: Neuronal Current Imaging.

In the late 90’s it was proposed that MRI might be directly sensitive to the electric currents produced by neuronal activity. A current carried by a wire sets up a magnetic field around the wire. These magnetic fields, when superimposed on the primary magnetic field of the scanner, can cause NMR phase shifts or, if the wires are very small and randomly distributed, NMR phase dispersion – a signal attenuation. In the brain, dendrites and white matter tracts behave as wires that carry current. Basic models have calculated that the field in the vicinity of these fibers can be as high as 0.2 nT (55). MEG, in fact, does detect these subtle magnetic fields as they fall off with distance from the head; at the skull surface the fields are on the order of 100 fT, implying that at the source they are on the order of 0.1 nT.
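A back-of-the-envelope calculation makes the detection problem concrete. Assuming a 0.2 nT local field sustained for 50 ms (both values are rough figures of the kind quoted above), the accumulated NMR phase is a small fraction of a degree:

```python
import math

GAMMA_HZ = 42.58e6          # proton gyromagnetic ratio, Hz/T
delta_b = 0.2e-9            # ~0.2 nT field near an active fiber (assumed)
duration = 50e-3            # assume the field persists ~50 ms

phase_rad = 2 * math.pi * GAMMA_HZ * delta_b * duration
phase_deg = math.degrees(phase_rad)
print(round(phase_deg, 3))  # ~0.153 degrees of accumulated phase
```

A phase shift this small sits well below typical physiologic phase fluctuations, which is the quantitative heart of why the attempts described next have struggled.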

Over the last 15 years, work has continued around the world toward the goal of using MRI to detect neuronal currents. This work includes attempts to observe rapid activation-induced phase shifts and magnitude shifts in susceptibility weighted sequences, as the superimposed field distortion would cause either a phase shift or dephasing, depending on the geometry. Using these methods, no conclusive results have been reported in vivo. Other attempted methods include “Lorentz” imaging (56). Here the hypothesis is that when a current passes through a nerve within a magnetic field, a net torque is produced, causing the nerve to move a small amount – potentially detectable by well-timed diffusion weighting. Again, no clear results have emerged. More recent approaches are based on the hypothesis that the longitudinal relaxation properties of spins may be affected if in resonance with intrinsic oscillations in the brain, such as alpha (10 Hz) frequencies (57, 58). Spin-locking approaches using adiabatic pulses at these specific frequencies aim to observe changes based on neuronal activity; in this manner, maps of the predominant oscillating frequency might be made. Again, such attempts have produced suggestive but not conclusive results.

Many are working on methods to detect neuronal currents directly; however, by most calculations, the effect is likely an order of magnitude too small. Coupled with the fact that physiologic noise and even BOLD contrast would tend to overwhelm neuronal current effects, the challenge remains daunting. Such approaches would require extreme acquisition speed (to detect transient effects that cascade over 200 ms throughout the brain), insensitivity to BOLD effects and physiologic noise, and likely an order of magnitude higher sensitivity overall.

#21: Contrast mechanisms: NMR phase imaging.

The idea of observing activation-induced phase changes rather than magnitude changes is not a new one. Back in the early 90’s it was shown that large vessels show a clear NMR phase change with activation, and it was suggested that this approach might allow clean separation of large-vessel from tissue effects when interpreting BOLD. In general, it is known that changes in susceptibility that are large relative to a voxel induce a net phase shift as well.

The concept of observing phase changes has been revisited. Studies have suggested that large activated regions, while mostly showing magnitude changes, may act as a single large susceptibility perturber and induce a measurable shift in phase. These studies suggest that using both phase and magnitude information would boost both sensitivity and specificity. This approach has not yet caught on in the field. Perhaps one reason is that vendors typically provide only magnitude reconstruction of the scanner data; most users simply don’t have access to the NMR phase information. It may also turn out that the gains are very small, making the extra effort of obtaining double the data and spending twice the time on pre- and post-processing not worthwhile.

#22: fMRI for Lie Detection

Over the past 20 years, fMRI has provided evidence that the brain of someone who is lying is active in distinctly different ways from the brain of someone telling the truth. There is additional prefrontal and parietal activity with story fabrication (59). Papers have demonstrated this effect in many different ways. In fact, studies have shown not only that lie detection is possible but that extraction of hidden knowledge of the truth is also possible using fMRI (60).

Because of this success, companies touting MRI-based lie detection services have cropped up (e.g. No Lie MRI – http://noliemri.com/ – and CEPHOS – http://www.cephoscorp.com/). The problem is that lie detection has never been fully tested on motivated, and at times psychopathic, criminals, and negative results are more common than not. Inconclusive fMRI-based lie detection results, if allowed in court, could bias outcomes in favor of the defense, because a negative or inconclusive result would suggest that the individual is innocent.

The difference between research success and readiness for implementation in public use illustrates much of what the rest of fMRI faces. In real-world applications, there are many variables that can make fMRI nearly uninterpretable. Generalizing from group studies, or from highly controlled studies of motivated volunteers, to individual studies of patients or criminals is extremely challenging, to say the least.

Nevertheless, in spite of scientific, legal, ethical, operational, and social hurdles, machine learning and classification methods may ultimately prove capable of interpreting individual-subject activation for lie detection with actual criminals or in other real-world situations. There may in fact be regions of the brain, or patterns of activity, that distinguish truth from lies regardless of whether the person is a hardened psychopathic criminal or a motivated college student volunteer. No one has done those comparison studies yet.

#23: Does Correlation Imply Connectivity?

In resting state, and sometimes in task-based fMRI, the correlation between voxels or regions is calculated. A growing trend is to replace the word correlation with connectivity. The assumption is that a high temporal correlation between disparate regions of the brain directly implies a high level of functional connectivity, and that any change in correlation between these regions implies a corresponding change in connectivity. As a first approximation, these statements may be considered true; however, there are many situations in which they are not.

First, other processes can lead to a high correlation between disparate regions. Bulk movement, cardiac pulsation, and respiration are three primary sources of artifactual correlation. For the most part these are well dealt with, as motion is relatively easily identified and cardiac pulsation is at a distinct frequency (~1 Hz) that is fortunately far from resting state correlation frequencies (~0.1 Hz). However, aliased cardiac frequencies and respiration-induced correlations are more challenging to remove. Respiration can also create signal changes that manifest as T2* changes, so multi-echo sequences optimized to separate BOLD from non-BOLD effects are less effective at removing respiration effects. Respiration is, however, identifiable by the fact that it is more spatially diffuse than correlations between distinct regions – though separation based on spatial pattern and spatial diffusivity is still extremely difficult to perform robustly and without error.

Modulations in correlation can also occur for a number of reasons. First, when looking at a pair of regions that show correlation, if both signals contain noise (as they all do), an increase in the amplitude of one signal will naturally increase the correlation value, even though no real increase in connectivity has likely occurred. Likewise, if the noise in one or both of the signals increases, there will be a measurable reduction in correlation but likely no change in actual functional connectivity. If the relative latency or shape of one of the responses changes, a change in correlation will occur, perhaps without any change in connectivity. Or suppose an additional frequency is added from another oscillating signal that has nothing to do with the signal driving the “connectivity” between the two regions; again, the correlation between the two signals will be reduced without the connectivity between the regions being altered.
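These amplitude and added-frequency effects are simple to verify numerically. In this toy example (all signals simulated), the underlying coupling between the two regions never changes, yet the measured correlation rises when one signal’s amplitude grows and falls when an unrelated oscillation is added:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 1000
shared = rng.normal(size=n)       # the signal actually coupling the regions
noise_a = rng.normal(size=n)
noise_b = rng.normal(size=n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

baseline = corr(shared + noise_a, shared + noise_b)

# Doubling region A's signal amplitude -- coupling unchanged, correlation up.
amplified = corr(2 * shared + noise_a, shared + noise_b)

# Adding an unrelated oscillation to region B -- coupling unchanged,
# correlation down.
osc = 2 * np.sin(np.linspace(0, 60 * np.pi, n))
with_osc = corr(shared + noise_a, shared + noise_b + osc)

print(round(baseline, 2), round(amplified, 2), round(with_osc, 2))
```

In every case the “connectivity” – the shared driving signal – is identical; only the measured correlation moves.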

None of these issues has been addressed in any systematic manner, as we are still in the early days of figuring out the best ways to cleanly and efficiently extract correlation data. In the future, in order to make more meaningful interpretations of these signals, we will need to control for the potentially confounding effects mentioned above.

#24: The clustering conundrum.

In 2016, a bombshell paper by Eklund et al. (61) identified an error in most of the statistical fMRI packages, including SPM, FSL, and AFNI. Quoting part of the abstract of that paper:

“In theory, we should find 5% false positives (for a significance threshold of 5%), but instead we found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%. These results question the validity of a number of fMRI studies and may have a large impact on the interpretation of weakly significant neuroimaging results.”

The results showed that the packages correctly inferred statistical significance when considering independent voxels. However, when considering clusters of voxels as single activations – as most activations are treated as “clusters,” or spatially correlated activations – the estimations of cluster size, smoothness, or the statistical threshold for what constitutes a “significantly activated” cluster were incorrect, leading to incorrectly large clusters.

The implication of their paper was that for the past 15 years, these commonly used packages have overestimated activation extent. For well-designed studies with highly significant results, this has virtually no effect on how the results are interpreted: the same conclusions would likely have been made. Most studies do not rely on absolute cluster size for their conclusions; instead they draw conclusions based on the center of mass of activation, or on whether a region was activated at all. Again, such studies would not be significantly affected.

Perhaps the greatest implications are for pooled data in large data sets. If incorrectly large clusters are averaged across thousands of studies, then errors in the interpretation of activation extent may crop up.

Unfortunately, the paper also went on to make a highly emotive statement: “These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results.” In a later correction, the authors removed the figure of 40,000 papers, but the media had already picked it up, along with the sensational claim that most fMRI studies are “wrong” – which is itself completely untrue by most definitions of the word. The studies may simply have slightly overestimated the size of activation. If a paper relied on this error to reach a conclusion, it was treading on thin statistical ice in the first place and would likely be in question anyway. Most activations – and most of the voxels in the clusters found – are highly significant.

After many popular articles raised concern in the public, the issue eventually died down. Most of the scientists in the field who understood the issue were completely unfazed as the slightly larger clusters did not have any influence on the conclusions drawn by their papers or papers that they relied on for guiding their work.

This brings up a broader issue related to statistical tests. All statistical tests involve estimates of the nature of the signal and the noise, and so by definition are always somewhat “wrong” – but they are close enough. Much more goes into the proper interpretation of results, and most seasoned fMRI practitioners have learned not to over-interpret aspects of the results that are not extremely robust.

While the Eklund paper performed a service to the field by alerting it to a widely propagated error, it also raised false concerns about the robustness and reproducibility of fMRI for drawing inferences about brain activation. Yes, brain activation extent has been slightly overestimated, but no, most of the papers produced using these analyses do not need to change their conclusions in any manner.
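The nonparametric alternative favored by Eklund and colleagues is easy to sketch. Rather than relying on parametric assumptions about the smoothness of the noise, one can simulate (or permute) null maps, record the largest supra-threshold cluster in each, and take a high percentile of that distribution as the cluster-extent threshold. The toy below does this for a 1-D “brain” with Gaussian-smoothed noise; it is a minimal illustration of the idea, not the implementation used in any actual fMRI package, and all sizes and thresholds are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth(x, fwhm):
    # Gaussian smoothing via convolution (toy 1-D "brain map")
    sigma = fwhm / 2.355
    k = np.arange(-30, 31)
    kern = np.exp(-k**2 / (2 * sigma**2))
    kern /= kern.sum()
    return np.convolve(x, kern, mode="same")

def max_cluster_size(zmap, zthr=2.3):
    # Size of the largest run of contiguous supra-threshold voxels
    best = run = 0
    for above in zmap > zthr:
        run = run + 1 if above else 0
        best = max(best, run)
    return best

# Build an empirical null distribution of max cluster size
# under spatially smooth noise (no true activation anywhere).
n_vox, n_sim, fwhm = 1000, 500, 6.0
null_max = []
for _ in range(n_sim):
    noise = smooth(rng.standard_normal(n_vox), fwhm)
    noise /= noise.std()  # re-standardize to unit variance
    null_max.append(max_cluster_size(noise))

# Cluster-extent threshold controlling familywise error at ~5%:
# only clusters larger than this are declared significant.
k_thresh = int(np.percentile(null_max, 95))
```

The key point of the sketch is that the threshold comes from the data-driven null distribution itself, so it remains valid even when the noise violates the parametric smoothness assumptions that inflated cluster sizes in the packages Eklund et al. examined.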

#25: The issue of reproducibility

While fMRI is a robust and reproducible method, there is room for improvement. In science generally, it has been suggested that up to 70% of published results have failed to be reproduced. This high fraction may be related in part to what we mean by “successfully reproduced,” as well as to the pressure to publish findings that push the limits of reasonable interpretation. Some might argue that this number is actually an indication of the health of the scientific process, by which a study’s result either lives or dies depending on whether it is replicated. If a study cannot be replicated, its conclusions generally fade from acceptance.

Russ Poldrack of Stanford is spearheading an effort to increase the transparency and reproducibility of fMRI research as head of the Stanford Center for Reproducible Neuroscience. The goal is to further increase the reproducibility of fMRI and thereby move the field toward a fuller realization of its potential, with less wasted work in the long run. He specifically encourages more replication papers; more shared data and code; more “data papers” that contribute valuable data sets for re-analysis; and more “registered studies,” in which the hypotheses and methods are stated up front before any data are collected. All of these approaches will make the process of doing science with fMRI more transparent and better able to be leveraged by a growing number of researchers as they develop better processing methods or generate more penetrating hypotheses.

#26: Dynamic Connectivity Changes

The most recent “controversy” in the field of fMRI revolves around the interpretation of resting state scans. In the past, maps of resting state connectivity were typically created from entire time series lasting up to 30 minutes, under the assumption that brain connectivity remained roughly constant over these long periods. There is evidence that this assumption is not correct: the connectivity profile of the brain has been shown to change fluidly over time. Practitioners now regard individual fMRI scans as rich 4D data sets with meaningful functional connectivity dynamics (dFC), requiring updated models able to accommodate this additional time-varying dimension. For example, individual scans are often described in terms of a limited set of recurring, short-duration (tens of seconds), whole-brain FC configurations termed FC states. Metrics describing their dwell times and the ordering and frequency of transitions between them can then be used to quantify different aspects of empirically observed dFC. Many questions remain, both about the etiology of empirically observed FC dynamics and about the ability of models such as FC states to accurately capture behaviorally, cognitively, and clinically relevant dynamic phenomena.
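The FC-states pipeline described above can be sketched in a few steps: compute a correlation matrix within each sliding window, cluster the window-wise connectivity patterns into a small number of recurring states, and read off dwell times from the resulting state sequence. The toy below uses synthetic data whose coupling structure switches halfway through the “scan” as a hypothetical stand-in for real resting-state time series; the window length, step, and number of states are arbitrary example values, not recommended settings.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)

# Synthetic data: 4 "regions", 300 time points; the correlation
# structure switches halfway through the scan.
T, R = 300, 4
ts = rng.standard_normal((T, R))
ts[:150, 1] += ts[:150, 0]   # regions 0-1 coupled in first half
ts[150:, 3] += ts[150:, 2]   # regions 2-3 coupled in second half

# Sliding-window FC: one vectorized correlation matrix
# (upper triangle only) per window position.
win, step = 40, 5
iu = np.triu_indices(R, k=1)
fc = np.array([np.corrcoef(ts[s:s + win].T)[iu]
               for s in range(0, T - win + 1, step)])

# Cluster window-wise FC patterns into k recurring "FC states";
# each window gets a state label.
k = 2
_, labels = kmans2 = kmeans2(fc, k, minit="++", seed=2)

# Dwell times: lengths of consecutive runs of the same state label.
dwell, run = [], 1
for a, b in zip(labels[:-1], labels[1:]):
    if a == b:
        run += 1
    else:
        dwell.append(run)
        run = 1
dwell.append(run)
```

From `labels` one can also count transitions between states and their relative frequencies, which are the other dFC summary metrics mentioned above. Real analyses add many refinements (tapered windows, regularized covariance estimation, selecting `k` from the data), all omitted here for brevity.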

Despite reports of dFC in resting humans, macaques, and rodents, no consensus exists regarding the underlying meaning or significance of dFC at rest. Those who hypothesize that it is neuronally relevant have explored resting dFC in the context of consciousness, development, and clinical disorders. These studies have shown, to name a few examples, how the complexity of dFC decreases as consciousness levels decline, how dynamic inter-regional interactions can be used to predict brain maturity, and how dFC derivatives (e.g., dwell times) can be diagnostically informative for conditions such as schizophrenia, mild cognitive impairment, and autism.

Yet many others have raised valid concerns regarding the ability of current dFC estimation methods to capture neuronally relevant dFC at rest. These concerns include the lack of appropriate null models to discern real dynamics from sampling variability; improper pre-processing leading to spurious dynamics; and excessive temporal smoothing (a real concern for the sliding window techniques used to estimate FC states), which hinders our ability to capture the sharp, rapid transitions of interest (62). Finally, some have even stated that resting dFC is primarily a manifestation of sampling variability, residual head motion artifacts, and fluctuations in sleep state, and as such is mostly irrelevant.
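The first concern, distinguishing real dynamics from sampling variability, is commonly addressed with surrogate-data null models. One standard choice is phase randomization: surrogates preserve each series' power spectrum (and hence its stationary autocorrelation) while destroying any genuine time-varying coupling, so the sliding-window FC variability they exhibit reflects sampling variability alone. The sketch below tests whether the observed variance of windowed correlations exceeds this stationary null; it is a simplified illustration with toy data, and the window parameters and surrogate count are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(3)

def phase_randomize(x, rng):
    # Surrogate with the same power spectrum (hence the same
    # stationary autocorrelation) but randomized Fourier phases.
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0  # keep the DC component real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

def sliding_corr_var(a, b, win=40, step=5):
    # Variance of the sliding-window correlation between two series
    r = [np.corrcoef(a[s:s + win], b[s:s + win])[0, 1]
         for s in range(0, len(a) - win + 1, step)]
    return np.var(r)

# Two weakly coupled toy "regions"
T = 400
a = rng.standard_normal(T)
b = 0.4 * a + rng.standard_normal(T)

observed = sliding_corr_var(a, b)

# Null distribution from surrogate pairs: how much window-to-window
# FC variability does a stationary process produce by chance?
null = [sliding_corr_var(phase_randomize(a, rng), phase_randomize(b, rng))
        for _ in range(200)]
p = np.mean(np.array(null) >= observed)
```

A small `p` here would suggest the windowed correlations fluctuate more than stationarity plus sampling variability can explain; for these toy stationary inputs no such excess is expected. Real applications use multivariate surrogates and must also rule out motion and arousal confounds, which no null model of this kind addresses.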

One reason for such discrepant views is that it is challenging to demonstrate the potential cognitive correlates of resting dFC, given the unconstrained cognitive nature of rest and the scarcity of methods for inferring the cognitive correlates of whole-brain FC patterns. When subjects are instructed to rest quietly, retrospective reports show that they often engage in a succession of self-paced cognitive processes, including inner speech, musical experience, visual imagery, episodic memory recall, future planning, mental manipulation of numbers, and periods of heightened somatosensory sensation. Reconfigurations of FC patterns during rest could, to some extent, be a manifestation of this flow of covert, self-paced cognition, even if other factors (e.g., random exploration of cognitive architectures, fluctuations in autonomic activity and arousal levels) also contribute.

Research in fMRI is ongoing to determine the neural correlates of dynamic connectivity changes, to establish the best methods for extracting this rich information, and to learn how it may be used to better understand ongoing cognition in healthy and clinical populations and individuals.

In conclusion, other interesting controversies also come to mind: BOLD activation in white matter, BOLD tensor mapping, the problem of reverse inference from fMRI data, the difference between mapping connectivity (ahem, correlation) and mapping fMRI magnitude, inferring causal influences between brain regions from fMRI activation, and non-BOLD contrast mechanisms such as temperature or elasticity. Challenges also come to mind, including the robust extraction of individual similarities and differences from fMRI data, the problem of how best to parcellate brains, the creation of fMRI-based “biomarkers,” and the potential utility of gradient coils.

The field of fMRI has advanced and further defined itself through many of these controversies. In my opinion, they indicate that the field is still growing and remains robust and dynamic. In the end, a consensus is usually reached, pushing the understanding forward. Controversies are healthy and welcome. Let’s keep them coming!

References 

  1. Roy CS, Sherrington CS. On the regulation of the blood-supply of the brain. J Physiol. 1890;11:85-108.
  2. Fox PT, Raichle ME. Focal physiological uncoupling of cerebral blood flow and oxidative metabolism during somatosensory stimulation in human subjects. Proc Natl Acad Sci USA. 1986;83:1140-4.
  3. Fox PT. The coupling controversy. NeuroImage. 2012;62(2):594-601.
  4. Ogawa S, Lee TM, Nayak AS, Glynn P. Oxygenation-Sensitive Contrast in Magnetic-Resonance Image of Rodent Brain at High Magnetic-Fields. Magnetic Resonance in Medicine. 1990;14(1):68-78.
  5. Menon RS. The great brain versus vein debate. NeuroImage. 2012;62(2):970-4.
  6. Ogawa S, Tank DW, Menon R, Ellermann JM, Kim SG, Merkle H, et al. Intrinsic Signal Changes Accompanying Sensory Stimulation – Functional Brain Mapping with Magnetic-Resonance-Imaging. Proceedings of the National Academy of Sciences of the United States of America. 1992;89(13):5951-5.
  7. Menon RS, Ogawa S, Tank DW, Ugurbil K. 4 Tesla Gradient Recalled Echo Characteristics of Photic Stimulation-Induced Signal Changes in the Human Primary Visual-Cortex. Magnetic Resonance in Medicine. 1993;30(3):380-6.
  8. Yacoub E, Harel N, Uğurbil K. High-field fMRI unveils orientation columns in humans. Proceedings of the National Academy of Sciences of the United States of America. 2008;105(30):10607-12.
  9. Huber L, Handwerker DA, Jangraw DC, Chen G, Hall A, Stuber C, et al. High-Resolution CBV-fMRI Allows Mapping of Laminar Activity and Connectivity of Cortical Input and Output in Human M1. Neuron. 2017;96(6):1253-63 e7.
  10. Birn RM, Bandettini PA. The effect of stimulus duty cycle and “off” duration on BOLD response linearity. NeuroImage. 2005;27(1):70-82.
  11. Logothetis NK, Pauls J, Augath M, Trinath T, Oeltermann A. Neurophysiological investigation of the basis of the fMRI signal. Nature. 2001;412(6843):150-7.
  12. Hu XP, Le TH, Ugurbil K. Evaluation of the early response in fMRI in individual subjects using short stimulus duration. Magnetic Resonance in Medicine. 1997;37(6):877-84.
  13. Hu X, Yacoub E. The story of the initial dip in fMRI. NeuroImage. 2012;62(2):1103-8.
  14. Buxton RB. Dynamic models of BOLD contrast. NeuroImage. 2012;62(2):953-61.
  15. van Zijl PC, Hua J, Lu H. The BOLD post-stimulus undershoot, one of the most debated issues in fMRI. NeuroImage. 2012;62(2):1092-102.
  16. Devor A, Tian P, Nishimura N, Teng IC, Hillman EM, Narayanan SN, et al. Suppressed neuronal activity and concurrent arteriolar vasoconstriction may explain negative blood oxygenation level-dependent signal. The Journal of neuroscience : the official journal of the Society for Neuroscience. 2007;27(16):4452-9.
  17. Krueger G, Granziera C. The history and role of long duration stimulation in fMRI. NeuroImage. 2012;62(2):1051-5.
  18. Frahm J, Kruger G, Merboldt KD, Kleinschmidt A. Dynamic uncoupling and recoupling of perfusion and oxidative metabolism during focal brain activation in man. Magnetic Resonance in Medicine. 1996;35(2):143-8.
  19. Bandettini PA, Kwong KK, Davis TL, Tootell RBH, Wong EC, Fox PT, et al. Characterization of cerebral blood oxygenation and flow changes during prolonged brain activation. Human brain mapping. 1997;5(2):93-109.
  20. Bandettini PA. The temporal resolution of Functional MRI. In: Moonen C, Bandettini P, editors. Functional MRI: Springer – Verlag; 1999. p. 205-20.
  21. Menon RS, Luknowsky DC, Gati JS. Mental chronometry using latency-resolved functional MRI. Proceedings of the National Academy of Sciences of the United States of America. 1998;95(18):10902-7.
  22. Menon RS, Gati JS, Goodyear BG, Luknowsky DC, Thomas CG. Spatial and temporal resolution of functional magnetic resonance imaging. Biochemistry and Cell Biology-Biochimie Et Biologie Cellulaire. 1998;76(2-3):560-71.
  23. Bellgowan PSF, Saad ZS, Bandettini PA. Understanding neural system dynamics through task modulation and measurement of functional MRI amplitude, latency, and width. Proceedings of the National Academy of Sciences of the United States of America. 2003;100(3):1415-9.
  24. Misaki M, Luh WM, Bandettini PA. Accurate decoding of sub-TR timing differences in stimulations of sub-voxel regions from multi-voxel response patterns. NeuroImage. 2013;66:623-33.
  25. Formisano E, Linden DEJ, Di Salle F, Trojano L, Esposito F, Sack AT, et al. Tracking the mind’s image in the brain I: Time-resolved fMRI during visuospatial mental imagery. Neuron. 2002;35(1):185-94.
  26. Lewis LD, Setsompop K, Rosen BR, Polimeni JR. Fast fMRI can detect oscillatory neural activity in humans. Proceedings of the National Academy of Sciences of the United States of America. 2016;113(43):E6679-E85.
  27. McKiernan K, D’Angelo B, Kucera-Thompson JK, Kaufman J, Binder J. Task-induced deactivation correlates with suspension of task-unrelated thoughts: An fMRI investigation. Journal of cognitive neuroscience. 2002:96-.
  28. Buckner RL. The serendipitous discovery of the brain’s default network. NeuroImage. 2012;62(2):1137-45.
  29. Shmuel A, Yacoub E, Pfeuffer J, Van de Moortele PF, Adriany G, Hu XP, et al. Sustained negative BOLD, blood flow and oxygen consumption response and its coupling to the positive response in the human brain. Neuron. 2002;36(6):1195-210.
  30. Shmuel A, Leopold D. Neuronal correlates of spontaneous fluctuations in fMRI signals in monkey visual cortex: implications for functional connectivity at rest. Human brain mapping. 2008;current issue.
  31. Ma Y, Shaik MA, Kozberg MG, Kim SH, Portes JP, Timerman D, et al. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons. Proceedings of the National Academy of Sciences of the United States of America. 2016;113(52):E8463-E71.
  32. Smith SM, Nichols TE, Vidaurre D, Winkler AM, Behrens TE, Glasser MF, et al. A positive-negative mode of population covariation links brain connectivity, demographics and behavior. Nature neuroscience. 2015;18(11):1565-7.
  33. Gonzalez-Castillo J, Saad ZS, Handwerker DA, Inati SJ, Brenowitz N, Bandettini PA. Whole-brain, time-locked activation with simple tasks revealed using massive averaging and model-free analysis. Proceedings of the National Academy of Sciences of the United States of America. 2012;109(14):5487-92.
  34. Vul E, Harris C, Winkielman P, Pashler H. Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition. Perspect Psychol Sci. 2009;4(3):274-90.
  35. Fox MD, Zhang D, Snyder AZ, Raichle ME. The global signal and observed anticorrelated resting state brain networks. Journal of neurophysiology. 2009;101(6):3270-83.
  36. Fox MD, Snyder AZ, Vincent JL, Corbetta M, Van Essen DC, Raichle ME. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. Proceedings of the National Academy of Sciences of the United States of America. 2005;102(27):9673-8.
  37. Murphy K, Birn RM, Handwerker DA, Jones TB, Bandettini PA. The impact of global signal regression on resting state correlations: Are anti-correlated networks introduced? NeuroImage. 2009;44(3):893-905.
  38. Saad ZS, Gotts SJ, Murphy K, Chen G, Jo HJ, Martin A, et al. Trouble at rest: how correlation patterns and group differences become distorted after global signal regression. Brain connectivity. 2012;2(1):25-32.
  39. Wong CW, Olafsson V, Tal O, Liu TT. The amplitude of the resting-state fMRI global signal is related to EEG vigilance measures. NeuroImage. 2013;83:983-90.
  40. Liu TT, Nalci A, Falahpour M. The global signal in fMRI: Nuisance or Information? NeuroImage. 2017;150:213-29.
  41. Chen M, Han J, Hu X, Jiang X, Guo L, Liu T. Survey of encoding and decoding of visual stimulus via FMRI: an image analysis perspective. Brain imaging and behavior. 2014;8(1):7-23.
  42. Misaki M, Luh WM, Bandettini PA. The effect of spatial smoothing on fMRI decoding of columnar-level organization with linear support vector machine. Journal of Neuroscience Methods. 2013;212(2):355-61.
  43. Sirotin YB, Das A. Anticipatory haemodynamic signals in sensory cortex not predicted by local neuronal activity. Nature. 2009;457(7228):475-9.
  44. Muthukumaraswamy SD, Singh KD. Spatiotemporal frequency tuning of BOLD and gamma band MEG responses compared in primary visual cortex. NeuroImage. 2008.
  45. Bandettini PA, Wong EC, Jesmanowicz A, Hinks RS, Hyde JS. Spin-echo and gradient-echo EPI of human brain activation using BOLD contrast: A comparative study at 1.5 T. NMR in biomedicine. 1994;7(1-2):12-20.
  46. Yacoub E, Shmuel A, Pfeuffer J, Van De Moortele PF, Adriany G, Andersen P, et al. Imaging brain function in humans at 7 Tesla. Magnetic Resonance in Medicine. 2001;45(4):588-94.
  47. Duong TQ, Yacoub E, Adriany G, Hu X, Ugurbil K, Vaughan JT, et al. High-resolution, spin-echo BOLD, and CBF fMRI at 4 and 7 T. Magnetic Resonance in Medicine. 2002;48(4):589-93.
  48. Stroman PW, Krause V, Malisza KL, Frankenstein UN, Tomanek B. Extravascular proton-density changes as a Non-BOLD component of contrast in fMRI of the human spinal cord. Magnetic Resonance in Medicine. 2002;48(1):122-7.
  49. Douek P, Turner R, Pekar J, Patronas N, Le Bihan D. MR color mapping of myelin fiber orientation. Journal of computer assisted tomography. 1991;15(6):923-9.
  50. Le Bihan D, Breton E, Lallemand D, Grenier P, Cabanis E, Laval-Jeantet M. MR imaging of intravoxel incoherent motions: application to diffusion and perfusion in neurologic disorders. Radiology. 1986;161(2):401-7.
  51. Le Bihan D, Turner R, Moonen CT, Pekar J. Imaging of diffusion and microcirculation with gradient sensitization: design, strategy, and significance. Journal of magnetic resonance imaging : JMRI. 1991;1(1):7-28.
  52. Le Bihan D, Urayama SI, Aso T, Hanakawa T, Fukuyama H. Direct and fast detection of neuronal activation in the human brain with diffusion MRI. Proceedings of the National Academy of Sciences of the United States of America. 2006;103(21):8263-8.
  53. Kohno S, Sawamoto N, Urayama SI, Aso T, Aso K, Seiyama A, et al. Water-diffusion slowdown in the human visual cortex on visual stimulation precedes vascular responses. Journal of Cerebral Blood Flow and Metabolism. 2009;29(6):1197-207.
  54. Miller KL, Bulte DP, Devlin H, Robson MD, Wise RG, Woolrich MW, et al. Evidence for a vascular contribution to diffusion FMRI at high b value. Proceedings of the National Academy of Sciences of the United States of America. 2007;104(52):20967-72.
  55. Bandettini PA, Petridou N, Bodurka J. Direct detection of neuronal activity with MRI: Fantasy, possibility, or reality? Applied Magnetic Resonance. 2005;29(1):65-88.
  56. Truong TK, Avram A, Song AW. Lorentz effect imaging of ionic currents in solution. Journal of Magnetic Resonance. 2008;191(1):93-9.
  57. Buracas GT, Liu TT, Buxton RB, Frank LR, Wong EC. Imaging periodic currents using alternating balanced steady-state free precession. Magnetic Resonance in Medicine. 2008;59(1):140-8.
  58. Witzel T, Lin FH, Rosen BR, Wald LL. Stimulus-induced Rotary Saturation (SIRS): a potential method for the detection of neuronal currents with MRI. NeuroImage. 2008;42(4):1357-65.
  59. Ofen N, Whitfield-Gabrieli S, Chai XJ, Schwarzlose RF, Gabrieli JD. Neural correlates of deception: lying about past events and personal beliefs. Social cognitive and affective neuroscience. 2017;12(1):116-27.
  60. Yang Z, Huang Z, Gonzalez-Castillo J, Dai R, Northoff G, Bandettini P. Using fMRI to decode true thoughts independent of intention to conceal. NeuroImage. 2014.
  61. Eklund A, Nichols TE, Knutsson H. Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences of the United States of America. 2016;113(28):7900-5.
  62. Shakil S, Lee CH, Keilholz SD. Evaluation of sliding window correlation performance for characterizing dynamic functional connectivity and brain states. Neuroimage. 2016;133:111-28.