Thoughts about the brain from Peter Bandettini and Eric Wong
Author: Peter Bandettini
Peter Bandettini has been working in functional brain imaging since he started his Ph.D. thesis work on fMRI method development in 1991 in the Biophysics Department at the Medical College of Wisconsin (MCW). After completing a post doc at Massachusetts General Hospital in 1996 and a brief Assistant Professorship at MCW, he became Chief of Functional Imaging Methods and Director of the Functional MRI Facility at the National Institutes of Health in Bethesda, MD. He was Editor-In-Chief of NeuroImage from 2011-2017 and has been active in both the MRI community (International Society for Magnetic Resonance in Medicine) and the Brain Imaging Methods community (Organization for Human Brain Mapping). All his views and posts are his own.
Below is my annual summary of some of the best books I read in the past year. There are a few themes that weave their way throughout: the brain, running, history, and biographies. Here I also try to rank them loosely – the first ones are my top choices. Enjoy!
The Strange Order of Things by Antonio Damasio
I heard Damasio talk a few times and was not really impressed; however, I decided to give his book a try since his thoughts seem similar to those in Mark Solms’ book “Hidden Spring,” and am I glad I did. He is a truly gifted writer. This book was pure brilliance – perhaps the best book I have read in the last 3 years. He describes the fundamentally homeostatic role that consciousness (a sense of self) plays, and then goes into the idea of civilization and culture as a natural progression of that homeostatic process. He argues that culture is just a further manifestation of homeostasis and is fundamental to the maintenance of civilization, humanity’s most powerful invention.
The Man from the Future: The Visionary Life of John von Neumann by Ananyo Bhattacharya
This is a fascinating book on John von Neumann – perhaps the flat-out smartest (in terms of raw horsepower) and most influential thinker on the planet during his time. It helped me appreciate all his contributions and feel pure awe at how he was both overwhelmingly smart and quick as well as uniquely creative.
What’s Our Problem? by Tim Urban
I love his “Wait But Why” blog. This was a fun, irreverent, but very insightful sizing up of our uniquely turbulent social/political situation today. He frames dialogue in terms of high vs. low level rather than right vs. wrong, which is useful. The second half of the book lays out in great detail his view of Social Justice Fundamentalism as a movement that started with good intentions but has gone off the rails, as any movement can when the line of “ends justifying the means” is crossed. Good food for thought and perspective. I don’t know enough in this realm to have a well-thought-out perspective, but am happy to take it all in!
Draft No. 4 by John McPhee
This is a unique book on the writing process of John McPhee, who wrote for The New Yorker. He shares anecdotes, stories, and advice. I loved the writing and some of the insights into his thought process on how to get it just right. I couldn’t put it down.
Today We Die a Little by Richard Askwith
A great book about the life of the Czech runner Emil Zatopek. At his peak, he was truly a beast, winning the 5K, 10K, and marathon (his first marathon, entered on a whim) at the 1952 Olympics. He also pioneered, in his uniquely hard-core manner, the concept of interval training, doing up to 100 repeats of 400m in a session! Crazy! I love it! While I liked the running descriptions, the depiction of the wider context of the post-WWII situation was truly eye-opening.
The Idea of the Brain by Matthew Cobb
Starts well! A brilliant and densely packed history of our understanding of the brain. Overall a great perspective piece. It starts to fall apart a bit near the end as it reaches more recent history and his own biases enter in more prominently. I also didn’t appreciate his dismissal of fMRI in parts (but that’s my bias!), as some complaints were slightly unfair. Later he talks up fMRI, so he redeemed himself somewhat 🙂.
Rethinking Consciousness by Michael S. A. Graziano (A)
Reading popular books by prominent scientists on consciousness is a secret (or not so secret) hobby of mine. I like Graziano’s thoughts, and while there are a few areas where his construct is not so airtight, I think he’s onto something, as the “hard problem” disappears with his attention schema construct of our sense of self: nested, external and internal world models to which we attend.
The Future of Seeing by Dan Sodickson (A)
I was asked to review this by a publisher, and hopefully it will be coming out soon! Dan is a luminary in the field of MRI, having won the ISMRM Gold Medal for co-inventing parallel imaging approaches. He’s a brilliant physicist and radiologist. Now I know he’s also a great writer of popular books, transmitting his infectious enthusiasm and deep insights. This one is about the future of imaging – with a heavy emphasis on medical imaging. It was packed with information and an inspiring read! I actually listened to it by uploading the pdf to my Speechify app and playing it while driving to and from the National Senior Games in Pittsburgh.
Shakespeare by Bill Bryson (A)
Bill Bryson is a super entertaining, engaging, and deeply scholarly writer who exudes irreverence with every sentence. I’m personally fascinated by Shakespeare, as I feel he was a super genius who single-handedly influenced the English language and informed the human condition in a once-in-a-millennium way (I know, I’m not really sticking my neck out with this opinion). This book lays out what is known and speculated about him in a way only Bryson can pull off.
Embrace the Suck by Brent Gleeson (A)
This is by a former Navy SEAL and is all about developing resilience. Good stuff. I listened to it on a long drive. Practical, inspiring, solid advice and engaging stories.
Talent by Tyler Cowen
This was recommended to me by Adam Thomas, and it’s all about recognizing talent in the context of hiring or pretty much anything else. I interview many people and am always trying to figure out the best things to ask or look for to really get at whether someone would be great for the job. This delivered some solid, actionable advice.
Never Finished by David Goggins (A)
Listened to this audiobook on my runs. It’s a follow-up to his first book, and while good, it didn’t have the same “punch” as the first. Goggins is both inspiring and a curiosity. I’m not sure I resonate with what motivates him (much of it has to do with deep anger), but hearing about his hard-core exploits is fun and inspiring.
Indistractable by Nir Eyal
We all suffer from distraction and have challenges controlling our attention. I figured I would give it a read, as it had good reviews and promised advice on helping kids become less distracted – something I’m always looking for as well. Overall, a good book with solid, usable advice. Insightful, but nothing fundamentally new.
Feeling and Knowing by Antonio Damasio (A)
As much as I loved Damasio’s book that I read earlier, and as much as I wanted to like this one, I found it too vague and a bit flat. Nothing really new. Perhaps it was the audio format. I had the feeling he was contracted to do this and just whipped something out quickly. I had a hard time paying attention to this one.
Modern Training and Physiology for Middle and Long-Distance Runners by John Davis
Solid, timeless advice and a few good insights, plus unique tidbits that I never knew – including that the famous writer/thinker Joseph Campbell briefly held the Columbia University school record in the half mile!
The Slummer: Quarters Till Death by Geoffrey Simpson
Amateur writing and a strange dystopian setting with undeveloped characters, but the visceral descriptions of running kept me engaged. As a runner, I could relate.
Beyond Illusions by Brad Barton
Brad ran a mind-boggling age-group world record of 4:19 at age 53, so I was interested. Some slightly interesting descriptions of his process, but a pretty below-average book.
Peter Bandettini1, Bruce Fischl2,4, Richard Hoge3, Albert Gjedde5 and Alan Evans3
1 National Institute of Mental Health, Bethesda, MD
2 Martinos Center, Massachusetts General Hospital, Boston, MA
3 McGill University, Montreal, Quebec, CA
4 Department of Radiology, Harvard Medical School, Boston, MA.
5 University of Copenhagen, Copenhagen, DK
Dr. Sean Marrett, a treasured staff scientist of the NIH intramural program’s Functional MRI Facility (FMRIF) for over 20 years, passed away on December 12, 2023, at the age of 62, after a 16-month battle with mesothelioma. The cruel irony is that his illness was discovered weeks before his planned retirement. Sean radiated “joie de vivre” more strongly than anyone we have ever known, and he touched so many within the NIH IRP and the international brain imaging community.
After receiving his Bachelor’s degree in Electrical Engineering from McGill University in 1983, Sean took the position of system manager and programmer under Dr. Alan Evans in the McConnell Brain Imaging Center (BIC) at the Montreal Neurological Institute. It was immediately obvious that Sean was a brilliant mind with a passion for the scientific side of BIC life. He dove into many aspects of the work there, most notably with Drs. Keith Worsley, Evans, and Peter Neelin in their legendary ‘Bunker’ office meetings on the spatial statistics of activation studies with PET and fMRI. Sean later received his Ph.D. in Neuroscience from McGill University, supervised by Dr. Albert Gjedde, on oxidative metabolism in the brain. Albert was a close friend of Sean’s, harboring great respect and affection for him. According to Albert, Sean in many ways inspired the development of PET imaging in Denmark, beginning with the transfer of an early device from Montreal to Copenhagen, and of a second device to Aarhus, which Sean came to work with and use to develop novel imaging methods for mapping oxygen metabolism and blood flow.
He carried out his post-doc at the Massachusetts General Hospital (MGH) NMR Center (now the Martinos Center) from 1997 to 2000, training under Drs. Bruce Rosen, Roger Tootell, and Anders Dale during their seminal fMRI-based research on the human visual system. In his time at MGH, Sean contributed to many projects probing the retinotopic and frequency-tuning characteristics of human early visual cortex. He also made significant contributions to the software that would go on to become the FreeSurfer suite of neuroimaging analysis tools. His postdoctoral research years illustrated Sean’s seminal strengths: he combined wide-ranging curiosity and a deep, broad knowledge of human neuroimaging and neurophysiology with technical skills that enabled him to enhance the research of the many scientists who were fortunate enough to interact with him.
In 2000, he joined the nascent, jointly funded NINDS/NIMH FMRIF as a staff scientist. Sean’s influence has permeated the NIH brain imaging community. He forged the positive, open, and helpful culture that now defines the FMRIF. Over the course of 20 years, Sean helped thousands of people. As the de facto FMRIF manager, he performed a multitude of tasks, including balancing the budget, creating the computational and stimulus infrastructure, and troubleshooting innumerable issues as they came up – all while setting the tone and policy of how the FMRIF is run. He successfully navigated the siting and installation of five of our scanners, including our two recent 7T scanners. Lastly, he collaborated with many groups across several NIH institutes, helping them get the best data possible.
His career spanned the beginnings of functional brain mapping, and nearly every brain imaging scientist from back in the day knew and loved Sean. In Montreal, he was a force of nature, whether engaging in intense scientific debates or carousing with other BICers at the Thompson House student bar. At MGH, his contributions helped surface-based analysis become ubiquitous in the study of human cortical function. He was also part of OHBM history as one of the driving forces behind the OHBM Hackathons, and he embodied the field’s energy going forward. At national and international meetings, Sean would greet new and old colleagues with his radiant smile and good cheer – as if each were the primary person he was looking forward to meeting – always knowing what they were working on and deeply curious to get any updates, personal or professional. His knowledge of the literature was encyclopedic and up to date. This deep grasp of salient information went beyond the literature, as he developed a reputation for having a preternatural awareness of what was going on throughout the NMR Center, Clinical Center, IRP, NIH, and the brain mapping community worldwide. Of all the people we have known, he was among the most intensely curious about literally everything.
In the past decade, his focus of choice was scanner hardware and pulse sequences, and all the possible combinations of resolution, sensitivity, and contrast that the latest in each could produce. This was an area initially well outside his training, but in time he mastered it. When the scanner was open, chances were that he would be there testing a sequence. His favorite meetings, aside from OHBM, were the annual meetings of the ISMRM (International Society for Magnetic Resonance in Medicine) and the RSNA (Radiological Society of North America), where he would talk shop with as many people as he could. In particular, he loved to make his annual day trip to RSNA to take in the latest technology and to engage deeply with the MRI-related vendors. He knew them all on a first-name basis, and he was so engaging and obviously caring that almost everyone considered him a good friend.
Sean’s defining traits were his unassuming openness and genuine interest in others as well as a deep empathy for them. Regarding his work and the people he worked with, he really cared. He had boundless energy to personally engage with everyone he met. He would also remind us that we were surrounded by amazing technology and brilliant people. What more could one want? Throughout his personal and professional life, his constant and radiant smile was that of a kid in a candy store. He was a deeply devoted husband and a very proud father to his two sons. He was also a proud Canadian – or more precisely – a proud “Québécois.” He was equally ready to delve into an intense science discussion or to share a laugh or a good story. He was the first to march unhesitatingly into the ocean that he loved, so he could play in the waves, no matter how cold. During his life, and in particular, during his last year, he traveled widely. Each new location was a source of wonder, joy and excitement. It was clear that he embraced this world with every ounce of his being.
He cherished social gatherings and celebrations, and no matter how trivial or inconsequential they may have been, he always mentioned afterwards, “That was so fun! Just wonderful!” Perhaps the most appropriate summary of his life would be his dancing. Anyone who knew Sean also knew, as truth itself, that whenever there was a dance floor, he was ALWAYS out there, drenched in sweat, radiating joy, fully in the moment, dancing as if that was all that ever mattered – and indeed, he was right.
Sean attending to the delivery of the National Institute of Mental Health Functional MRI Facility’s Siemens Terra 7T, March 30, 2022.
After a bit of a hiatus, I’m finally back to putting out in blog form what I find interesting in the world of brain imaging. I like the idea of keeping up a more regular pace of putting incompletely finalized thoughts out there. There are a few things I want to write about. Some are controversies, some are book reviews, some are summaries of activities in my group, some cover new areas, and some are attempts to frame areas of the field in useful ways. I am also writing a book on the challenges of fMRI, and will be posting each chapter in rough draft as it is completed.
I thought I would start with something that happened to me earlier this week. I will frame the situation briefly. In 2017, I stepped down as Editor-in-Chief of the journal NeuroImage after two very satisfying 3-year terms. Before that I was a Senior Editor, and before that, going back to the early 2000s, a Handling Editor. It was just a wonderful, stimulating experience overall.
After that, Michael Breakspear took over as Editor-in-Chief, followed by Steve Smith. My term ended before the exciting upswing in Open Access journals, which allow free access to readers but charge authors an article processing charge (APC). Most traditional journals have embraced this model, but the fees are generally pretty high – too high for many. Hence the controversy that ensued: Elsevier, which owns NeuroImage, struggled at first to offer an open access option, then set an APC that many felt was too high.
Last year, Steve Smith and his editorial team at NeuroImage resigned. While Elsevier’s APC is about the going rate for similar journals operated by for-profit companies, it is much higher than actual costs and prohibitive for many groups in the brain mapping community. Steve rightly pointed out that NeuroImage was overcharging and told Elsevier that the entire editorial team would resign if the fees were not lowered. Elsevier didn’t budge, so Steve and the entire editorial team resigned and quickly moved to start the journal Imaging Neuroscience with the non-profit MIT Press.
I welcomed and encouraged all of this, as I feel that the landscape of academic publishing is changing and that these fees can be lowered considerably – a first step in the inevitable movement towards new models for curating and distributing scientific research, something I’ll write more about later.
About six months later, NeuroImage is struggling to find people to replace this team, while Imaging Neuroscience is well on its way to thriving. Many kudos to Steve and his group for pulling this transition off so masterfully. Last week, I was surprised and, I have to admit, bemused to receive the following email (modified slightly to keep the sender anonymous):
Dear Peter,
I hope this email finds you well…
(We)..are currently recruiting a new editorial team. We are looking for experienced, well-established academics with the skills and expertise to help us continue supporting the neuroscientific community by publishing high-quality neuroimaging research. In fact, Y has just joined us for his expertise in translational research and MRI acquisition methods.
Therefore, as an fMRI expert and former Editor-In-Chief for NeuroImage, would you be interested in becoming an Associate Editor for NeuroImage? I’m not sure if things have changed since you were Editor-in-Chief, but currently, we are offering Associate Editors the following:
$2000 yearly compensation for handling approximately 40 manuscripts per year
If you run a special issue, authors get a 30% APC discount, and you will have ten free publication credits to share between you and your guest editors.
Free access to NeuroImage publications, Science Direct and Scopus
If you are potentially interested, I would be happy to answer any questions over email, or if you would prefer, we could schedule a call at a time to suit you.
Looking forward to hearing from you.
With best wishes, X
This was surprising and a bit odd on several levels, but rather than just reply “no thanks,” I decided that it was a useful occasion to thrash out my thoughts a bit. I also felt that the editors joining NeuroImage should clearly understand the context of what they are doing from the perspective of a former Editor-in-Chief.
Here is my reply:
Dear X,
I appreciate your reaching out…
When I stepped down as Editor-in-Chief of NeuroImage back in 2017, after two 3-year terms and over 17 years of being associated with the journal as an editor, I was very satisfied, and I am still happy to have moved on to other things – one of which is being Editor-in-Chief of a small open access journal, Aperture Neuro, with an APC no higher than $1000. Therefore, I will have to decline your offer. My reaction to your letter is mixed. On one hand, I appreciate your reaching out and generally want you to be successful. On the other hand, I’m bemused that you think that my 17 years of loyalty – not to NeuroImage but to the editors of NeuroImage and to the brain mapping community – would be an insignificant factor in the face of the wider context of what happened last year, such that I would re-start as an associate editor at a journal from which my former team, my dear colleagues, and my friends all resigned based on a principle that I agree with.
In full disclosure (and it’s all public), I was in close contact with the NeuroImage team before, during, and after they resigned. I encouraged Steve Smith (Editor-in-Chief at the time) to engage with Elsevier about lowering the APC, and when Elsevier would not engage in any meaningful discussion with him, I encouraged him and the entire editorial team to follow through with resigning (as Steve had clearly told them he would if the fees were not changed). While I fully understand that Elsevier is a business and it is generally good practice to set prices based on market forces, I also realize that these fees are being propped up by limited competition, a captive audience, and funding sources that are, so far, agnostic to what labs pay for publishing. In the context of scientific publishing, charging APCs that are two or three times higher than they need to be is exploiting a customer that does not yet have the leverage to change anything, as there are not many other high-quality options (i.e., this situation is an oligopoly of a few big publishing companies relying on well-funded researchers’ need to publish in reputable journals). This is changing, though. What Steve did by resigning was open up another option, thus helping to catalyze change in a positive, inevitable direction.
In general, the current publishing model made sense, to a degree, when a printed journal was published monthly. That was a high-overhead service that was extremely valuable. Now, with electronic publishing, the overhead costs are much lower, and the labor of editors and reviewers has always been essentially free. The reliance is on reputation and such intangibles as impact factor. As more non-profit, low-cost open access publishers establish high-impact, reputable journals, the publishing business as it is will go the way of the horse and buggy – or perhaps, more accurately, the BlackBerry, which became less competitive because it didn’t change when it could have.
I personally recruited at least half of the team that resigned, so I feel a strong loyalty to them, and I fully support their decision, as it helps catalyze what, at least to me, is an inevitable process that Elsevier is not yet willing to fully adapt to.
While it can be argued that Elsevier’s current APC is in line with or less than that of other journals, such business models are being challenged by non-profit, low-overhead, yet still high-quality publishing. So, my reaction to your invite is complicated, in that I totally understand that Elsevier is a business and businesses want to thrive, and that you (as with most editors – and this is fine) just care about recruiting good people to help publish good articles wherever you are. This inevitable change seems to have two driving forces: 1. grass-roots efforts like that fostered by Steve Smith and his team when they moved to Imaging Neuroscience, and 2. top-down changes in how funding agencies allow researchers to spend their money on publishing. Regardless of the catalysts, the change does seem inevitable, and while it certainly has its flaws and challenges, it will be for the better in the long run.
I do hope that Elsevier changes its policies sooner rather than later. There exist many business models that would allow more low-cost publishing in high-quality journals. As an editor, I know you just care about getting the best papers through, and in that effort I wish you the best.
Best regards,
Peter
So, these are my thoughts. I could add so much more, and will do so in later blog posts. I’m curious what you think about this. If you have any insights or agree/disagree with me, please email me.
The paper by Marek et al. (Reproducible brain-wide association studies require thousands of individuals, Nature, 603(7902), pp. 654-660, 2022) came out recently and caused a bit of a stir in the field for a couple of reasons. First, the title, while an accurate description of the findings of the paper, is bold and lacks just enough qualifiers to quell immediate questions: “Does this imply that fMRI or other measures used in BWAS lack intrinsic sensitivity?” “Is this a general statement about all studies now and into the future?” “Is fMRI doomed to require thousands of individuals for all studies?” The answer to all these questions is “no,” as becomes clear on reading the paper.
Secondly, I think the reaction of many on reading the title was a sigh and a thought that this is yet another paper in the same vein as the dead salmon study, the double-dipping paper, or the cluster failure paper – a cautionary statement about fMRI that is then wildly spun by the popular media to imply a more damning impact than brain imaging experts would infer. Again, it’s not that kind of paper; however, there was a bit of hyperbole in places. The Nature News article titled “Can brain scans reveal behavior? Bombshell study says not yet” discusses it in an overall reasonable manner, but the need for an attention-grabbing title was unfortunate. The study was not a bombshell. The Marek study was a clear, even-handed, well-done (clearly a huge amount of work!) description of a specific type of comparison in fMRI and MRI performed in a specific way. While my reaction to the Marek paper was mild surprise that the reported correlation values were a bit lower than expected, I was more curious than anything, and thankful that such a study was performed to clarify precisely where the field – again, for a specific type of study performed in a specific manner – stands.
I was asked by several groups to comment on it. First, I discussed my thoughts with Nature News. At the time of that discussion, I was still not certain what I thought of the paper and suggested that there may be sources of error and low power that could be improved upon, such as population selection, the choice of resting state as the measure, time series noise, or even spatial normalization pipelines that might be smearing out much of the useful information. I aimed to emphasize in that discussion that the Marek paper is emphatically NOT a statement about the intrinsic sensitivity of fMRI – which is sensitive enough to reliably detect activation in single subjects, and even in single runs or with single events. It is more a statement on the challenges of extracting subtle differences between populations having different behaviors. While I feel there is quite a bit that can be done to push the necessary numbers down (as a field, we are really just getting started), I can’t rule out the possibility that people may just be too different in how their brains manifest differences in behavior – thus confounding attempts to capture population effects. It’s really an interesting question for future study.
I was also asked to write something for an upcoming collection of opinions on the Marek paper to be published in Aperture Neuro – a new publishing platform associated with the Organization for Human Brain Mapping. I finally submitted it a few weeks ago.
In the meantime, four of the authors (Scott Marek, Brenden Tervo-Clemmens, Damian Fair, and Nico Dosenbach) graciously agreed to be interviewed by me on the OHBM Neurosalience podcast. This episode can be reached here. During this truly outstanding conversation, the authors further clarified the methods and impact of the paper. I pushed them on all the things that could be improved methodologically to bring these numbers down, but was swayed just a bit further toward the view that one implication of these results may be that the variability of people, as we currently sort them based on their behavior, really might be larger than we fully appreciate. It should be emphasized that the authors’ main message was overall extremely positive on the potential impact and importance of these large-N studies, as well as on the many other ways that fMRI can be used with small N or even individual subjects to assess activity or changes in activity with interventions.
I was lastly asked to write a commentary for Cell Press’s new flagship medical and translational journal, Med, which I just submitted yesterday and am adding to this blog post below. Before you read that, though, I wanted to leave you with a thought experiment that might help illustrate the challenge – at least as I see it:
It’s been shown that fMRI can track changes in brain activity or connectivity with specific interventions. Let’s say that after a month of an intervention we clearly see a change. This is not unreasonable and has been reported often. We repeat this for 100 or 1000 subjects. In each subject, we can track a change! Now, here’s the problem. If we repurposed this study as a BWAS study by grouping all subjects together before and after the intervention and comparing the groups, the implication (as I understand Marek et al.) is that we would likely not see a reliable effect come through, and those effects that we did see from this BWAS-style approach would lack the richness of the individual changes that we are able to see longitudinally in every one of the subjects. The implication is that each subject’s brain changed in a way that was reliably measured with fMRI, but each brain changed in a way that was just different enough that, when grouped, the effects mostly disappeared. Again, this is just a hypothetical thought experiment. I would love to see such a study done, as it would shed light on specifically what it is about BWAS studies that results in effect sizes lower than intuition suggests.
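To make this concrete, below is a minimal toy simulation of the thought experiment. This is not the authors’ method; all numbers (200 hypothetical features such as connectivity edges, 5 idiosyncratically changed features per subject, the noise level) are arbitrary assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_feat, k = 1000, 200, 5   # hypothetical subjects, features (e.g., edges), changed features
noise = 0.1                       # assumed measurement noise per feature per session
delta = 1.0                       # assumed true within-subject change per affected feature

pre = rng.standard_normal((n_sub, n_feat))           # baseline "connectivity" per subject
post = pre.copy()
for s in range(n_sub):
    idx = rng.choice(n_feat, size=k, replace=False)  # idiosyncratic locations of change
    post[s, idx] += delta * rng.choice([-1.0, 1.0], size=k)  # idiosyncratic signs

pre_meas = pre + noise * rng.standard_normal(pre.shape)
post_meas = post + noise * rng.standard_normal(post.shape)

# Within each subject, the changed features stand out clearly against noise:
z_within = np.abs(post_meas - pre_meas) / (noise * np.sqrt(2))
print("features detected (|z| > 5) per subject:", (z_within > 5).sum(axis=1).mean())  # ~k

# Pooled BWAS-style contrast: per-feature group mean change vs. its standard error.
diff = post_meas - pre_meas
z_group = diff.mean(axis=0) / (diff.std(axis=0) / np.sqrt(n_sub))
print("max |group z| across features:", np.abs(z_group).max())  # hovers near chance
```

In this toy setup, every subject’s change is unambiguous within-subject, yet because the location and sign of the change differ across subjects, the pooled per-feature group contrast stays near chance level – exactly the dissociation described above.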
Either way, here is the paper that I just submitted to Med. I would like to thank my coauthors, Javier Gonzalez-Castillo, Dan Handwerker, Paul Taylor, Gang Chen, and Adam Thomas, for all their insights and help in writing it. One last note: since this paper was a commentary, I was limited to 3000 words and 15 references. Otherwise, it would have been much longer, with many more relevant references.
The challenge of BWAS: Unknown Unknowns in Feature Space and Variance
Peter A. Bandettini1,2, Javier Gonzalez-Castillo1, Dan Handwerker1, Paul Taylor3, Gang Chen3, Adam Thomas4
1 Section on Functional Imaging Methods
2 Functional MRI Core Facility
3 Scientific and Statistical Computing Core Facility
4 Data Science and Sharing Team
National Institute of Mental Health
Bethesda, MD 20817
Abstract:
The recent paper by Marek et al. (Reproducible brain-wide association studies require thousands of individuals, Nature, 603(7902), pp. 654-660, 2022) has shown that thousands of individuals are required to capture brain-behavioral phenotype associations using brain measures of cortical thickness, resting state connectivity, and task fMRI. For those outside the field of human brain mapping, and even for some within, these results could be misunderstood to imply that MRI or fMRI lack sensitivity or specificity. This commentary expands on what was touched upon in the Marek et al. paper and focuses a bit more on fMRI. First, it is argued that fMRI is exquisitely sensitive to brain activity and to modulations in brain activity in individual subjects. fMRI advancement over the years is described, including examples of the sensitivity to robustly map activity and connectivity in individuals. Secondly, the potential underlying – yet still unknown – factors that may determine the need for thousands of subjects, as described in the Marek paper, are discussed. These factors may include variation in individuals’ anatomy or function that is not accounted for in the processing pipeline, sub-optimal choice of features in the data from which to differentiate individuals, or the sobering reality that the mapping between behavior (including behavior differences) and brain features, while readily tracked within individuals, may truly vary across individuals enough to confound and limit the power of group comparison approaches – even with fully optimized pipelines and feature extraction approaches. True human variability is a potentially rich area of future research – that of more fully understanding how individuals expressing similar behavior vary in anatomy and function. A final source of variance may be inaccurate grouping of the populations to compare. Behavior is highly complex, and it is possible that alternative grouping schemes based on insights into brain-behavior relationships may stratify differences more readily. Alternatively, allowing self-sorting of data may reveal dimensions of behavior that have not been fully appreciated. Potential ways forward to explore and correct for the unknown unknowns in feature space and unwanted variance are finally discussed.
The Emergence and Growth of fMRI:
Human behavior originates in the brain, and differences in human behavior also have brain correlates. The daunting task of neuroscience is to trace differences and similarities in behavior, over time scales of milliseconds to decades, back to a brain that is organized across temporal scales of milliseconds to years and spatial scales of microns to centimeters. Capturing the salient features across these scales that determine behavior is perhaps the defining challenge of human neuroscience. Insights derived from this effort shape our understanding of brain organization and may provide clinical utility in diagnosis and treatment. Advances in this effort are fundamentally driven by more powerful tools coupled with more sophisticated questions, experiments, models, and analyses.
When functional MRI (fMRI) emerged, it was embraced because activation-induced signal changes are robust and repeatable. Blood oxygen level dependent (BOLD) contrast allows non-invasive mapping of neuronal activity changes in the human brain with high consistency and fidelity on the scales of seconds and millimeters. Because it could be implemented on the already vast number of clinical MRI scanners in the world, its growth was explosive. The activation-induced hemodynamic response, while limited in many ways, has become a widely used and effective tool for indirectly mapping human brain activation. It is indirect because it relies on the spatially localized and consistent relationship between brain activation and hemodynamic changes, which result in an increase in flow, volume, and oxygenation. Increases in flow are measured with techniques such as arterial spin labeling (ASL), volume with techniques such as vascular space occupancy (VASO) imaging, and blood oxygenation with T2*- or T2-weighted contrast (i.e., BOLD contrast). BOLD is far and away the most common of these techniques because of its ease of implementation and the highest functional contrast of the three.
Early on, richly featured and high-fidelity motor and sensory activation maps were produced, followed quickly by maps of cognitive processes and more subtle activation. Then resting state fMRI emerged in the late 1990s, demonstrating that temporally correlated spontaneous fluctuations in the BOLD signal organize themselves into coarse networks across hundreds of nodes. The study of the functional significance of these networks rapidly followed, accompanied by revelations that these networks dynamically reconfigure over time and are modulated in association with specific tasks, brain states, or measures of performance (1).
Functional MRI has flourished over three decades in large part because of its success in creating detailed and informative maps of brain activation in individuals in single scanning sessions. At typical resolutions, the functional contrast-to-noise ratio of fMRI is about 5:1, depending on many factors. This robustness has enabled fMRI to delineate, at the individual level, activity changes associated with vanishingly subtle variations in stimuli or task, learning, attention, and adaptation, to name a few. Additionally, in quasi-real time, fMRI has successfully provided neurofeedback to individuals, leading to changes in connectivity and, in some cases, behavior (2). Clinically, fMRI is increasingly used for presurgical mapping of individuals (3). There is no doubt that the method itself is sufficiently robust and sensitive to be applied to individual subjects to map detailed organization patterns as well as subtle changes with interventions.
Functional MRI has been taken further. Voxel-wise patterns of activity within regions of activation in individuals were shown to delineate subtle variations in task or stimuli. This pattern-effect mapping, known as representational similarity analysis (4), has shown continued success and growth. Because each pattern is subject- and even session-specific, it currently defies multi-subject averaging; however, approaches such as hyperalignment (5) show promise even at this level of detail.
Over time, the fMRI signal has been shown to be stable, repeatable, and sensitive enough to reveal induced differences in activity as an individual brain learns, adapts, and engages. Functional MRI can consistently delineate functional activation in individual brains – going so far as to allow approximate reconstruction of the original stimuli from activation patterns associated with movie viewing or sentence reading (6,7). All these approaches rely on within-individual contrasts, thus sidestepping the less tractable problem of variance across subjects.
For “central tendency” mapping, combining data across subjects establishes generalizability from individuals to a population. The “central tendency” effects and derived time courses are more stable but inevitably minimize or remove the more subtle effects that population subsets may reveal. These approaches are negatively impacted by variation in structure and function that may be unaccounted for or that defies current best practices in spatial normalization and alignment.
Over the past three decades, since fMRI and structural MRI have been able to provide individualized information, the desire has been to go beyond central tendency mapping to reveal individual differences in activation, connectivity, and function. In “standard” clinical MRI scans of the brain, lesions, tumors, and vascular or gross structural abnormalities have been straightforward for a trained radiologist to identify; however, psychiatric and most behavioral differences have brain correlates that are much too subtle for standard clinical MRI approaches. An effort has been made over at least the past two decades to pool and average functional and/or structural images towards the creation of reproducible and clinically useful biomarkers. No one doubts that differences between individuals or truly homogeneous groups reside in the brain; however, whether they can be seen robustly – or at all – within the specific temporal and spatial niche offered by structural and functional MRI remains an open question. This question remains open because the brain is organized across a wide range of temporal and spatial scales, and the causal physical mechanisms that lead to trait or state differences are not currently understood. At this stage, neuroscientists and clinicians are using fMRI to determine whether any signatures related to behavioral or state differences can be robustly seen at all. It may well be that distinct brain differences across many scales can lead to similar trait differences, or that they reside at a spatial or temporal scale – or even magnitude – outside of what fMRI or MRI can capture. It remains to be fully determined.
The challenge of the Marek paper:
The recent paper by Marek et al. (8) has argued that behavioral phenotype variations associated with variations in cortical thickness, activation, and resting state connectivity – the brain-wide associations (BWAS) measured with MRI – are reproducible only when thousands of individuals are considered. The authors suggest that the unfortunate reality is that the effect sizes are so small that reproducible studies require about two thousand subjects, and would benefit somewhat from further reduction in time series noise and from multivariate analysis approaches. It is good news that we can get an effect, but for many invested in fMRI studies with this goal, this may be cause for despair and confusion. How is it that we can map individual brains so robustly, efficiently, and precisely, yet require so many subjects to derive any meaningful result when looking for differences in this readily mapped functional and structural information?
While single subjects can produce robust activation and connectivity maps, the differences in activation or structure as they relate to differences in traits across individuals are either so subtle and/or so variable that thousands of subjects are required for emerging (i.e., “central tendency”) effects to be seen – and these may be just the most robust effects. Put another way, if the unwanted variability across subjects were vanishingly small, then the results of Marek et al. would suggest that the BWAS-related differences in measured activation, structure, or connectivity are about three orders of magnitude smaller than the main effect commonly seen in individual maps (1 subject required for an activation map vs. 1000 subjects required for a reliable difference). Given the much more readily observed changes seen while tracking individuals longitudinally as they change state, the small-difference explanation seems highly unlikely. Therefore, the need for thousands of subjects is more likely explained predominantly by unwanted and unaccounted-for variance in trait-relevant or processing-pipeline-related structural, activation, or connectivity patterns.
The problem, or challenge, as it exists is not primarily with the sensitivity or specificity of fMRI or structural MRI. Rather, it likely resides in the uncharacterized and tremendously large variation in observed brain-behavior relationships across individuals. The underlying brain structure-function relationships, as measured with fMRI or MRI, that may differ for, say, a depressed individual may be numerous, subtle, and idiosyncratic. The study of BWAS is an attempt to determine the most common brain-based causes from a turbulent sea of possible causes across individuals. The Marek et al. study has shown that this challenge is more profound than most of us may have imagined – at least at the temporal and spatial scales to which our tools give access. It may also be true that the effects we do eventually see after studying groups of thousands of subjects are but a small fraction of the dispersed effects unique to each individual – and that those we are able to observe are not necessarily the most influential for the trait in question, as they are simply, by definition, the most commonly observed.
Marek et al. have done a service to the field by pointing out concerns for a type of fMRI study that has widespread interest but, so far, relatively few reported studies. Their work may be interpreted to suggest that, given the formidable number of subjects needed, BWAS-style studies are not a practically tenable use of fMRI. This conclusion should be tempered by an alternate view: large databases of deeply characterized subjects may be queried in many different ways, potentially increasing their utility into the future. The authors also point out that the effect sizes shown are at least comparable to those of large-database genome-wide association studies (GWAS). Improvement is still likely. It’s important that the field of fMRI do due diligence in making certain that it has minimized the irrelevant variance across subjects as it manifests through our techniques for determining function and for pooling multi-subject data.
The unknown unknowns in feature space and variance:
Is there something we are missing – hidden sources of irrelevant variance, inaccurate choices in feature space, or mischaracterization and therefore mis-grouping of behavioral phenotypes – that is suppressing the more informative features and thus reducing effect size? The tables below describe the “unknown unknowns” in understanding BWAS power and possible approaches to address them. Table 1 lists potential unknown confounders that may be reducing BWAS power, along with some considerations on how to understand and address them. Much more could be said about any of these, and indeed work is already taking place worldwide on all these topics. Table 2 lists other considerations that are not necessarily unknowns but are areas of active research that should also be considered when designing BWAS or perhaps any fMRI study.
Table 1: Potential Confounders that are not fully understood nor addressed:
Resting State fMRI
What really is resting state fMRI – aperiodic bursts of synchronized activity? How much of it is conscious? How much is arousal? How much is breathing? How does it vary with brain state, prior tasks, time of day, etc.? How deeply can we truly interpret correlated time series signals, given that the correlation depends on signal phase, shape, and underlying noise – all of which could change, implying a change in connectivity where there is none, or vice versa (see the sketch after this table)? As easy as resting state is to implement in the scanner, without more precise ways of dissecting and interpreting the most informative aspects of this signal, other approaches might be more powerful. At the very least, external measures that help inform the analysis of resting state (e.g., eye tracking or alertness measures) are needed.
Spatial Normalization
Individual brain anatomy varies as a function of spatial scale. Transforming brains to a normalized, standardized space may remove informative features. Nonlinear warping and registration approaches have advanced over the years yet remain far from perfect. One source of imperfection is anatomical: when aligning brains with strongly varying sulcal and gyral patterns, diffeomorphic warp fields have errors in some areas. On a coarser scale, brains have regionally differing gyral and sulcal patterns as well as different functional/structural relationships. Echo planar images carry additional warping due to field inhomogeneities.
Parcellation
If a standard parcellation template is applied to a cohort of normalized brains, the mismatch between the true functional delineation of each parcel in each subject’s brain and the applied parcellation may be profound, causing extreme mixing of the signal between adjacent parcels. It may also result in misidentified parcels: a subject’s region X is, in reality, mostly in region Y, so it gets binned and compared with the wrong information, either washing out real effects or pointing to false ones. Effects from small parcels may be entirely washed out. Additionally, it’s likely that typical parcels are substantially larger than the most informative cortical units. A between-group difference may reside in the connectivity of a small sub-component at the border of one parcel, whose signal may be mixing with that of other parcels, thus eliminating the effect. Such a useful feature, if it existed, would be invisible in the analysis described in Marek et al. The variation between functionally derived individual-subject parcellation maps should be further explored. Misalignment, misregistration, and mis-parcellation may be substantial sources of unwanted variance.
Processing Pipelines
The Marek paper used well-controlled pipelines; however, each pipeline has many steps, well beyond the scope of this perspective piece, that, if varied, might lead to different conclusions. Pipeline comparisons have shown how sensitive the results are to processing steps; what is missing, however, is “ground truth.” Every pipeline likely has shortcomings. Quality control metrics for each time series, combined with efficient visual inspection of the data, are fundamental for the development of more automated methods for identifying and reducing variance in population-level studies.
Population Sorting
Psychosis and intelligence, used here to sort the populations being compared, are likely oversimplifications of highly multidimensional behavioral phenotypes that may have no one-to-one correspondence in the brain. If these are all pooled together for comparison, interesting and perhaps strong differences may be washed out. More precise and nuanced pooling of populations, or even data-driven population sorting (while carefully avoiding circularity, of course), would perhaps improve these results significantly. Behavioral phenotypes and brain measures are high dimensional. As these manifolds are better understood, it’s likely that stronger associations will be obtained with greater efficiency.
Anna Karenina effect
This effect was first suggested by Finn et al. (9) and is based on the first line of the famous novel by Tolstoy: “Happy families are all alike; every unhappy family is unhappy in its own way.” It may be that the neuronal correlates of disorders are substantially more variable than the central tendencies of normal populations, reducing the effect size when attempting to discern a single network or set of networks associated with the disorder. This effect may play a role in the distributions of phenotypes even within typical, non-pathologic ranges – such as intelligence.
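As a concrete illustration of the point raised in the Resting State fMRI entry above – that a measured correlation can change without any change in underlying coupling – here is a minimal sketch. The signal model, noise level, and lag are arbitrary assumptions chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 600, 0.5)  # 10 minutes of "resting state" sampled at an assumed TR of 0.5 s

# One shared low-frequency fluctuation drives two regions identically,
# i.e., the underlying coupling ("connectivity") is fixed and perfect.
shared = np.sin(2 * np.pi * 0.05 * t) + np.sin(2 * np.pi * 0.02 * t + 1.0)

def observed_correlation(noise_sd=0.0, lag_s=0.0):
    """Pearson correlation between the two measured time series."""
    region_a = shared + noise_sd * rng.standard_normal(t.size)
    region_b = np.interp(t - lag_s, t, shared) + noise_sd * rng.standard_normal(t.size)
    return np.corrcoef(region_a, region_b)[0, 1]

print(observed_correlation())              # ~1.0: identical underlying signals
print(observed_correlation(noise_sd=2.0))  # drops sharply with added noise alone
print(observed_correlation(lag_s=4.0))     # drops with a hemodynamic-scale lag alone
```

In all three cases the underlying coupling is identical; only the noise or the hemodynamic timing differs, yet the measured “connectivity” varies widely.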
Table 2: Other Avenues to Improvement
Dynamic resting state fMRI
What really is resting state fMRI? Is it aperiodic bursts of synchronized activity transformed through the hemodynamic response into low-frequency fluctuations? How much arises from conscious experience (10)? How much is arousal? How much is breathing? How does it vary with brain state, prior tasks, time of day, etc.? How deeply can we (or should we) interpret correlated time series signals, given that the correlation depends on signal phase, shape, and underlying noise – all of which could change, implying a change in connectivity where there is none, or vice versa? As easy as resting state is to implement in the scanner, without more precise ways of dissecting and interpreting the most informative aspects of this signal, other approaches might be more powerful. At the very least, external measures that help inform the analysis of resting state (e.g., eye tracking or alertness measures) are needed.
Naturalistic Stimuli
Engaging subjects in passive or minimally demanding yet time-locked tasks has been shown to produce more stable connectivity maps and opens up new options for analyses. For instance, movie watching or story listening allows model-driven or cross-subject correlation analysis and helps to tease apart informative elements of ongoing brain activity (11,12). Time-locked continuous engagement in a task may also be optimized to differentiate behavioral phenotypes – used as a “stress test” in a similar way as cardiac stress tests are used to identify latent pathology. Continuously engaging tasks also control for vigilance changes over time – which have been shown to be a confound.
Task fMRI
Like movies, as mentioned above, a well-chosen set of tasks may serve to better stratify effects across individuals and populations. Specific tasks could be optimized to produce a large range of fMRI responses depending on the question and associated behavioral measures. The field of fMRI has developed a massive array of tasks able to selectively activate a wide range of networks. With more precise control over activation magnitude and location, as well as precise monitoring of task performance with each response, selective dissection of differences might improve.
Spatial Resolution
Differences may reveal themselves more clearly at the level of cortical layers or columns – resolutions that fMRI can image; however, here the problem of spatial normalization and registration becomes even more problematic and remains unsolved by any automated process. To illustrate, an early fMRI paper demonstrated clear differences in ocular dominance column distribution in patients with amblyopia. If those data were put through the pipelines used in the Marek paper, the results would likely fall well below any statistical threshold or measure of replicability, as the useful features are much finer than the spatial error inherent to spatial normalization – not to mention that ocular dominance columns are quasi-random, thus defying any current normalization scheme. We need to improve our ability to identify and use such features in a principled manner before we can make conclusive statements on the effect sizes derivable with fMRI.
Time Series Variance
In these data, physiological noise dominates over the better-understood thermal noise. Methods for reducing time series variance were mentioned in Marek et al. Novel acquisition approaches such as multi-echo fMRI may help, along with external measures of breathing, vigilance, and other contributors to variance. Even with these measurements in hand, robust ways of using them to eliminate this variance – or perhaps associate it with phenotype – require substantial further development. It should be emphasized that if the field were fully successful in eliminating all physiological noise from the data, then rather than having a ceiling temporal signal-to-noise ratio (tSNR) of about 100:1, the tSNR would be limited only by the intrinsic image SNR determined by the scanning parameters and the RF coil – allowing perhaps an order of magnitude improvement in temporal signal-to-noise (see the sketch after this table).
Other fMRI and MRI Features
Correlation is but one feature of the fMRI time series. Other features – such as entropy, network configuration dwell time, the sequence of network configurations over time, mutual information, and even standard deviation – may prove to be more robust and informative. The activation-elicited fMRI signal itself can be further reduced to other features such as latencies, undershoots, transients, NMR phase, and much more. Perhaps all of these contain independent information that may be leveraged in multivariate analyses to increase power. Structural features such as gyrification, fractal dimension, global T1, T2, etc., may also be more informative than gray matter thickness.
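To make the tSNR ceiling argument in the Time Series Variance entry concrete, here is a minimal sketch using the widely used additive signal-dependent noise model (cf. Krüger and Glover, Magn Reson Med, 2001). The physiological noise fraction lambda = 0.01 is an assumed value chosen to reproduce the ~100:1 ceiling mentioned above:

```python
import numpy as np

# Additive noise model: sigma_total^2 = sigma_thermal^2 + (lambda * S)^2,
# which gives tSNR = SNR0 / sqrt(1 + (lambda * SNR0)^2), asymptoting at 1/lambda.
lam = 0.01                                  # assumed physiological noise fraction of signal
snr0 = np.array([50, 100, 200, 500, 1000])  # image SNR set by field strength, coil, voxel size

tsnr = snr0 / np.sqrt(1 + (lam * snr0) ** 2)
for s, ts in zip(snr0, tsnr):
    print(f"image SNR0 = {s:5d}  ->  tSNR = {ts:6.1f}   (ceiling = 1/lambda = {1/lam:.0f})")

# If physiological noise were removed (lam -> 0), tSNR would equal SNR0,
# i.e., roughly an order of magnitude of headroom at high image SNR.
```

Under this assumed model, pushing image SNR from 100 to 1000 barely moves tSNR (70.7 to 99.5) while physiological noise remains, which is exactly why removing it would be so valuable.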
In summary, Marek et al. provide a sobering snapshot of the state of BWAS using MRI and fMRI. The study of brain-wide associations (13), like the study of genome-wide associations (14), does have promise; however, the work of objectively identifying and extracting the most meaningful features, and of identifying and removing the confounding variance from the signal – in time and space – has barely begun. We are at an early stage in this promising research. The Marek et al. study has performed a profound service by clarifying, quantifying, and highlighting the challenge.
The study of individuals and how they change with time, natural disease progression, or interventions will continue. In fact, large-population longitudinal studies, in which each participant is directly compared with themselves at an earlier time and then compared across the cohort, will likely yield deep insights into brain differences and similarities (15). These studies are difficult but worth pursuing, as they avoid many of the potential pitfalls of BWAS related to between-subject variability, as described in Marek et al.
Individual or small-N fMRI will continue, as insights into healthy brain organization and function are still being derived at an increasingly rapid rate while the field develops methods to extract more subtle information from the data. Individual fMRI for presurgical mapping, real-time feedback, and neuromodulation guidance also continues, with extremely promising progress.
Evolving fMRI from central tendency mapping to identifying differences in individuals has proven to be deeply challenging. As the field continues working to address this challenge, it will likely uncover unique sources of variance residing in every step of acquisition and analysis, as well as yet-uncovered structure in idiosyncratic brain-behavior relationships. The fMRI signal is intrinsically strong, reproducible, and robust, as has been shown over the past 30 years. To use it to compare individuals, we need to delve much more deeply into how individuals and their brains vary, so that we can identify and minimize the still unknown nuisance variance and maximally use the still unknown informative variance. Once we can do this, the effect sizes and replicability promise to reach useful levels with fewer required subjects. In the process, new principles of brain organization will likely be derived. Perhaps before the field rushes ahead to collect more two-thousand-subject cohorts, it should explore, understand, and minimize the unknown unknowns in the feature space and the variance among individuals.
References
1. Newbold DJ, Laumann TO, Hoyt CR, Hampton JM, Montez DF, Raut RV, et al. Plasticity and Spontaneous Activity Pulses in Disused Human Brain Circuits. Neuron. 2020;1–10.
2. Ramot M, Kimmich S, Gonzalez-Castillo J, Roopchansingh V, Popal H, White E, et al. Direct modulation of aberrant brain network connectivity through real-time NeuroFeedback. Elife. 2017;6:e28974.
3. Silva MA, See AP, Essayed WI, Golby AJ, Tie Y. Challenges and techniques for presurgical brain mapping with functional MRI. NeuroImage Clin. 2018 Jan 1;17:794–803.
4. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis – connecting the branches of systems neuroscience. Front Syst Neurosci. 2008;2:4.
5. Haxby JV, Guntupalli JS, Nastase SA, Feilong M. Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies. Elife. 2020;9:e56601.
6. Pereira F, Lou B, Pritchett B, Ritter S, Gershman SJ, Kanwisher N, et al. Toward a universal decoder of linguistic meaning from brain activation. Nat Commun. 2018 Mar 6;9(1):963.
7. Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL. Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Curr Biol. 2011 Oct 11;21(19):1641–6.
8. Marek S, Tervo-Clemmens B, Calabro FJ, Montez DF, Kay BP, Hatoum AS, et al. Reproducible brain-wide association studies require thousands of individuals. Nature. 2022 Mar;603(7902):654–60.
9. Finn ES, Glerean E, Khojandi AY, Nielson D, Molfese PJ, Handwerker DA, et al. Idiosynchrony: From shared responses to individual differences during naturalistic neuroimaging. NeuroImage. 2020 Jul;215:116828.
10. Gonzalez-Castillo J, Kam JWY, Hoy CW, Bandettini PA. How to Interpret Resting-State fMRI: Ask Your Participants. J Neurosci. 2021 Feb 10;41(6):1130–41.
11. Hasson U, Nir Y, Levy I, Fuhrmann G, Malach R. Intersubject Synchronization of Cortical Activity during Natural Vision. Science. 2004 Mar;303(5664):1634–40.
12. Finn ES. Is it time to put rest to rest? Trends Cogn Sci. 2021 Dec 1;25(12):1021–32.
13. Sui J, Jiang R, Bustillo J, Calhoun V. Neuroimaging-based Individualized Prediction of Cognition and Behavior for Mental Disorders and Health: Methods and Promises. Biol Psychiatry. 2020 Dec 1;88(11):818–28.
14. Visscher PM, Wray NR, Zhang Q, Sklar P, McCarthy MI, Brown MA, et al. 10 Years of GWAS Discovery: Biology, Function, and Translation. Am J Hum Genet. 2017 Jul 6;101(1):5–22.
15. Douaud G, Lee S, Alfaro-Almagro F, Arthofer C, Wang C, McCarthy P, et al. SARS-CoV-2 is associated with changes in brain structure in UK Biobank. Nature. 2022 Apr;604(7907):697–707.
One defining and often overlooked aspect of fMRI as a field is that it has been riding on the back of, and directly benefitting from, the massive clinical MRI industry. Even though fMRI has not yet hit the clinical mainstream – there are no widely used standard clinical practices that include fMRI – it has reaped many benefits from the clinical impact of “standard” MRI. Just about every clinical scanner can be used for fMRI with minimal modification, and most vendors sell rudimentary fMRI packages. Just imagine if MRI were only useful for fMRI – how much slower fMRI methods and applications would have developed, and how much more expensive and less advanced MRI scanners would be. Without a thriving clinical MRI market, only a few centers would be able to afford scanners, and those would likely be primitive compared to the technology that exists today.
Looking back almost 40 years to the early 1980s, when the first MRI scanners were being sold, we see that the clinical impact of MRI was almost immediate and massive. For the first time, soft tissue could be imaged non-invasively with unprecedented resolution, providing immediate clinical applications for localizing brain and body lesions. Commercial scanners, typically 1.5T, were rapidly installed in hospitals worldwide. By the late 1980s the clinical market for MRI scanners was booming. The clinical applications continued to grow. MRI was used to image not only the brain but just about every other part of the body; as long as a tissue contained water, it could be imaged. Sequences were developed to capture the heart in motion and even characterize trabecular bone structure. Tendons, muscles, and lungs were imaged. Importantly, the information provided by MRI was highly valuable, non-invasively obtained, and unique relative to other approaches. The clinical niches kept multiplying.
In 1991, fMRI came along. Two of the first three results were produced on commercially sold clinical scanners that were tricked out to allow high-speed imaging. In the case of Massachusetts General Hospital, they used a “retrofitted” (I love that word) resonant gradient system sold by ANMR. The system at MCW used a home-built local head gradient coil – sewer pipe, epoxy, and wire – that, because of its extremely low inductance, could perform echo planar imaging at relatively high resolution. Only the University of Minnesota’s scanner, a 4 Tesla research device, was non-commercial.
Since 1991, the advancement of fMRI was initially gradual, as commercial availability of EPI – almost essential for fMRI – was limited. Finally, in 1996, EPI was included on commercial scanners and, as best I can recall, was mostly marketed as a method for tracking bolus injections of gadolinium for cerebral blood volume/perfusion assessment and for freezing cardiac motion. The first demonstration of EPI that I recall was shown in 1989 by Robert Weisskoff from MGH on their GE/retrofitted ANMR system – capturing a spectacular movie of a beating heart. EPI was great for moving organs like the heart or rapidly changing contrast like a bolus injection of gadolinium. As a pulse sequence for imaging the heart, EPI was eventually superseded by fast multi-shot, gated, “cine” methods that were more effective and higher resolution. However, thanks to EPI being sold with commercial scanners, functional MRI began to propagate more rapidly after 1996. Researchers could now negotiate for time on their hospital scanners to collect pilot fMRI data. Eventually, as research funding for fMRI grew, more centers were able to afford research-dedicated fMRI scanners. That said, the number of scanners sold today for the purposes of fMRI is such a small fraction of the clinical market (I might venture 1,000 fMRI scanners out of 50,000 clinical scanners, or 2%) that buyers’ fMRI-related needs typically don’t influence vendor product development in any meaningful way. Vendors can’t devote a large fraction of their R&D time to a research market. Almost all benefit that the field of fMRI receives from vendor advances is incidental, as it usually relates to the improvement of more clinically relevant techniques. Recent examples include high field, multi-channel coil arrays, and parallel reconstruction – all beneficial to clinical MRI but also highly valued by the fMRI community. The same applies to 3T scanners back in the early 2000s. Relative to 1.5T, 3T provided more signal to noise and in some cases better contrast (in particular susceptibility contrast) for structural images – and therefore helped clinical applications, so that market grew, to the benefit of fMRI. Some may argue that the perceived potential of fMRI back in the early 2000s had some influence on getting the 3T product lines going (better BOLD contrast), and perhaps it did. However, today, 20 years later, even though I’m more hopeful than ever about robust daily clinical applications of fMRI, this potential still remains just over the horizon, so the prospect of a golden clinical fMRI market has lost some of its luster to vendors.
This is the current state of fMRI: benefitting from the development of clinically impactful products such as higher field strength, more sophisticated pulse sequences, recon, analysis, shimming, and RF coils, while not strongly driving the production pipelines of vendors in any meaningful way. Because fMRI is not yet a robust and widely used clinical tool, vendors are understandably reluctant to redirect their resources to further develop fMRI platforms. This can be frustrating, as fMRI would benefit tremendously from increased vendor development and product dissemination.
There can be a healthy debate as to how much the fMRI research, development, and application community has influenced vendor products. While there may have been some influence, I believe it to be minimal – less than what the long-term clinical potential of fMRI would justify, in my view. That said, there is nothing inherently good or bad about vendor decisions on which products to produce and support. Especially in today’s large yet highly competitive clinical market, vendors have to think somewhat shorter term and highly strategically. We, as the fMRI community, need to up our game to incentivize either the big scanner vendors or smaller third-party vendors to help catalyze fMRI’s clinical implementation.
For instance, if vendors saw a large emerging market in fMRI, they would likely create a more robust fMRI-tailored platform – including a suite of fMRI pulse sequences sensitive to perfusion, blood volume changes, and of course BOLD – with multi-echo EPI as standard. They would also provide a sophisticated yet clinically robust processing pipeline to make sense of resting state and activation data in ways that are easily interpretable and usable by clinicians. One could also imagine a package of promising fMRI-based “biomarkers” for a clinician or AI algorithm to incorporate into research and basic practice.
Regarding pulse sequence development, the current situation is that large academic and/or hospital centers have perhaps one or more physicists who know the vendor pulse sequence programming language. They program and test various pulse sequences and present their data at meetings, where ideas catch on – or not. Those that show promise are eventually patented, and vendors employ their programmers to incorporate these sequences, with the appropriate safety checks, into their scanner platforms. Most sequences don’t make it this far. Many are distributed as, to use Siemens’ terminology, “works in progress,” or WIPs. These are shared only with centers that sign a research agreement and have the appropriate team of people to incorporate the sequence on the research scanner at their center. This approach, while effective to some degree for sharing sequences in a limited and focused manner, is not optimal from a pulse sequence development, dissemination, and testing standpoint. It’s not what it could be. One could imagine, alternatively, vendors creating a higher-level pulse sequence development platform that allows rapid iteration in the creation and testing of sequences, with all checks in place so that sharing and testing are less risky – as sketched below. This type of environment would not only benefit standard MRI pulse sequences but would catalyze the development and dissemination of fMRI pulse sequences. There are so many interesting potential pulse sequences for fMRI – involving embedded functional contrasts, real-time adaptability, and methods for noise mitigation – that remain unrealized due to the bottleneck in the iteration of pulse sequence creation, testing, dissemination, application, and finally the big step of productization, not to mention FDA approval.
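To be clear about what I mean by a higher-level platform, here is an entirely hypothetical sketch – none of these names correspond to a real vendor API, though open frameworks such as Pulseq work in this general declarative spirit – of a sequence description with built-in safety checks:

```python
from dataclasses import dataclass

# Entirely hypothetical API sketch. The class and function names below are
# invented for illustration and do not correspond to any vendor platform.

@dataclass
class MultiEchoEPI:
    matrix: int = 64
    fov_mm: float = 220.0
    echo_times_ms: tuple = (15.0, 30.0, 45.0)

def check_gradient_safety(seq: MultiEchoEPI, max_db_dt: float = 20.0) -> None:
    """Stand-in for the automated dB/dt and SAR checks that a shared
    development platform would run before any sequence reaches a scanner."""
    est_db_dt = 50.0 * seq.matrix / seq.fov_mm   # toy estimate, not real physics
    if est_db_dt > max_db_dt:
        raise ValueError(f"estimated dB/dt {est_db_dt:.1f} exceeds limit")

seq = MultiEchoEPI()
check_gradient_safety(seq)
print(f"multi-echo EPI with TEs {seq.echo_times_ms} ms passes the toy check")
```

The design point is that the developer declares what the sequence should do, and the platform – not each center’s lone physicist – owns the safety validation, which is what would make sharing low-risk.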
Functional MRI-specific hardware is another area where growth is possible. It’s clear that local gradient coils would be a huge benefit to both DTI and fMRI: smaller coils can achieve higher gradients, switch faster, induce less nerve-stimulating dB/dt, don’t heat up as easily, produce fewer eddy currents, and are generally more stable than whole-body gradients. Because of space and patient-positioning restrictions, however, they would have limited day-to-day clinical applicability and currently have no clear path to becoming a robust vendor product. Another aspect of fMRI that stands to benefit is the tooling for subject interfacing – stimulus devices, head restraints, subject feedback, physiologic monitoring, eye tracking, EEG, etc. Currently, a decked-out subject interface suite is cobbled together from a variety of products and is awkward and time-consuming to set up and use – at best. I can imagine vendors creating a fully capable fMRI interface suite with all these tools engineered in a highly integrated manner, increasing the standardization and ease of all our studies and catalyzing the propagation of fundamentally important physiological monitoring, subject interfacing, and multimodal integration.
Along a similar avenue, I can imagine many clinicians who want to try fMRI but don’t have the necessary team of people to handle the entire experiment/processing pipeline for practical use. Imagine if a clinical fMRI experimental platform and analysis suite were created and optimized by the vendors. Clinicians could test various fMRI approaches to determine their efficacy and, importantly, work out the myriad practical kinks unique to a clinical setting that researchers typically don’t have to deal with. Such a platform would almost certainly catalyze the clinical development and implementation of fMRI.
Lastly, a major current trend is the collection and analysis of data across multiple scanner platforms: different vendors and even slightly different protocols. So far, the most useful large data sets have been collected on a single scanner, on a small group of identical scanners, or even from a single subject repeatedly scanned on one scanner over many months. Variance across scanners and protocols appears to wreak havoc with statistics and reproducibility, especially when looking for small effect sizes. Each vendor has proprietary reconstruction algorithms and typically outputs only the images rather than the raw unreconstructed data. Each scan setup varies, as the patient cushioning, motion constraints, shimming procedures, RF coil configurations, and auto prescan (for determining the optimal flip angle) all vary not only across vendors but also potentially from subject to subject. To even start alleviating these problems, it is important to have a cross-vendor reconstruction platform that takes in the raw data and reconstructs the images in an identical, standardized manner. First steps in this direction have been taken with the emergence of the “Gadgetron” as well as the ISMRM standard raw data format. Some promising third-party approaches to scanner-independent image recon have emerged, including one via a Swiss company called Skope. One concern with third-party recon is that the main vendors have put in at least 30 years of work perfecting and tweaking their pulse-sequence-specific recon, and, understandably, the code is strictly proprietary – although most of the key principles behind the recon strategies are published. Third-party recon engines have had to play catch-up, and, perhaps because of the open science environment, have been on a development trajectory faster than that of industry. If they have not already done so, they will likely surpass the standard vendor recon in image quality and sophistication. So far, for structural imaging – but not EPI – open source recon software is likely ahead of the vendors’. While writing this I was reminded that parallel imaging, compressed sensing, model-based recon, and deep learning recon were all open access code before many of them were used by industry. These need to be adapted to EPI recon to be useful for fMRI.
A primary reason the entire field of fMRI is not doing recon offline is that most fMRI centers don’t have the setup or the expertise to easily port raw data to free-standing recon engines. If this very achievable technology were disseminated more completely across fMRI centers – and if it were simply easier to quickly take raw data off the scanner – the field of fMRI would make an important advance, as images would likely become more artifact-free, more stable, and more uniform across scanners. Such a platform would also be much more nimble – able to embrace the latest advances in image recon and artifact mitigation.
My group – specifically Vinai Roopchansingh – and others at the NIH and elsewhere have worked with Gadgetron and have been developing approaches to vendor-independent image reconstruction, including scripts for converting raw data to the ISMRMRD format and an open access Jupyter notebook running Python for recon of EPI data.
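For a flavor of what vendor-independent raw data access looks like, here is a minimal sketch using the open ismrmrd-python package; the file name is a placeholder, and the calls reflect my best understanding of the package’s API:

```python
# Sketch of reading vendor-independent raw data with the open ismrmrd-python
# package (https://github.com/ismrmrd/ismrmrd-python). "epi_raw.h5" is a
# placeholder file produced by a separate vendor-to-ISMRMRD converter.
import numpy as np
import ismrmrd

dset = ismrmrd.Dataset("epi_raw.h5", "dataset", create_if_needed=False)
header_xml = dset.read_xml_header()        # acquisition parameters as XML

n_acq = dset.number_of_acquisitions()
acq = dset.read_acquisition(0)             # first k-space readout line
kline = np.asarray(acq.data)               # complex array, coils x samples

print(f"{n_acq} acquisitions; first line shape {kline.shape}")
```

Once the data are in this common format, the same recon code can run regardless of which vendor’s scanner produced the acquisition – which is the whole point.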
Secondly, vendors could work together – in a limited capacity – to create standard research protocols that are as identical as possible, specifically constructed for sharing and pooling data across vendors. Third, to alleviate the problem of so much variability across vendors and subjects in time series stability, there should be a standard for reporting image and time series quality metrics. I can imagine metrics such as tSNR, image SNR, ghosting, outliers, signal dropout, and image contrast being reported for starters. This would take us a long way towards immediately recognizing and mitigating deviations in time series quality, and thus producing better results from pooled data sets. This metric reporting could be carried out by each vendor – appending a quality metric file to the end of each time series. Vendors would likely have to work together to establish these. Programs that generate such metrics already exist (e.g., Oscar Esteban’s MRIQC); however, there remain insufficient incentives and coordination to adopt them on a larger scale.
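As a sketch of how lightweight such run-level metric generation could be – with the caveat that real tools like MRIQC handle masking, detrending, and ghost geometry far more carefully – consider:

```python
import numpy as np

def run_quality_metrics(data4d: np.ndarray) -> dict:
    """Toy per-run metrics for a 4D fMRI series (x, y, z, t). This only
    sketches the idea of a standard metrics file tagged onto every run."""
    mean_img = data4d.mean(axis=-1)
    std_img = data4d.std(axis=-1)
    mask = mean_img > 0.2 * mean_img.max()           # crude brain mask

    tsnr = np.zeros_like(mean_img)
    np.divide(mean_img, std_img, out=tsnr, where=std_img > 0)

    # Crude N/2 ghost estimate: mean signal in the half-FOV-shifted copy
    # of the mask, relative to mean signal inside the mask.
    ghost = np.roll(mask, mask.shape[1] // 2, axis=1) & ~mask
    ghost_ratio = mean_img[ghost].mean() / mean_img[mask].mean()

    volume_means = data4d[mask].mean(axis=0)          # one value per volume
    outliers = np.abs(volume_means - volume_means.mean()) > 3 * volume_means.std()

    return {
        "median_tSNR": float(np.median(tsnr[mask])),
        "ghost_to_signal": float(ghost_ratio),
        "n_outlier_volumes": int(outliers.sum()),
    }

rng = np.random.default_rng(2)
fake = rng.normal(1.0, 1.0, (32, 32, 20, 200))
fake[8:24, 8:24, 5:15, :] += 100.0                    # a bright "brain" block
print(run_quality_metrics(fake))
```

A small dictionary like this, written in a standard format at the end of every time series, would be enough to flag a bad run before it ever enters a pooled analysis.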
I am currently part of the OHBM standards and best practices committee, and we are discussing a push to more formally advise all fMRI users to report, or have tagged to each time series, an agreed-upon set of image quality metrics.
In general, the relationship between fMRI and the big vendors is currently a bit of a Catch-22. All of the above-mentioned features would catalyze clinical applications of fMRI; however, for vendors to take note and devote the necessary resources, there would need to be clinical applications already in place, or at least a near certainty that a clinical market would emerge from these efforts in the near term – which cannot be guaranteed. How can vendors be incentivized to take the longer-term and slightly riskier approach here – or, failing that, to cater slightly more closely to a smaller market? Many of these advances to help catalyze clinical fMRI don’t require an inordinate amount of investment, so they could be initiated by either public or private grants. On the clinical side, clinicians and hospital managers could speak up to vendors about the need for a rudimentary but usable pipeline for testing and developing fMRI. Some of these goals are readily achievable if vendors open up to working together, in a limited manner, on cross-scanner harmonization and standardization. This simply requires a clear and unified message from researchers about the need and how it may be met while maintaining the proprietary status of most vendor systems. FMRI is indeed an entirely different beast from structural MRI – requiring a higher level of subject and researcher/clinician engagement; on-the-fly, robust, yet flexible time series analysis; and rapid collapsing of multidimensional data into forms that can be easily and accurately assessed and digested by a technologist and clinician – definitely not an easy task.
Over the years, smaller third-party vendors have attempted to cater to the smaller fMRI research market, with mixed success. Companies have built RF coils, subject interface devices, and image analysis suites. There continue to be opportunities here, as there is much more that could be done. However, delivering products that bridge the gap between what fMRI is and what it could be technologically requires that the big vendors “open the hood” of their scanners to some degree, allowing increased access to proprietary engineering and signal processing information. Again, since this market is small, there is, at first glance, little to gain and thus no real incentive for vendors to do this. I think the solution is to lead vendors to realize that there is something to gain – in the long run – if they work to nurture, through more open access platforms or modules within their proprietary platforms, the tremendous untapped intellectual resources of the highly skilled and diverse fMRI community. At a very small and limited scale this already exists. I think a key variable in many fMRI scanner purchase decisions has been the ecosystem for sharing research pulse sequences – which some vendors do better than others. This creates a virtuous circle, as pulse programmers want to maximize their impact and leverage collaborations through ease of sharing – to the benefit of all users, and ultimately to the benefit of the field, increasing the probability that fMRI becomes a clinically robust and useful technique and thus opens up a large market. Streamlining the platform for pulse sequence development and sharing, allowing raw data to be easily ported from the scanner, sharing the information necessary for the highest quality EPI image reconstruction, and working more effectively with third-party vendors and with researchers who have no interest in starting a business would be great first steps towards catalyzing the clinical impact of fMRI.
Overall, the relationship between fMRI and scanner vendors remains quite positive and dynamic, with fMRI slowly gaining leverage as the research market grows and as clinicians take notice of the growing number of promising fMRI results. I have had outstanding interactions and conversations with vendors over the past 30 years about what I, as an fMRI developer and researcher, would really like. They always listen, and sometimes improvements to fMRI research sequences and platforms happen. Other times, they don’t. We are all definitely going in the right direction. I like to say that fMRI is one amazing clinical application away from having vendors step in and catalyze the field. Creating that amazing clinical application will likely require better leveraging of the intellectual resources and creativity of the fMRI community – providing better tools for them to collectively find solutions to the daunting challenge of integrating fMRI into clinical practice as well as, of course, more efficiently searching for that amazing clinical application. We are working in that direction, and there are many reasons to be hopeful.
This year I was among the four ISMRM Gold Medal recipients for 2020; the others were Ken Kwong, Robert Turner, and Kaori Togashi. It was a deep honor to win this alongside my two friends: Ken Kwong, who arguably was the first to demonstrate fMRI in humans, and Bob Turner, who has been a constant pioneer in all aspects of fast imaging since before my time, and then in fMRI since the beginning. I have always looked up to and respected past ISMRM Gold Medal winners, and am deeply humbled to be among this highly esteemed company. I’m also grateful to Hanbing Lu for nominating me, as well as to those who wrote support letters for me. It’s also an acknowledgement by ISMRM of the importance of fMRI as a field, which, while so successful in brain mapping for research purposes, has not yet fully entered into clinical utility.
Because the event was virtual, there was no physical presentation of the Gold Medal to the recipients. However, a couple of weeks ago I came back to my office to pick up a few things after vacating it on March 16 due to Covid. At the base of the door I found a FedEx box, and I was delighted to find this pleasant surprise inside:
Here is what I said for my acceptance speech, which I feel is important to share.
“I would like to thank ISMRM for this incredible honor. Throughout my career, and especially at the start, I enjoyed quite a bit of serendipity. Back in 1989, when I was starting graduate school at the Medical College of Wisconsin, I was extremely lucky to be at just the right place at the right time and wouldn’t be here accepting this without the help of my mentors, colleagues, and lab over the years.
Before starting graduate school, before fMRI, I had absolutely no idea what was ahead of me, but I did know one thing: that I wanted to image brain function with MRI…somehow. My parents instilled a sense of curiosity, and dinnertime conversations with my Dad sparked my fascination with the brain.
Jim Hyde, my advisor, set up the Biophysics Dept at MCW to excel in MRI hardware and basic research. His confidence and bold style were infused into the center’s culture.
Scott Hinks, my co-advisor, helped me during a critical and uncertain time in my graduate career, and I’m grateful for his taking me on. His clear thinking set an inspiringly high standard.
Eric Wong, my dear friend, colleague, and mentor, was a fellow graduate student with me at the time, and it’s to him that I owe my most profound gratitude. He designed and built the local head gradient and RF coils and wrote from scratch the EPI pulse sequence and reconstruction necessary to perform our first fMRI experiments. He taught me almost everything I know about MRI, but more importantly he trained me well through his example. He constantly came up with great ideas, and one of his most common phrases was “let’s try it.” This phrase set the optimistic and proactive approach I have taken to this day. In September of 1991, one month after Ken Kwong’s jaw-dropping results shown by Tom Brady at the then-called SMR meeting in San Francisco, we collected our first successful fMRI data and from then on were well positioned to help push the field. Without Eric’s work, MCW would have had no fMRI, and my career would have looked very different.
The late Andre Jesmanowicz, a professor at MCW, helped in a big way through his fundamental contribution to our paper introducing correlation analysis of fMRI time series.
My post doc experience at the Mass General Hospital lasted less than 2 years but felt like 10, in a good way, as I learned so much from the great people there. That place just hums with intellectual energy.
One of my best decisions was to accept an offer to join Leslie Ungerleider’s Laboratory of Brain and Cognition and to create a joint NINDS/NIMH functional MRI facility. It’s here that I have been provided with so much support. My colleague at the NIH, Alan Koretsky, has been a source of insight and is perhaps my favorite NIH person to talk to. In general, NIH is just teeming with great people in both MRI and neuroscience. The environment is perfect.
My neuroscientist and clinician collaborators have been essential for disseminating fMRI as they embraced new methods and findings.
I have been lucky to have an outstanding multidisciplinary team. Many have gone on to be quite successful, including Rasmus Birn, Jerzy Bodurka, Natalia Petridou, Kevin Murphy, Prantik Kundu, Niko Kriegeskorte, Carlton Chu, Emily Finn, and Renzo Huber.
My current team of staff scientists have shown outstanding commitment over the years and especially during these difficult times. These include Javier Gonzalez-Castillo, Dan Handwerker, Sean Marrett, Pete Molfese, Vinai Roopchansingh, Linqing Li, Andy Derbyshire, Francisco Pereira, and Adam Thomas.
The worldwide community of friends I have gained through this field is special to me, and a reminder that science, on so many levels, is a positive force for cohesion across countries and cultures.
Lastly, I am also so very lucky and thankful for my brilliant, adventurous, and supportive wife, Patricia, and my three precocious boys who challenge me every day.
An approach to research that has always worked well at least for me has been to be completely open with sharing ideas, not to care about credit, and perhaps most importantly, to think broadly, deeply, and simply and then proceed optimistically and boldly. To just try it. There are many possible reasons for an idea not to work, but in most cases it’s worthwhile to test it anyway.
Someday, we will figure out the brain, and I believe that fMRI will help us get there. It’s a bright future. Thank you.”
The BrainSpace Initiative is an outreach program that allows researchers to present their work, currently focused on non-invasive techniques. It is also a meeting space to discuss papers and issues. I was invited both to be a member of the advisory committee and to give a talk. I decided to present a talk on all the layer fMRI work that has come out of my lab over the past 4 years. Here it is:
Layer fMRI, requiring high field, advanced pulse sequences, and sophisticated processing methods, has emerged in the last decade. The rate of layer fMRI papers published has grown sharply, as the delineation of mesoscopic-scale functional organization has shown success in providing insight into human brain processing. Layer fMRI promises to move beyond simply identifying where and when activation takes place, as inferences made from activation depth in the cortex can reveal detailed, directional, feedforward- and feedback-related activity. This new knowledge promises to bridge invasive measures and those typically carried out in humans. In this talk, I will describe the challenges in achieving laminar functional specificity as well as possible approaches to data analysis for both activation studies and resting state connectivity. I will highlight our work demonstrating task-related laminar modulation of primary sensory and motor systems as well as layer-specific activation in dorsolateral prefrontal cortex with a working memory task. Lastly, I will present recent work demonstrating cortical hierarchy in visual cortex using resting state connectivity laminar profiles.
We submitted our rebuttal to Brain and received a prompt reply from the Editor-In-Chief, Dr. Kullmann himself, offering us an opportunity to revise – with the main criticism that our letter contained unfounded insinuations and allegations. We tried to interpret his message as best we could and respond accordingly. To most readers it was pretty clear what he wrote and the message he intended to convey. Nevertheless, in our revision, we stayed much closer to the words of the editorial itself. We also tried to bolster our response with tighter arguments and a few salient references.
Essentially our message was:
The editorial is striking in two ways: the tone is cynical and dismissive of fMRI as a method, and the arguments against Brain Mapping, Discovery Science, and fMRI are outdated and weak.
Dr. Kullmann does have valid points: many fMRI studies are purely descriptive and certainly don’t reveal underlying mechanisms. The impact of these studies is somewhat limited, but they are certainly not without value. Functional MRI is challenged by spatial, temporal, and sensitivity limits as well. We address these points in our response.
The limits of fMRI are neither fatal nor completely immovable. We have made breathtaking progress in the past 30 years. The limits inherent to fMRI are shared by all the brain assessment methods that we can think of. They are part of science. We make the best measurements we can using the most penetrating experimental designs and analysis methods that we can.
All techniques attempt to understand the brain at different spatial and temporal scales. The brain is indeed organized across a wide range of spatial and temporal scales, and it’s likely we need to have an understanding of all of them to truly “understand” the brain.
Discovery (i.e. non-hypothesis driven) science is growing in scope and insight as our databases grow in number and in complementary data.
Lastly, what the heck? Why would an Editor-In-Chief of a journal choose to publicly rant about an entire field?! What does it gain? Let’s have a respectful discussion about how we can make the science better.
Defending Brain Mapping, fMRI, and Discovery Science: A Rebuttal to Editorial (Brain, Volume 143, Issue 4, April 2020, Page 1045) – Revision 1
Vince Calhoun1 and Peter Bandettini2
1Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA.
2National Institute of Mental Health
In his editorial in Brain (Volume 143, Issue 4, April 2020, Page 1045), Dr. Dimitri Kullmann presents an emotive and uninformed set of criticisms about research where “…the route to clinical application or to improved understanding of disease mechanisms is very difficult to infer…” The editorial starts with a criticism about a small number of submissions, then quickly pivots to broadly criticize discovery science, brain mapping, and the entire fMRI field: “Such manuscripts disproportionately report on functional MRI in groups of patients without a discernible hypothesis. Showing that activation patterns or functional connectivity motifs differ significantly is, on its own, insufficient justification to occupy space in Brain.”
The description of activity patterns and their differences between populations and even individuals is fundamental in characterizing and understanding how the healthy brain is organized, how it changes, and how it varies with disease – often leading directly to advances in clinical diagnosis and treatment (Matthews et al., 2006). The first such demonstrations were over 20 years ago with presurgical mapping of individual patients (Silva et al., 2018). Functional MRI is perfectly capable of obtaining results in individual subjects (Dubois and Adolphs, 2016). These maps are windows into the systems-level organization of the brain that inform hypotheses generated within this specific spatial and temporal scale. The brain is clearly organized across a wide range of temporal and spatial scales – with no one scale yet emerging as the “most” informative (Lewis et al., 2015).
Dr. Kullmann implies in the above statement that only hypothesis-driven studies are legitimate. This view dismisses out of hand the value of discovery science, which casts a wide and effective net in gathering and making sense of the large amounts of data being collected and pooled (Poldrack et al., 2013). In this age of large neuroscience data repositories, discovery science research can be deeply informative (Miller et al., 2016). Both hypothesis-driven and discovery science have importance and significance.
Finally, in his opening salvo, he sets up his attack on fMRI: “Given that functional MRI is ∼30 years old and continues to divert many talented young researchers from careers in other fields of translational neuroscience, it is worth reiterating two of the most troubling limitations of the method…” The author, who is also the editor-in-chief of Brain, sees fMRI research as problematic not only because a disproportionately large number of its studies report group differences and are not hypothesis-driven, but also because it has been diverting all the good young talent from more promising approaches. The petty lament about diverted young talent reveals a degree of cynicism about the natural and fair process by which the best science reveals itself and attracts good people. It implies that young scientists are somehow being misled into wasting their brain power on fMRI rather than naturally gravitating towards the best science.
His “most troubling limitations of the method” are two hackneyed criticisms of fMRI that suggest that for the past 30 years he has not been following the fMRI literature published worldwide and in his own journal. Kullmann’s two primary criticisms of fMRI are: “First, the fundamental relationship between the blood oxygenation level-dependent (BOLD) signal and neuronal computations remains a complete mystery.” and “Second, effect sizes are quasi-impossible to infer, leading to an anomaly in science where statistical significance remains the only metric reported.”
Both of these criticisms, to the degree that they are valid, can apply to all neuroscience methods to various degrees. The first criticism is partially true, as the relationship between ANY measure of neuronal firing or related physiology and neuronal computations IS still pretty much a complete mystery. While theoretical neuroscience is making rapid progress, we still do not know what a neuronal computation would look like no matter what measurement we observe. However, the relationship between neuronal activity and fMRI signal changes is far from a complete mystery; rather, it has been extensively studied (Logothetis, 2003; Ma et al., 2016). While this relationship is imperfectly understood, literally hundreds of papers have established the relationship between localized hemodynamic changes and neuronal activity, measured using a multitude of other modalities. Nearly all cross-modal verification has provided strong confirmation that where and when neuronal activity changes, hemodynamic changes occur – in proportion to the degree of neuronal activity.
While inferences about brain connectivity from measures of temporal correlation have been supported by electrophysiologic measures, they carry inherent assumptions about the degree to which synchronized neuronal activity is driving the fMRI-based connectivity, as well as a degree of uncertainty about what is meant by “connectivity.” It has never been implied that functional connectivity gives an unbiased estimate of information transfer across regions. Furthermore, this issue has little to do with fMRI. Functional connectivity – as implied by temporal covariance – is a commonly used metric in all neurophysiology studies.
Functional MRI-based measures of “connectivity” have been demonstrated to clearly and consistently show correspondence with differences in behavior and traits of populations and individuals (Finn et al., 2015; Finn et al., 2018; Finn et al., 2020). These data, while not fully understood, and thus not yet perfectly interpretable, are beginning to inform systems-level network models with increasing levels of sophistication (Bertolero and Bassett, 2020).
Certainly, issues related to spatially and temporally confounding effects of larger vascular and other factors continue to be addressed. Sound experimental design, analysis, and interpretation can take these factors into account, allowing useful and meaningful information on functional organization, connectivity, and dynamics to be derived. Acquisition and processing strategies involving functional contrast manipulations and normalization approaches have effectively mitigated these vascular confounds (Menon, 2012). Most of these approaches have been known for over 20 years, yet until recently we didn’t have hardware that would enable us to use these methods broadly and robustly.
In contrast to what is claimed in the editorial, high field allows substantial reduction of large blood vessel and “draining vein” effects, thanks to the higher sensitivity at high field enabling scientists to use contrast manipulations more exclusively sensitive to small vessel and capillary effects (Polimeni and Uludag, 2018). Hundreds of ultra-high resolution fMRI studies are revealing cortical depth-dependent activation that shows promise in informing feedback vs. feedforward connections (Huber et al., 2017; Huber et al., 2018; Finn et al., 2019; Huber et al., 2020).
Regarding the second criticism, involving effect sizes: in stark contrast to the criticism in Dr. Kullmann’s editorial, effect sizes in fMRI are quite straightforward to compute using standard approaches and are very often reported. In fact, one can estimate prediction accuracy relative to the noise ceiling. What is challenging is that there are many different fMRI-related variables that could be utilized. One might compare voxels, regions, patterns of activation, connectivity measures, or dynamics using an array of functional contrasts including blood flow, oxygenation, or blood volume. In fact, one can fit models under one set of conditions and test them under another set of conditions to look at generalization. Thus, there are many different types of effects, depending on what is of interest. Rather than a weakness, this is a powerful strength of fMRI in that it is so rich and multi-dimensional.
The challenge of properly characterizing and modeling the meaningful signal as well as the noise is an ongoing area of research that is shared by virtually every other brain assessment technique. In fMRI, the challenge is particularly acute because of the wealth and complexity of potential neuronal and physiological information provided. Clinical research in neuroscience generally suffers most from limitations of statistical analysis and predictive modeling because of the limited size of the available clinical data sets and the enormous individual variability in patients and healthy subjects. Again, this is a limitation for all measures, including fMRI. Singling out these issues as if they were specific to fMRI is indicative of a narrow and biased perspective. Dr. Kullmann is effectively stating that fMRI is indeed different from all the rest – a particularly efficient generator of a disproportionately high fraction of poor and useless studies. This perspective is cynical and wrong, and it ignores that ALL modalities have their limits and associated bad science, and that ALL modalities have their range of questions that they can appropriately ask.
Dr. Kullmann’s editorial oddly backpedals near the end. He does admit that: “This is not to dismiss the potential importance of the method when used with care and with a priori hypotheses, and in rare cases functional MRI has found a clinical role. One such application is in diagnosing consciousness in patients with cognitive-motor dissociation.” He then goes on to praise one researcher, Dr. Adrian Owen, who has pioneered fMRI use in clinical settings with “locked-in” patients. The work he refers to in this article and the work of Dr. Owen are both outstanding; however, the perspective verbalized by Dr. Kullmann here is breathtaking, as there are literally thousands of papers of similar quality and hundreds of similarly accomplished and pioneering researchers in fMRI.
In summary, we argue that the location and timing of brain activity on the scales that fMRI allows is useful for both understanding the brain and aiding clinical practice. One just has to take a more in-depth view of the literature and growth of fMRI over the past 30 years to appreciate the impact it has had. His implication that most fMRI users are misguided appears to dismiss the flawed yet powerful process of peer review in deciding, in the long run, what the most fruitful research methods are. His specific criticisms of fMRI are incorrect, as they bring up legitimate challenges but completely fail to appreciate how the field has dealt – and continues to effectively deal – with them. These two criticisms also fail to acknowledge that limits in interpreting any measurements are common to all other brain assessment techniques – imaging or otherwise. Lastly, his highlighting of a single researcher and study in this issue of Brain is myopic, as he appears to imply that these are the extreme exceptions – inferred from his earlier statements – rather than simply examples of a high fraction of outstanding fMRI papers. He mentions the value of hypothesis-driven studies without appreciating the growing literature of discovery science studies.
Functional MRI is a tool and not a catalyst for categorically mediocre science. How it is used is determined by the skill of the researcher. The literature is filled with examples of how fMRI has been used with inspiring skill and insight to penetrate fundamental questions of brain organization and reveal subtle, meaningful, and actionable differences between clinical populations and individuals. Functional MRI is advancing in sophistication at a very rapid rate, allowing us to better ask fundamental questions about the brain, more deeply interpret its data, and advance its clinical utility. Any argument that an entire modality should be categorically dismissed in any manner is troubling and should in principle be strongly rebuffed.
Bertolero MA, Bassett DS. On the Nature of Explanations Offered by Network Science: A Perspective From and for Practicing Neuroscientists. Top Cogn Sci 2020.
Dubois J, Adolphs R. Building a Science of Individual Differences from fMRI. Trends Cogn Sci 2016; 20(6): 425-43.
Finn ES, Corlett PR, Chen G, Bandettini PA, Constable RT. Trait paranoia shapes inter-subject synchrony in brain activity during an ambiguous social narrative. Nat Commun 2018; 9(1): 2043.
Finn ES, Glerean E, Khojandi AY, Nielson D, Molfese PJ, Handwerker DA, et al. Idiosynchrony: From shared responses to individual differences during naturalistic neuroimaging. NeuroImage 2020; 215: 116828.
Finn ES, Huber L, Jangraw DC, Molfese PJ, Bandettini PA. Layer-dependent activity in human prefrontal cortex during working memory. Nat Neurosci 2019; 22(10): 1687-95.
Finn ES, Shen X, Scheinost D, Rosenberg MD, Huang J, Chun MM, et al. Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nat Neurosci 2015; 18(11): 1664-71.
Huber L, Finn ES, Chai Y, Goebel R, Stirnberg R, Stocker T, et al. Layer-dependent functional connectivity methods. Prog Neurobiol 2020: 101835.
Huber L, Handwerker DA, Jangraw DC, Chen G, Hall A, Stüber C, et al. High-Resolution CBV-fMRI Allows Mapping of Laminar Activity and Connectivity of Cortical Input and Output in Human M1. Neuron 2017; 96(6): 1253-63.e7.
Huber L, Ivanov D, Handwerker DA, Marrett S, Guidi M, Uludağ K, et al. Techniques for blood volume fMRI with VASO: From low-resolution mapping towards sub-millimeter layer-dependent applications. NeuroImage 2018; 164: 131-43.
Lewis CM, Bosman CA, Fries P. Recording of brain activity across spatial scales. Curr Opin Neurobiol 2015; 32: 68-77.
Logothetis NK. The underpinnings of the BOLD functional magnetic resonance imaging signal. J Neurosci 2003; 23(10): 3963-71.
Ma Y, Shaik MA, Kozberg MG, Kim SH, Portes JP, Timerman D, et al. Resting-state hemodynamics are spatiotemporally coupled to synchronized and symmetric neural activity in excitatory neurons. Proc Natl Acad Sci U S A 2016; 113(52): E8463-E71.
Matthews PM, Honey GD, Bullmore ET. Applications of fMRI in translational medicine and clinical practice. Nat Rev Neurosci 2006; 7(9): 732-44.
Menon RS. The great brain versus vein debate. NeuroImage 2012; 62(2): 970-4.
Miller KL, Alfaro-Almagro F, Bangerter NK, Thomas DL, Yacoub E, Xu J, et al. Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nat Neurosci 2016; 19(11): 1523-36.
Poldrack RA, Barch DM, Mitchell JP, Wager TD, Wagner AD, Devlin JT, et al. Toward open sharing of task-based fMRI data: the OpenfMRI project. Front Neuroinform 2013; 7: 12.
Polimeni JR, Uludag K. Neuroimaging with ultra-high field MRI: Present and future. NeuroImage 2018; 168: 1-6.
Silva MA, See AP, Essayed WI, Golby AJ, Tie Y. Challenges and techniques for presurgical brain mapping with functional MRI. Neuroimage Clin 2018; 17: 794-803.
This blog post was initiated by Dr. Vince Calhoun, director of the Tri-institutional Center for Translational Research in Neuroimaging and Data Science, a joint center of Georgia State University, Georgia Institute of Technology, and Emory University. Vince shot me an email asking if I had seen this editorial in Brain by Dimitri Kullmann (Brain, Volume 143, Issue 4, April 2020, Page 1045) https://academic.oup.com/brain/article/143/4/1045/5823483. He also suggested that we write something together as a counterpoint. I heartily agreed. While there are many valid criticisms of fMRI and brain mapping in general, this particular editorial struck me as uninformed, myopic, and cynical – thus requiring a response. I usually err on the side of giving the benefit of the doubt when reading or hearing a different opinion, but my first visceral reaction to reading this article was simply: “Wow…” Vince and I quickly got to work and within a week submitted the below counterpoint to Brain.
Rebuttal to Editorial (Brain, Volume 143, Issue 4, April 2020, Page 1045)
Vince Calhoun1 and Peter Bandettini2
1Tri-institutional Center for Translational Research in Neuroimaging and Data Science: Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA.
2National Institute of Mental Health
In his editorial in Brain (Volume 143, Issue 4, April 2020, Page 1045), Dr. Dimitri Kullmann takes several cheap shots at fMRI as a field and at most of the research findings that it produces. He argues that fMRI-based findings describing functional differences in activation or connectivity have no place in Brain and that fMRI functional contrast is fundamentally flawed. He rants that fMRI is drawing away talented young researchers whose time and energy would be better spent using other modalities. This salvo misses the mark, however, as it is woefully uninformed and incorrect.
Dr. Kullmann seems to equate brain mapping itself with flawed and non-hypothesis-driven research: “Showing that activation patterns or functional connectivity motifs differ significantly is, on its own, insufficient justification to occupy space in Brain.” There is no need to argue the utility of brain mapping, as the thousands of outstanding papers in the literature speak for themselves. One just has to attend the Organization for Human Brain Mapping or Society for Neuroscience meetings to appreciate the traction fMRI has gained in generating insight into the brain organization of healthy and clinical subjects.
Dimitri Kullmann’s central premise is that somehow the science performed with fMRI, to a greater degree than with other modalities, is ineffective in penetrating meaningful neuroscience questions or leading to clinical applications – something akin to doing astronomy with a microscope. He states two reasons. The first: “… the fundamental relationship between the blood oxygenation level-dependent (BOLD) signal and neuronal computations remains a complete mystery. As a direct consequence, it is extremely difficult to conclude that functional connectivity as measured by functional MRI genuinely measures information exchange between brain regions.” This is partially true, as the relationship between ANY measure of neuronal firing or related physiology and neuronal computations IS a complete mystery. We really do not know what a neuronal computation would even look like, no matter what is measured. However, the relationship between neuronal activity and fMRI signal changes is far from a complete mystery; rather, it has been extensively studied. While this relationship is imperfectly understood, literally hundreds of papers have established the relationship between localized hemodynamic changes and neuronal activity, measured using a multitude of other modalities. Nearly all cross-modal verification has provided strong confirmation that where and when neuronal activity changes, hemodynamic changes occur – in proportion to the degree of neuronal activity. Certainly, issues related to spatially and temporally confounding effects of larger vascular and other factors are still being addressed, yet sound experimental design, analysis, and interpretation can take these limits into account, allowing useful information to be derived. Additionally, multiple functional contrast manipulations and normalization approaches have reduced these vascular confounds. In contrast to what is claimed in the editorial, high field in fact does allow mitigation of large blood vessel effects, thanks to higher sensitivity that enables scientists to use contrast manipulations less sensitive to large vein effects. Hundreds of ultra-high resolution fMRI studies are revealing cortical depth-dependent activation that shows promise in informing feedback vs. feedforward connections.
The second of his reasons: “…effect sizes are quasi-impossible to infer, leading to an anomaly in science where statistical significance remains the only metric reported.” Effect sizes in fMRI are in fact quite straightforward to compute using standard approaches and are very often reported. What is challenging is that there are many different fMRI-related variables that could be utilized. One might compare voxels, regions, patterns of activation, connectivity measures, or dynamics using an array of functional contrasts including blood flow, oxygenation, or blood volume. Thus, there are many different types of effects, depending on what is of interest. Rather than a weakness, this is a powerful strength of fMRI in that it is so rich and multi-dimensional.
The challenge of properly characterizing and modeling the meaningful signal as well as the noise is an ongoing point of research that is, in fact, shared by virtually every other brain assessment technique. In fMRI, the challenge is particularly acute because of the wealth and complexity of potential neuronal and physiological information provided. Singling out these issues as if they were specific to fMRI is indicative of a very narrow and perhaps biased perspective. Dr. Kullmann is effectively stating that fMRI is indeed different from all the rest – a particularly efficient generator of a disproportionately high fraction of poor and useless studies. This perspective is cynical and wrong, and it ignores that ALL modalities have their limits and associated bad science, and that ALL modalities have their range of questions that they can appropriately ask.
Dr. Kullmann’s editorial oddly backpedals near the end. He does admit that: “This is not to dismiss the potential importance of the method when used with care and with a priori hypotheses, and in rare cases functional MRI has found a clinical role. One such application is in diagnosing consciousness in patients with cognitive-motor dissociation.” He then goes on to praise one researcher, Dr. Adrian Owen, who has pioneered fMRI use in clinical settings with “locked-in” patients. The work he refers to in this article and the work of Dr. Owen are both outstanding; however, the perspective verbalized by Dr. Kullmann here is breathtaking, as there are literally thousands of papers of similar quality and hundreds of similarly accomplished and pioneering researchers in fMRI.
An additional point to emphasize in this age of big neuroscience data is that the editorial also expresses a cynicism against science that generates results that it cannot fully seal into a tight-fitting story. Describing a unique activation or connectivity pattern with a specific paradigm, or demonstrating differences between populations or even individuals, while not always groundbreaking, usually advances our understanding of the brain and can lead to clinical insights or even advances in clinical practice. Dr. Kullmann implies that the only legitimate use of fMRI is in a hypothesis-driven study. This view dismisses out of hand the value of discovery science, which casts a wide and effective net in gathering and making sense of large amounts of data. Both hypothesis-driven and discovery science have importance and significance.
In summary, Dr. Kullmann argues that studies that compare activity or connectivity maps, as many fMRI studies do, have no place in Brain. He claims that fMRI attracts too many talented researchers at the expense of better science performed with other tools. He describes two aspects of fMRI – the vascular origin of the signal and the reporting of statistical measures – as fatal flaws of the technique. However, he states that there are very rare exceptions – certain rare people are doing fMRI well.
We argue that the location and timing of brain activity on the scales that fMRI allows is informative and useful for both understanding the brain and clinical practice. One just has to take a more in-depth view of the literature and growth of fMRI over the past 30 years to appreciate the impact it has had. His cynicism that most fMRI users are misguided appears to dismiss the flawed yet powerful process of peer review. His specific criticisms of fMRI are incorrect, as they bring up legitimate challenges but completely fail to appreciate how the field has dealt – and continues to effectively deal – with them. These two criticisms also fail to acknowledge that limits in interpreting measurements are inherent to all other brain assessment techniques – imaging or otherwise. Lastly, his highlighting of a single researcher and study in this issue of Brain is myopic, as he appears to imply that these are the extreme exceptions – inferred from his earlier statements – rather than simply examples of a high fraction of outstanding fMRI papers. He mentions the value of hypothesis-driven studies without appreciating the vast literature of hypothesis-driven fMRI studies nor acknowledging the power of discovery science.
Functional MRI is a tool, not a catalyst for categorically mediocre science. How it is used is determined by the skill of the researcher. The literature is filled with examples of how fMRI has been used with inspiring skill and insight to penetrate fundamental questions of brain organization and reveal subtle, meaningful, and actionable differences between clinical populations and individuals. Functional MRI is advancing in sophistication at a rapid rate, allowing us to ask deeper questions about the brain, to interpret its data more fully, and to advance its clinical utility. Any argument that an entire modality should be categorically dismissed is troubling and should be strongly rebuffed.
For decades, the scientific community has witnessed a growing trend towards online collaboration, publishing, and communication. The next natural step, begun over the past decade, has been the emergence of virtual lectures, workshops, and conferences. My first virtual workshop took place back in about 2011, when I was asked to co-moderate a virtual session of about 10 talks on MRI methods and neurophysiology. It was put on jointly by the International Society for Magnetic Resonance in Medicine (ISMRM) and the Organization for Human Brain Mapping (OHBM) and was considered an innovative experiment at the time. I recall running it from a hotel room with spotty internet in Los Angeles, as I was also participating in an in-person workshop at UCLA at the same time. It went smoothly: the slides displayed well, the speakers came through clearly, and, at the end of each talk, participants were able to ask questions by text, which I could read to the presenter. It was easy, and perhaps a bit awkward and new, but it definitely worked and was clearly useful.
Since then, the virtual trend has picked up momentum. In the past couple of years, most talks that I attended at the NIH were streamed simultaneously using Webex. Recently, innovative use of Twitter has enabled virtual conferences consisting of Twitter feeds. An example of such Twitter-based conferences is #BrainTC, which started in 2017 and is now held annually.
Building on the idea that started with #BrainTC, Aina Puce spearheaded OHBMEquinoX, or OHBMx. This “conference” took place on the spring equinox and involved sequential tweets from speakers and presenters around the world. It started in Asia and Australia and worked its way around the globe with the sun on this first day of spring, when the sun was directly above the equator and the entire planet had precisely the same number of hours of daylight.
Recently, conferences with live-streamed talks have been assembled in record time, with little cost overhead, providing a virtual conference experience to audiences numbering in the thousands at extremely low or even no registration cost. An outstanding recent example of a successful online conference is neuromatch.io. An insightful blog post summarized the logistics of putting it on.
Today, the pandemic has thrown in-person conference planning, at least for the spring and summer of 2020, into chaos. The two societies in which I am most invested, ISMRM and OHBM, have taken different approaches to the disruption of their meetings. ISMRM has chosen to delay its meeting to August, which will hopefully be enough time for the current situation to return to normal; however, given the uncertainty of the precise timeline, even this delayed in-person meeting may have to be cancelled. OHBM has chosen to make this year’s conference virtual and is currently scrambling to organize it, aiming for the same start date in June that it had originally planned.
What we will see in June with OHBM will be a spectacular, ambitious, and extremely educational experiment. While we will be getting up to date on the science, most of us will also be making our first foray into a multi-day, highly attended, highly multi-faceted conference that was essentially organized in a couple of months.
Virtual conferences, now catalyzed
by COVID-19 constraints, are here to stay. These are the very early days.
Formats and capabilities of virtual conferences will be evolving for quite some
time. Now is the time to experiment with everything, embracing all the
available online technology as it evolves. Below is an incomplete list of the
advantages, disadvantages, and challenges of virtual conferences, as I see
them.
What are the advantages of a virtual conference?
1. Low meeting cost. There is no overhead cost to rent a venue. Certainly, there are some costs in hosting websites; however, these are a fraction of the price of renting conference halls.
2. No travel costs. Attendees incur no travel costs, time, or energy, and there is of course a corresponding reduction in carbon emissions from international travel. Virtual conferences also increase inclusivity for those who cannot afford to travel to conferences, potentially opening up access to a much more diverse audience, with corresponding benefits to everyone.
3. Flexibility. Because there is no huge venue cost, the meeting can last as long or as short as necessary, and can take place for 2 hours a day or for several hours interspersed throughout the day to accommodate those in other time zones. It can last the usual 4 or 5 days or be extended to three weeks if necessary. There will likely be many discussions on what the optimal virtual conference timing and spacing should be. We are in the very early days here.
4. Ease of access to information within the conference. With a well-designed website, one can, hopefully, join any session with a single click. Poster viewing and discussion, once the logistics are fully worked out, might be efficient and quick. Ideally, the poster “browsing” experience will be preserved. Information on poster topics, speakers, and perhaps a large number of other metrics will be cross-referenced and categorized so that it’s easy to plan a detailed schedule. One might even be able to explore a conference long after it is completed, selecting the most-viewed talks and posters, something like searching articles using citations as a metric. Viewers might also be able to rate each talk or poster that they see, adding to the searchable information (a minimal sketch of such a searchable catalog follows this list).
5. Ease of preparation and presentation. You can present from your home and prepare up to the last minute.
6. Direct archival. It should be trivial to directly archive the talks and posters for future viewing, so that anyone who doesn’t need real-time interaction, or who misses the live feed, can participate in the conference at any time in the future, at their own convenience. This is a huge advantage. Archiving is certainly possible for in-person conferences as well, but it has not yet been achieved in a way that quite represents the conference itself. With a virtual conference, there can be a one-to-one “snapshot” preserving precisely all the information contained in the conference, as it’s already online and available.
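To make the searchable-archive idea above concrete, here is a minimal sketch in Python of a catalog of talks and posters that can be searched by keyword and ranked by views and ratings. Everything here is hypothetical and illustrative: the Presentation and Catalog names, their fields, and the ranking rule are my own assumptions, not any society’s actual platform.

```python
# A minimal sketch of a searchable conference catalog, assuming each talk or
# poster is stored with keywords, a view count, and attendee ratings.
# All names here (Presentation, Catalog) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Presentation:
    title: str
    presenter: str
    keywords: set[str]
    views: int = 0
    ratings: list[int] = field(default_factory=list)  # e.g., 1-5 stars

    @property
    def mean_rating(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

class Catalog:
    def __init__(self) -> None:
        self.items: list[Presentation] = []

    def add(self, item: Presentation) -> None:
        self.items.append(item)

    def search(self, *keywords: str) -> list[Presentation]:
        """Return keyword matches, most viewed and best rated first."""
        wanted = {k.lower() for k in keywords}
        hits = [p for p in self.items
                if wanted & {k.lower() for k in p.keywords}]
        return sorted(hits, key=lambda p: (p.views, p.mean_rating),
                      reverse=True)

# Usage: browse the archive long after the meeting ends.
catalog = Catalog()
catalog.add(Presentation("Resting-state connectivity", "A. Smith",
                         {"fMRI", "connectivity"}, views=850,
                         ratings=[5, 4, 5]))
catalog.add(Presentation("Diffusion tractography", "B. Jones",
                         {"DTI", "tractography"}, views=300, ratings=[4]))
for p in catalog.search("fMRI"):
    print(p.title, p.mean_rating)
```

Ranking by views and mean rating mirrors the citation-like metric mentioned above; a real platform would of course need persistence and much richer metadata.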
What are the disadvantages of a virtual conference?
1. Socialization. To me, the biggest disadvantage is the lack of directly experiencing all the people. Science is a fundamentally human pursuit. We are all human, and what we communicate by our presence at a conference is much more than the science. It’s us, our story, our lives and context. I’ve made many good friends at conferences and look forward to seeing them and catching up every year. We have a shared sense of community that only comes from discussing something in front of a poster or over a beer or dinner. This is the juice of science. At our core, we are all doing what we can to figure stuff out and create interesting things. At a conference, we get a chance to share this with others in real time, gauge their reactions, and get their feedback in ways far more meaningful than anything provided virtually. One can also look at it in terms of information: so much information is transferred during in-person meetings that simply cannot be conveyed in virtual ones. These interactions are what make the conference experience real, enjoyable, and memorable, all of which feeds into the science.
2. Audience experience. Related to the first point is the experience of being part of a massive collective audience. There is nothing like being in a packed auditorium of 2000 people as a leader of the field presents their latest work or their unique perspective. I recall the moment I saw the first preliminary fMRI results presented by Tom Brady at ISMRM. My jaw dropped and I looked at Eric Wong, sitting next to me, in amazement. After the meeting, there was a group of scientists huddled in a circle outside the doors, talking excitedly about the results. FMRI was launched into the world, and everyone felt it and shared that experience. These are the experiences that are burned into people’s memories and fuel their excitement.
3. No room for randomness. At an in-person conference, one of the joys is experiencing serendipity first-hand: chance meetings with colleagues, or passing by a poster that you didn’t anticipate. This randomness is everywhere at a conference venue and is perhaps more important than we realize. There may, however, be clever ways to engineer a degree of randomness into a virtual conference experience.
4. No travel. At least to me, one of the perks of science is the travel. Physically traveling to another lab, city, country, or continent is a deeply immersive experience that enriches our lives and perspectives. While it can turn into a chore at times, it is almost always worth it. The education and perspective that a scientist gains about our world community are immense and important.
5. Distraction. Going to a conference is a commitment. The problem I always have when a conference is in my own city is that, as much as I try to fully commit to it, I am only half there. The other half is attending to work, family, and the many other mundane and important things that rise up and demand my attention for no other reason than that I am still at home and dealing with work. Going to a conference separates one from that life, as much as can be done in this connected world. Staying in a hotel or an Airbnb is a mixed bag, sometimes delightful and sometimes uncomfortable. However, once at the conference, you are there. You assess your new surroundings, adapt, and figure out a slew of minor logistics. You immerse yourself in the conference experience, which is, on some level, rejuvenating: a break from the daily grind. A virtual conference is experienced from your home or office and can be filled with the distractions of your regular routine pulling you back. The information might be coming at you, but the chances are that you are multi-tasking and interrupted. The engagement level during virtual sessions, and importantly after the sessions are over, is lower. Once you leave the virtual conference, you are immediately surrounded by your regular routine. This lack of time away from work and home life is, I think, also a lost chance to ruminate on and discuss new ideas outside of the regular context.
What are the challenges?
1. Posters. Posters are the bread and butter of “real” conferences. I’m perhaps a bit old school in that I think electronic posters presented at “real” conferences are absolutely awful. There’s no way to efficiently “scan” electronic posters as you walk by a lineup of computer screens; you have to know what you’re looking for and commit fully to looking at it. There’s a visceral efficiency and pleasure in walking up and down the aisles of posters, scanning, pausing, and reading enough to get the gist, or stopping for extended periods to dig in. Poster sessions are full of randomness and serendipity: we find interesting posters that we were not even looking for, we see colleagues, and we have opportunities to chat and discuss. Getting posters right in virtual conferences will likely be one of the biggest challenges. I might suggest creating a virtual poster hall with full, multi-panel posters as the key element of information. Even the difference between clicking on a title and scrolling through the actual posters in full multi-panel glory will make a massive difference in the experience. These poster halls, with some thought, can be constructed for the attendee to search and browse. Poster presentations can be live, with the presenter there to give an overview and answer questions. This will require massive parallel streaming but can be done. An alternative is to have the poster up with a pre-recorded 3-minute audio presentation, followed by a section for questions and answers, with the poster presenter present live to answer text questions as they arise, and with the discussion text preserved alongside the poster for later viewing.
2. Perspective. The challenge is keeping the navigational overhead low and the whole-meeting perspective high. With large meetings, there is of course a massive amount of information transferred, more than any one individual can take in. Meetings like SfN, with 30K people, are overwhelming, and OHBM and ISMRM, with 3K to 7K people, are approaching this level. The key to making these meetings useful is giving the attendee the means to gain perspective and develop a strategy for delving in. Simple-to-follow schedules with enough information but not too much, customized schedule-creation searches based on a wide range of keywords, and flags for overlap are necessary (a sketch of such a schedule builder follows this list). The room for innovation and flexibility is likely higher at virtual conferences than at in-person ones, as there are fewer constraints on temporal overlap.
3. Engagement. Fully engaging the listener is always a challenge; with a virtual conference it’s even more so. Sitting at a computer screen and listening to a talk can get tedious quickly. Ways to creatively engage the listener, such as real-time feedback or questions to the audience, might be useful to try. Conveying the size or relative interest of the audience effectively, with clever graphics, might also help create this crowd experience.
4. Socializing. Neuromatch.io included a socializing aspect in its conference. There might be separate rooms on specific scientific themes for free discussion, perhaps led by a moderator. There might also simply be rooms for completely theme-less socializing or discussion about any aspect of the meeting. Nothing will compare to real meetings in this regard, but there are opportunities to exploit the ease of accessing information about the meeting virtually to enrich these social gatherings.
5. Randomness. As I mentioned above, randomness and serendipity play a large role in making a meeting successful and worth attending. Defining a schedule and sticking to it is certainly one way of attacking a meeting, but others might want to sample and browse randomly, and to run into people by chance. It might be possible to build this into the meeting scheduling tool, but designing opportunities for serendipity into the website experience itself should be given careful thought. One could set aside a time to view random talks or posters, or to meet random people, based on a range of keywords (the sketch after this list includes a simple serendipity sampler along these lines).
6. Scalability. It would be useful to construct virtual conferences from scalable elements, such as poster sessions, keynotes, discussions, and proffered talks, that could start to become standardized, increasing ease of access and familiarity across conferences of different sizes, from 20 to 200,000 attendees. Virtual meeting sizes will likely vary more widely than those of “real” meetings, yet will generally be larger.
7. Costs vs. charges. This will of course be determined in a bottom-up manner by ordinary economic principles; however, in these early days, it’s useful for meeting organizers to work through a set of principles on what to charge, or whether to make a profit at all. If the web elements of virtual meetings are open access, many of the costs could disappear. However, for the regular meetings of established societies, there will always be a need to support the administration that maintains the infrastructure.
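To make the schedule-creation and serendipity ideas above concrete, here is a minimal sketch in Python, assuming each session is stored with keywords and start/end times. All of the names here (Session, build_schedule, flag_overlaps, serendipity_sample) are hypothetical illustrations of challenges 2 and 5, not any existing conference platform’s API.

```python
# A minimal sketch of keyword-based schedule creation with overlap flags
# (challenge 2) and random sampling for serendipity (challenge 5).
import random
from dataclasses import dataclass
from datetime import datetime
from itertools import combinations

@dataclass
class Session:
    title: str
    keywords: set[str]
    start: datetime
    end: datetime

def build_schedule(sessions: list[Session],
                   interests: set[str]) -> list[Session]:
    """Select sessions matching the attendee's keywords, in time order."""
    wanted = {k.lower() for k in interests}
    picks = [s for s in sessions
             if wanted & {k.lower() for k in s.keywords}]
    return sorted(picks, key=lambda s: s.start)

def flag_overlaps(schedule: list[Session]) -> list[tuple[Session, Session]]:
    """Flag every pair of chosen sessions whose time windows collide."""
    return [(a, b) for a, b in combinations(schedule, 2)
            if a.start < b.end and b.start < a.end]

def serendipity_sample(sessions: list[Session], interests: set[str],
                       k: int = 3) -> list[Session]:
    """Pick a few random keyword-matched sessions to browse by chance."""
    pool = build_schedule(sessions, interests)
    return random.sample(pool, min(k, len(pool)))

# Usage: one attendee's keyword-driven schedule, with collisions flagged.
sessions = [
    Session("fMRI acquisition methods", {"fMRI", "methods"},
            datetime(2020, 6, 23, 9, 0), datetime(2020, 6, 23, 10, 0)),
    Session("Connectivity dynamics", {"fMRI", "connectivity"},
            datetime(2020, 6, 23, 9, 30), datetime(2020, 6, 23, 10, 30)),
]
mine = build_schedule(sessions, {"fMRI"})
for a, b in flag_overlaps(mine):
    print(f"Overlap: {a.title} / {b.title}")
```

The overlap test (a.start < b.end and b.start < a.end) is the standard interval-intersection check; a real scheduler would add rooms, time zones, and richer ranking on top of this.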
Beyond Either-Or:
Once the unique advantages of virtual conferences are realized, I imagine that even as in-person conferences start up again, there will remain a virtual component, allowing a much higher number and wider range of participants. These conferences will perhaps simultaneously offer something to everyone, going well beyond simply keeping talks and posters archived for access, as is the current practice.
While I have helped organize
meetings for almost three decades, I have not yet been part of organizing a
virtual meeting, so in this area, I don’t have much experience. I am certain
that most thoughts expressed here have been thought through and discussed many
times already. I welcome any discussion on points that I might have wrong or
aspects I may have missed.
Virtual conferences are certainly going to pop up at an increasing rate, throwing open a relatively unexplored space for creativity within the new constraints and opportunities of this venue. I am very much looking forward to seeing them evolve and grow, and to helping as best I can in the process.