The past decade has brought us jaw-dropping insights about the hidden workings of our brains, in part thanks to a popular brain scan technique called fMRI. But a major new study has revealed that fMRI interpretation has a serious flaw, one that could mean that much of what we’ve learned about our brains this way might need a second look.
On TV and in movies, we’ve all seen doctors stick an X-ray up on the lightbox and play out a dramatic scene: “What’s that dark spot, doctor?” “Hm…”
In reality, though, a modern medical scan contains so much data that no single pair of doctor’s eyes could possibly interpret it. The brain scan known as fMRI, for functional magnetic resonance imaging, produces a massive data set that can only be understood by custom data analysis software. Armed with this analysis, neuroscientists have used the fMRI scan to produce a series of paradigm-shifting discoveries about our brains.
Now, an unsettling new report, which is causing waves in the neuroscience community, suggests that fMRI’s custom software can be deeply flawed — calling into question many of the most exciting findings in recent neuroscience.
The problem researchers have uncovered is simple: the computer programs designed to sift through the images produced by fMRI scans tend to suggest differences in brain activity where none exist. Scans of humans at rest, not thinking about anything in particular and not doing anything interesting, can yield spurious signs of differing brain activity. The technique has even indicated brain activity in a dead salmon, whose stilled brain lit up an fMRI scan as if it were somehow still dreaming of a spawning run.
The report throws into question the results of some portion of the more than 40,000 studies that have been conducted using fMRI, studies that plumb the brainy depths of everything from free will to fear. And scientists are not quite sure how to recover.
“It’s impossible to know how many fMRI studies are wrong, since we do not have access to the original data,” says computer scientist Anders Eklund of Linköping University in Sweden, who conducted the analysis.
How it should have worked: Start by signing up subjects. Scan their brains while they rest inside an MRI machine. Then scan their brains again while they are exposed to, say, pictures of spiders. Those subjects who are afraid of spiders will have blood rush to the regions of the brain involved in thinking and feeling fear, because such thoughts and feelings are believed to require more oxygen. With the help of a computer program, the MRI machine registers changes in hemoglobin, the iron-rich molecule that makes blood red and carries oxygen from place to place. (That’s the functional in fMRI.) The scan detects whether the hemoglobin molecules in a given place in the brain are still carrying oxygen or not, based on how the molecules respond to the scanner’s powerful magnetic field. Scan enough brains and see how the fearful differ from the fearless, and perhaps you can identify the brain regions or structures associated with thinking or feeling fear.
That’s the theory, anyway. In order to detect such differences in brain activity, it would be best to scan a large number of brains, but the difficulty and expense often make this impossible. A single MRI scan can cost around $2,600, according to a 2014 NerdWallet analysis. Further, the differences in blood flow are often tiny. And then there’s the fact that computer programs have to sift through the images of the 1,200 or so cubic centimeters of gelatinous tissue that make up each individual brain and compare them to others, a big data analysis challenge.
Eklund’s report shows that the assumptions behind the main computer programs used to sift such big fMRI data have flaws, as turned up by nearly 3 million random evaluations of the resting brain scans of 499 volunteers from Cambridge, Massachusetts; Beijing; and Oulu, Finland. One program turned out to have a 15-year-old coding error (which has now been fixed) that caused it to detect too much brain activity. This highlights the challenge of researchers working with computer code that they cannot check themselves, a challenge hardly confined to neuroscience.
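The logic of those random evaluations can be sketched in miniature. Because every subject is simply resting, any two randomly chosen groups should differ only by chance, so the fraction of random splits that nonetheless yield a “significant” difference estimates the software’s real false-positive rate. The toy simulation below uses made-up noise instead of real scans and a plain voxelwise t-test instead of the packages Eklund actually evaluated (SPM, FSL, AFNI), so it only illustrates the baseline problem that fMRI corrections are meant to solve; numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 40   # resting-state "scans", one value per voxel per subject
n_voxels = 1000   # toy brain of 1,000 voxels (a real scan has far more)
data = rng.normal(size=(n_subjects, n_voxels))  # pure noise: no true effect

n_splits = 500    # random group analyses, a tiny echo of Eklund's ~3 million
alpha = 0.05
false_positives = 0
for _ in range(n_splits):
    # Randomly assign the resting subjects to two fake "conditions"
    perm = rng.permutation(n_subjects)
    g1, g2 = data[perm[:20]], data[perm[20:]]
    # Two-sample t-test at every voxel
    _, p = stats.ttest_ind(g1, g2, axis=0)
    # Familywise error: did ANY voxel cross the uncorrected threshold?
    if (p < alpha).any():
        false_positives += 1

print(f"Familywise false-positive rate: {false_positives / n_splits:.0%}")
```

With 1,000 voxels tested at p < 0.05 and no correction, nearly every random split “finds” a difference, even though none exists, which is exactly why fMRI software applies multiple-comparison corrections and why flaws in those corrections matter so much.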
The brain is even more complicated than we thought. Worse, Eklund and his colleagues found that all the programs make the same assumptions about how a brain at rest responds, both to the jet-engine roar of the MRI machine itself and to whatever random thoughts and feelings occur in the brain. Those assumptions appear to be wrong. The brain at rest is “actually a bit more complex,” Eklund says.
More specifically, the white matter of the brain appears to be underrepresented in fMRI analyses while another specific part of the brain — the posterior cingulate, a region in the middle of the brain that connects to many other parts — shows up as a “hot spot” of activity. As a result, the programs are more likely to single it out as showing extra activity even when there is no difference. “The reason for this is still unknown,” Eklund says.
Overall, the programs had a false positive rate — detecting a difference where none actually existed — of as much as 70 percent, far above the roughly 5 percent their statistical thresholds were supposed to guarantee.
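To see what that 5 percent target means, the toy null experiment can be rerun with a correction for testing many voxels at once. The sketch below uses a simple Bonferroni correction purely for illustration; the fMRI packages in question use cluster-based corrections that lean on models of the brain’s spatial smoothness, and it is those models that Eklund’s team found wanting.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects, n_voxels, n_analyses = 40, 1000, 500
alpha = 0.05
bonferroni = alpha / n_voxels  # stricter per-voxel threshold

fp_corrected = 0
for _ in range(n_analyses):
    data = rng.normal(size=(n_subjects, n_voxels))  # null data: no real effect
    g1, g2 = data[:20], data[20:]
    _, p = stats.ttest_ind(g1, g2, axis=0)
    if (p < bonferroni).any():   # any voxel surviving the corrected threshold
        fp_corrected += 1

# Expect a familywise rate hovering around the nominal 5 percent
print(f"Corrected familywise rate: {fp_corrected / n_analyses:.1%}")
```

Here the voxels are statistically independent, so the correction works as advertised. Real brains have correlated neighboring voxels, which is why the packages model spatial correlation instead, and why bad assumptions in those models can push the true rate as high as 70 percent.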
Unknown unknowns: This does not mean all fMRI studies are wrong. Co-author and statistician Thomas Nichols of the University of Warwick calculates that some 3,500 studies may be affected by such false positives, and such false positives can never be eliminated entirely. But a survey of 241 recent fMRI papers found 96 that could have even worse false-positive rates than those found in this analysis.
“The paper makes an important criticism,” says Nancy Kanwisher, a neuroscientist at MIT (TED Talk: A neural portrait of the human mind), though she points out that it does not undermine those fMRI studies that do not rely on these computer programs.
Nonetheless, it is worrying. “I think the fallout has yet to be fully evaluated. It appears to apply to quite a few studies, certainly the studies done in a generic way that is the bread-and-butter of fMRI,” says Douglas Greve, a neuroimaging specialist at Massachusetts General Hospital. What’s needed is more scrutiny, Greve suggests.
Another argument for open data. Eklund and his colleagues were only able to discover this methodological flaw thanks to the open sharing of group brain scan data by the 1,000 Functional Connectomes Project. Unfortunately, such sharing of brain scan data is more the exception than the norm, which hinders other researchers attempting to re-create the experiment and replicate the results. Such replication is a cornerstone of the scientific method, ensuring that findings are robust. Eklund, for one, therefore encourages neuroimagers to “share their fMRI data, so that other researchers can replicate their findings and re-analyze the data several years later.” Only then can scientists be sure that the undiscovered activity of the human brain is truly revealed … and that dead salmon are not still dreaming.
ABOUT THE AUTHOR
David Biello is an award-winning journalist writing most often about the environment and energy. His book “The Unnatural World” publishes November 2016. It’s about whether the planet has entered a new geologic age as a result of people’s impacts and, if so, what we should do about this Anthropocene. He also hosts documentaries, such as “Beyond the Light Switch” and the forthcoming “The Ethanol Effect” for PBS. He is the science curator for TED.
Source: The problem with fMRI