

Could Claims That '20 Years of Brain Studies Are Wrong' Be Wrong?

We talked to experts about a PNAS study on fMRIs that went viral, and what it means for what we know about brain injuries in sports.
Photo by Flickr user Neil Conway/CC BY 2.0

Last week, the science internet briefly lit up when a study in the journal PNAS purportedly raised doubts about an entire method of brain research. The study is about a specific type of brain scan, called a functional MRI (or fMRI), which measures brain activation in response to stimuli or tasks. You've probably heard about fMRI studies before. Perhaps when the Daily Mail ran a piece about their use to examine dogs' brains as they recognize people.


The study went viral because it had a real hook in its abstract, as far as scientific abstracts go: "These results question the validity of some 40,000 fMRI studies and may have a large impact on the interpretation of neuroimaging results."

The news headlines seized on this. "Could Brain Research From The Past 15 Years Really Be Wrong?" "20 Years' Worth of FMRI Studies May Be Bunk, Thanks to a Software Error." "A Bug In fMRI Software Could Invalidate 15 Years of Brain Research."


Big if true, as the kids on the internet say. We have learned an awful lot about the brain in the past few decades through fMRIs, including tidbits about traumatic brain injuries, or TBI. Understanding how repeated concussions lead to long-term damage, such as the neurodegenerative disease Chronic Traumatic Encephalopathy (CTE) found in dozens of former football players, is important to reducing the dangers of contact sports, and techniques such as fMRI analysis are among the several tools researchers use. But before we can evaluate the real impact of the PNAS study on that body of knowledge, we first must understand exactly what the study was saying.

Jeff Ware, a fellow at the University of Pennsylvania who specializes in neuroimaging, explained that the study's findings are not terribly surprising for anyone who works in the field, because fMRI is complex and involves physics, electronics, neuroscience, computer science, and statistics, among other disciplines. fMRI measures changes of blood flow in the brain, an "indirect measure of neural activity," as Ware put it. His colleague, Douglas Smith, told me that unlike muscles, which clearly experience increased blood flow when in use, some areas of the brain have more blood flow when used, others not so much.


Researchers wouldn't use an fMRI to diagnose anything in a patient because it's so indirect. It takes extensive analysis of many different scans on many different people to get any kind of usable data. So researchers have to either take a ton of separate measurements into account or corroborate findings with other types of evidence (preferably, they'd do both).

Because of this complexity, software often does much of the data crunching for them. Different software packages and researchers use different types of analyses to get their results. According to Ware, there is no universally accepted approach for turning all those raw fMRI scans into digestible information.

Which brings us back to the PNAS study. It found that a common type of statistical inference used by some fMRI studies, called "cluster correction," could produce far more false positives than researchers had assumed. Fair enough, but did it really invalidate 20 years and 40,000 studies of brain research, as many headlines suggested?

Brain imaging is used to make sense of data generated by fMRI scans. Photo by National Institute of Mental Health, National Institutes of Health/CC BY 2.0

I first contacted Ware, who specializes in neuroimaging for TBI, to find out if the study raises any concerns about what we have learned regarding head trauma in sports. He basically said no, not really, because fMRI studies for TBI had already shifted to a different approach that didn't use cluster correction.

But, he said, there is a type of neuroimaging used a lot for TBI, called diffusion tensor imaging (DTI), that has problems similar to fMRI's. DTI measures water diffusion in the brain, particularly in white matter, to see whether the structure of the brain at the microscopic level is disrupted. It could, in theory, be a way to see the structural damage of TBI before severe symptoms develop. At the very least, this could enable doctors to understand the severity of an athlete's injury more quickly and completely than they can now.


The signal-to-noise ratio is very low, so researchers need lots of scans and data before seeing anything of value, just like with fMRI. A 2014 study found that, despite previous results, DTI might not work so well with mild TBI. Of course, this doesn't invalidate DTI as a scientific approach. It just raises questions about one form of statistical analysis when DTI is used on one type of condition. It would be silly to report on this particular study as invalidating DTI.

Yet, this is basically what happened with the fMRI study. A few days after the headlines took off, one of the paper's co-authors, Thomas Nichols, wrote a blog post expressing regret for the "results question the validity of some 40,000 fMRI studies" line. After some calculations, he wrote that the number of papers affected was likely much closer to 3,500: about nine percent of the entire literature, and 11 percent of studies containing original data. But Nichols was careful to point out that this doesn't mean all 3,500 of those studies are "wrong." His explanation was fairly technical, but the short version is: it depends on a lot of stuff.

The study likely wouldn't have gone viral if the original abstract had used the 3,500 figure instead of 40,000. It's also not the first time fMRI analytical approaches have received a reality check. A 2012 paper showed that small amounts of head motion during scanning can affect results, and a more recent paper detailed how the order in which the data is processed can affect whether differences are detected in mild TBI patients. This is why researchers often look for corroborating evidence in the brain's structure to support any fMRI findings.


For those in the neuroimaging field, the PNAS study was useful, even necessary.

Still, Ware believes that the study's importance to the general public is "very small…. fMRI still does not play any role in the diagnosis or monitoring of any psychological or psychiatric condition." It didn't raise any profound doubts about fMRI as a technology, just a specific way of interpreting its data.

Rather than being a shock finding, the PNAS study represents a fairly normal corrective action in the scientific world. "The problem in biomedical science is it's really easy to make a mistake," Smith told me, emphasizing that even seasoned scientists need these periodic reminders to correct themselves. Researchers push gently on the current boundaries and see what gives.

Science is very much a two steps forward, one and nine-tenths steps back kind of business; a general study shows a promising development, leading to more studies that narrow in on what has truly been learned, and so on. The PNAS paper—which Ware hailed as "excellent" and "important and crucial"—is not a smack-down on 20 years of science. It is science. We just have to wait it out.
