Posts Tagged neuroimaging
[Abstract] Test-retest reliability of prefrontal transcranial Direct Current Stimulation (tDCS) effects on functional MRI connectivity in healthy subjects
• Prefrontal non-invasive brain stimulation targeting specific brain circuits has the potential to be applied in therapeutic settings, but its reliability, validity and generalisability have to be evaluated first.
• This is the first study investigating the test-retest reliability of prefrontal tDCS-induced resting-state functional-connectivity (RS fcMRI) modulations.
• Analyses of individual RS-fcMRI responses to active tDCS across three single sessions revealed no to low reliability, whereas reliability of RS-fcMRI baselines and RS-fcMRI responses to sham tDCS was low to moderate.
• Our pilot data can be used to plan future imaging studies investigating RS-fcMRI effects of prefrontal tDCS.
Transcranial Direct Current Stimulation (tDCS) of the prefrontal cortex (PFC) can be used to probe functional brain connectivity and is of general interest as a novel therapeutic intervention in psychiatric and neurological disorders. As its use expands, it is important to understand the interplay between neural systems and stimulation protocols, which requires basic methodological work. Here, we examined the test-retest (TRT) characteristics of tDCS-induced modulations in resting-state functional-connectivity MRI (RS fcMRI). Twenty healthy subjects received 20 minutes of either active or sham tDCS of the dorsolateral PFC (2 mA, anode over F3 and cathode over F4, international 10–20 system), preceded and followed by a RS fcMRI scan (10 minutes each). All subjects underwent three tDCS sessions with one-week intervals between them. Effects of tDCS on RS fcMRI were determined at the individual as well as the group level using both ROI-based and independent-component analyses (ICA). To evaluate the TRT reliability of individual active-tDCS and sham effects on RS fcMRI, voxel-wise intra-class correlation coefficients (ICC) of post-tDCS maps between testing sessions were calculated. For both approaches, results revealed low reliability of RS fcMRI after active tDCS (ICC(2,1) = −0.09 – 0.16). Reliability of RS fcMRI (baselines only) was low to moderate for ROI-derived (ICC(2,1) = 0.13 – 0.50) and low for ICA-derived connectivity (ICC(2,1) = 0.19 – 0.34). Thus, for ROI-based analyses, the distribution of voxel-wise ICC was shifted to lower TRT reliability after active, but not after sham, tDCS, for which the distribution was similar to baseline. The intra-individual variation observed here resembles the variability of tDCS effects in motor regions and may be one reason why robust group-level tDCS effects were absent in this study.
The data can be used for appropriately designing large scale studies investigating methodological issues such as sources of variability and localisation of tDCS effects.
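The ICC(2,1) statistic reported above (two-way random-effects, absolute agreement, single measure) can be computed per voxel from a subjects-by-sessions matrix. A minimal sketch with NumPy, using hypothetical data — the function name and the synthetic input are illustrative, not from the study:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random-effects, absolute agreement, single measure.

    data: (n_subjects, k_sessions) array holding, e.g., one voxel's
    post-tDCS connectivity value per subject and session.
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means

    # Mean squares from the two-way ANOVA decomposition
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

Values near 1 indicate high test-retest agreement across sessions; values near or below 0 (as seen after active tDCS) indicate that between-session variability swamps between-subject differences.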
Functional magnetic resonance images reflect input signals of nerve cells.
The development of magnetic resonance imaging (MRI) is a success story for basic research. Today medical diagnostics would be inconceivable without it. But the research took time to reach fruition: it has been nearly half a century since physicists first began their investigations that ultimately led to what became known as nuclear magnetic resonance. In 2001, Nikos K. Logothetis and his colleagues at the Max Planck Institute for Biological Cybernetics in Tübingen devised a new methodological approach that greatly deepened our understanding of the principles of functional MRI.
The great advantage of functional magnetic resonance imaging (fMRI) is that it requires no major interventions in the body. In fMRI, the human body is exposed to the action of electromagnetic waves. As far as we know today, the process is completely harmless, despite the fact that fMRI equipment generates magnetic fields tens of thousands of times stronger than the natural magnetic field of the earth.
The physical phenomenon underlying fMRI is known as nuclear magnetic resonance, and the path to its discovery was paved with several Nobel prizes. The story begins in the first half of the 20th century with the description of the properties of atoms. The idea of using nuclear magnetic resonance as a diagnostic tool was mooted as early as the 1950s. But the method had to be refined before finally being realised in the form of magnetic resonance imaging.
Today, MRI not only produces images of the inside of our bodies; it also provides information on the functional state of certain tissues. The breakthrough for fMRI came in the 1980s when researchers discovered that MRI can also be used to detect changes in the oxygen saturation of blood, a principle known as BOLD (blood oxygen level dependent) imaging. There is a 20 percent difference between the magnetic susceptibility of oxygenated arterial blood and that of deoxygenated venous blood. Unlike oxygenated haemoglobin, deoxygenated haemoglobin amplifies the strength of a magnetic field in its vicinity. This difference can be seen on an MRI image.
Resuscitation of the brain after a 15-minute cardiac arrest in fMRI: The pictorial representation provides information about the degree of damage to the brain as well as a detailed analysis of the recovery curve. The top three rows are examples of a successful resuscitation, the bottom row of an unsuccessful one. Comparison with the concentration images of ATP, glucose and lactate shows that the MR images are in fact closely related to the biochemical changes. Based on such studies, the course of cerebral infarction and the success of various therapeutic measures can be documented. Credit: Max Planck Institute.
fMRI has given us new insights into the brain, especially in neurobiology. However, the initial phase of euphoria was followed by a wave of scepticism among scientists, who questioned how informative the “coloured images” really are. Although fMRI can in fact generate huge volumes of data, there is often a lack of background information or basic understanding to permit a meaningful interpretation. As a result, there is a yawning gap between fMRI measurements of brain activity and findings in animals based on electrophysiological recordings.
This is due mainly to technical considerations: interactions between the strong MRI field and currents being measured at the electrodes made it impossible to apply the two methods simultaneously to bridge the gap between animal experiments and findings in humans.
fMRI shows input signals
In 2001, Nikos Logothetis and his colleagues at the Max Planck Institute for Biological Cybernetics in Tübingen were the first to overcome this barrier. With the help of special electrodes and sophisticated data processing, they showed unambiguously that BOLD fMRI actually does measure changes in the activity of nerve cells. They also discovered that BOLD signals correlate to the arrival and local processing of data in an area of the brain rather than to output signals that are transmitted to other areas of the brain. Their paper was a milestone in our understanding of MRI and has been cited over 2500 times worldwide.
Their novel experimental setup enabled the Tübingen scientists to study various aspects of nerve cell activity and to distinguish between action potentials and local field potentials. Action potentials are electrical signals that originate from single nerve cells or a relatively small group of nerve cells. They are all-or-nothing signals that occur only if the triggering stimulus exceeds a certain threshold. Action potentials therefore reflect output signals. These signals are detected by electrodes located in the immediate vicinity of the nerve cells. By contrast, local field potentials are slowly varying electrical potentials that reflect signals entering and being processed in a larger group of nerve cells.
Applying these three methods — fMRI, action potential recording and local field potential measurement — simultaneously, the Max Planck researchers examined the responses to a visual stimulus in the visual cortex of anaesthetized monkeys. Comparison of the measurements showed that fMRI data relate more to local field potentials than to single-cell and multi-unit potentials. This means that changes in blood oxygen saturation are not necessarily associated with output signals from nerve cells; instead, they reflect the arrival and processing of signals received from other areas of the brain.
Another important discovery the Tübingen researchers made was that, because of the large variability of vascular reactions, BOLD fMRI data have a much lower signal-to-noise ratio than electrophysiological recordings. Because of this, conventional statistical analyses of human fMRI data underestimate the extent of activity in the brain. In other words, the absence of an fMRI signal in an area of the brain does not necessarily mean that no information is being processed there. Doctors need to take this into account when interpreting fMRI data.
[ARTICLE] Parietal operculum and motor cortex activities predict motor recovery in moderate to severe stroke – Full Text
While motor recovery following mild stroke has been extensively studied with neuroimaging, mechanisms of recovery after moderate to severe strokes of the types that are often the focus for novel restorative therapies remain obscure. We used fMRI to: 1) characterize reorganization occurring after moderate to severe subacute stroke, 2) identify brain regions associated with motor recovery and 3) test whether brain activity associated with passive movement measured in the subacute period could predict motor outcome six months later.
Because many patients with large strokes involving sensorimotor regions cannot engage in voluntary movement, we used passive flexion-extension of the paretic wrist to compare 21 patients with subacute ischemic stroke to 24 healthy controls one month after stroke. Clinical motor outcome was assessed with Fugl-Meyer motor scores (motor-FMS) six months later. Multiple regression, with predictors including baseline (one-month) motor-FMS and sensorimotor network regional activity (ROI) measures, was used to determine optimal variable selection for motor outcome prediction. Sensorimotor network ROIs were derived from a meta-analysis of arm voluntary movement tasks. Bootstrapping with 1000 replications was used for internal model validation.
During passive movement, both control and patient groups exhibited activity increases in multiple bilateral sensorimotor network regions, including the primary motor (MI), premotor and supplementary motor areas (SMA), cerebellar cortex, putamen, thalamus, insula, Brodmann area (BA) 44 and parietal operculum (OP1-OP4). Compared to controls, patients showed: 1) lower task-related activity in ipsilesional MI, SMA and contralesional cerebellum (lobules V-VI) and 2) higher activity in contralesional MI, superior temporal gyrus and OP1-OP4. Using multiple regression, we found that the combination of baseline motor-FMS, activity in ipsilesional MI (BA4a), putamen and ipsilesional OP1 predicted motor outcome measured 6 months later (adjusted-R2 = 0.85; bootstrap p < 0.001). Baseline motor-FMS alone predicted only 54% of the variance. When baseline motor-FMS was removed, the combination of increased activity in ipsilesional MI-BA4a, ipsilesional thalamus, contralesional mid-cingulum, contralesional OP4 and decreased activity in ipsilesional OP1, predicted better motor outcome (adjusted-R2 = 0.96; bootstrap p < 0.001).
In subacute stroke, fMRI brain activity related to passive movement measured in a sensorimotor network defined by activity during voluntary movement predicted motor recovery better than baseline motor-FMS alone. Furthermore, fMRI sensorimotor network activity measures considered alone allowed excellent clinical recovery prediction and may provide reliable biomarkers for assessing new therapies in clinical trial contexts. Our findings suggest that neural reorganization related to motor recovery from moderate to severe stroke results from balanced changes in ipsilesional MI (BA4a) and a set of phylogenetically more archaic sensorimotor regions in the ventral sensorimotor trend. OP1 and OP4 processes may complement the ipsilesional dorsal motor cortex in achieving compensatory sensorimotor recovery.
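The prediction approach described above — ordinary multiple regression of six-month outcome on baseline score plus ROI activity, internally validated with bootstrap resampling — can be sketched in miniature. This is a simulation with synthetic data; all variable names and the three ROI columns are illustrative stand-ins, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: baseline motor-FM scores and 3 ROI activity measures
n_patients = 21
baseline_fms = rng.uniform(10, 60, n_patients)
roi_activity = rng.normal(size=(n_patients, 3))     # e.g. MI, putamen, OP1
X = np.column_stack([baseline_fms, roi_activity])
true_coefs = np.array([0.8, 4.0, -3.0, 2.5])        # invented ground truth
outcome_fms = X @ true_coefs + rng.normal(scale=2.0, size=n_patients)

def adjusted_r2(X, y):
    """Fit OLS with an intercept and return the adjusted R^2."""
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    ss_res = ((y - Xd @ beta) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Internal validation: refit on 1000 bootstrap resamples of the patients
boot = [adjusted_r2(X[idx], outcome_fms[idx])
        for idx in (rng.integers(0, n_patients, n_patients)
                    for _ in range(1000))]
```

The spread of the bootstrap values gives a sense of how stable the fitted model is under resampling of this small cohort — the concern that motivates internal validation with only 21 patients and several predictors.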
The past decade has brought us jaw-dropping insights about the hidden workings of our brains, in part thanks to a popular brain scan technique called fMRI. But a major new study has revealed that fMRI interpretation has a serious flaw, one that could mean that much of what we’ve learned about our brains this way might need a second look.
On TV and in movies, we’ve all seen doctors stick an X-ray up on the lightbox and play out a dramatic scene: “What’s that dark spot, doctor?” “Hm…”
In reality, though, a modern medical scan contains so much data, no single pair of doctor’s eyes could possibly interpret it. The brain scan known as fMRI, for functional magnetic resonance imaging, produces a massive data set that can only be understood by custom data analysis software. Armed with this analysis, neuroscientists have used the fMRI scan to produce a series of paradigm-shifting discoveries about our brains.
Now, an unsettling new report, which is causing waves in the neuroscience community, suggests that fMRI’s custom software can be deeply flawed — calling into question many of the most exciting findings in recent neuroscience.
The problem researchers have uncovered is simple: the computer programs designed to sift through the images produced by fMRI scans have a tendency to suggest differences in brain activity where none exist. For instance, humans who are resting, not thinking about anything in particular, not doing anything interesting, can deliver spurious results of differences in brain activity. It’s even been shown to indicate brain activity in a dead salmon, whose stilled brain lit up an MRI as if it were somehow still dreaming of a spawning run.
The report throws into question the results of some portion of the more than 40,000 studies that have been conducted using fMRI, studies that plumb the brainy depths of everything from free will to fear. And scientists are not quite sure how to recover.
“It’s impossible to know how many fMRI studies are wrong, since we do not have access to the original data,” says computer scientist Anders Eklund of Linköping University in Sweden, who conducted the analysis.
How it should have worked: Start by signing up subjects. Scan their brains while they rest inside an MRI machine. Then scan their brains again when exposed to pictures of spiders, say. Those subjects who are afraid of spiders will have blood rush to those regions of the brain involved in thinking and feeling fear, because such thoughts or feelings are suspected to require more oxygen. With the help of a computer program, the MRI machine then registers differences in hemoglobin, the iron-rich molecule that makes blood red and carries oxygen from place to place. (That’s the functional in fMRI.) The scan then looks at whether those hemoglobin molecules are still carrying oxygen to a given place in the brain, or not, based on how the molecules respond to the powerful magnetic fields. Scan enough brains and see how the fearful differ from the fearless, and perhaps you can identify the brain regions or structures associated with thinking or feeling fear.
That’s the theory, anyway. In order to detect such differences in brain activity, it would be best to scan a large number of brains, but the difficulty and expense often make this impossible. A single MRI scan can cost around $2,600, according to a 2014 NerdWallet analysis. Further, the differences in blood flow are often tiny. And then there’s the fact that computer programs have to sift through the images of the 1,200 or so cubic centimeters of gelatinous tissue that make up each individual brain and compare them to others, a big data analysis challenge.
Eklund’s report shows that the assumptions behind the main computer programs used to sift such big fMRI data have flaws, as turned up by nearly 3 million random evaluations of the resting brain scans of 499 volunteers from Cambridge, Massachusetts; Beijing; and Oulu, Finland. One program turned out to have a 15-year-old coding error (which has now been fixed) that caused it to detect too much brain activity. This highlights the challenge of researchers working with computer code that they are not capable of checking themselves, a challenge not confined just to neuroscience.
The brain is even more complicated than we thought. Worse, Eklund and his colleagues found that all the programs assume that brains at rest have the same response to the jet-engine roar of the MRI machine itself as well as whatever random thoughts and feelings occur in the brain. Those assumptions appear to be wrong. The brain at rest is “actually a bit more complex,” Eklund says.
More specifically, the white matter of the brain appears to be underrepresented in fMRI analyses while another specific part of the brain — the posterior cingulate, a region in the middle of the brain that connects to many other parts — shows up as a “hot spot” of activity. As a result, the programs are more likely to single it out as showing extra activity even when there is no difference. “The reason for this is still unknown,” Eklund says.
Overall, the programs had a false positive rate — detecting a difference where none actually existed — of as much as 70 percent.
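The logic behind that 70 percent figure — repeatedly splitting genuinely resting subjects into two arbitrary "groups", testing for a difference, and counting how often one is spuriously found — can be illustrated in miniature. This is a pure simulation with synthetic null data and naive uncorrected thresholding, not a reproduction of Eklund's cluster-level pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)
n_analyses, n_per_group, n_voxels = 500, 20, 100

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic, computed per voxel."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(axis=0, ddof=1)
           + (nb - 1) * b.var(axis=0, ddof=1)) / (na + nb - 2)
    return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(sp2 * (1/na + 1/nb))

T_CRIT = 2.024  # two-sided alpha = 0.05 critical t for df = 38

false_alarms = 0
for _ in range(n_analyses):
    # Null data: both "groups" are just resting subjects, no real difference
    g1 = rng.normal(size=(n_per_group, n_voxels))
    g2 = rng.normal(size=(n_per_group, n_voxels))
    t = two_sample_t(g1, g2)
    # Declare a "finding" if ANY voxel crosses the uncorrected threshold
    if np.any(np.abs(t) > T_CRIT):
        false_alarms += 1

fwe_rate = false_alarms / n_analyses
```

With 100 independent voxels tested at an uncorrected 5% threshold, nearly every null analysis produces at least one "significant" voxel — the familywise error the salmon study dramatized, and the kind of inflation Eklund measured in the real correction methods.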
Unknown unknowns: This does not mean all fMRI studies are wrong. Co-author and statistician Thomas Nichols of the University of Warwick calculates that some 3,500 studies may be affected by such false positives, and such false positives can never be eliminated entirely. But a survey of 241 recent fMRI papers found 96 that could have even worse false-positive rates than those found in this analysis.
“The paper makes an important criticism,” says Nancy Kanwisher, a neuroscientist at MIT (TED Talk: A neural portrait of the human mind), though she points out that it does not undermine those fMRI studies that do not rely on these computer programs.
Nonetheless, it is worrying. “I think the fallout has yet to be fully evaluated. It appears to apply to quite a few studies, certainly the studies done in a generic way that is the bread-and-butter of fMRI,” says Douglas Greve, a neuroimaging specialist at Massachusetts General Hospital. What’s needed is more scrutiny, Greve suggests.
Another argument for open data. Eklund and his colleagues were only able to discover this methodological flaw thanks to the open sharing of group brain scan data by the 1,000 Functional Connectomes Project. Unfortunately, such sharing of brain scan data is more the exception than the norm, which hinders other researchers attempting to re-create the experiment and replicate the results. Such replication is a cornerstone of the scientific method, ensuring that findings are robust. Eklund, for one, therefore encourages neuroimagers to “share their fMRI data, so that other researchers can replicate their findings and re-analyze the data several years later.” Only then can scientists be sure that the undiscovered activity of the human brain is truly revealed … and that dead salmon are not still dreaming.
ABOUT THE AUTHOR
David Biello is an award-winning journalist writing most often about the environment and energy. His book “The Unnatural World” publishes November 2016. It’s about whether the planet has entered a new geologic age as a result of people’s impacts and, if so, what we should do about this Anthropocene. He also hosts documentaries, such as “Beyond the Light Switch” and the forthcoming “The Ethanol Effect” for PBS. He is the science curator for TED.
Source: The problem with fMRI
What is brain plasticity and why is it important following a brain injury?
Brain plasticity is the phenomenon by which the brain can rewire and reorganize itself in response to changing stimulus input. Brain plasticity is at play when one is learning new information (at school) or learning a new language and occurs throughout one’s life.
Brain plasticity is particularly important after a brain injury, as the neurons in the brain are damaged after a brain injury, and depending on the type of brain injury, plasticity may either include repair of damaged brain regions or reorganization/rewiring of different parts of the brain.
How much is known about the level of injury the brain can recover from? Over what time period does the brain adapt to an injury?
A lot is known about brain plasticity immediately after an injury. Like any other injury to the body, after an initial negative reaction to the injury, the brain goes through a massive healing process, where the brain tries to repair itself after the injury. Research tells us exactly what kinds of repair processes occur hours, days and weeks after the injury.
What is not well understood is how recovery continues to occur in the long term. So, there is a lot research showing that the brain is plastic, and undergoes recovery even months after the brain damage, but what promotes such recovery and what hinders such recovery is not well understood.
It is well understood that some rehabilitative training promotes recovery after brain injury, and most of the current research is focused on this topic.
What techniques are used to study brain plasticity?
Human brain plasticity has mostly been studied using non-invasive imaging methods, because these techniques allow us to measure gray matter (neurons) and white matter (axons) at a somewhat coarse level. MRI and fMRI techniques provide snapshots and video of the brain in function, and that allows us to capture changes in the brain that are interpreted as plasticity.
Also, more recently, there are non-invasive stimulation methods such as transcranial direct current stimulation and transcranial magnetic stimulation, which deliver electrical currents or magnetic fields to different parts of the brain; such stimulation causes certain changes in the brain.
How has our understanding advanced over recent years?
One of the biggest shifts in our understanding of brain plasticity is that it is a lifelong phenomenon. We used to previously think that the brain is plastic only during childhood and once you reach adulthood, the brain is hardwired, and no new changes can be made to it.
However, we now know that even the adult brain can be modified and reorganized depending on what new information it is learning. This understanding has a profound impact on recovery from brain injury because it means that with repeated training/instruction, even the damaged brain is plastic and can recover.
What role do you see personalized medicine playing in brain therapy in the future?
One reason why rehabilitation after brain injury is so complex is because no two individuals are alike. Each individual’s education and life experiences have shaped their brain (due to plasticity!) in unique ways, so after a brain injury, we cannot expect that recovery in two individuals will occur the same way.
Personalized medicine allows treatment to be tailored for each individual, taking into account their strengths and weaknesses and providing exactly the right kind of therapy for that person. One-size-fits-all treatment will give way to individualized treatments prescribed at exactly the right dosage.
What is ‘automedicine’ and do you think this could become a reality?
I am not sure we understand what automedicine can and cannot do just yet, so it’s a little early to comment on the reality. Using data to improve our algorithms to precisely deliver the right amount of rehabilitation/therapy will likely be a reality very soon, but it is not clear that it will eliminate the need for doctors or rehabilitation professionals.
What do you think the future holds for people recovering from strokes and brain injuries and what’s Constant Therapy’s vision?
The future for people recovering from strokes and brain injuries is more optimistic than it has ever been, for three important reasons. First, as I pointed out above, there is a tremendous amount of research showing that the brain is plastic throughout life, and this plasticity can be harnessed after brain injury as well.
Second, recent advances in technology allow patients to receive therapy at their homes at their convenience, empowering them to take control of their therapy instead of being passive consumers.
Finally, the data that is collected from individuals who continuously receive therapy provides a rich trove of information about how patients can improve after rehabilitation, what works and what does not work.
Constant Therapy’s vision incorporates all these points, and its goal is to provide effective, efficient and reasonable rehabilitation to patients recovering from strokes and brain injury.
Where can readers find more information?
- Please see Constant Therapy for more information.
About Dr Swathi Kiran
Swathi Kiran is Professor in the Department of Speech and Hearing Sciences at Boston University and Assistant in Neurology/Neuroscience at Massachusetts General Hospital. Prior to Boston University, she was at University of Texas at Austin. She received her Ph.D from Northwestern University.
Her research interests focus around lexical semantic treatment for individuals with aphasia, bilingual aphasia and neuroimaging of brain plasticity following a stroke.
She has over 70 publications and her work has appeared in high impact journals across a variety of disciplines including cognitive neuroscience, neuroimaging, rehabilitation, speech language pathology and bilingualism.
She is a fellow of the American Speech Language and Hearing Association and serves on various journal editorial boards and grant review panels including at National Institutes of Health.
Her work has been continually funded by the National Institutes of Health/NIDCD and American Speech Language Hearing Foundation awards including the New Investigator grant, the New Century Scholar’s Grant and the Clinical Research grant. She is the co-founder and scientific advisor for Constant Therapy, a software platform for rehabilitation tools after brain injury.
This study explored the relationship between acupuncture combined with cognitive therapy and change in cognitive domains following traumatic brain injury. The secondary objective was to evaluate the potential relationship between acupuncture and cognitive therapy and activation volume in select brain areas as shown by functional MRI (fMRI).
[ARTICLE] Opportunities for Guided Multichannel Non-invasive Transcranial Current Stimulation in Poststroke Rehabilitation – Full Text HTML
Stroke is a leading cause of serious long-term disability worldwide. Functional outcome depends on stroke location, severity and early intervention. Conventional rehabilitation strategies have limited effectiveness, and new treatments still fail to keep pace, in part due to a lack of understanding of the different stages in brain recovery and the vast heterogeneity in the post-stroke population. Innovative methodologies for restorative neurorehabilitation are required to reduce long-term disability and socioeconomic burden. Neuroplasticity is involved in post-stroke functional disturbances, and also during rehabilitation. Tackling post-stroke neuroplasticity by non-invasive brain stimulation is regarded as promising, but efficacy might be limited because of rather uniform application across patients despite individual heterogeneity of lesions, symptoms and other factors. Transcranial direct current stimulation (tDCS) induces and modulates neuroplasticity, and has been shown to be able to improve motor and cognitive functions. tDCS is suited to improve post-stroke rehabilitation outcomes, but effect sizes are often moderate and suffer from variability. Indeed, the location, extent and pattern of functional network connectivity disruption should be considered when determining the optimal location sites for tDCS therapies. Here, we present potential opportunities for neuroimaging-guided tDCS-based rehabilitation strategies after stroke that could be personalized. We introduce innovative multimodal intervention protocols based on multichannel tDCS montages, neuroimaging methods and real-time closed-loop systems to guide therapy. This might help to overcome current treatment limitations in post-stroke rehabilitation and increase our general understanding of adaptive neuroplasticity leading to neural reorganization after stroke.
PHILADELPHIA – Epilepsy affects more than 65 million people worldwide. One-third of these patients have seizures that are not controlled by medications. In addition, one-third have brain lesions, the hallmark of the disease, which cannot be located by conventional imaging methods.
Researchers at the Perelman School of Medicine at the University of Pennsylvania have piloted a new method using advanced noninvasive neuroimaging to recognize the neurotransmitter glutamate, thought to be the culprit in the most common form of medication-resistant epilepsy. Their work is published today in Science Translational Medicine.
Glutamate is an amino acid which transmits signals from neuron to neuron, telling them when to fire. Glutamate normally docks with the neuron, gives it the signal to fire and is swiftly cleared. In patients with epilepsy, stroke and possibly ALS, the glutamate is not cleared, leaving the neuron overwhelmed with messages and in a toxic state of prolonged excitation.
In localization-related epilepsy, the most common form of medication-resistant epilepsy, seizures are generated in a focused section of the brain; in 65 percent of patients, this occurs in the temporal lobe. Removal of the seizure-generating region of the temporal lobe, guided by preoperative MRI, can offer a cure. However, a third of these patients have no identified abnormality on conventional imaging studies and, therefore, more limited surgical options.