Posts Tagged neuroimaging

[ARTICLE] Biomarkers of stroke recovery: Consensus-based core recommendations from the Stroke Recovery and Rehabilitation Roundtable – Full Text

In practical terms, biomarkers should improve our ability to predict long-term outcomes after stroke across multiple domains. This is beneficial for: (a) patients, caregivers and clinicians; (b) planning subsequent clinical pathways and goal setting; and (c) identifying whom and when to target, and in some instances at which dose, with interventions for promoting stroke recovery.2 This last point is particularly important, as methods for accurate prediction of long-term outcome would allow clinical trials of restorative and rehabilitation interventions to be stratified based on the potential for neurobiological recovery in a way that is currently not possible when trials are performed in the absence of valid biomarkers. Unpredictable outcomes after stroke, particularly in those who present with the most severe impairment,3 mean that clinical trials of rehabilitation interventions need hundreds of patients to be appropriately powered. Use of biomarkers would allow incorporation of accurate information about the underlying impairment, and thus the size of these intervention trials could be considerably reduced,4 with obvious benefits. These principles are no different in the context of stroke recovery than in general medical research.5

Interventions fall into two broad mechanistic categories: (1) behavioural interventions that take advantage of experience and learning-dependent plasticity (e.g. motor, sensory, cognitive, and speech and language therapy), and (2) treatments that enhance the potential for experience and learning-dependent plasticity to maximise the effects of behavioural interventions (e.g. pharmacotherapy or non-invasive brain stimulation).6 To identify in whom and when to intervene, we need biomarkers that reflect the underlying biological mechanisms being targeted therapeutically.

Our goal is to provide a consensus statement regarding the evidence for SRBs that are helpful in outcome prediction and therefore in identifying subgroups for stratification in trials.7 We focused on SRBs that investigate the structure or function of the brain (Table 1). Four functional domains (motor, somatosensation, cognition, and language (Table 2)) were considered according to recovery phase post stroke (hyperacute: <24 h; acute: 1 to 7 days; early subacute: 1 week to 3 months; late subacute: 3 months to 6 months; chronic: >6 months8). For each functional domain, we provide recommendations for biomarkers that either: (1) are ready to guide stratification of subgroups of patients for clinical trials and/or to predict outcome, or (2) are a developmental priority (Table 3). Finally, we provide an example of how inclusion of a clinical trial-ready biomarker might have benefitted a recent phase III trial. As there is generally limited evidence at this time for blood or genetic biomarkers, we do not discuss these, but recommend them as a developmental priority.9–12 We also recognize that many other functional domains exist, but focus here on the four that have the most developed science. […]
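The recovery-phase taxonomy above maps cleanly onto a lookup by days post-stroke. A minimal sketch of that mapping (the function name is ours, and 90 and 180 days are used as the 3- and 6-month cut-offs):

```python
def recovery_phase(days_post_stroke: float) -> str:
    """Map time since stroke onset to the recovery phase defined in the
    Stroke Recovery and Rehabilitation Roundtable taxonomy."""
    if days_post_stroke < 1:
        return "hyperacute"      # < 24 h
    elif days_post_stroke <= 7:
        return "acute"           # 1 to 7 days
    elif days_post_stroke <= 90:
        return "early subacute"  # 1 week to 3 months
    elif days_post_stroke <= 180:
        return "late subacute"   # 3 to 6 months
    else:
        return "chronic"         # > 6 months
```

Such a lookup is what a trial protocol would use to assign patients to a recovery phase at enrolment.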

Continue —> Biomarkers of stroke recovery: Consensus-based core recommendations from the Stroke Recovery and Rehabilitation Roundtable | International Journal of Stroke – Lara A Boyd, Kathryn S Hayward, Nick S Ward, Cathy M Stinear, Charlotte Rosso, Rebecca J Fisher, Alexandre R Carter, Alex P Leff, David A Copland, Leeanne M Carey, Leonardo G Cohen, D Michele Basso, Jane M Maguire, Steven C Cramer, 2017


[WEB SITE] Research provides insights for why some epilepsy patients continue to experience postoperative seizures

New research from the University of Liverpool, published in the journal Brain, has highlighted the potential reasons why many patients with severe epilepsy still continue to experience seizures even after surgery.

Epilepsy continues to be a major health problem and is the most common serious neurological disorder. Medically intractable temporal lobe epilepsy (TLE) remains the most frequently neurosurgically treated epilepsy disorder.

Many people with this condition will undergo a temporal lobe resection which is a surgery performed on the brain to control seizures. In this procedure, brain tissue in the temporal lobe is resected, or cut away, to remove the seizure focus.

Unfortunately, approximately one in every two patients with TLE will not be rendered completely seizure free after temporal lobe surgery, and the reasons underlying persistent postoperative seizures have not been resolved.

Reliable biomarkers

Understanding the reasons why so many patients continue to experience postoperative seizures, and identifying reliable biomarkers to predict who will continue to experience seizures, are crucial clinical and scientific research endeavours.

Researchers from the University’s Institute of Translational Medicine, led by Neuroimaging Lead Dr Simon Keller and collaborating with Medical University Bonn (Germany), Medical University of South Carolina (USA) and King’s College London, performed a comprehensive diffusion tensor imaging (DTI) study in patients with TLE who were scanned preoperatively, postoperatively and assessed for postoperative seizure outcome.

Diffusion tensor imaging (DTI) is an MRI-based neuroimaging technique that provides insights into brain network connectivity.

The results of these scans allowed the researchers to examine regional tissue characteristics along the length of temporal lobe white matter tract bundles. White matter is mainly composed of axons of nerve cells, which form connections between various grey matter areas of the brain, and carry nerve impulses between neurons allowing communication between different brain regions.

Through their analysis the researchers could determine how abnormal the white matter tracts were before surgery and how the extent of resection had affected each tract from the postoperative MRI scans.

Surgery outcomes

The researchers identified preoperative abnormalities of two temporal lobe white matter tracts (tracts that are not resected in standardised temporal lobe surgery) in patients who had postoperative seizures, but not in patients who were seizure free after surgery.

The two tracts were in the ‘fornix’ area on the same side as surgery, and in the white matter of the ‘parahippocampal’ region on the opposite side of the brain.

The tissue characteristics of these white matter tracts enabled researchers to correctly identify those likely to have further seizures in 84% of cases (sensitivity) and those unlikely to have further seizures in 89% of cases (specificity). This is significantly better than current predictive estimates.
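Sensitivity and specificity follow directly from a two-by-two confusion matrix. A minimal sketch with hypothetical counts (not the study's data) that happen to reproduce the 84% figure:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: share of recurrent-seizure patients correctly flagged.
    Specificity: share of seizure-free patients correctly cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for illustration only:
sens, spec = sensitivity_specificity(tp=21, fn=4, tn=16, fp=2)
# sens = 0.84, spec ≈ 0.89
```

Note that sensitivity and specificity trade off against each other as the classification threshold on the imaging measure is moved.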

The researchers also found that a particular temporal lobe white matter tract called the ‘uncinate fasciculus’ was abnormal – and potentially involved in the generation of seizures – in patients with excellent and suboptimal postoperative outcomes.

However, it was found that significantly more of this tract was surgically resected/removed in the patients with an excellent outcome.

New insights

Dr Simon Keller said: “There is scarce information on the prediction of postoperative seizure outcome using preoperative imaging technology, and this study is the first to rigorously investigate the tissue characteristics of temporal lobe white matter tracts with respect to future seizure classifications.

“Although there is some way to go before this kind of data can influence routine clinical practice, these results may have the potential to be developed into imaging prognostic markers of postoperative outcome and provide new insights for why some patients with temporal lobe epilepsy continue to experience postoperative seizures.”

Source: Research provides insights for why some epilepsy patients continue to experience postoperative seizures


[WEB SITE] Brain surgery helps remove scar tissue causing seizures in epilepsy patients

By the time epilepsy patient Erika Fleck came to Loyola Medicine for a second opinion, she was having three or four seizures a week and hadn’t been able to drive her two young children for five years.

“It was no way to live,” she said.

Loyola epileptologist Jorge Asconapé, MD, recommended surgery to remove scar tissue in her brain that was triggering the seizures. Neurosurgeon Douglas Anderson, MD, performed the surgery, called an amygdalohippocampectomy. Ms. Fleck hasn’t had a single seizure in the more than three years since her surgery.

“I’ve got my life back,” she said. “I left my seizures at Loyola.”

Surgery can be an option for a minority of patients who do not respond to medications or other treatments and have epileptic scar tissue that can be removed safely. In 60 to 70 percent of surgery patients, seizures are completely eliminated, and the success rate likely will improve as imaging and surgical techniques improve, Dr. Anderson said.

Traditionally, patients would have to try several medications with poor results for years or decades before being considered for surgery, according to the Epilepsy Foundation. “More recently, surgery is being considered sooner,” the foundation said. “Studies have shown that the earlier surgery is performed, the better the outcome.” (Ms. Fleck is a service coordinator for the Epilepsy Foundation North/Central Illinois Iowa and Nebraska.)

Dr. Asconapé said Ms. Fleck was a perfect candidate for surgery because the scar tissue causing her seizures was located in an area of the brain that could be removed without damaging critical structures.

Ms. Fleck experienced complex partial seizures, characterized by a deep stare, unresponsiveness and loss of control for a minute or two. An MRI found the cause: a small area of scar tissue in a structure of the brain called the hippocampus. The subtle lesion had been overlooked at another center.

Epilepsy surgery takes about three hours, and patients typically are in the hospital for two or three days. Like all surgery, epilepsy surgery entails risks, including infection, hemorrhage, injury to other parts of the brain and slight personality changes. But such complications are rare, and they pose less risk to patients than the risk of being injured during seizures, Dr. Asconapé said.

Loyola has been designated a Level Four Epilepsy Center by the National Association of Epilepsy Centers. Level Four is the highest level of specialized epilepsy care available. Level Four centers have the professional expertise and facilities to provide the highest level of medical and surgical evaluation and treatment for patients with complex epilepsy.

Loyola’s multidisciplinary Epilepsy Center offers a comprehensive approach to epilepsy and seizure disorders for adults and children as young as two years old. Pediatric and adult epileptologist consultation and state-of-the-art neuroimaging and electrodiagnostic technology are used to identify and assess complex seizure disorders by short- and long-term monitoring.

Source: Loyola University Health System

Source: Brain surgery helps remove scar tissue causing seizures in epilepsy patients


[OPINION ARTICLE] Can Functional Magnetic Resonance Imaging Generate Valid Clinical Neuroimaging Reports? – Full Text

Roland Beisteiner*, Study Group Clinical fMRI, High Field MR Center, Department of Neurology, Medical University of Vienna, Vienna, Austria

A highly critical issue for applied neuroimaging in neurology, and particularly for functional neuroimaging, concerns the question of the validity of the final clinical result. Within a clinical context, the question of “validity” often equals the question of “instantaneous repeatability,” because clinical functional neuroimaging is done within a specific pathophysiological framework. Here, not only is every brain different but every pathology is different as well, and most importantly, individual pathological brains may change rapidly within a short time.

Within the brain mapping community, the problem of validity and repeatability of functional neuroimaging results has recently become a major issue. In 2016, the Committee on Best Practice in Data Analysis and Sharing from the Organization for Human Brain Mapping (OHBM) created recommendations for replicable research in neuroimaging, focused on magnetic resonance imaging and functional magnetic resonance imaging (fMRI). Here, “replication” is defined as “Independent researchers use independent data and … methods to arrive at the same original conclusion.” “Repeatability” is defined as repeated investigations performed “with the same method on identical test/measurement items in the same test or measuring facility by the same operator using the same equipment within short intervals of time” (ISO 3534-2:2006 3.3.5). An intermediate position between replication and repeatability is defined for “reproducibility”: repeated investigations performed “with the same method on identical test/measurement items in different test or measurement facilities with different operators using different equipment” (ISO 3534-2:2006 3.3.10). Further definitions vary depending on the focus, be it the “measurement stability,” the “analytical stability,” or the “generalizability” over subjects, labs, methods, or populations.

The whole discussion was recently fueled by a PNAS article by Eklund et al. (1), which claims that certain results achieved with widely used fMRI software packages may be false positives, i.e., show brain activation where there is none. More specifically, when looking at activation clusters defined by the software as significant (clusterwise inference), the probability of a false-positive brain activation is not 5% but up to 70%. This was true for group as well as single-subject data (2). The reason lies in an “imperfect” model assumption about the distribution of the spatial autocorrelation of functional signals over the brain: a squared exponential distribution was assumed but found not to hold for the empirical data. The article received heavy attention and discussion in scientific and public media, and a major Austrian newspaper titled “Doubts about thousands of brain research studies.” A recent PubMed analysis already indicates 69 publications citing the Eklund work. Critical comments by Cox et al. (3), focusing on the AFNI software results, criticize the authors for “their emphasis on reporting the single worst result from thousands of simulation cases,” which “greatly exaggerated the scale of the problem.” Other groups extended the work. With regard to the fact that “replicability of individual studies is an acknowledged limitation,” Eickhoff et al. (4) suggest that “Coordinate-based meta-analysis offers a practical solution to this limitation.” They claim that meta-analyses allow “filtering and consolidating the enormous corpus of functional and structural neuroimaging results” but also describe “errors in multiple-comparison corrections” in GingerALE, a software package for coordinate-based meta-analysis. One of their goals is to “exemplify and promote an open approach to error management.” More generally, and probably also triggered by the Eklund paper, Nissen et al. (5) discuss the current situation that “Science is facing a ‘replication crisis’.” They focus on the publicability of negative results and model “the community’s confidence in a claim as a Markov process with successive published results shifting the degree of belief.” Important findings are that “unless a sufficient fraction of negative results are published, false claims frequently can become canonized as fact” and “Should negative results become easier to publish … true and false claims would be more readily distinguished.”

As a consequence of this discussion, public skepticism about the validity of clinical functional neuroimaging arose. At first sight, this seems to be really bad news for clinicians. On closer inspection, however, it turns out that the clinical neuroimaging community in particular has long been aware of the problems with standard (“black box”) analyses of functional data recorded from compromised patients with largely variable pathological brains. Quite evidently, methodological assumptions developed for healthy subjects and implemented in standard software packages may not always be valid for distorted and physiologically altered brains. There are specific problems for clinical populations, particularly for defining the functional status of an individual brain (as opposed to a “group brain” in group studies). With task-based fMRI, the most important clinical application, the major problems may be categorized into “patient problems” and “methodological problems.”

Critical patient problems concern:

  • Patient compliance may change quickly and considerably.
  • The patient may “change” from one day to the next (altered vigilance, effects of pathology and medication, mood changes, depression, exhaustion).
  • The clinical state may “change” considerably from patient to patient (despite all having the same diagnosis). This is primarily due to the location and extent of brain pathology and to compliance capabilities.

Critical methodological problems concern:

  • Selection of clinically adequate experimental paradigms (note paresis, neglect, aphasia).
  • Performance control (particularly important in compromised patients).
  • Restriction of head motion (in patients, artifacts may be very large).
  • Clarification of the signal source (microvascular versus remote large-vessel effects).
  • Large variability of the contrast-to-noise ratio from run to run.
  • Errors with inter-image registration of brains with large pathologies.
  • Effects of data smoothing, definition of adequate functional regions of interest, and definition of essential brain activations.
  • Difficult data interpretation, which requires specific clinical fMRI expertise and independent validation of the local hardware and software performance (preferably with electrocortical stimulation).

All these problems have to be recognized, and specific solutions have to be developed depending on the question at hand: generation of an individual functional diagnosis or performance of a clinical group study. To discuss such problems and define solutions, clinical functional neuroimagers assembled early (Austrian Society for fMRI,1 American Society of Functional Neuroradiology2), and just recently the Alpine Chapter of the OHBM3 was established with a dedicated focus on applied neuroimaging. Starting in the 1990s (6), this community published a considerable number of clinical methodological investigations focused on the improvement of individual patient results, including studies on replication, repeatability, and reproducibility [compare (7)]. Early examples comprise investigations on fMRI signal sources (8), clinical paradigms (9), reduction of head motion artifacts (10), and fMRI validation studies (11, 12). Of course, the primary goal of this clinical research is improvement of the validity of the final clinical result. One of the suggested clinical procedures focuses particularly on instantaneous replicability as a measure of validity [risk map technique (13–15); see Figure 1] and has seen successful long-term clinical use. This procedure was developed for presurgical fMRI and minimizes methodological assumptions to stay as close to the original data as possible: data smoothing and normalization procedures are avoided, and head motion artifacts are minimized by helmet fixation (avoiding artifacts instead of correcting them). It is interesting to note that in the Eklund et al. (1) analysis, the method with minimal assumptions (non-parametric permutation testing) was also the only one that achieved correct (nominal) results.
The two general ideas of the risk map technique are (a) to use voxel replicability as a criterion for the functionally most important voxels (extracting activation foci = voxels with the largest risk for a functional deficit when lesioned) and (b) to consider regional variability of brain conditions (e.g., close to a tumor) by varying the hemodynamic response functions (step function/HRF/variable onset latencies) and thresholds. The technique consists of only a few steps, which can easily be realized by in-house programming: (i) record up to 20 short runs of the same task type to allow checking of repeatability; (ii) define a reference function (e.g., a step function with a latency of 1 TR); (iii) calculate the Pearson correlation r for every voxel and every run; (iv) color code voxels according to their reliability at a given correlation threshold (e.g., r > 0.5): yellow voxels must be active in >75%, orange voxels in >50%, and red voxels in >25% of runs; (v) repeat (i)–(iv) with different reference functions (in our experience, a classical HRF and two step functions with different latencies are sufficient to evaluate most patients) and at different correlation thresholds (e.g., r > 0.2 to r > 0.9). The clinical fMRI expert performs a comprehensive evaluation of all functional maps with consideration of patient pathology, patient performance, and the distribution and level of artifacts [compare descriptions in Ref. (13, 15)]. The final clinical result is illustrated in Figure 1; a typical interpretation would be: the most reliable activation of the Wernicke area is found with a step function of 1 TR latency and shown at Pearson correlation r > 0.5. It is important to note that risk maps extract the most active voxel(s) within a given brain area, and judgment of a “true” activation extent is not possible. However, due to the underlying neurophysiological principles [gradients of functional representations (16)], it is questionable whether “true” activation extents of fMRI activations can be defined with any technique.
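The correlation-and-replicability core of steps (iii)–(iv) can be sketched in a few lines of NumPy. This is a simplified illustration, not the authors' implementation; the array shapes, function name, and threshold defaults are our assumptions:

```python
import numpy as np

def risk_map_labels(runs, reference, r_thresh=0.5):
    """Grade voxels by how reproducibly they correlate with a reference
    time course across repeated short runs (risk-map idea, simplified).

    runs      -- array of shape (n_runs, n_voxels, n_timepoints)
    reference -- expected response, shape (n_timepoints,), e.g. a step
                 function with 1-TR latency or a canonical HRF regressor
    """
    n_t = runs.shape[-1]
    ref = (reference - reference.mean()) / reference.std()
    active = np.zeros(runs.shape[:2], dtype=bool)
    for i, run in enumerate(runs):
        data = run - run.mean(axis=1, keepdims=True)
        data = data / data.std(axis=1, keepdims=True)
        r = data @ ref / n_t          # Pearson r for every voxel in this run
        active[i] = r > r_thresh
    frac = active.mean(axis=0)        # fraction of runs each voxel is active in
    # Colour grading as in the text: yellow >75%, orange >50%, red >25% of runs
    return np.select([frac > 0.75, frac > 0.5, frac > 0.25],
                     ["yellow", "orange", "red"], default="none")
```

In the clinical procedure this grading would be repeated for several reference functions and thresholds, and the resulting maps read together by an fMRI expert.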

 

Figure 1. Example of a missing language activation (Wernicke activity, white arrow) with a “black box” standard analysis (right, SPM12 applying motion regression and smoothing, voxelwise inference FWE <0.05, standard k = 25) using an overt language design [described in Ref. (17)]. Wernicke activity is detectable with the clinical risk map analysis (left) based on activation replicability (yellow = most reliable voxels). Patient with a left temporoparietal tumor.

The importance of checking individual patient data from various perspectives, instead of relying on a “standard statistical significance value” that may not correctly reflect the individual patient’s signal situation, has also been stressed by other authors [e.g., Ref. (18)]. Of course, clinical fMRI, like all other applied neuroimaging techniques, requires clinical fMRI expertise and particularly pathophysiological expertise to be able to conceptualize where to find what, depending on the pathologies of the given brain. One should be aware that full automatization is currently not possible, whether for a comparatively simple analysis of a chest X-ray or for applied neuroimaging. In a clinical context, error estimations still need to be supported by the fMRI expert and cannot be done by an algorithm alone. As a consequence, the international community started early with offering dedicated clinical methodological courses (compare http://oegfmrt.org or http://ohbmbrainmappingblog.com/blog/archives/12-2016). Meanwhile, there are enough methodological studies to enable an experienced clinical fMRI expert to safely judge the possibilities and limitations for a valid functional report in a given patient with his/her specific pathologies and compliance situation. Of course, this also requires adequate consideration of the local hard- and software. Therefore, and particularly when considering the various validation studies, there is no need for either patients or doctors to raise “doubts about clinical fMRI studies”; instead, there is good reason to “keep calm and scan on.”4

Author Contributions

The author confirms being the sole contributor of this work and approved it for publication.

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The methodological developments have been supported by the Austrian Science Fund (KLI455, KLI453, P23611) and Cluster Grants of the Medical University of Vienna and the University of Vienna, Austria.

References

1. Eklund A, Nichols TE, Knutsson H. Cluster failure: why fMRI inferences for spatial extent have inflated false-positive rates. Proc Natl Acad Sci U S A (2016) 113(28):7900–5. doi: 10.1073/pnas.1602413113


2. Eklund A, Andersson M, Josephson C, Johannesson M, Knutsson H. Does parametric fMRI analysis with SPM yield valid results? An empirical study of 1484 rest datasets. Neuroimage (2012) 61(3):565–78. doi:10.1016/j.neuroimage.2012.03.093


3. Cox RW, Chen G, Glen DR, Reynolds RC, Taylor PA. FMRI clustering in AFNI: false-positive rates redux. Brain Connect (2017) 7(3):152–71. doi:10.1089/brain.2016.0475


4. Eickhoff SB, Laird AR, Fox PM, Lancaster JL, Fox PT. Implementation errors in the GingerALE Software: description and recommendations. Hum Brain Mapp (2017) 38(1):7–11. doi:10.1002/hbm.23342


5. Nissen SB, Magidson T, Gross K, Bergstrom CT. Publication bias and the canonization of false facts. Elife (2016) 5:e21451. doi:10.7554/eLife.21451


6. Special Issue Radiologe (1995) 35(4).


7. Stippich C, editor. Clinical Functional MRI. 2nd ed. Berlin Heidelberg: Springer (2015).


8. Gomiscek G, Beisteiner R, Hittmair K, Mueller E, Moser E. A possible role of inflow effects in functional MR-imaging. Magn Reson Mater Phys Biol Med (1993) 1:109–13. doi:10.1007/BF01769410


9. Hirsch J, Ruge MI, Kim KH, Correa DD, Victor JD, Relkin NR, et al. An integrated functional magnetic resonance imaging procedure for preoperative mapping of cortical areas associated with tactile, motor, language, and visual functions. Neurosurgery (2000) 47:711–21. doi:10.1097/00006123-200009000-00037


10. Edward V, Windischberger C, Cunnington R, Erdler M, Lanzenberger R, Mayer D, et al. Quantification of fMRI artifact reduction by a novel plaster cast head holder. Hum Brain Mapp (2000) 11(3):207–13. doi:10.1002/1097-0193(200011)11:3<207::AID-HBM60>3.0.CO;2-J


11. Beisteiner R, Gomiscek G, Erdler M, Teichtmeister C, Moser E, Deecke L. Comparing localization of conventional functional magnetic resonance imaging and magnetoencephalography. Eur J Neurosci (1995) 7:1121–4. doi:10.1111/j.1460-9568.1995.tb01101.x


12. Medina LS, Aguirre E, Bernal B, Altman NR. Functional MR imaging versus Wada test for evaluation of language lateralization: cost analysis. Radiology (2004) 230:49–54. doi:10.1148/radiol.2301021122


13. Beisteiner R, Lanzenberger R, Novak K, Edward V, Windischberger C, Erdler M, et al. Improvement of presurgical evaluation by generation of FMRI risk maps. Neurosci Lett (2000) 290:13–6. doi:10.1016/S0304-3940(00)01303-3


14. Roessler K, Donat M, Lanzenberger R, Novak K, Geissler A, Gartus A, et al. Evaluation of preoperative high magnetic field motor functional- MRI (3 Tesla) in glioma patients by navigated electrocortical stimulation and postoperative outcome. J Neurol Neurosurg Psychiatry (2005) 76:1152–7. doi:10.1136/jnnp.2004.050286


15. Beisteiner R. Funktionelle Magnetresonanztomographie. 2nd ed. In: Lehrner J, Pusswald G, Fertl E, Kryspin-Exner I, Strubreither W, editors. Klinische Neuropsychologie. New York: Springer Verlag Wien (2010). p. 275–91.


16. Beisteiner R, Gartus A, Erdler M, Mayer D, Lanzenberger R, Deecke L. Magnetoencephalography indicates finger motor somatotopy. Eur J Neurosci (2004) 19(2):465–72. doi:10.1111/j.1460-9568.2004.03115.x


17. Foki T, Gartus A, Geissler A, Beisteiner R. Probing overtly spoken language at sentential level – a comprehensive high-field BOLD-FMRI protocol reflecting everyday language demands. Neuroimage (2008) 39:1613–24. doi:10.1016/j.neuroimage.2007.10.020


18. Tyndall AJ, Reinhardt J, Tronnier V, Mariani L, Stippich C. Presurgical motor, somatosensory and language fMRI: technical feasibility and limitations in 491 patients over 13 years. Eur Radiol (2017) 27(1):267–78. doi:10.1007/s00330-016-4369-4


Source: Frontiers | Can Functional Magnetic Resonance Imaging Generate Valid Clinical Neuroimaging Reports? | Neurology


[Abstract] Test-retest reliability of prefrontal transcranial Direct Current Stimulation (tDCS) effects on functional MRI connectivity in healthy subjects

Highlights

• Prefrontal non-invasive brain stimulation targeting specific brain circuits has the potential to be applied in therapeutic settings but reliability, validity and generalisability have to be evaluated.

 

• This is the first study investigating the test-retest reliability of prefrontal tDCS-induced resting-state functional-connectivity (RS fcMRI) modulations.

 

• Analyses of individual RS-fcMRI responses to active tDCS across three single sessions revealed no to low reliability, whereas reliability of RS-fcMRI baselines and RS-fcMRI responses to sham tDCS was low to moderate.

 

• Our pilot data can be used to plan future imaging studies investigating RS-fcMRI effects of prefrontal tDCS.

Abstract

Transcranial direct current stimulation (tDCS) of the prefrontal cortex (PFC) can be used for probing functional brain connectivity and is of general interest as a novel therapeutic intervention in psychiatric and neurological disorders. As its use expands, it is important to understand the interplay between neural systems and stimulation protocols, which requires basic methodological work. Here, we examined the test-retest (TRT) characteristics of tDCS-induced modulations in resting-state functional-connectivity MRI (RS fcMRI). Twenty healthy subjects received 20 minutes of either active or sham tDCS of the dorsolateral PFC (2 mA, anode over F3 and cathode over F4, international 10–20 system), preceded and followed by a RS fcMRI scan (10 minutes each). All subjects underwent three tDCS sessions with one-week intervals in between. Effects of tDCS on RS fcMRI were determined at the individual as well as the group level using both ROI-based and independent-component analyses (ICA). To evaluate the TRT reliability of individual active-tDCS and sham effects on RS fcMRI, voxel-wise intra-class correlation coefficients (ICC) of post-tDCS maps between testing sessions were calculated. For both approaches, results revealed low reliability of RS fcMRI after active tDCS (ICC(2,1) = −0.09 – 0.16). Reliability of RS fcMRI (baselines only) was low to moderate for ROI-derived (ICC(2,1) = 0.13 – 0.50) and low for ICA-derived connectivity (ICC(2,1) = 0.19 – 0.34). Thus, for ROI-based analyses, the distribution of voxel-wise ICC was shifted to lower TRT reliability after active, but not after sham, tDCS, for which the distribution was similar to baseline. The intra-individual variation observed here resembles the variability of tDCS effects in motor regions and may be one reason why robust tDCS effects at the group level were missing in this study.
The data can be used for appropriately designing large-scale studies investigating methodological issues such as sources of variability and localisation of tDCS effects.
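The ICC(2,1) quoted above is the Shrout–Fleiss two-way random-effects coefficient for absolute agreement of a single measurement. As a rough illustration of the computation only (not the authors' pipeline; the per-subject, per-session connectivity array is hypothetical), a minimal sketch:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    data: (n_subjects, k_sessions) array holding one voxel's (or ROI's)
    connectivity value per subject and session.
    """
    n, k = data.shape
    grand = data.mean()
    # Sums of squares for subjects (rows), sessions (columns) and error.
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                # mean square, subjects
    msc = ss_cols / (k - 1)                # mean square, sessions
    mse = ss_err / ((n - 1) * (k - 1))     # mean square, error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Values near 1 mean subjects keep their rank ordering (and level) across sessions; the negative-to-low values reported above indicate that the post-tDCS maps did not.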

 

Source: Test-retest reliability of prefrontal transcranial Direct Current Stimulation (tDCS) effects on functional MRI connectivity in healthy subjects


[WEB SITE] Understanding the Human Brain – Neuroscience News

Functional magnetic resonance images reflect input signals of nerve cells.

The development of magnetic resonance imaging (MRI) is a success story for basic research. Today medical diagnostics would be inconceivable without it. But the research took time to reach fruition: it has been nearly half a century since physicists first began their investigations that ultimately led to what became known as nuclear magnetic resonance. In 2001, Nikos K. Logothetis and his colleagues at the Max Planck Institute for Biological Cybernetics in Tübingen devised a new methodological approach that greatly deepened our understanding of the principles of functional MRI.

The great advantage of functional magnetic resonance imaging (fMRI) is that it requires no major interventions in the body. In fMRI, the human body is exposed to the action of electromagnetic waves. As far as we know today, the process is completely harmless, despite the fact that fMRI equipment generates magnetic fields that are about a million times stronger than the natural magnetic field of the earth.

The physical phenomenon underlying fMRI is known as nuclear magnetic resonance, and the path to its discovery was paved with several Nobel prizes. The story begins in the first half of the 20th century with the description of the properties of atoms. The idea of using nuclear magnetic resonance as a diagnostic tool was mooted as early as the 1950s. But the method had to be refined before finally being realised in the form of magnetic resonance imaging.

Today, MRI not only produces images of the inside of our bodies; it also provides information on the functional state of certain tissues. The breakthrough for fMRI came in the 1980s when researchers discovered that MRI can also be used to detect changes in the oxygen saturation of blood, a principle known as BOLD (blood oxygen level dependent) imaging. There is a 20 percent difference between the magnetic susceptibility of oxygenated arterial blood and that of deoxygenated venous blood. Unlike oxygenated haemoglobin, deoxygenated haemoglobin amplifies the strength of a magnetic field in its vicinity. This difference can be seen on an MRI image.

Resuscitation of the brain after a 15-minute cardiac arrest in fMRI: the images provide information about the degree of brain damage as well as a detailed analysis of the recovery curve. The top three rows show examples of successful resuscitation; the bottom row, an unsuccessful one. Comparison with concentration images of ATP, glucose and lactate shows that the MR images are in fact closely related to the biochemical changes. Based on such studies, the course of cerebral infarction and the success of various therapeutic measures can be documented. Credit: Max Planck Institute.

fMRI has given us new insights into the brain, especially in neurobiology. However, the initial phase of euphoria was followed by a wave of scepticism among scientists, who questioned how informative the “coloured images” really are. Although fMRI can in fact generate huge volumes of data, there is often a lack of background information or basic understanding to permit a meaningful interpretation. As a result, there is a yawning gap between fMRI measurements of brain activity and findings in animals based on electrophysiological recordings.

This is due mainly to technical considerations: interactions between the strong MRI field and currents being measured at the electrodes made it impossible to apply the two methods simultaneously to bridge the gap between animal experiments and findings in humans.

fMRI shows input signals

In 2001, Nikos Logothetis and his colleagues at the Max Planck Institute for Biological Cybernetics in Tübingen were the first to overcome this barrier. With the help of special electrodes and sophisticated data processing, they showed unambiguously that BOLD fMRI actually does measure changes in the activity of nerve cells. They also discovered that BOLD signals correlate to the arrival and local processing of data in an area of the brain rather than to output signals that are transmitted to other areas of the brain. Their paper was a milestone in our understanding of MRI and has been cited over 2500 times worldwide.

Their novel experimental setup enabled the Tübingen scientists to study various aspects of nerve cell activity and to distinguish between action potentials and local field potentials. Action potentials are electrical signals that originate from single nerve cells or a relatively small group of nerve cells. They are all-or-nothing signals that occur only if the triggering stimulus exceeds a certain threshold. Action potentials therefore reflect output signals. These signals are detected by electrodes located in the immediate vicinity of the nerve cells. By contrast, local field potentials are slowly varying electrical potentials that reflect signals entering and being processed in a larger group of nerve cells.

Applying fMRI together with both types of electrophysiological recording simultaneously, the Max Planck researchers examined the responses to a visual stimulus in the visual cortex of anaesthetized monkeys. Comparison of the measurements showed that fMRI data relate more to local field potentials than to single-cell and multi-unit potentials. This means that changes in blood oxygen saturation are not necessarily associated with output signals from nerve cells; instead, they reflect the arrival and processing of signals received from other areas of the brain.

Another important discovery the Tübingen researchers made was that, because of the large variability of vascular reactions, BOLD fMRI data have a much lower signal-to-noise ratio than electrophysiological recordings. Because of this, conventional statistical analyses of human fMRI data underestimate the extent of activity in the brain. In other words, the absence of an fMRI signal in an area of the brain does not necessarily mean that no information is being processed there. Doctors need to take this into account when interpreting fMRI data.

NOTES ABOUT THIS NEUROIMAGING RESEARCH

Contact: Christina Beck – Max Planck Institute
Source: Max Planck Institute press release
Image Source: The image is credited to Max Planck Institute and is adapted from the press release

Source: Understanding the Human Brain – Neuroscience News


[ARTICLE] Parietal operculum and motor cortex activities predict motor recovery in moderate to severe stroke – Full Text

Abstract

While motor recovery following mild stroke has been extensively studied with neuroimaging, mechanisms of recovery after moderate to severe strokes of the types that are often the focus for novel restorative therapies remain obscure. We used fMRI to: 1) characterize reorganization occurring after moderate to severe subacute stroke, 2) identify brain regions associated with motor recovery and 3) test whether brain activity associated with passive movement measured in the subacute period could predict motor outcome six months later.

Because many patients with large strokes involving sensorimotor regions cannot engage in voluntary movement, we used passive flexion-extension of the paretic wrist to compare 21 patients with subacute ischemic stroke to 24 healthy controls one month after stroke. Clinical motor outcome was assessed with Fugl-Meyer motor scores (motor-FMS) six months later. Multiple regression, with predictors including baseline (one-month) motor-FMS and sensorimotor network regional activity (ROI) measures, was used to determine optimal variable selection for motor outcome prediction. Sensorimotor network ROIs were derived from a meta-analysis of arm voluntary movement tasks. Bootstrapping with 1000 replications was used for internal model validation.

During passive movement, both control and patient groups exhibited activity increases in multiple bilateral sensorimotor network regions, including the primary motor (MI), premotor and supplementary motor areas (SMA), cerebellar cortex, putamen, thalamus, insula, Brodmann area (BA) 44 and parietal operculum (OP1-OP4). Compared to controls, patients showed: 1) lower task-related activity in ipsilesional MI, SMA and contralesional cerebellum (lobules V-VI) and 2) higher activity in contralesional MI, superior temporal gyrus and OP1-OP4. Using multiple regression, we found that the combination of baseline motor-FMS, activity in ipsilesional MI (BA4a), putamen and ipsilesional OP1 predicted motor outcome measured 6 months later (adjusted-R2 = 0.85; bootstrap p < 0.001). Baseline motor-FMS alone predicted only 54% of the variance. When baseline motor-FMS was removed, the combination of increased activity in ipsilesional MI-BA4a, ipsilesional thalamus, contralesional mid-cingulum and contralesional OP4, and decreased activity in ipsilesional OP1, predicted better motor outcome (adjusted-R2 = 0.96; bootstrap p < 0.001).
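The internal validation described here, refitting an adjusted-R² regression across bootstrap resamples of the patient sample, can be sketched in miniature. This is a toy illustration with simulated data, not the authors' model: the number of predictors, coefficients and noise level are arbitrary stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for 21 patients: baseline motor-FMS plus three ROI
# activity measures (e.g. ipsilesional MI-BA4a, putamen, ipsilesional OP1).
n = 21
X = rng.normal(size=(n, 4))
y = X @ np.array([0.6, 0.3, 0.2, -0.2]) + rng.normal(scale=0.5, size=n)

def adjusted_r2(X, y):
    """Fit ordinary least squares with an intercept; return adjusted R^2."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - resid.var() / y.var()
    rows, p = X.shape
    return 1.0 - (1.0 - r2) * (rows - 1) / (rows - p - 1)

# Internal validation: refit the model on 1000 bootstrap resamples of the
# patients (sampling rows with replacement) and summarise the distribution.
boot = [adjusted_r2(X[idx], y[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(1000))]
boot_median = float(np.median(boot))
```

With only 21 patients and several predictors, this kind of resampling check matters: a high adjusted R² on the full sample can be optimistic, and the spread of the bootstrap distribution indicates how stable it is.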

In subacute stroke, fMRI brain activity related to passive movement measured in a sensorimotor network defined by activity during voluntary movement predicted motor recovery better than baseline motor-FMS alone. Furthermore, fMRI sensorimotor network activity measures considered alone allowed excellent clinical recovery prediction and may provide reliable biomarkers for assessing new therapies in clinical trial contexts. Our findings suggest that neural reorganization related to motor recovery from moderate to severe stroke results from balanced changes in ipsilesional MI (BA4a) and a set of phylogenetically more archaic sensorimotor regions in the ventral sensorimotor trend. OP1 and OP4 processes may complement the ipsilesional dorsal motor cortex in achieving compensatory sensorimotor recovery.

Fig. 2

Fig. 2. Four representative axial slices showing stroke lesion extent in 21 patients (FLAIR images).

Continue —> Parietal operculum and motor cortex activities predict motor recovery in moderate to severe stroke


[WEB SITE] Much of what we know about the brain may be wrong: The problem with fMRI

The past decade has brought us jaw-dropping insights about the hidden workings of our brains, in part thanks to a popular brain scan technique called fMRI. But a major new study has revealed that fMRI interpretation has a serious flaw, one that could mean that much of what we’ve learned about our brains this way might need a second look.

On TV and in movies, we’ve all seen doctors stick an X-ray up on the lightbox and play out a dramatic scene: “What’s that dark spot, doctor?” “Hm…”

In reality, though, a modern medical scan contains so much data, no single pair of doctor’s eyes could possibly interpret it. The brain scan known as fMRI, for functional magnetic resonance imaging, produces a massive data set that can only be understood by custom data analysis software. Armed with this analysis, neuroscientists have used the fMRI scan to produce a series of paradigm-shifting discoveries about our brains.

Now, an unsettling new report, which is causing waves in the neuroscience community, suggests that fMRI’s custom software can be deeply flawed — calling into question many of the most exciting findings in recent neuroscience.

The problem researchers have uncovered is simple: the computer programs designed to sift through the images produced by fMRI scans have a tendency to suggest differences in brain activity where none exist. For instance, scans of humans who are resting, not thinking about anything in particular and not doing anything interesting can yield spurious differences in brain activity. The method has even been shown to indicate brain activity in a dead salmon, whose stilled brain lit up an MRI as if it were somehow still dreaming of a spawning run.

The report throws into question the results of some portion of the more than 40,000 studies that have been conducted using fMRI, studies that plumb the brainy depths of everything from free will to fear. And scientists are not quite sure how to recover.

“It’s impossible to know how many fMRI studies are wrong, since we do not have access to the original data,” says computer scientist Anders Eklund of Linkoping University in Sweden, who conducted the analysis.

How it should have worked: Start by signing up subjects. Scan their brains while they rest inside an MRI machine. Then scan their brains again when exposed to pictures of spiders, say. Those subjects who are afraid of spiders will have blood rush to those regions of the brain involved in thinking and feeling fear, because such thoughts or feelings are suspected to require more oxygen. With the help of a computer program, the MRI machine then registers differences in hemoglobin, the iron-rich molecule that makes blood red and carries oxygen from place to place. (That’s the functional in fMRI.) The scan then looks at whether those hemoglobin molecules are still carrying oxygen to a given place in the brain, or not, based on how the molecules respond to the powerful magnetic fields. Scan enough brains and see how the fearful differ from the fearless, and perhaps you can identify the brain regions or structures associated with thinking or feeling fear.

That’s the theory, anyway. In order to detect such differences in brain activity, it would be best to scan a large number of brains, but the difficulty and expense often make this impossible. A single MRI scan can cost around $2,600, according to a 2014 NerdWallet analysis. Further, the differences in blood flow are often tiny. And then there’s the fact that computer programs have to sift through the images of the 1,200 or so cubic centimeters of gelatinous tissue that make up each individual brain and compare them to others, a big data analysis challenge.

Eklund’s report shows that the assumptions behind the main computer programs used to sift such big fMRI data have flaws, as turned up by nearly 3 million random evaluations of the resting brain scans of 499 volunteers from Cambridge, Massachusetts; Beijing; and Oulu, Finland. One program turned out to have a 15-year-old coding error (which has now been fixed) that caused it to detect too much brain activity. This highlights the challenge of researchers working with computer code that they are not capable of checking themselves, a challenge not confined just to neuroscience.
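The logic of those random evaluations can be sketched in miniature. In this toy version (a hypothetical scalar summary per subject stands in for whole-brain maps, and a plain uncorrected t-threshold stands in for the cluster-level corrections the study actually tested), two "groups" are repeatedly drawn from the same resting pool, so any significant difference is by construction a false positive:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: one resting-state summary value per subject.
# (The actual study analysed full brain maps from 499 volunteers.)
subjects = rng.normal(size=499)

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled * (1 / na + 1 / nb))

crit = 2.02   # approximate two-sided 5% critical value for df = 38
trials = 1000
hits = 0
for _ in range(trials):
    idx = rng.permutation(len(subjects))
    # Two disjoint random "groups" from the same resting pool: there is no
    # true difference, so every detection is a false positive.
    g1, g2 = subjects[idx[:20]], subjects[idx[20:40]]
    hits += abs(two_sample_t(g1, g2)) > crit
false_positive_rate = hits / trials
```

In this well-behaved scalar case the empirical rate lands near the nominal 5 percent; the study's point was that with real voxel-wise maps and the standard cluster-inference software, the observed rate could be far higher than nominal.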

An fMRI scan during working memory tasks.

The brain is even more complicated than we thought. Worse, Eklund and his colleagues found that all the programs assume that brains at rest have the same response to the jet-engine roar of the MRI machine itself, as well as to whatever random thoughts and feelings occur in the brain. Those assumptions appear to be wrong. The brain at rest is “actually a bit more complex,” Eklund says.

More specifically, the white matter of the brain appears to be underrepresented in fMRI analyses while another specific part of the brain — the posterior cingulate, a region in the middle of the brain that connects to many other parts — shows up as a “hot spot” of activity. As a result, the programs are more likely to single it out as showing extra activity even when there is no difference. “The reason for this is still unknown,” Eklund says.

Overall, the programs had a false positive rate — detecting a difference where none actually existed — of as much as 70 percent.

Unknown unknowns: This does not mean all fMRI studies are wrong. Co-author and statistician Thomas Nichols of the University of Warwick calculates that some 3,500 studies may be affected by such false positives, and such false positives can never be eliminated entirely. But a survey of 241 recent fMRI papers found 96 that could have even worse false-positive rates than those found in this analysis.

“The paper makes an important criticism,” says Nancy Kanwisher, a neuroscientist at MIT (TED Talk: A neural portrait of the human mind), though she points out that it does not undermine those fMRI studies that do not rely on these computer programs.

Nonetheless, it is worrying. “I think the fallout has yet to be fully evaluated. It appears to apply to quite a few studies, certainly the studies done in a generic way that is the bread-and-butter of fMRI,” says Douglas Greve, a neuroimaging specialist at Massachusetts General Hospital. What’s needed is more scrutiny, Greve suggests.

Another argument for open data. Eklund and his colleagues were only able to discover this methodological flaw thanks to the open sharing of group brain scan data by the 1,000 Functional Connectomes Project. Unfortunately, such sharing of brain scan data is more the exception than the norm, which hinders other researchers attempting to re-create the experiment and replicate the results. Such replication is a cornerstone of the scientific method, ensuring that findings are robust. Eklund, for one, therefore encourages neuroimagers to “share their fMRI data, so that other researchers can replicate their findings and re-analyze the data several years later.” Only then can scientists be sure that the undiscovered activity of the human brain is truly revealed … and that dead salmon are not still dreaming.

ABOUT THE AUTHOR

David Biello is an award-winning journalist writing most often about the environment and energy. His book “The Unnatural World” publishes November 2016. It’s about whether the planet has entered a new geologic age as a result of people’s impacts and, if so, what we should do about this Anthropocene. He also hosts documentaries, such as “Beyond the Light Switch” and the forthcoming “The Ethanol Effect” for PBS. He is the science curator for TED.

Source: The problem with fMRI

 


[WEB SITE] Brain plasticity after injury: an interview with Dr Swathi Kiran

What is brain plasticity and why is it important following a brain injury?

Brain plasticity is the phenomenon by which the brain can rewire and reorganize itself in response to changing stimulus input. Brain plasticity is at play when one is learning new information (at school) or learning a new language and occurs throughout one’s life.

Brain plasticity is particularly important after a brain injury, as the neurons in the brain are damaged after a brain injury, and depending on the type of brain injury, plasticity may either include repair of damaged brain regions or reorganization/rewiring of different parts of the brain.


How much is known about the level of injury the brain can recover from? Over what time period does the brain adapt to an injury?

A lot is known about brain plasticity immediately after an injury. Like any other injury to the body, after an initial negative reaction to the injury, the brain goes through a massive healing process, where the brain tries to repair itself after the injury. Research tells us exactly what kinds of repair processes occur hours, days and weeks after the injury.

What is not well understood is how recovery continues to occur in the long term. So, there is a lot of research showing that the brain is plastic and undergoes recovery even months after the brain damage, but what promotes such recovery and what hinders it is not well understood.

It is well understood that some rehabilitative training promotes recovery after brain injury, and most current research is focused on this topic.

What techniques are used to study brain plasticity?

Human brain plasticity has mostly been studied using non-invasive imaging methods, because these techniques allow us to measure the gray matter (neurons) and white matter (axons), albeit at a somewhat coarse level. MRI and fMRI techniques provide snapshots and videos of the brain in action, which allows us to capture changes in the brain that are interpreted as plasticity.

Also, more recently, there are non-invasive stimulation methods, such as transcranial direct current stimulation and transcranial magnetic stimulation, which deliver electrical or magnetic stimulation to different parts of the brain, and such stimulation causes certain changes in the brain.

How has our understanding advanced over recent years?

One of the biggest shifts in our understanding of brain plasticity is that it is a lifelong phenomenon. We previously thought that the brain is plastic only during childhood and that once you reach adulthood, the brain is hardwired and no new changes can be made to it.

However, we now know that even the adult brain can be modified and reorganized depending on what new information it is learning. This understanding has a profound impact on recovery from brain injury because it means that with repeated training/instruction, even the damaged brain is plastic and can recover.

What role do you see personalized medicine playing in brain therapy in the future?

One reason why rehabilitation after brain injury is so complex is that no two individuals are alike. Each individual’s education and life experiences have shaped their brain (due to plasticity!) in unique ways, so after a brain injury, we cannot expect that recovery in two individuals will occur the same way.

Personalized medicine allows treatment to be tailored for each individual, taking into account their strengths and weaknesses and providing exactly the right kind of therapy for that person. One-size-fits-all treatment will therefore give way to individualized treatments prescribed at exactly the right dosage.


What is ‘automedicine’ and do you think this could become a reality?

I am not sure we understand what automedicine can and cannot do just yet, so it’s a little early to comment on the reality. Using data to improve our algorithms to precisely deliver the right amount of rehabilitation/therapy will likely be a reality very soon, but it is not clear that it will eliminate the need for doctors or rehabilitation professionals.

What do you think the future holds for people recovering from strokes and brain injuries and what’s Constant Therapy’s vision?

The future for people recovering from strokes and brain injuries is more optimistic than it has ever been, for three important reasons. First, as I pointed out above, there is a tremendous amount of research showing that the brain is plastic throughout life, and this plasticity can be harnessed after brain injury as well.

Second, recent advances in technology allow patients to receive therapy at their homes at their convenience, empowering them to take control of their therapy instead of being passive consumers.

Finally, the data that is collected from individuals who continuously receive therapy provides a rich trove of information about how patients can improve after rehabilitation, what works and what does not work.

Constant Therapy’s vision incorporates all these points, and its goal is to provide effective, efficient and reasonable rehabilitation to patients recovering from strokes and brain injury.

Where can readers find more information?

About Dr Swathi Kiran

Swathi Kiran is Professor in the Department of Speech and Hearing Sciences at Boston University and Assistant in Neurology/Neuroscience at Massachusetts General Hospital. Prior to Boston University, she was at the University of Texas at Austin. She received her Ph.D. from Northwestern University.

Her research interests focus on lexical semantic treatment for individuals with aphasia, bilingual aphasia and neuroimaging of brain plasticity following a stroke.

She has over 70 publications and her work has appeared in high impact journals across a variety of disciplines including cognitive neuroscience, neuroimaging, rehabilitation, speech language pathology and bilingualism.

She is a fellow of the American Speech Language and Hearing Association and serves on various journal editorial boards and grant review panels including at National Institutes of Health.

Her work has been continually funded by the National Institutes of Health/NIDCD and American Speech Language Hearing Foundation awards including the New Investigator grant, the New Century Scholar’s Grant and the Clinical Research grant. She is the co-founder and scientific advisor for Constant Therapy, a software platform for rehabilitation tools after brain injury.

Source: Brain plasticity after injury: an interview with Dr Swathi Kiran


[Poster] Acupuncture as a Treatment to Improve Cognitive Function After Brain Injury: A Case Study

The primary objective was to explore the relationship between acupuncture and cognitive therapy and change in cognitive domains following traumatic brain injury. The secondary objective was to evaluate the potential relationship between acupuncture and cognitive therapy and volume of activation in select brain areas as shown by functional MRI (fMRI).

Source: Acupuncture as a Treatment to Improve Cognitive Function After Brain Injury: A Case Study – Archives of Physical Medicine and Rehabilitation

