Posts Tagged machine learning

[Abstract + References] Evaluation of an Activity Tracker to Detect Seizures Using Machine Learning

Abstract

Currently, the tracking of seizures is highly subjective, relying on qualitative information provided by the patient and family rather than quantifiable seizure data. Seizure detection devices have previously been used to detect seizure events in populations of epilepsy patients. We therefore chose the Fitbit Charge 2 smart watch to determine whether it could detect seizure events in patients admitted to an epilepsy monitoring unit, compared against continuous electroencephalographic (EEG) monitoring. A total of 40 patients who met the inclusion criteria were enrolled between 2015 and 2016. All seizure types were recorded. Twelve patients had a total of 53 epileptic seizures. The patient-aggregated receiver operating characteristic curve had an area under the curve of 0.58 [0.56, 0.60], indicating that the neural network models were generally able to detect seizure events at an above-chance level. However, the overall low specificity implied a false alarm rate that would likely make the model unsuitable in practice. Overall, the Fitbit Charge 2 activity tracker does not appear well suited in its current form to detect epileptic seizures when compared with data recorded from continuous EEG.
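For context on the reported statistic, the area under the ROC curve equals the probability that a randomly chosen seizure window receives a higher model score than a randomly chosen non-seizure window (0.5 is chance). A minimal pure-Python sketch of that pairwise estimate, using invented scores rather than the study's data:

```python
# Illustrative only: AUC as the probability that a random positive
# outranks a random negative (scores below are made up, not study data).
from itertools import product

def auc(pos_scores, neg_scores):
    """Pairwise (Mann-Whitney) estimate of the ROC AUC; ties count 0.5."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p, n in product(pos_scores, neg_scores)
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for seizure (positive) and non-seizure windows.
seizure = [0.62, 0.55, 0.70, 0.48]
non_seizure = [0.50, 0.58, 0.45, 0.52, 0.40]
print(auc(seizure, non_seizure))  # fraction of pairs ranked correctly
```

An AUC of 0.58, as reported, means this pairwise win rate is only slightly above chance, which is consistent with the abstract's conclusion about practical unsuitability.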

References

1. Sander JW. The epidemiology of epilepsy revisited. Curr Opin Neurol. 2003;16(2):165–170.
2. Jory C, Shankar R, Coker D, McLean B, Hanna J, Newman C. Safe and sound? A systematic literature review of seizure detection methods for personal use. Seizure. 2016;36:4–15.
3. Beniczky S, Jeppesen J. Non-electroencephalography-based seizure detection. Curr Opin Neurol. 2019;32(2):198–204.
4. Ryvlin P, Ciumas C, Wisniewski I, Beniczky S. Wearable devices for sudden unexpected death in epilepsy prevention. Epilepsia. 2018;59(suppl 1):61–66.
5. Poh MZ, Loddenkemper T, Reinsberger C, et al. Convulsive seizure detection using a wrist-worn electrodermal activity and accelerometry biosensor. Epilepsia. 2012;53(5):e93–e97.
6. Feehan LM, Geldman J, Sayre EC, et al. Accuracy of Fitbit devices: systematic review and narrative syntheses of quantitative data. JMIR mHealth uHealth. 2018;6(8):e10527.
7. Fisher RS, Cross JH, French JA, et al. Operational classification of seizure types by the International League Against Epilepsy: position paper of the ILAE Commission for Classification and Terminology. Epilepsia. 2017;58(4):522–530.
8. Fisher RS, Cross JH, D’Souza C, et al. Instruction manual for the ILAE 2017 operational classification of seizure types. Epilepsia. 2017;58(4):531–542.
9. Scheffer IE, Berkovic S, Capovilla G, et al. ILAE classification of the epilepsies: position paper of the ILAE Commission for Classification and Terminology. Epilepsia. 2017;58(4):512–521.
10. Lee J, Finkelstein J. Consumer sleep tracking devices: a critical review. Stud Health Technol Inform. 2015;210:458–460.
11. Montgomery-Downs HE, Insana SP, Bond JA. Movement toward a novel activity monitoring device. Sleep Breath. 2012;16(3):913–917.
12. Moreno-Pino F, Porras-Segovia A, Lopez-Esteban P, Artes A, Baca-Garcia E. Validation of Fitbit Charge 2 and Fitbit Alta HR against polysomnography for assessing sleep in adults with obstructive sleep apnea. J Clin Sleep Med. 2019;15(11):1645–1653.
13. Regalia G, Onorati F, Lai M, Caborni C, Picard RW. Multimodal wrist-worn devices for seizure detection and advancing research: focus on the Empatica wristbands. Epilepsy Res. 2019;153:79–82.
14. Van Ness PC. Are seizure detection devices ready for prime time? Epilepsy Curr. 2019;19(1):36–37.
15. Halford JJ, Sperling MR, Nair DR, et al. Detection of generalized tonic-clonic seizures using surface electromyographic monitoring. Epilepsia. 2017;58(11):1861–1869.
16. Patterson AL, Mudigoudar B, Fulton S, et al. SmartWatch by SmartMonitor: assessment of seizure detection efficacy for various seizure types in children, a large prospective single-center study. Pediatr Neurol. 2015;53(4):309–311.
17. van Andel J, Leijten F, van Delden H, van Thiel G. What makes a good home-based nocturnal seizure detector? A value sensitive design. PLoS One. 2015;10(4):e0121446.
18. Bruno E, Simblett S, Lang A, et al. Wearable technology in epilepsy: the views of patients, caregivers, and healthcare professionals. Epilepsy Behav. 2018;85:141–149.
19. Patel AD, Moss R, Rust SW, et al. Patient-centered design criteria for wearable seizure detection devices. Epilepsy Behav. 2016;64(pt A):116–121.
20. Kurada AV, Srinivasan T, Hammond S, Ulate-Campos A, Bidwell J. Seizure detection devices for use in antiseizure medication clinical trials: a systematic review. Seizure. 2019;66:61–69.
21. Benedetto S, Caldato C, Bazzan E, Greenwood DC, Pensabene V, Actis P. Assessment of the Fitbit Charge 2 for monitoring heart rate. PLoS One. 2018;13(2):e0192691.
22. Gutierrez EG, Crone NE, Kang JY, Carmenate YI, Krauss GL. Strategies for non-EEG seizure detection and timing for alerting and interventions with tonic-clonic seizures. Epilepsia. 2018;59(suppl 1):36–41.
23. Beniczky S, Ryvlin P. Standards for testing and clinical validation of seizure detection devices. Epilepsia. 2018;59(suppl 1):9–13.


[Abstract] A Wearable Hand Rehabilitation System with Soft Gloves

Abstract

Hand paralysis is one of the most common complications in stroke patients and severely impacts their daily lives. This paper presents a wearable hand rehabilitation system that supports both mirror therapy and task-oriented therapy. A pair of gloves, i.e., a sensory glove and a motor glove, was designed and fabricated from a soft, flexible material, providing greater comfort and safety than conventional rigid rehabilitation devices. The sensory glove, worn on the non-affected hand, contains force and flex sensors used to measure the gripping force and bending angle of each finger joint for motion detection. The motor glove, driven by micromotors, provides the affected hand with an assistive driving force to perform training tasks. Machine learning is employed to recognize gestures from the sensory glove and to facilitate rehabilitation tasks for the affected hand. The proposed system recognizes 16 finger gestures with an accuracy of 93.32%, allowing patients to conduct mirror therapy using fine-grained gestures that train a single finger or multiple fingers in coordination. A more sophisticated task-oriented rehabilitation with mirror therapy is also presented, which offers six types of training tasks with an average real-time accuracy of 89.4%.

via A Wearable Hand Rehabilitation System with Soft Gloves – IEEE Journals & Magazine


[Abstract] Machine Learning for Brain Stroke: A Review

Machine Learning (ML) delivers accurate and fast predictions and has become a powerful tool in health settings, offering personalized clinical care for stroke patients. Applications of ML and deep learning in health care are growing; however, some research areas with a real need for investigation still receive little scientific attention. The aim of this work is therefore to classify the state of the art in ML techniques for brain stroke into four categories based on their functionality or similarity, and then to review the studies in each category systematically. A total of 39 studies on ML for brain stroke, published between 2007 and 2019, were identified from the ScienceDirect scientific database. The Support Vector Machine (SVM) emerged as the optimal model in 10 studies of stroke problems. Most studies addressed stroke diagnosis, while the fewest addressed stroke treatment, identifying a research gap for further investigation. CT images were the most frequently used dataset in stroke research. Finally, SVM and Random Forests proved to be efficient techniques in each category. The present study showcases the contributions of the various ML approaches applied to brain stroke.

via Machine Learning for Brain Stroke: A Review – ScienceDirect


[Abstract] Development and Clinical Evaluation of Web-based Upper-limb Home Rehabilitation System using Smartwatch and Machine-learning model for Chronic Stroke Survivors: Development, Usability, and Comparative Study

ABSTRACT

Background:

Human activity recognition (HAR) technology has advanced with the development of wearable devices and machine learning (ML) algorithms. Although previous research has shown the feasibility of HAR technology for home rehabilitation, there has not been enough evidence based on clinical trials.

Objective:

We intended to achieve two goals: (1) to develop a home-based rehabilitation (HBR) system that can recognize a patient's home rehabilitation exercises using an ML algorithm and a smartwatch; and (2) to evaluate clinical outcomes for patients with chronic stroke using the HBR system.

Methods:

We developed our HBR system using an off-the-shelf smartwatch and a convolutional neural network (CNN). The system was designed to share each patient's home-exercise time data with a physical therapist. To identify the most accurate way of detecting the exercises of chronic stroke patients, we compared accuracy across personal versus total datasets, and across accelerometer-only, gyroscope-only, and combined accelerometer-and-gyroscope data. Using the system, we conducted a preliminary study with two groups of stroke survivors (22 participants in the HBR group and 10 in a control group). Exercise compliance was periodically checked by phone calls in both groups. To measure clinical outcomes, we assessed the Wolf Motor Function Test (WMFT), the Fugl-Meyer Assessment of the Upper Extremity (FMA-UE), a grip power test, Beck's depression index, and range of motion (ROM) of the shoulder joint at 0 (baseline), 6 (mid-term), 12 (final), and 18 weeks (6 weeks after the final assessment without the HBR system).
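The accelerometer/gyroscope comparison described above starts from a common preprocessing step: segmenting the synchronized sensor streams into fixed-length overlapping windows whose stacked channels feed the classifier. A minimal sketch of that step; the window length, step size, and synthetic signals are illustrative assumptions, not the study's parameters:

```python
# Illustrative sliding-window segmentation for HAR sensor fusion.
# Window length and step are assumptions, not the study's values.

def segment(acc, gyro, window=50, step=25):
    """Stack 3-axis accelerometer + 3-axis gyroscope samples into
    overlapping windows of shape (window, 6) for a classifier."""
    assert len(acc) == len(gyro)
    fused = [a + g for a, g in zip(acc, gyro)]  # 6 channels per sample
    return [fused[i:i + window]
            for i in range(0, len(fused) - window + 1, step)]

# 200 synthetic samples of (x, y, z) per sensor.
acc = [[0.0, 0.0, 9.8]] * 200
gyro = [[0.1, 0.0, 0.0]] * 200
windows = segment(acc, gyro)
print(len(windows), len(windows[0]), len(windows[0][0]))  # count, length, channels
```

Dropping the gyroscope columns (or the accelerometer columns) from `fused` yields the single-sensor variants whose accuracies the study compares.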

Results:

The ML model built from personal data (99.9%) was more accurate than the model built from total data (95.8%). Movement detection accuracy was highest for accelerometer combined with gyroscope data (99.9%), compared with gyroscope alone (96.0%) or accelerometer alone (98.1%). With regard to clinical outcomes, drop-out rates in the control and experimental groups were 4/10 (40%) and 5/22 (22%) at 12 weeks, and 10/10 (100%) and 10/22 (45%) at 18 weeks, respectively. The experimental group (N=17) showed a significant improvement in WMFT score (P=.02) and ROM (P<.01). The control group (N=6) showed a significant change only in shoulder internal rotation (P=.03).

Conclusions:

This research found that a homecare system using a commercial smartwatch and an ML model can facilitate participation in home training and improve WMFT scores and shoulder ROM in flexion and internal rotation in the treatment of patients with chronic stroke. We recommend our HBR system strategy as an innovative and cost-effective homecare treatment modality. Clinical Trial: Preliminary study (Phase I)



[Abstract + References] Demystification of AI-driven medical image interpretation: past, present and future

Abstract

The recent explosion of ‘big data’ has ushered in a new era of artificial intelligence (AI) algorithms in every sphere of technological activity, including medicine, and in particular radiology. However, the recent success of AI in certain flagship applications has, to some extent, masked decades-long advances in computational technology development for medical image analysis. In this article, we provide an overview of the history of AI methods for radiological image analysis in order to provide a context for the latest developments. We review the functioning, strengths and limitations of more classical methods as well as of the more recent deep learning techniques. We discuss the unique characteristics of medical data and medical science that set medicine apart from other technological domains in order to highlight not only the potential of AI in radiology but also the very real and often overlooked constraints that may limit the applicability of certain AI methods. Finally, we provide a comprehensive perspective on the potential impact of AI on radiology and on how to evaluate it not only from a technical point of view but also from a clinical one, so that patients can ultimately benefit from it.

Key Points

• Artificial intelligence (AI) research in medical imaging has a long history

• The functioning, strengths and limitations of more classical AI methods are reviewed, together with those of more recent deep learning methods.

• A perspective is provided on the potential impact of AI on radiology and on its evaluation from both technical and clinical points of view.

References

  1. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444
  2. Tang A, Tam R, Cadrin-Chênevert A et al; Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group (2018) Canadian Association of Radiologists white paper on artificial intelligence in radiology. Can Assoc Radiol J 69:120–135
  3. Summers RM (2016) Progress in fully automated abdominal CT interpretation. AJR Am J Roentgenol 207:67–79
  4. Matsuyama T (1989) Expert systems for image processing: knowledge-based composition of image analysis processes. Comput Vision Graph 48:22–49
  5. Stansfield SA (1986) ANGY: a rule-based expert system for automatic segmentation of coronary vessels from digital subtracted angiograms. IEEE Trans Pattern Anal Mach Intell 2:188–199
  6. Park H, Bland PH, Meyer CR (2003) Construction of an abdominal probabilistic atlas and its application in segmentation. IEEE Trans Med Imaging 22:483–492
  7. Warfield SK, Zou KH, Wells WM (2004) Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation. IEEE Trans Med Imaging 23:903–921
  8. Okada T, Linguraru MG, Hori M, Summers RM, Tomiyama N, Sato Y (2015) Abdominal multi-organ segmentation from CT images using conditional shape-location and unsupervised intensity priors. Med Image Anal 26:1–18
  9. Iglesias JE, Sabuncu MR (2015) Multi-atlas segmentation of biomedical images: a survey. Med Image Anal 24:205–219
  10. Van Leemput K, Maes F, Vandermeulen D, Colchester A, Suetens P (2001) Automated segmentation of multiple sclerosis lesions by model outlier detection. IEEE Trans Med Imaging 20:677–688
  11. Prastawa M, Bullitt E, Moon N, Van Leemput K, Gerig G (2003) Automatic brain tumor segmentation by subject specific modification of atlas priors. Acad Radiol 10:1341–1348
  12. Erus G, Zacharaki EI, Davatzikos C (2014) Individualized statistical learning from medical image databases: application to identification of brain lesions. Med Image Anal 18:542–554
  13. Viergever MA, Maintz JBA, Klein S, Murphy K, Staring M, Pluim JPW (2016) A survey of medical image registration – under review. Med Image Anal 33:140–144
  14. Kraus WL (2015) Editorial: would you like a hypothesis with those data? Omics and the age of discovery science. Mol Endocrinol 29:1531–1534
  15. Aerts HJ (2016) The potential of radiomic-based phenotyping in precision medicine: a review. JAMA Oncol 2:1636–1642
  16. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297
  17. Quinlan JR (1986) Induction of decision trees. Mach Learn 1:81–106
  18. Matzner-Lober E, Suehs CM, Dohan A, Molinari N (2018) Thoughts on entering correlated imaging variables into a multivariable model: application to radiomics and texture analysis. Diagn Interv Imaging 99:269–270
  19. Guyon I, Elisseeff A (2003) An introduction to variable and feature selection. J Mach Learn Res 3:1157–1182
  20. Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc B 58:267–288
  21. Chartrand G, Cheng PM, Vorontsov E et al (2017) Deep learning: a primer for radiologists. Radiographics 37:2113–2131
  22. Werbos P (1974) Beyond regression: new tools for prediction and analysis in the behavioral sciences. PhD thesis, Harvard Univ
  23. Rosenblatt F (1957) The Perceptron: a perceiving and recognizing automaton. Report 85-460-1, Cornell Aeronautical Laboratory
  24. Lawrence N (2016) Deep learning, Pachinko and James Watt: efficiency is the driver of uncertainty. http://inverseprobability.com/2016/03/04/deep-learning-and-uncertainty. Accessed 23 May 2018
  25. Szegedy C, Zaremba W, Sutskever I et al (2013) Intriguing properties of neural networks. arXiv:1312.6199
  26. Richardson WS, Wilson MC, Nishikawa J, Hayward RS (1995) The well-built clinical question: a key to evidence-based decisions. ACP J Club 123:A12–A13
  27. Simmons JP, Nelson LD, Simonsohn U (2011) False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci 22:1359–1366
  28. Ferrante E, Dokania PK, Marini R, Paragios N (2017) Deformable registration through learning of context-specific metric aggregation. Machine Learning in Medical Imaging Workshop, MLMI (MICCAI 2017), Sep 2017, Quebec City, Canada

via Demystification of AI-driven medical image interpretation: past, present and future | SpringerLink


[WEB PAGE] AI could play ‘critical’ role in identifying appropriate treatment for depression


Published Tuesday, February 11, 2020

A large-scale trial led by scientists at the University of Texas Southwestern (UT Southwestern) has produced a machine learning algorithm which accurately predicts the efficacy of an antidepressant, based on a patient’s neural activity.

The UT Southwestern researchers hope that this tool could eventually play a critical role in deciding which course of treatment would be best for patients with depression, as well as being part of a new generation of “biology-based, objective strategies” which make use of technologies such as AI to treat psychiatric disorders.

The US-wide trial was initiated in 2011 with the intention of better understanding mood disorders such as major depression and seasonal affective disorder (SAD). The trial has yielded many studies, the latest of which demonstrates that doctors could use computational tools to guide treatment choices for depression. The study was published in Nature Biotechnology.

“These studies have been a bigger success than anyone on our team could have imagined,” said Dr. Madhukar Trivedi, the UT Southwestern psychiatrist who oversaw the trial. “We provided abundant data to show we can move past the guessing game of choosing depression treatments and alter the mindset of how the disease should be diagnosed and treated.”

This 16-week trial involved more than 300 participants with depression, who received either a placebo or an SSRI (selective serotonin reuptake inhibitor), the most common type of antidepressant. Despite the widespread prescription of SSRIs, they have been criticised for their side effects and their inefficacy in many patients.

Trivedi had previously established in another study that up to two-thirds of patients do not respond adequately to their first antidepressant. This motivated him to find a way of identifying, before treatment begins, which treatment path is most likely to help a given patient, sparing them further suffering through ineffectual treatment.

Trivedi and his collaborators used an electroencephalogram (EEG) to measure electrical activity in the participants’ cortex before they began the treatment. This data was used to develop a machine learning algorithm to predict which patients would benefit from the medication within two months.

The researchers found that the AI accurately predicted outcomes: patients who were less likely to respond to an antidepressant were more likely to improve with other interventions, such as brain stimulation or therapeutic approaches. The findings were replicated across three additional patient groups.

“It can be devastating for a patient when an antidepressant doesn’t work,” Trivedi said. “Our research is showing that they no longer have to endure the painful process of trial and error.”

Dr Amit Etkin, a Stanford University professor of psychiatry who also worked on the algorithm, added: “This study takes previous research, showing that we can predict who benefits from an antidepressant, and actually brings it to the point of practical utility.”

Next, they hope to develop an interface for the algorithm to be used alongside EEGs – and perhaps also with other means of measuring brain activity like functional magnetic resonance imaging (functional MRI, aka fMRI) or MEG – and have the system approved by the US Food and Drug Administration.
via AI could play ‘critical’ role in identifying appropriate treatment for depression | E&T Magazine


[NEWS] Novel artificial intelligence algorithm helps detect brain tumor
A brain tumor is a mass of abnormal cells growing in the brain. In 2016 alone, there were 330,000 incident cases of brain cancer and 227,000 related deaths worldwide. Early detection is crucial to improving patient prognosis, and a team of researchers has now developed a new imaging technique and artificial intelligence algorithm that can help doctors accurately identify brain tumors.

Published in the journal Nature Medicine, the study reveals a new method that combines modern optical imaging and an artificial intelligence algorithm. The researchers at New York University studied the accuracy of machine learning in producing precise and real-time intraoperative diagnosis of brain tumors.

In the past, the only way to diagnose brain tumors was through hematoxylin and eosin staining of processed tissue, which takes time, and interpretation of the findings relies on the pathologists who examine the specimen. The researchers hope the new method will provide a faster and more accurate diagnosis, which can help initiate effective treatment right away.

In cancer treatment, the earlier a cancer is diagnosed, the earlier oncologists can start treatment, and in most cases early detection improves health outcomes. The researchers found that their novel detection method yielded 94.6 percent accuracy, compared with 93.9 percent for pathology-based interpretation.

The imaging technique

The researchers used a new imaging technique called stimulated Raman histology (SRH), which can reveal tumor infiltration in human tissue. The technique collects scattered laser light and emphasizes features that are not usually seen in many body tissue images.

The scientists then processed and analyzed the new images using an artificial intelligence algorithm, arriving at a brain tumor diagnosis within just two minutes and thirty seconds. Such fast detection can help not only in diagnosing the disease early but also in implementing a fast and effective treatment plan; with cancer caught early, treatments may be more effective at killing cancer cells.

The team also utilized the same technology to accurately identify and remove tumor tissue that cannot be detected by conventional methods.

“As surgeons, we’re limited to acting on what we can see; this technology allows us to see what would otherwise be invisible, to improve speed and accuracy in the OR, and reduce the risk of misdiagnosis. With this imaging technology, cancer operations are safer and more effective than ever before,” Dr. Daniel A. Orringer, associate professor of Neurosurgery at NYU Grossman School of Medicine, said.

Study results

The study is a walkthrough of the research team's various ideas and efforts. First, they built the artificial intelligence algorithm by training a deep convolutional neural network (CNN) on more than 2.5 million samples from 415 patients. The network classifies tissue samples into 13 categories representing the most common types of brain tumors, including meningioma, metastatic tumors, malignant glioma, and lymphoma.

For validation, the researchers recruited 278 patients undergoing brain tumor resection or epilepsy surgery at three university medical centers. Tumor samples from the brain were examined and biopsied, and the researchers divided the samples into two groups: control and experimental.

The control group's samples were processed traditionally in a pathology laboratory, a process that spans 20 to 30 minutes. The experimental group's samples were tested intraoperatively, from image acquisition through examination by the CNN.

Errors were noted in both the experimental and control groups, but they were distinct from each other. The new tool can help centers detect and diagnose brain tumors, particularly centers without expert neuropathologists.

“SRH will revolutionize the field of neuropathology by improving decision-making during surgery and providing expert-level assessment in the hospitals where trained neuropathologists are not available,” Dr. Matija Snuderl, associate professor in the Department of Pathology at NYU Grossman School of Medicine, explained.

Journal references:

Patel, A., Fisher, J., Nichols, E., et al. (2019). Global, regional, and national burden of brain and other CNS cancer, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. The Lancet Neurology. https://www.thelancet.com/journals/laneur/article/PIIS1474-4422(18)30468-X/fulltext

Hollon, T., Pandian, B., Orringer, D., et al. (2019). Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nature Medicine. https://www.nature.com/articles/s41591-019-0715-9
via Novel artificial intelligence algorithm helps detect brain tumor


[ARTICLE] Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation

The use of Artificial Intelligence (AI) and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets and can provide unbiased insights into the fundamental principles of brain function, potentially paving the way for earlier and more accurate detection of brain disorders and better informed intervention protocols. Despite AI's ability to produce accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable Artificial Intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; here we report on some of these practical approaches. We discuss the potential value of XAI to the field of neurostimulation, for both basic scientific inquiry and therapeutic purposes, as well as outstanding questions and obstacles to the success of the XAI approach.

Introduction

One of the greatest challenges to effective brain-based therapies is our inability to monitor and modulate neural activity in real time. Moving beyond the relatively simple open-loop neurostimulation devices that are currently the standard in clinical practice (e.g., for epilepsy) requires a closed-loop approach in which the therapeutic application of neurostimulation is determined by characterizing the moment-to-moment state of the brain (Herron et al., 2017). However, major obstacles to such a closed-loop approach remain. For one, we do not know how to objectively characterize mental states or even detect the pathological activity associated with most psychiatric disorders. Second, we do not know the most effective way to improve maladaptive behaviors by means of neurostimulation. Solving these problems requires innovative experimental frameworks leveraging intelligent computational approaches able to sense, interpret, and modulate large amounts of data from behaviorally relevant neural circuits at the speed of thought. New approaches such as computational psychiatry (Redish and Gordon, 2016; Ferrante et al., 2019) and ML are emerging. However, current ML approaches applied to neural data typically do not provide an understanding of the underlying neural processes or how they contributed to the outcome (i.e., the prediction or classifier). For example, significant progress has been made using ML to effectively classify EEG patterns, but the understanding of brain function and mechanisms derived from such approaches remains relatively limited (Craik et al., 2019). Such an understanding, be it correlational or causal, is key to improving ML methods and to suggesting new therapeutic targets or protocols using different techniques.
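The closed-loop principle described here (sense a state, decide, stimulate, repeat) can be caricatured in a few lines; the biomarker trace, threshold, and suppressive stimulation effect below are all invented for illustration and stand in for a real neural decoder:

```python
# Toy closed-loop neurostimulation controller: stimulate only while a
# decoded "pathological" biomarker exceeds a threshold. All numbers are
# invented; a real system would decode state from neural recordings.

def closed_loop(biomarker_trace, threshold=0.7, effect=0.3):
    """Return a per-timestep log of (time, biomarker value, stimulated?)."""
    log = []
    state = list(biomarker_trace)
    for t, value in enumerate(state):
        stimulate = value > threshold
        log.append((t, round(value, 2), stimulate))
        if stimulate and t + 1 < len(state):
            state[t + 1] -= effect  # assumed suppressive effect of stimulation
    return log

trace = [0.5, 0.8, 0.9, 0.6, 0.75]
for t, value, stim in closed_loop(trace):
    print(t, value, "STIM" if stim else "idle")
```

The contrast with open-loop devices is visible in the conditional: stimulation is gated by the sensed state at every step rather than delivered on a fixed schedule.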
Explainable Artificial Intelligence (XAI) is a relatively new set of techniques that combines sophisticated AI and ML algorithms with effective explanatory techniques to develop explainable solutions that have proven useful in many domain areas (Core et al., 2006; Samek et al., 2017; Yang and Shafto, 2017; Adadi and Berrada, 2018; Choo and Liu, 2018; Dosilovic et al., 2018; Holzinger et al., 2018; Fernandez et al., 2019; Miller, 2019). Recent work has suggested that XAI may be a promising avenue to guide basic neural circuit manipulations and clinical interventions (Holzinger et al., 2017b; Vu et al., 2018; Langlotz et al., 2019). We will develop this idea further here.

Explainable Artificial Intelligence for neurostimulation in mental health can be seen as an extension of the design of brain-machine interfaces (BMIs). BMIs are generally understood as combinations of hardware and software systems designed to rapidly transfer information between one or more brain areas and an external device (Wolpaw et al., 2002; Hatsopoulos and Donoghue, 2009; Nicolelis and Lebedev, 2009; Andersen et al., 2010; Mirabella and Lebedev, 2017). While there is a long history of research on the decoding, analysis and production of neural signals in non-human primates and rodents, much progress has recently been made in developing these techniques for the human brain, both invasively and non-invasively, unidirectionally or bidirectionally (Craik et al., 2019; Martini et al., 2019; Rao, 2019). Motor decision making, for example, has been shown to involve a network of brain areas before and during movement execution (Mirabella, 2014; Hampshire and Sharp, 2015), so that BMI intervention can inhibit movement up to 200 ms after its initiation (Schultze-Kraft et al., 2016; Mirabella and Lebedev, 2017). The advantage of this type of motor-decision BMI is that it is not bound to elementary motor commands (e.g., turning the wheel of a car) but rather to the high-level decision to initiate and complete a movement. That decision can potentially be affected by environmental factors (e.g., an AI vision system detecting cars in the neighboring lane) and internal state (e.g., an AI system assessing the driver's state of fatigue). The current consensus is that response inhibition is an emergent property of a network of discrete brain areas that include the right inferior frontal gyrus and that leverage basic widespread elementary neural circuits such as local lateral inhibition (Hampshire and Sharp, 2015; Mirabella and Lebedev, 2017).
This gyrus, like many other cortical structures, is dynamically recruited, so that individual neurons may code for drastically different aspects of behavior depending on the task at hand. Consequently, designing a BMI targeting such an area requires the ability for the system to rapidly switch its decoding and stimulation paradigms as a function of environmental or internal state information. Such online adaptability must, of course, be learned and personalized for each individual patient, a task ideally suited for AI/ML approaches. In the sensory domain, some have shown that BMI can be used to generate actionable, entirely artificial tactile sensations to trigger complex motor decisions (O’Doherty et al., 2012; Klaes et al., 2014; Flesher et al., 2017). Most BMI research has, however, focused on the sensorimotor system because of the relatively focused and well-defined nature of its neural circuits. Consequently, most clinical applications are focused on neurological disorders. Interestingly, new generations of BMIs are emerging that focus on more cognitive functions, such as detecting and manipulating reward expectations using reinforcement learning paradigms (Mahmoudi and Sanchez, 2011; Marsh et al., 2015; Ramkumar et al., 2016), memory enhancement (Deadwyler et al., 2017), or collective problem solving using multi-brain interfacing in rats (Pais-Vieira et al., 2015) or humans (Jiang et al., 2019). All these applications can potentially benefit from the adaptive properties of AI/ML algorithms and, as mentioned, explainable AI approaches hold the promise of yielding basic mechanistic insights about the neural systems being targeted. However, the use of these approaches in the context of psychiatric or neurodevelopmental disorders has not yet been realized, though their potential is clear.

In computational neuroscience and computational psychiatry there is a contrast between theory-driven models (e.g., reinforcement learning, biophysically inspired network models) and data-driven models (e.g., deep learning or ensemble methods). While the former are highly explainable in terms of biological mechanisms, the latter are high-performing in terms of predictive accuracy. In general, high-performing methods tend to be the least explainable, while explainable methods tend to be the least accurate. Mathematically, the relationship between the two is still not fully formalized or understood. These are the types of issues that occupy the ML community beyond neuroscience and neurostimulation. XAI models in neuroscience might be created by combining theory- and data-driven models. This combination could be achieved by associating explanatory semantic information with features of the model; by using simpler models that are easier to explain; by using richer models that contain more explanatory content; or by building approximate models solely for the purpose of explanation.
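The last strategy, fitting a simpler approximate model solely for explanation, can be made concrete with a small sketch. Everything below is an invented toy (the "black-box" function, the data, the point of interest), not a model from the article; it only illustrates how interpretable linear coefficients can locally approximate an opaque predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "black-box" predictor: accurate but opaque.
def black_box(X):
    return np.tanh(2.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2

# Sample the black box around a point of interest x0 and fit a linear
# surrogate: interpretable coefficients that approximate the opaque
# model locally (the idea behind surrogate/LIME-style explanation).
x0 = np.array([0.1, 0.3])
X = x0 + 0.1 * rng.standard_normal((200, 2))
y = black_box(X)

# Least-squares fit of y ≈ w0*x0 + w1*x1 + b
A = np.c_[X, np.ones(len(X))]
w0, w1, b = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"local surrogate: y ≈ {w0:.2f}*x0 + {w1:.2f}*x1 + {b:.2f}")
```

The surrogate's coefficients recover the local gradient of the black box (here roughly 1.9 and 0.3), which is the kind of human-readable explanation a data-driven model alone does not provide.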

Current efforts in this area include: (1) identifying how explainable learning solutions can be applied to neuroscience and neuropsychiatric datasets for neurostimulation, (2) fostering the development of a community of scholars working in the field of explainable learning applied to basic neuroscience and clinical neuropsychiatry, and (3) stimulating an open exchange of data and theories between investigators in this nascent field. To frame the scope of this article, we lay out some of the key open questions in fundamental and clinical neuroscience research that can potentially be addressed by a combination of XAI and neurostimulation approaches. To stimulate the development of XAI approaches, the National Institute of Mental Health (NIMH) has released a funding opportunity to apply XAI approaches to decoding and modulating neural circuit activity linked to behavior1.

Intelligent Decoding and Modulation of Behaviorally Activated Brain Circuits

A variety of perspectives on how ML and, more generally, AI could contribute to closed-loop brain circuit interventions are worth investigating (Rao, 2019). From a purely signal processing standpoint, an XAI system can act as an active stimulation artifact rejection component (Zhou et al., 2018). In parallel, the XAI system should have the ability to discover – in a data-driven manner – neuro-behavioral markers of the computational process or condition under consideration. Remarkable efforts are currently underway to derive biomarkers for mental health, as is the case, for example, for depression (Waters and Mayberg, 2017). Once these biomarkers are detected and the artifacts rejected, the XAI system can generate complex feedback stimulation patterns designed and monitored (human-in-the-loop) to improve behavioral or cognitive performance (Figure 1). XAI approaches also have the potential to address outstanding biological and theoretical questions in neuroscience, as well as clinical applications. They seem well suited for extracting actionable information from highly complex neural systems, moving away from traditional correlational analyses and toward a causal understanding of network activity (Yang et al., 2018). However, even with XAI approaches, one should not assume that understanding the statistical causality of neural interactions is equivalent to understanding behavior; a highly sophisticated knowledge of neural activity and neural connectivity is not generally synonymous with understanding its role in causing behavior.

Figure 1. An XAI-enabled closed-loop neurostimulation process can be described in four phases: (1) system-level recording of brain signals (e.g., spikes, LFPs, ECoG, EEG, neuromodulators, optical voltage/calcium indicators); (2) multimodal fusion of neural data and dense behavioral/cognitive assessment measures; (3) an XAI algorithm using biomarkers discovered in an unbiased manner to provide mechanistic explanations of how to improve behavioral/cognitive performance and to reject stimulation artifacts; (4) complex XAI-derived spatio-temporal brain stimulation patterns (e.g., TMS, ECT, DBS, ECoG, VNS, TDCS, ultrasound, optogenetics) that validate the model and affect subsequent recordings. ADC, Analog to Digital Converter; AMP, Amplifier; CTRL, Control; DAC, Digital to Analog Converter; DNN, Deep Neural Network. X-ray picture courtesy of Ned T. Sahin. Diagram modified from Zhou et al. (2018).
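The four phases above can be sketched as a minimal, purely illustrative control loop. The "signal", the amplitude-threshold "biomarker", and the stimulation command below are all invented placeholders standing in for the far richer recording, fusion, and XAI components described in the figure:

```python
import math

# Phase 1: toy deterministic "neural signal"; a large burst every
# 50 samples stands in for a pathological high-amplitude event.
def record(t):
    return 0.5 * math.sin(0.3 * t) + (5.0 if t % 50 == 0 else 0.0)

# Phases 2-3: a stand-in "biomarker" - peak amplitude in a sliding window.
def biomarker_detected(window, threshold=2.0):
    return max(abs(v) for v in window) > threshold

# Phase 4: placeholder for issuing a stimulation command.
def stimulate(t):
    return ("stim", t)

window, events = [], []
for t in range(200):
    window.append(record(t))
    window = window[-10:]          # keep only the last 10 samples
    if biomarker_detected(window):
        events.append(stimulate(t))
        window = []                # reset after stimulating

print("stimulation at t =", [t for _, t in events])  # → [0, 50, 100, 150]
```

In a real XAI-enabled system, the fixed threshold would be replaced by a learned, patient-specific, explainable decoder, and the stimulation output would itself alter subsequent recordings, closing the loop.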

[…]

 

Continue —->  Frontiers | Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation | Neuroscience


[WEB SITE] Personal Rehab and Recovery Through Virtual Therapy

Virtual therapy is based on research that combines leading-edge data techniques with wearable robotics, artificial intelligence and machine learning.

An engineering researcher from New Zealand’s University of Auckland has been awarded a Rutherford Discovery Fellowship.

The Associate Professor, who is developing a virtual therapy technology for personal rehabilitation, is one of eleven Fellows for 2019. The Fellowship provides NZ$800,000 in funding over five years.

According to a recent press release, his research combines leading-edge data techniques with wearable robotics, artificial intelligence (AI) and machine learning.

The aim is to create devices capable of personalising rehabilitation and recovery plans more cheaply and efficiently than human therapists can.

The Problem for Personal Rehabilitation

  • Currently, rehabilitation after a medical event, such as stroke, is carried out by trained physical or occupational therapists.
  • However, much of the work is physically demanding, time-consuming, and relatively costly.
  • While some robotics devices used for physical rehabilitation have been developed overseas, they lag far behind what a human therapist is capable of.
  • The current technology has little or no intelligence and can only act on predefined rules. Thus, it is not tailored to individuals and does not have the ability to adapt and learn as a human therapist would.

The Solution for Personal Rehabilitation

  • The researcher’s work, meanwhile, takes a strongly data-driven approach, looking at the fundamental physiology of human movement.
  • It will build on that information in order to create individual recovery plans that take into account the effects of a diverse range of physical impairments.
  • The goal is to make real progress towards creating low-cost robotic ‘virtual therapists’ with the ability to deliver automatic but very precise treatments.
  • The Rutherford Discovery Fellowships, managed on behalf of the government by the New Zealand Royal Society Te Apārangi, aim to attract and retain talented early- to mid-career researchers by helping them establish a track record for future research leadership.
  • The high costs of healthcare not just in New Zealand but around the world mean that progress in the area of medical technologies and personalised therapies and treatments needs to be prioritised.

Stressbuster

In other news, the University was the site of a unique digital treasure hunt recently to mark Stress Less Week.

Stress Less Week was held 7 to 11 October, as thousands of students prepared to head into the study break and exam period.

A student start-up developed the technology used in the app-based game, which challenged the students to unlock and solve riddles on the City Campus to find secret locations and discover rewards.

The start-up’s Founder explained that fun is the ultimate antidote to stress.

They provided an experience that facilitated getting out and connecting with peers, before it gets too close to exams and after the mid-semester wave of assignments.

They are passionate about using new technologies to turn cities into playgrounds, developing a portfolio of technologies in the process.

These technologies include holograms, face-recognition software and transparent glass screens, which they draw on to design interactive games.

Using the campus for a big treasure hunt is a great way to test the waters before thousands of dollars are put into more commercial ventures, and to scale up the app for use in different situations.

 

via Personal Rehab and Recovery Through Virtual Therapy


[WEB SITE] AI helps identify patients in need of advanced care for depression

Depression is a worldwide health problem, affecting more than 300 million adults. It is considered the leading cause of disability and a major contributor to the overall global burden of disease. Detecting people in need of advanced depression care is crucial.

Now, a team of researchers at the Regenstrief Institute found a way to help clinicians detect and identify patients in need of advanced care for depression. The new method, which uses machine learning or artificial intelligence (AI), can help reduce the number of people who experience depressive symptoms that could potentially lead to suicide.

The World Health Organization (WHO) reports that close to 800,000 people die by suicide each year, and that suicide is the second leading cause of death among people between the ages of 15 and 29.

Major depression is one of the most common mental illnesses worldwide. In the United States, an estimated 17.3 million adults had at least one major depressive episode, accounting for about 7.1 percent of all adults in the country.

Image Credit: Zapp2Photo / Shutterstock

Predicting patients who need treatment

The study, which was published in the Journal of Medical Internet Research, describes a new way to identify patients who might need advanced care for depression. The decision model can predict who might need more treatment than a primary care provider can offer.

Since some forms of depression are far more severe and need advanced care from certified mental health providers, knowing who is at risk is essential. But identifying these patients is very challenging. In line with this, the researchers formulated a method that scrutinizes a comprehensive range of patient-level diagnostic, behavioral, and demographic data, including past clinic visit history, from a statewide health information exchange.

Using these data, health care providers can now properly predict which patients need advanced care. The machine learning algorithm combined behavioral and clinical data from the statewide health information exchange, called the Indiana Network for Patient Care.

“Our goal was to build reproducible models that fit into clinical workflows,” said Dr. Suranga N. Kasthurirathne, a research scientist at the Regenstrief Institute and an author of the study.

“This algorithm is unique because it provides actionable information to clinicians, helping them to identify which patients may be more at risk for adverse events from depression,” he added.

The researchers used the data to train random forest decision models that predict the need for advanced care, both across the overall patient population and among those at higher risk of depression-related adverse events.

It is important to build models that fit different patient populations, so that each health care provider can choose the screening approach that best fits their needs.

“We demonstrated the ability to predict the need for advanced care for depression across various patient populations with considerable predictive performance. These efforts can easily be integrated into existing hospital workflows,” the investigators wrote in the paper.
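The random-forest idea behind the study (an ensemble of decision trees, each trained on a bootstrap resample of the data, voting on each patient) can be illustrated with a deliberately simplified sketch using one-split "stump" trees. All feature names, thresholds, and patient data below are synthetic inventions for illustration; they are not the study's features or model:

```python
import random

random.seed(0)

# Entirely synthetic patient records; the study used real diagnostic,
# behavioral, and demographic features from a health information exchange.
def make_patient():
    symptom_score = random.uniform(0, 10)
    prior_visits = random.randint(0, 20)
    needs_advanced_care = int(symptom_score + 0.3 * prior_visits > 8)
    return (symptom_score, prior_visits), needs_advanced_care

data = [make_patient() for _ in range(300)]

def train_stump(sample):
    """Fit a one-split decision stump by exhaustive threshold search."""
    best_feat, best_thr, best_acc = 0, 0.0, -1.0
    for feat in (0, 1):
        for thr in sorted({x[feat] for x, _ in sample}):
            acc = sum((x[feat] > thr) == bool(y) for x, y in sample) / len(sample)
            if acc > best_acc:
                best_feat, best_thr, best_acc = feat, thr, acc
    return best_feat, best_thr

def train_forest(data, n_trees=25):
    """Bootstrap-aggregate stumps: each tree sees a resampled dataset."""
    return [train_stump([random.choice(data) for _ in data])
            for _ in range(n_trees)]

def predict(trees, x):
    votes = sum(x[feat] > thr for feat, thr in trees)
    return int(2 * votes > len(trees))   # majority vote across the ensemble

trees = train_forest(data)
acc = sum(predict(trees, x) == y for x, y in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

A production system, as in the study, would use full decision trees with random feature subsets (e.g., an off-the-shelf random forest implementation), many more features, and held-out evaluation rather than training accuracy.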

Identifying patients in need of advanced care is important

With the high number of people who have depression, one of the most important tasks is determining who is at higher risk of potential adverse effects, including suicide.

Depression has different types, depending on the level of risk involved. For instance, people with milder forms of depression may not need assistance and can recover faster. On the other hand, those who have severe depression may require advanced care beyond what primary care providers can offer.

They may need to undergo treatment such as medication and therapy to improve their condition. Hence, the new method can act as a preventive measure, reducing the incidence of related adverse events such as suicide.

More importantly, training health care teams to successfully identify patients with severe depression can help resolve the problem. With the proper application of the novel technique, many people with depression can be treated accordingly, reducing serious complications.

Depression signs and symptoms

Health care providers need to properly identify patients with depression. The common signs and symptoms of depression include feelings of hopelessness and helplessness, loss of interest in daily activities, sleep changes, irritability, anger, appetite changes, weight changes, self-loathing, loss of energy, problems in concentrating, reckless behavior, memory problems, and unexplained pains and aches.


Journal reference:

Suranga N Kasthurirathne, Paul G Biondich, Shaun J Grannis, Saptarshi Purkayastha, Joshua R Vest, Josette F Jones. (2019). Identification of Patients in Need of Advanced Care for Depression Using Data Extracted From a Statewide Health Information Exchange: A Machine Learning Approach. Journal of Medical Internet Research. https://www.jmir.org/2019/7/e13809/


via AI helps identify patients in need of advanced care for depression

