Posts Tagged machine learning

[WEB PAGE] AI could play ‘critical’ role in identifying appropriate treatment for depression

Male doctor discussing reports with patient at desk in medical office

Image credits: Wavebreak Media Ltd – Dreamstime

Published Tuesday, February 11, 2020

A large-scale trial led by scientists at the University of Texas Southwestern (UT Southwestern) has produced a machine learning algorithm which accurately predicts the efficacy of an antidepressant, based on a patient’s neural activity.

The UT Southwestern researchers hope that this tool could eventually play a critical role in deciding which course of treatment would be best for patients with depression, as well as being part of a new generation of “biology-based, objective strategies” which make use of technologies such as AI to treat psychiatric disorders.

The US-wide trial was initiated in 2011 with the intention of better understanding mood disorders such as major depression and seasonal affective disorder (SAD). The trial has yielded many studies, the latest of which demonstrates that doctors could use computational tools to guide treatment choices for depression. The study was published in Nature Biotechnology.

“These studies have been a bigger success than anyone on our team could have imagined,” said Dr. Madhukar Trivedi, the UT Southwestern psychiatrist who oversaw the trial. “We provided abundant data to show we can move past the guessing game of choosing depression treatments and alter the mindset of how the disease should be diagnosed and treated.”

This 16-week trial involved more than 300 participants with depression, who either received a placebo or SSRI (selective serotonin reuptake inhibitor), the most common type of antidepressant. Despite the widespread prescription of SSRIs, they have been criticised for their side effects and for inefficacy in many patients.

Trivedi had previously established in another study that up to two-thirds of patients do not adequately respond to their first antidepressant, motivating him to find a way of identifying much earlier which treatment path is most likely to help the patient before they begin and potentially suffer further through ineffectual treatment.

Trivedi and his collaborators used an electroencephalogram (EEG) to measure electrical activity in the participants’ cortex before they began the treatment. This data was used to develop a machine learning algorithm to predict which patients would benefit from the medication within two months.
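
The article describes this pipeline only in prose. Below is a rough, hedged sketch of that kind of workflow (not the UT Southwestern algorithm; the features, labels and model choice are invented placeholders): a pre-treatment EEG feature matrix is fed to a standard classifier and scored with cross-validation.

```python
# Hypothetical sketch of an EEG-based treatment-response predictor.
# Not the UT Southwestern algorithm: the feature layout, data and model
# choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: one row per patient, columns = pre-treatment EEG features
# (e.g., band power per channel); labels = responded to the SSRI or not.
n_patients, n_features = 300, 64
X = rng.normal(size=(n_patients, n_features))   # placeholder EEG features
y = rng.integers(0, 2, size=n_patients)         # placeholder response labels

model = GradientBoostingClassifier(random_state=0)

# Cross-validated AUC estimates how well pre-treatment EEG separates
# likely responders from non-responders.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```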

The researchers found that the AI accurately predicted outcomes; patients judged less likely to respond to an antidepressant were more likely to improve with other interventions, such as brain stimulation or psychotherapy. Their findings were replicated across three additional patient groups.

“It can be devastating for a patient when an antidepressant doesn’t work,” Trivedi said. “Our research is showing that they no longer have to endure the painful process of trial and error.”

Dr. Amit Etkin, a Stanford University professor of psychiatry who also worked on the algorithm, added: “This study takes previous research, showing that we can predict who benefits from an antidepressant, and actually brings it to the point of practical utility.”

Next, they hope to develop an interface for the algorithm to be used alongside EEGs – and perhaps also with other means of measuring brain activity like functional magnetic resonance imaging (functional MRI, aka fMRI) or MEG – and have the system approved by the US Food and Drug Administration.

 

via AI could play ‘critical’ role in identifying appropriate treatment for depression | E&T Magazine


[NEWS] Novel artificial intelligence algorithm helps detect brain tumor

 

A brain tumor is a mass of abnormal cells growing in the brain. In 2016 alone, there were 330,000 incident cases of brain cancer and 227,000 related deaths worldwide. Early detection is crucial to improving patient prognosis, and a team of researchers has now developed a new imaging technique and artificial intelligence algorithm that can help doctors accurately identify brain tumors.

 

Image Credit: create jobs 51 / Shutterstock.com

Published in the journal Nature Medicine, the study reveals a new method that combines modern optical imaging and an artificial intelligence algorithm. The researchers at New York University studied the accuracy of machine learning in producing precise and real-time intraoperative diagnosis of brain tumors.

Previously, the standard way to diagnose brain tumors was hematoxylin and eosin staining of processed tissue, a time-consuming process whose interpretation relies on pathologists examining the specimen. The researchers hope the new method will provide a better and more accurate diagnosis, helping effective treatment begin right away.

In cancer treatment, the earlier a cancer is diagnosed, the earlier oncologists can start treatment, and in most cases early detection improves health outcomes. The researchers found that their novel detection method yielded 94.6 percent accuracy, compared with 93.9 percent for pathology-based interpretation.

The imaging technique

The researchers used a new imaging technique called stimulated Raman histology (SRH), which can reveal tumor infiltration in human tissue. The technique collects scattered laser light, emphasizing features that are not usually seen in standard images of body tissue.

The scientists then processed and analyzed the new images using an artificial intelligence algorithm, arriving at a brain tumor diagnosis within two minutes and thirty seconds. Fast detection of brain cancer helps not only in diagnosing the disease early but also in implementing a fast and effective treatment plan; with cancer caught early, treatments may be more effective at killing cancer cells.

The team also used the same technology to identify and remove tumor tissue that cannot be detected by conventional methods.

“As surgeons, we’re limited to acting on what we can see; this technology allows us to see what would otherwise be invisible, to improve speed and accuracy in the OR, and reduce the risk of misdiagnosis. With this imaging technology, cancer operations are safer and more effective than ever before,” Dr. Daniel A. Orringer, associate professor of Neurosurgery at NYU Grossman School of Medicine, said.

Study results

The study walks through the research team’s various ideas and efforts. First, they built the artificial intelligence algorithm by training a deep convolutional neural network (CNN) on more than 2.5 million samples from 415 patients. The network classifies tissue samples into 13 categories representing the most common types of brain tumors, including meningioma, metastatic tumors, malignant glioma, and lymphoma.
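
For readers unfamiliar with the setup, here is a minimal PyTorch sketch of a deep convolutional classifier with 13 output classes, in the spirit of the approach described above; the architecture, input size and data are illustrative assumptions, not the network published in Nature Medicine.

```python
# Minimal sketch of a CNN image classifier with 13 tumor-class outputs.
# Architecture, image size and data are placeholders, not the published model.
import torch
import torch.nn as nn

class TumorCNN(nn.Module):
    def __init__(self, n_classes: int = 13):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)            # (B, 32, 16, 16) for 64x64 inputs
        return self.classifier(x.flatten(1))

# One training step on a placeholder batch of 64x64 image patches.
model = TumorCNN()
images = torch.randn(8, 3, 64, 64)      # stand-in SRH image patches
labels = torch.randint(0, 13, (8,))     # stand-in tumor-class labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                         # gradients for an optimizer step
print(loss.item())
```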

For validation, the researchers recruited 278 patients undergoing brain tumor resection or epilepsy surgery at three university medical centers. Tumor samples from the brain were examined and biopsied, and the researchers split the samples into two groups: control and experimental.

Samples in the control group were processed traditionally in a pathology laboratory, a process that spans 20 to 30 minutes. The experimental group, on the other hand, was tested and studied intraoperatively, from image acquisition through CNN-based examination.

Errors were noted in both the experimental and control groups, but the errors in each group were distinct from one another. The new tool can help centers detect and diagnose brain tumors, particularly centers without expert neuropathologists.

“SRH will revolutionize the field of neuropathology by improving decision-making during surgery and providing expert-level assessment in the hospitals where trained neuropathologists are not available,” Dr. Matija Snuderl, associate professor in the Department of Pathology at NYU Grossman School of Medicine, explained.

Journal references:

Patel, A., Fisher, J., Nichols, E., et al. (2019). Global, regional, and national burden of brain and other CNS cancer, 1990–2016: a systematic analysis for the Global Burden of Disease Study 2016. The Lancet Neurology. https://www.thelancet.com/journals/laneur/article/PIIS1474-4422(18)30468-X/fulltext

Hollon, T., Pandian, B., Orringer, D., et al. (2019). Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nature Medicine. https://www.nature.com/articles/s41591-019-0715-9

 

via Novel artificial intelligence algorithm helps detect brain tumor


[ARTICLE] Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation

The use of Artificial Intelligence and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets and can provide unbiased insights into the fundamental principles of brain function, potentially paving the way for earlier and more accurate detection of brain disorders and better-informed intervention protocols. Despite AI’s ability to create accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable Artificial Intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; here we report on some of these practical approaches. We discuss the potential value of XAI to the field of neurostimulation for both basic scientific inquiry and therapeutic purposes, as well as outstanding questions and obstacles to the success of the XAI approach.

Introduction

One of the greatest challenges to effective brain-based therapies is our inability to monitor and modulate neural activity in real time. Moving beyond the relatively simple open-loop neurostimulation devices that are currently the standard in clinical practice (e.g., epilepsy) requires a closed-loop approach in which the therapeutic application of neurostimulation is determined by characterizing the moment-to-moment state of the brain (Herron et al., 2017). However, there remain major obstacles to progress for such a closed-loop approach. For one, we do not know how to objectively characterize mental states or even detect pathological activity associated with most psychiatric disorders. Second, we do not know the most effective way to improve maladaptive behaviors by means of neurostimulation. The solutions to these problems require innovative experimental frameworks leveraging intelligent computational approaches able to sense, interpret, and modulate large amounts of data from behaviorally relevant neural circuits at the speed of thought. New approaches such as computational psychiatry (Redish and Gordon, 2016; Ferrante et al., 2019) or ML are emerging. However, current ML approaches that are applied to neural data typically do not provide an understanding of the underlying neural processes or how they contributed to the outcome (i.e., prediction or classifier). For example, significant progress has been made using ML to effectively classify EEG patterns, but the understanding of brain function and mechanisms derived from such approaches still remains relatively limited (Craik et al., 2019). Such an understanding, be it correlational or causal, is key to improving ML methods and to suggesting new therapeutic targets or protocols using different techniques. Explainable Artificial Intelligence (XAI) is a relatively new set of techniques that combines sophisticated AI and ML algorithms with effective explanatory techniques to develop explainable solutions that have proven useful in many domain areas (Core et al., 2006; Samek et al., 2017; Yang and Shafto, 2017; Adadi and Berrada, 2018; Choo and Liu, 2018; Dosilovic et al., 2018; Holzinger et al., 2018; Fernandez et al., 2019; Miller, 2019). Recent work has suggested that XAI may be a promising avenue to guide basic neural circuit manipulations and clinical interventions (Holzinger et al., 2017b; Vu et al., 2018; Langlotz et al., 2019). We will develop this idea further here.

Explainable Artificial Intelligence for neurostimulation in mental health can be seen as an extension of the design of brain-machine interfaces (BMI). BMI are generally understood as combinations of hardware and software systems designed to rapidly transfer information between one or more brain areas and an external device (Wolpaw et al., 2002; Hatsopoulos and Donoghue, 2009; Nicolelis and Lebedev, 2009; Andersen et al., 2010; Mirabella and Lebedev, 2017). While there is a long history of research in the decoding, analysis and production of neural signals in non-human primates and rodents, much progress has recently been made in developing these techniques for the human brain, both invasively and non-invasively, unidirectionally or bi-directionally (Craik et al., 2019; Martini et al., 2019; Rao, 2019). Motor decision making, for example, has been shown to involve a network of brain areas before and during movement execution (Mirabella, 2014; Hampshire and Sharp, 2015), so that BMI intervention can inhibit movement up to 200 ms after its initiation (Schultze-Kraft et al., 2016; Mirabella and Lebedev, 2017). The advantage of this type of motor-decision BMI is that it is not bound to elementary motor commands (e.g., turning the wheel of a car), but rather to the high-level decision to initiate and complete a movement. That decision can potentially be affected by environmental factors (e.g., an AI vision system detecting cars in the neighboring lane) and internal state (e.g., an AI system assessing the driver’s state of fatigue). The current consensus is that response inhibition is an emergent property of a network of discrete brain areas that include the right inferior frontal gyrus and that leverage basic, widespread elementary neural circuits such as local lateral inhibition (Hampshire and Sharp, 2015; Mirabella and Lebedev, 2017). This gyrus, as with many other cortical structures, is dynamically recruited, so that individual neurons may code for drastically different aspects of behavior depending on the task at hand. Consequently, designing a BMI targeting such an area requires the ability for the system to rapidly switch its decoding and stimulation paradigms as a function of environmental or internal state information. Such online adaptability of course needs to be learned and personalized for each individual patient, a task ideally suited for AI/ML approaches. In the sensory domain, some have shown that BMI can be used to generate actionable, entirely artificial tactile sensations to trigger complex motor decisions (O’Doherty et al., 2012; Klaes et al., 2014; Flesher et al., 2017). Most BMI research has, however, focused on the sensorimotor system because of the relatively focused and well-defined nature of the neural circuits involved. Consequently, most of the clinical applications are focused on neurological disorders. Interestingly, new generations of BMIs are emerging that are focused on more cognitive functions such as detecting and manipulating reward expectations using reinforcement learning paradigms (Mahmoudi and Sanchez, 2011; Marsh et al., 2015; Ramkumar et al., 2016), memory enhancement (Deadwyler et al., 2017) or collective problem solving using multi-brain interfacing in rats (Pais-Vieira et al., 2015) or humans (Jiang et al., 2019). All these applications can potentially benefit from the adaptive properties of AI/ML algorithms and, as mentioned, explainable AI approaches have the promise of yielding basic mechanistic insights about the neural systems being targeted. However, the use of these approaches in the context of psychiatric or neurodevelopmental disorders has not yet been realized, though their potential is clear.

In computational neuroscience and computational psychiatry there is a contrast between theory-driven models (e.g., reinforcement learning, biophysically inspired network models) and data-driven models (e.g., deep learning or ensemble methods). While the former are highly explainable in terms of biological mechanisms, the latter are high-performing in terms of predictive accuracy. In general, high-performing methods tend to be the least explainable, while explainable methods tend to be the least accurate. Mathematically, the relationship between the two is still not fully formalized or understood. These are the types of issues that occupy the ML community beyond neuroscience and neurostimulation. XAI models in neuroscience might be created by combining theory- and data-driven models. This combination could be achieved by associating explanatory semantic information with features of the model; by using simpler models that are easier to explain; by using richer models that contain more explanatory content; or by building approximate models solely for the purpose of explanation.
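
The last strategy listed above, building approximate models solely for explanation, can be illustrated concretely: train a black-box model first, then fit a shallow, human-readable surrogate to its predictions. The data and models below are generic placeholders rather than a neuroscience pipeline.

```python
# Sketch of one XAI strategy named above: fit an approximate, interpretable
# surrogate to a high-performing black-box model purely for explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))                   # stand-in neural features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # stand-in labels

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train a shallow tree to mimic the black box's predictions; the tree is
# less accurate but yields human-readable decision rules.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"feat_{i}" for i in range(5)]))
```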

Current efforts in this area include: (1) identifying how explainable learning solutions can be applied to neuroscience and neuropsychiatric datasets for neurostimulation, (2) fostering the development of a community of scholars working in the field of explainable learning applied to basic neuroscience and clinical neuropsychiatry, and (3) stimulating an open exchange of data and theories between investigators in this nascent field. To frame the scope of this article, we lay out some of the major open questions in fundamental and clinical neuroscience research that can potentially be addressed by a combination of XAI and neurostimulation approaches. To stimulate the development of XAI approaches, the National Institute of Mental Health (NIMH) has released a funding opportunity to apply XAI approaches for decoding and modulating neural circuit activity linked to behavior¹.

Intelligent Decoding and Modulation of Behaviorally Activated Brain Circuits

A variety of perspectives on how ML and, more generally, AI could contribute to closed-loop brain circuit interventions are worth investigating (Rao, 2019). From a purely signal-processing standpoint, an XAI system can serve as an active stimulation-artifact rejection component (Zhou et al., 2018). In parallel, the XAI system should have the ability to discover, in a data-driven manner, neuro-behavioral markers of the computational process or condition under consideration. Remarkable efforts are currently underway to derive biomarkers for mental health, as is the case, for example, for depression (Waters and Mayberg, 2017). Once these biomarkers are detected and the artifacts rejected, the XAI system can generate complex feedback stimulation patterns designed and monitored (human in the loop) to improve behavioral or cognitive performance (Figure 1). XAI approaches also have the potential to address outstanding biological and theoretical questions in neuroscience, as well as clinical applications. They seem well-suited for extracting actionable information from highly complex neural systems, moving away from traditional correlational analyses and toward a causal understanding of network activity (Yang et al., 2018). However, even with XAI approaches, one should not assume that understanding the statistical causality of neural interactions is equivalent to understanding behavior; a highly sophisticated knowledge of neural activity and neural connectivity is not generally synonymous with understanding their role in causing behavior.

Figure 1. An XAI-enabled closed-loop neurostimulation process can be described in four phases: (1) system-level recording of brain signals (e.g., spikes, LFPs, ECoG, EEG, neuromodulators, optical voltage/calcium indicators); (2) multimodal fusion of neural data and dense behavioral/cognitive assessment measures; (3) an XAI algorithm using unbiasedly discovered biomarkers to provide mechanistic explanations of how to improve behavioral/cognitive performance and to reject stimulation artifacts; (4) complex XAI-derived spatio-temporal brain stimulation patterns (e.g., TMS, ECT, DBS, ECoG, VNS, TDCS, ultrasound, optogenetics) that validate the model and affect subsequent recordings. ADC, Analog to Digital Converter; AMP, Amplifier; CTRL, Control; DAC, Digital to Analog Converter; DNN, Deep Neural Network. X-ray image courtesy of Ned T. Sahin. Diagram modified from Zhou et al. (2018).

[…]

 

Continue —> Frontiers | Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation | Neuroscience


[WEB SITE] Personal Rehab and Recovery Through Virtual Therapy

Virtual therapy is based on research that combines leading-edge data techniques with wearable robotics, artificial intelligence and machine learning.

An engineering researcher from New Zealand’s University of Auckland has been awarded a Rutherford Discovery Fellowship.

The Associate Professor, who is developing virtual therapy technology for personal rehabilitation, is one of eleven Fellows named for 2019. The Fellowship provides NZ$800,000 in funding over five years.

According to a recent press release, his research combines leading-edge data techniques with wearable robotics, artificial intelligence (AI) and machine learning.

The aim is to create devices capable of personalising rehabilitation and recovery plans while being cheaper and more efficient than human-delivered therapy.

The Problem for Personal Rehabilitation

  • Currently, rehabilitation after a medical event, such as stroke, is carried out by trained physical or occupational therapists.
  • However, much of the work is physically demanding, time-consuming and relatively costly.
  • While some robotic devices for physical rehabilitation have been developed overseas, they lag far behind what a human therapist is capable of.
  • The current technology has little or no intelligence and can only act on predefined rules. Thus, it is not tailored to individuals and cannot adapt and learn as a human therapist would.

The Solution for Personal Rehabilitation

  • The researcher’s work, meanwhile, takes a strongly data-driven approach, looking at the fundamental physiology of human movement.
  • It will build on that information in order to create individual recovery plans that take into account the effects of a diverse range of physical impairments.
  • The goal is to make real progress towards creating low-cost robotic ‘virtual therapists’ with the ability to deliver automatic but very precise treatments.
  • The Rutherford Discovery Fellowships, managed on behalf of the government by the New Zealand Royal Society Te Apārangi, aim to attract and retain talented early- to mid-career researchers by helping them establish a track record for future research leadership.
  • The high costs of healthcare not just in New Zealand but around the world mean that progress in the area of medical technologies and personalised therapies and treatments needs to be prioritised.

Stressbuster

In other news, the University was the site of a unique digital treasure hunt recently to mark Stress Less Week.

Stress Less Week was held 7 to 11 October, as thousands of students prepared to head into the study break and exam period.

A student start-up developed the technology used in the app-based game, which challenged the students to unlock and solve riddles on the City Campus to find secret locations and discover rewards.

The start-up’s Founder explained that fun is the ultimate antidote to stress.

They provided an experience that facilitated getting out and connecting with peers after the mid-semester wave of assignments and before exams got too close.

They are passionate about using new technologies to turn cities into playgrounds, developing a portfolio of technologies in the process.

These technologies include holograms, face-recognition software and transparent glass screens, which they draw on to design interactive games.

Using the campus for a big treasure hunt is a great way to test the waters before thousands of dollars are put into more commercial ventures and the app is scaled up for use in different situations.

 

via Personal Rehab and Recovery Through Virtual Therapy


[WEB SITE] AI helps identify patients in need of advanced care for depression

Depression is a worldwide health problem, affecting more than 300 million adults. It is considered a leading cause of disability and a major contributor to the overall global burden of disease. Detecting people in need of advanced depression care is crucial.

Now, a team of researchers at the Regenstrief Institute found a way to help clinicians detect and identify patients in need of advanced care for depression. The new method, which uses machine learning or artificial intelligence (AI), can help reduce the number of people who experience depressive symptoms that could potentially lead to suicide.

The World Health Organization (WHO) reports that close to 800,000 people die by suicide each year, making it one of the leading causes of death among people between the ages of 15 and 29.

Major depression is one of the most common mental illnesses worldwide. In the United States, an estimated 17.3 million adults have had at least one major depressive episode, accounting for about 7.1 percent of all adults in the country.

Image Credit: Zapp2Photo / Shutterstock

Predicting patients who need treatment

The study, which was published in the Journal of Medical Internet Research, unveils a new way to determine patients who might need advanced care for depression. The decision model can predict who might need more treatment than what the primary care provider can offer.

Since some forms of depression are far more severe and need advanced care from certified mental health providers, knowing who is at risk is essential. But identifying these patients is very challenging. In response, the researchers formulated a method that scrutinizes a comprehensive range of patient-level diagnostic, behavioral, and demographic data, including past clinic visit history, from a statewide health information exchange.

Using these data, health care providers can now properly predict which patients need advanced care. The machine learning algorithm combined behavioral and clinical data from the statewide health information exchange, the Indiana Network for Patient Care.

“Our goal was to build reproducible models that fit into clinical workflows,” said Dr. Suranga N. Kasthurirathne, a research scientist at the Regenstrief Institute and study author.

“This algorithm is unique because it provides actionable information to clinicians, helping them to identify which patients may be more at risk for adverse events from depression,” he added.

The researchers used the data to train random forest decision models that predict the need for advanced care both in the overall patient population and among those at higher risk of depression-related adverse events.
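
As an illustration of this kind of model, the sketch below trains a random forest on invented stand-in features; the study’s actual predictors were diagnostic, behavioral and demographic variables drawn from the health information exchange, and nothing here reproduces the published models.

```python
# Illustrative sketch of a random forest flagging patients who may need
# advanced care for depression. Feature names, data and the toy label rule
# are invented stand-ins, not the study's variables or models.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
data = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "prior_visits": rng.poisson(3, n),        # past clinic visit history
    "phq9_score": rng.integers(0, 27, n),     # hypothetical screening score
    "er_visits_last_year": rng.poisson(1, n),
})
needs_advanced_care = (data["phq9_score"] > 14).astype(int)  # toy label

X_train, X_test, y_train, y_test = train_test_split(
    data, needs_advanced_care, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```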

It’s important to build models that fit different patient populations so that health care providers can choose the screening approach that best suits their needs.

“We demonstrated the ability to predict the need for advanced care for depression across various patient populations with considerable predictive performance. These efforts can easily be integrated into existing hospital workflows,” the investigators wrote in the paper.

Identifying patients in need of advanced care is important

Given the high number of people who have depression, one of the most important tasks is determining who is at higher risk of potential adverse effects, including suicide.

Depression takes different forms, with different levels of risk involved. For instance, people with milder forms of depression may need little assistance and can recover faster. On the other hand, those with severe depression may require advanced care beyond what primary care providers can offer.

They may need to undergo treatment such as medication and therapy to improve their condition. Hence, the new method can act as a preventive measure, reducing the incidence of adverse events related to the condition, such as suicide.

More importantly, training health care teams to successfully identify patients with severe depression can help resolve the problem. With the proper application of the novel technique, many people with depression can be treated accordingly, reducing serious complications.

Depression signs and symptoms

Health care providers need to properly identify patients with depression. The common signs and symptoms of depression include feelings of hopelessness and helplessness, loss of interest in daily activities, sleep changes, irritability, anger, appetite changes, weight changes, self-loathing, loss of energy, problems in concentrating, reckless behavior, memory problems, and unexplained pains and aches.


Journal reference:

Kasthurirathne, S. N., Biondich, P. G., Grannis, S. J., Purkayastha, S., Vest, J. R., & Jones, J. F. (2019). Identification of Patients in Need of Advanced Care for Depression Using Data Extracted From a Statewide Health Information Exchange: A Machine Learning Approach. Journal of Medical Internet Research. https://www.jmir.org/2019/7/e13809/


via AI helps identify patients in need of advanced care for depression


[ARTICLE] Evaluation of machine learning methods for seizure prediction in epilepsy – Full Text PDF

Abstract:

Epilepsy affects about 50 million people worldwide, one third of whom are refractory to medication. An automated and reliable system that warns of impending seizures would greatly improve patients’ quality of life by overcoming the uncertainty and helplessness caused by these unpredictable events. Here we present new seizure prediction results, including a performance comparison of different methods. The analysis is based on a new set of intracranial EEG data recorded in our working group during presurgical evaluation.
We applied two different methods for seizure prediction and evaluated their performance pseudoprospectively. Comparing this evaluation with common statistical evaluation reveals possible reasons for overly optimistic estimations of the performance of seizure forecasting systems.

1 Introduction

Affecting about 1% of the world population, epilepsy is one of the most common neurological diseases. Although seizures cover relatively short periods in a patient’s life, the uncertainty of when the next seizure will occur can produce a high level of anxiety [4]. For 70% of patients, medication can reduce the frequency of seizures or even abolish them. However, patients report that unwanted side effects of the medication, as well as the unpredictability of seizures, are the severest handicaps of this disease [13]. A mobile system with the ability to predict seizures can help to relieve patients’ anxiety related to the uncertainty of events by enabling them to seek shelter, apply a short-acting drug or inform the treating physician about the event. The device might also be used to prevent or mitigate the seizure [12].

Usually, seizure prediction is treated as a binary classification problem of brain activity recorded as intracranial electroencephalography (icEEG) [8], with the state of impending seizures (preictal) labeled as 1 and periods with a large temporal distance to the next seizure (interictal) labeled as 0. In this contribution, we present a new database that has been recorded in our working group. By intensifying the cooperation between clinical research and data analysis, we minimize the loss of descriptive metadata. For feature extraction and classification of the recorded icEEG signals we employed both a recently proposed deep convolutional neural network and a feature-based method.
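
The feature-based route mentioned above can be sketched in a few lines: spectral band-power features computed from icEEG windows feed a binary preictal/interictal classifier. All signals, labels and parameters below are synthetic placeholders. Note that the plain cross-validation shown here is exactly the kind of common statistical evaluation the authors caution can look overly optimistic compared with pseudoprospective testing.

```python
# Hedged sketch of a feature-based seizure-prediction classifier:
# band-power features from icEEG windows -> preictal(1)/interictal(0).
# Signals, labels, sampling rate and bands are synthetic placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
fs = 256                                   # sampling rate (Hz), assumed
windows = rng.normal(size=(200, fs * 10))  # 200 ten-second icEEG windows
labels = rng.integers(0, 2, size=200)      # 1 = preictal, 0 = interictal

def band_power(sig, lo, hi):
    f, p = welch(sig, fs=fs)               # power spectral density
    return p[(f >= lo) & (f < hi)].sum()

bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 70)]  # delta..gamma
X = np.array([[band_power(w, lo, hi) for lo, hi in bands] for w in windows])

clf = LogisticRegression(max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```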

[…]

Full Text PDF


[WEB PAGE] Reconnecting the Disconnected: Restoring Movement in Paralyzed Limbs – Video

"Moving an arm can involve more than 50 different muscles," UA professor Andrew Fuglevand said. "Replicating how the brain naturally coordinates the activities of these muscles is extremely challenging."

“Moving an arm can involve more than 50 different muscles,” UA professor Andrew Fuglevand said. “Replicating how the brain naturally coordinates the activities of these muscles is extremely challenging.”

UA professor Andrew Fuglevand is using artificial intelligence to stimulate multiple muscles to elicit natural movement in ways previous methods have been unable to do.
Dec. 20, 2018
Andrew Fuglevand

Scientists now know that the brain controls movement in people by signaling groups of neurons to tell the muscles when and where to move. Researchers also have learned it takes a complex orchestration of many signals to produce even seemingly simple body movements.

If any of these signals are blocked or broken, such as from a spinal cord injury or stroke, the messages from the brain to the muscles are unable to connect, causing paralysis. The person’s muscles are functional, but they are no longer being sent instructions.

Andrew Fuglevand, professor of physiology at the University of Arizona College of Medicine – Tucson and professor of neuroscience at the UA College of Science, has received a $1.2 million grant from the National Institutes of Health to study electrical stimulation of the muscles as a way to restore limb movements in paralyzed individuals. Fuglevand’s goal is to restore voluntary movement to a person’s own limbs rather than relying on external mechanical or robotic devices.

Producing a wide range of movements in paralyzed limbs has been unsuccessful so far because of the substantial challenges associated with identifying the patterns of muscle stimulation needed to elicit specified movements, Fuglevand explained.

“Moving a finger involves as many as 20 different muscles at a time. Moving an arm can involve more than 50 different muscles. They all work together in an intricate ‘dance’ to produce beautifully smooth movements,” he said. “Replicating how the brain naturally coordinates the activities of these muscles is extremely challenging.”

Recent advances in “machine learning,” or artificial intelligence, are making the impossible possible.

Fuglevand, who also is an affiliate professor of biomedical engineering and teaches neuroscience courses at the UA, is employing machine learning to mimic and replicate the patterns of brain activity that control groups of muscles. Tiny electrodes implanted in the muscles replay the artificially generated signals to produce complex movements.

“If successful, this approach would greatly expand the repertoire of motor behaviors available to paralyzed individuals,” he said.

“More than 5 million Americans are living with some form of paralysis, and the leading causes are stroke and spinal injury,” said Nicholas Delamere, head of the UA Department of Physiology. “New innovations in artificial intelligence, developed by scientists like Fuglevand and his team, are allowing them to decode subtle brain signals and make brain-machine interfaces that ultimately will help people move their limbs again.”

“The headway researchers have made in our understanding of artificial intelligence, machine learning and the brain is incredible,” said UA President Robert C. Robbins. “The opportunity to incorporate AI to brain-limb communication has life-changing potential, and while there are many challenges to optimize these interventions, we are really committed to making this step forward. I am incredibly excited to track Dr. Fuglevand’s progress with this new grant.”

Research reported in this release was supported by the National Institutes of Health, National Institute of Neurological Disorders and Stroke, under grant No. 1R01NS102259-01A1. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
A version of this article originally appeared on the UA Health Sciences website: https://opa.uahs.arizona.edu/newsroom/news/2018/reconnecting-disconnected-ua-physiology-professor-receives-12m-nih-grant-use-ai

 

via Reconnecting the Disconnected: Restoring Movement in Paralyzed Limbs | UANews


[WEB SITE] Novel machine learning technique for simulating the every day task of dressing

Summary: Computer scientists have devised a novel computational method, driven by machine learning techniques, to successfully and realistically simulate the multi-step process of putting on clothes.

Putting on clothes is a daily, mundane task that most of us perform with little or no thought. We may never take into consideration the multiple steps and physical motions involved when we’re getting dressed in the mornings. But that is precisely what needs to be explored when attempting to capture the motion of dressing and simulating cloth for computer animation.

Computer scientists from the Georgia Institute of Technology and Google Brain, Google’s artificial intelligence research arm, have devised a novel computational method, driven by machine learning techniques, to successfully and realistically simulate the multi-step process of putting on clothes. When dissected, the task of dressing is quite complex, and involves several different physical interactions between the character and his or her clothing, primarily guided by the person’s sense of touch.

Creating an animation of a character putting on clothing is challenging due to the complex interactions between the character and the simulated garment. Most work in highly constrained character animation deals with static environments that do not react much to the motion of the character, the researchers note. In contrast, clothing can respond immediately and drastically to small changes in the position of the body; clothing tends to fold, stick and cling to the body, making haptic, or touch, sensation essential to the task.

Another unique challenge about dressing is that it requires the character to perform a prolonged sequence of motion involving a diverse set of subtasks, such as grasping the front layer of a shirt, tucking a hand into the shirt opening and pushing a hand through a sleeve.

“Dressing seems easy to many of us because we practice it every single day. In reality, the dynamics of cloth make it very challenging to learn how to dress from scratch,” says Alexander Clegg, lead author of the research and a computer science PhD student at the Georgia Institute of Technology. “We leverage simulation to teach a neural network to accomplish these complex tasks by breaking the task down into smaller pieces with well-defined goals, allowing the character to try the task thousands of times and providing reward or penalty signals when the character tries beneficial or detrimental changes to its policy.”

The researchers’ method then updates the neural network one step at a time to make the discovered positive changes more likely to occur in the future. “In this way, we teach the character how to succeed at the task,” notes Clegg.
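
The learning scheme Clegg describes is reinforcement learning in miniature: sample an action, observe a reward or penalty, and nudge the policy so rewarded actions become more likely. The toy REINFORCE-style loop below illustrates that update on a stand-in problem; it bears no relation to the paper’s cloth simulator or network.

```python
# Toy REINFORCE-style policy-gradient loop illustrating the idea of rewarding
# beneficial actions so they become more likely. The "environment" here is a
# stand-in, nothing like the paper's cloth simulator.
import torch
import torch.nn as nn

torch.manual_seed(0)
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reward(state, action):
    # Stand-in reward: +1 if the action matches the sign of state[0].
    return 1.0 if action == int(state[0] > 0) else -1.0

for step in range(500):
    state = torch.randn(4)
    dist = torch.distributions.Categorical(logits=policy(state))
    action = dist.sample()
    r = reward(state, action.item())
    # Increase the log-probability of actions that earned positive reward.
    loss = -dist.log_prob(action) * r
    opt.zero_grad(); loss.backward(); opt.step()
```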

Clegg and his collaborators at Georgia Tech include computer scientists Wenhao Yu, Greg Turk and Karen Liu. Together with Google Brain researcher Jie Tan, the group will present their work at SIGGRAPH Asia 2018 in Tokyo 4 December to 7 December. The annual conference features the most respected technical and creative members in the field of computer graphics and interactive techniques, and showcases leading edge research in science, art, gaming and animation, among other sectors.

In this study, the researchers demonstrated their approach on several dressing tasks: putting on a t-shirt, throwing on a jacket and robot-assisted dressing of a sleeve. With the trained neural network, they were able to achieve complex re-enactment of a variety of ways an animated character puts on clothes. Key to the approach is incorporating the sense of touch into the framework to overcome the challenges of cloth simulation. The researchers found that careful selection of the cloth observations and reward functions in the trained network was crucial to the framework’s success. As a result, this novel approach not only enables single dressing sequences but also a character controller that can successfully dress under various conditions.

“We’ve opened the door to a new way of animating multi-step interaction tasks in complex environments using reinforcement learning,” says Clegg. “There is still plenty of work to be done continuing down this path, allowing simulation to provide experience and practice for task training in a virtual world.” In expanding this work, the team is currently collaborating with other researchers in Georgia Tech’s Healthcare Robotics lab to investigate the application of robotics for dressing assistance.

 

Story Source:

Materials provided by the Association for Computing Machinery. Note: Content may be edited for style and length.

 

via Novel machine learning technique for simulating the every day task of dressing — ScienceDaily


[ARTICLE] Enabling Stroke Rehabilitation in Home and Community Settings: A Wearable Sensor-Based Approach for Upper-Limb Motor Training – Full Text

A conceptual representation of the wrist-worn sensor system for home-based upper-limb rehabilitation. The system consists of two wearable sensors, a tablet computer to be…

Abstract:

High-dosage motor practice can significantly contribute to achieving functional recovery after a stroke. Performing rehabilitation exercises at home and using, or attempting to use, the stroke-affected upper limb during Activities of Daily Living (ADL) are effective ways to achieve high-dosage motor practice in stroke survivors. This paper presents a novel technological approach that enables 1) detecting goal-directed upper limb movements during the performance of ADL, so that timely feedback can be provided to encourage the use of the affected limb, and 2) assessing the quality of motor performance during in-home rehabilitation exercises so that appropriate feedback can be generated to promote high-quality exercise. The results herein presented show that it is possible to detect 1) goal-directed movements during the performance of ADL with a c-statistic of 87.0% and 2) poorly performed movements in selected rehabilitation exercises with an F-score of 84.3%, thus enabling the generation of appropriate feedback. In a survey to gather preliminary data concerning the clinical adequacy of the proposed approach, 91.7% of occupational therapists demonstrated willingness to use it in their practice, and 88.2% of stroke survivors indicated that they would use it if recommended by their therapist.

Introduction

Stroke is a leading cause of severe long-term disability. In the US alone, nearly 800,000 people suffer a stroke each year [1]. The number of individuals who suffer a stroke each year is expected to rise in the coming years because the prevalence of stroke increases with age and the world population is aging [2]. Approximately 85% of individuals who have a stroke survive, but they often experience significant motor impairments. Upper-limb paresis is the most common impairment following a stroke. It affects 75% of stroke survivors and leads to limitations in the performance of Activities of Daily Living (ADL) [4].

Inability to use the stroke-affected upper limb for ADL often leads to a phenomenon that is referred to as learned non-use [5]. As patients rely more and more on the unaffected (or less impaired) upper limb [5], they progressively lose motor abilities of the stroke-affected upper limb that they may have recovered as a result of a rehabilitation intervention [6].

A high dosage of motor practice using the stroke-affected upper limb during the performance of ADL, despite considerable difficulty, stimulates neuroplasticity and motor function recovery [7]–[8][9]. Thus, it is clinically important to encourage stroke survivors to continue making appropriate use of the affected upper limb [10]–[11][12][13], in addition to engaging in rehabilitation exercises that focus on range-of-motion and functional abilities [14]–[15][16].

The use of wearable sensors has recently emerged as an efficient way to monitor the amount of upper-limb use after a stroke [17]–[18][19][20][21][22]. However, despite growing evidence of the clinical potential of these devices [23], their widespread clinical deployment has been hindered by technical limitations. A shortcoming of currently available wrist-worn devices is that they cannot distinguish between Goal-Directed (GD) movements (i.e., movements performed for a specific purposeful task) and non-Goal-Directed (non-GD) movements (e.g., the arm swinging during gait). Instead, these sensors focus on recording the number and/or intensity of arm movements of any type [10]. Consequently, non-GD movements are reflected in the measurements with the same importance as GD movements. This results in an overestimation of the amount of actual arm use [24]. Furthermore, monitoring the aggregate number of stroke-affected upper limb movements is not sufficient for the purpose of providing timely feedback to encourage the use of the affected limb during the performance of ADL. To promote the use of the stroke-affected limb, it is critical that feedback reflects the relative use of the affected upper limb compared to the contralateral one.

Wrist-worn movement sensors have also been applied to monitoring rehabilitation exercises in the home setting [25]–[26][27][28]. However, existing systems primarily focus on quantifying the dosage/intensity of the exercises (e.g., the duration of the exercises and the number of movement repetitions) and do not monitor if the quality of the performed exercise is appropriate. Ensuring good quality of movement during the performance of rehabilitation exercises is critical for maximizing functional recovery after a stroke [29]. Moreover, providing customized feedback regarding the quality of exercise movements can increase motivation, promote long-term adherence to a prescribed exercise regimen, and ultimately maximize clinical outcomes [30]. One of the reasons for limited exercise participation by stroke survivors is the lack of access to resources to support exercise including performance feedback from rehabilitation specialists [31]. There are no technical solutions that provide feedback regarding the quality of exercise performance for upper-limb rehabilitation after stroke.

We propose a system for aiding in functional recovery after a stroke that consists of two wearable sensors, one worn on the stroke-affected upper limb and the other on the contralateral upper limb [32] (Fig. 1). The proposed system can be used to provide timely feedback when ADL are performed. If the system detects that the patient consistently performs GD movements with the unaffected upper limb, and rarely uses the stroke-affected upper limb, then a visual or vibrotactile reminder can be triggered to encourage the patient to attempt GD movements with the stroke-affected limb. A benefit of this approach is that if a movement is critical (e.g., signing a check), patients can use the unaffected upper limb without receiving negative feedback as long as they have performed a sufficient number of movements with the affected upper limb throughout the day. Furthermore, the system promotes high-dosage motor practice with appropriate feedback to extend components of rehabilitation interventions into the home environment.[…]
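
The feedback rule sketched in this paragraph can be made concrete with a few lines of code: count detected goal-directed (GD) movements on each wrist and fire a reminder when the affected limb’s share of use drops below a threshold. The threshold, the minimum-activity guard and the counts below are illustrative assumptions, not the system’s published parameters.

```python
# Sketch of the feedback rule described above. The threshold, minimum-activity
# guard and example counts are illustrative assumptions only; the GD-movement
# detector itself (the paper's classifier) is out of scope here.
def update_feedback(gd_affected: int, gd_unaffected: int,
                    min_ratio: float = 0.3, min_total: int = 20) -> bool:
    """Return True if a visual/vibrotactile reminder should fire."""
    total = gd_affected + gd_unaffected
    if total < min_total:           # too little activity to judge yet
        return False
    ratio = gd_affected / total     # relative use of the affected limb
    return ratio < min_ratio

# Example: 4 GD movements on the affected side vs 36 on the unaffected side.
print(update_feedback(4, 36))  # True -> prompt the patient
```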

via Enabling Stroke Rehabilitation in Home and Community Settings: A Wearable Sensor-Based Approach for Upper-Limb Motor Training – IEEE Journals & Magazine


[WEB SITE] Gaming helps personalized therapy level up – Penn State University

UNIVERSITY PARK, Pa. — Using game features in non-game contexts, computers can learn to build personalized mental- and physical-therapy programs that enhance individual motivation, according to Penn State engineers.

“We want to understand the human and team behaviors that motivate learning to ultimately develop personalized methods of learning instead of the one-size-fits-all approach that is often taken,” said Conrad Tucker, assistant professor of engineering design and industrial engineering.

They seek to use machine learning to train computers to develop personalized mental or physical therapy regimens — for example, to overcome anxiety or recover from a shoulder injury — so many individuals can each use a tailor-made program.

“Using people to individually evaluate others is not efficient or sustainable in time or human resources and does not scale up well to large numbers of people,” said Tucker. “We need to train computers to read individual people. Gamification explores the idea that different people are motivated by different things.”

To begin creating computer models for therapy programs, the researchers tested how to most effectively make the completion of a physical task into a gamified application by incorporating game features like scoring, avatars, challenges and competition.

“We’re exploring here how gamification could be applied to health and wellness by focusing on physically interactive gamified applications,” said Christian Lopez, graduate student in industrial engineering, who helped conduct the tests using a virtual-reality game environment.

Screen from game designed to test features for gamification use in physical and mental therapy. Image: Kimberly Cartier / Penn State

In the virtual-reality tests, researchers asked participants to physically avoid obstacles as they moved through a virtual environment. The game system recorded their actual body positions using motion sensors and then mirrored their movements with an avatar in virtual reality.

Participants had to bend, crouch, raise their arms, and jump to avoid obstacles. The participant successfully avoided a virtual obstacle if no part of their avatar touched the obstacle. If they made contact, the researchers rated the severity of the mistake by how much of the avatar touched the obstacle.

In one of the application designs, participants could earn more points by moving to collect virtual coins, which sometimes made them hit an obstacle.

“As task complexity increases, participants need more motivation to achieve the same level of results,” said Lopez. “No matter how engaging a particular feature is, it needs to move the participant towards completing the objective rather than backtracking or wasting time on a tangential task. Adding more features doesn’t necessarily enhance performance.”

Tucker and Lopez created a predictive algorithm — a mathematical formula to forecast the outcome of an event — that rates the potential usefulness of a game feature. They then tested how well each game feature motivated participants when completing the virtual-reality tasks. They compared their test results to the algorithm’s predictions as a proof of concept and found that the formula correctly anticipated which game features best motivated people in the physically interactive tasks.

The researchers found that gamified applications with a scoring system, the ability to select an avatar, and in-game rewards led to significantly fewer mistakes and higher performance than those with a win-or-lose system, randomized gaming backgrounds and performance-based awards.

Sixty-eight participants tested two designs that differed only by the features used to complete the same set of tasks. Tucker and Lopez published their results in Computers in Human Behavior.

The researchers chose the tested game features from the top-ranked games in the Google Play app store, taking advantage of the features that make the games binge-worthy and re-playable, and then narrowed the selection based on available technology.

Their algorithm next ranked game features by how easily designers could implement them, the physical complexity of using the feature, and the impact of the feature on participant motivation and ability to complete the task. If a game feature is too technologically difficult to incorporate into the game, too physically complex, does not offer enough incentive for added effort or works against the end goal of the game, then the feature has low potential usefulness.
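
A toy version of such a ranking might look like the following, where each feature is scored on the three criteria just described and combined with assumed weights; all numbers are invented for illustration and this is not the published algorithm.

```python
# Illustrative scoring of game features along the three criteria described
# above; the weights and scores are invented, not the published algorithm.
features = {
    # feature: (ease_of_implementation, physical_simplicity, motivation_impact)
    "scoring_system":   (0.9, 0.9, 0.8),
    "avatar_selection": (0.8, 1.0, 0.7),
    "coin_collection":  (0.7, 0.4, 0.6),   # tangential task, costs movement
}
weights = (0.3, 0.3, 0.4)  # assumed relative importance of the criteria

def usefulness(scores, w=weights):
    # Weighted sum: higher means more useful, lower means costly or tangential.
    return sum(s * wi for s, wi in zip(scores, w))

for name, scores in sorted(features.items(),
                           key=lambda kv: usefulness(kv[1]), reverse=True):
    print(f"{name}: {usefulness(scores):.2f}")
```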

The researchers would also like to use these results to boost workplace performance and personalize virtual-reality classrooms for online education.

“Game culture has already explored and mastered the psychological aspects of games that make them engaging and motivating,” said Tucker. “We want to leverage that knowledge towards the goal of individualized optimization of workplace performance.”

To do this, Tucker and Lopez next want to connect performance with mental state during these gamified physical tasks. Heart rate, electroencephalogram signals and facial expressions will be used as proxies for mood and mental state while completing tasks to connect mood with game features that affect motivation.

The National Science Foundation funded this research.

Source: Gaming helps personalized therapy level up | Penn State University
