[ARTICLE] Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation – Full Text

Hemianopic patients exhibit improved visual detection in the blind field when audiovisual stimuli are delivered in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after being trained to detect and saccade toward visual targets presented in spatiotemporal proximity to auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possible spared tissue and reproduced the “online” effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched with the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, the audiovisual training effectively strengthens the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (only in the eye-movements condition) and of the SC-extrastriate route (only in the presence of spared V1 tissue, regardless of the eye condition). The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into regions where visual detection is possible. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, suggesting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations.

Introduction

The primary human visual pathway conveys the majority of retinal fibers to the lateral geniculate nucleus of the thalamus and then, via the optic radiations, to the primary visual cortex (V1) (the retino-geniculo-striate pathway). V1 is the main distributor of visual information to extrastriate visual areas, for further processing. A secondary visual pathway (the retino-collicular pathway) routes a minority of retinal fibers directly to the Superior Colliculus (a midbrain structure), which also has reciprocal connections with striate and extrastriate visual cortices (May, 2006).

Patients with lateralized damage to the primary visual cortex (V1) or to the neural pathway feeding V1 often develop homonymous hemianopia, a visual field defect with loss of conscious vision in one hemifield. Hemianopic patients cannot perceive visual stimuli presented in the blind hemifield; moreover, they typically fail to spontaneously develop effective oculomotor strategies to compensate for the visual field loss (Hildebrandt et al., 1999; Zihl, 2000; Tant et al., 2002).

Despite the visual deficit, hemianopic patients can preserve the ability to integrate audiovisual stimuli in the affected field, with beneficial effects (Frassinetti et al., 2005; Leo et al., 2008). In particular, data by Frassinetti and colleagues (Frassinetti et al., 2005) show that patients performing a visual detection task while maintaining central fixation significantly improved conscious visual detection in the affected field when the auditory stimuli were applied in spatial and temporal coincidence with the visual targets.

The Superior Colliculus is the most likely structure mediating this multisensory improvement, because of its anatomical connections and the properties of its neuronal responses. Indeed, SC neurons receive not only visual information but also signals from other sensory modalities, such as audition (Meredith and Stein, 1986; Stein and Meredith, 1993; May, 2006). Visual and auditory information are integrated in multisensory SC neurons according to specific principles (Stein and Meredith, 1993): an audiovisual stimulus elicits stronger neuronal activation than either component alone when the visual and auditory components are presented in spatial and temporal register (spatial and temporal principles). Moreover, a proportionally greater enhancement of multisensory neuronal activation is evoked when weakly effective unisensory stimuli are combined, compared to the combination of highly effective stimuli (inverse effectiveness principle). The SC integrative principles have strong implications in hemianopia, as the SC and the retino-collicular pathway are preserved in these patients. The retinal visual input to the SC, although weak, can still be efficiently combined with an accessory auditory input thanks to the inverse effectiveness principle, provided the rule of spatial and temporal proximity is satisfied. Furthermore, SC multisensory enhancement can affect cortical visual processing thanks to the projections from the SC to the visual cortices.
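To make the inverse effectiveness principle concrete, the following Python sketch computes the multisensory enhancement index for a toy SC neuron with a saturating (sigmoidal) input-output function. The specific function, parameter values, and enhancement index are illustrative assumptions, not the equations used in the model.

```python
import numpy as np

def sc_response(visual_drive, auditory_drive):
    """Toy SC neuron: saturating (sigmoidal) response to the summed drive of
    spatially and temporally coincident inputs. Parameters are hypothetical."""
    total_drive = visual_drive + auditory_drive
    return 1.0 / (1.0 + np.exp(-(total_drive - 5.0)))

def enhancement(visual_drive, auditory_drive):
    """Percent gain of the audiovisual response over the best unisensory one."""
    best_unisensory = max(sc_response(visual_drive, 0.0),
                          sc_response(0.0, auditory_drive))
    combined = sc_response(visual_drive, auditory_drive)
    return 100.0 * (combined - best_unisensory) / best_unisensory

# Inverse effectiveness: the weak retinal drive surviving in the blind
# hemifield gains proportionally more from a coincident sound than a strong
# drive would.
print(f"weak inputs:   {enhancement(2.0, 2.0):.0f}% enhancement")   # ~467%
print(f"strong inputs: {enhancement(6.0, 6.0):.0f}% enhancement")   # ~37%
```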

In addition to the immediate, “online” multisensory improvement in visual detection, there is also evidence of prolonged, “offline” effects that can be induced by repeated exposure to audiovisual stimuli. Indeed, long-lasting improvements of visual performance in hemianopic patients, promoted by audiovisual training protocols stimulating the blind hemifield, have been reported (Bolognini et al., 2005; Passamonti et al., 2009; Dundon et al., 2015b; Tinelli et al., 2015; Grasso et al., 2016). During the training, a visual target was given in close spatial and temporal proximity with an auditory stimulus, at various positions in the visual field; patients were asked to detect the presence of the visual target by directing their gaze toward it from a central fixation point. Results revealed a significant post-training improvement in the detection of unimodal visual targets in the blind field when the patients were allowed to use eye movements, whereas only a weak improvement was found when they had to maintain central fixation (Bolognini et al., 2005; Tinelli et al., 2015). These results suggest that the audiovisual training could promote an increased oculomotor response to visual stimuli in the affected hemifield.

Beyond the “online” effects of audiovisual stimulation, the Superior Colliculus is a possible candidate for mediating the training effects, too. Indeed, the SC projects to brainstem motor areas controlling eye and head orientation, and is critically involved in the initiation and execution of reflexive (i.e., exogenously driven) saccades (Sparks, 1986; Jay and Sparks, 1987a; May, 2006; Johnston and Everling, 2008). Importantly, more than 70% of SC neurons projecting to the brainstem, and therefore involved in saccade generation, respond to multisensory stimulation (Meredith and Stein, 1986). As such, audiovisual stimuli, by enhancing multisensory SC activation, might plastically reinforce the gain of the transduction from the SC sensory response to the motor output; in other words, after training the oculomotor system could have acquired increased responsiveness to the visual input conveyed via the retino-collicular pathway. However, the plastic mechanisms and synaptic reorganization that can functionally instantiate these visuomotor capabilities remain undetermined. Moreover, it is unclear whether the training may even stimulate genuine visual restitution beyond oculomotor compensation, and how compensatory and restitutive effects may contribute to visual improvement in complementary ways.

Recently, we developed a neural network model (Magosso et al., 2016) that formalized the main cortico-collicular loops involved in audiovisual integration and implemented, via neural connections and input-output neural characteristics, the SC multisensory integrative principles. The network postulated neural correlates of visual consciousness and mimicked a unilateral V1 lesion. Simulations, performed in the fixed-eyes condition, reproduced the “online” effects of enhanced visual detection under audiovisual stimulation.

Here, we extend our previous neural network to explore the effects of training in simulated hemianopic patients, providing quantitative predictions that can contribute to a mechanistic understanding of the improvements in visual performance observed in real patients. To this aim, the network has been augmented with novel elements. First, we have included a module of saccade generation, embracing the collicular sensory-motor transduction; in this way, we can account for the possibility of short-latency saccades triggered in a bottom-up fashion. Second, Hebbian mechanisms of synaptic learning have been implemented and adopted during training simulations. Different training paradigms (audiovisual multisensory/visual unisensory stimulation in eye-movements/fixed-eyes conditions) are tested to examine their efficacy in promoting different forms of rehabilitation (compensatory/restitutive), and to assess the predicted results in light of in vivo data.
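As a concrete illustration of the kind of learning rule involved, the sketch below implements a simple activity-dependent (Hebbian) weight update with an upper saturation for inter-area synapses. The rule, the variable names, and the parameter values are assumptions made for illustration and do not reproduce the exact learning equations of the model.

```python
import numpy as np

def hebbian_update(W, pre, post, learning_rate=0.01, w_max=1.0):
    """One Hebbian training step for inter-area synapses.

    W    : weight matrix (post x pre), e.g., retina -> SC synapses
    pre  : activity of the pre-synaptic area (e.g., retina)
    post : activity of the post-synaptic area (e.g., SC)
    The weight from pre-synaptic neuron j to post-synaptic neuron i grows with
    the product of their activities and saturates at w_max.
    """
    dW = learning_rate * np.outer(post, pre)   # correlation-based potentiation
    return np.clip(W + dW, 0.0, w_max)

# Usage sketch: in an audiovisual training trial, the sound-enhanced SC
# activity (post) co-occurs with the weak residual retinal activity (pre),
# so the retina -> SC synapses are strengthened trial after trial.
positions = np.arange(181)                                  # one neuron per degree
pre = np.exp(-0.5 * ((positions - 45) / 3.0) ** 2)          # retinal bubble at 45 deg
post = 1.5 * np.exp(-0.5 * ((positions - 45) / 3.0) ** 2)   # enhanced SC bubble
W_sc_r = np.full((181, 181), 0.05)                          # weak pre-training weights
W_sc_r = hebbian_update(W_sc_r, pre, post)
```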

 

Materials and Methods

The neural network is conceptually made up of two modules (Figure 1A). A sensory module (blue blocks and lines) includes cortical and subcortical (SC) neuronal areas devoted to the sensory representation of the external stimulus. An oculomotor module (red blocks and lines) can potentially react to this sensory representation, generating a saccade toward the external stimulus. The SC is involved in both modules.
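The following Python skeleton sketches this two-module organization: a sensory module holding the activity of the areas in Figure 1A, and an oculomotor module that decodes a gaze target from the SC activity only when that activity is strong enough. Area names follow the figure; the decoding rule and the threshold are illustrative assumptions, not the published equations.

```python
import numpy as np

class SensoryModule:
    """Holds the activity of the areas in Figure 1A (one vector per area).
    The internal dynamics (inter-area synapses, lesion, etc.) are omitted."""
    def __init__(self, n_neurons=181):
        self.activity = {area: np.zeros(n_neurons)
                         for area in ("R", "V1", "E", "A", "SC")}

class OculomotorModule:
    """Reads the SC activity and, if it exceeds a (hypothetical) threshold,
    returns a target gaze position decoded as the activity-weighted mean of
    the neurons' preferred positions."""
    def __init__(self, threshold=0.2):
        self.threshold = threshold

    def saccade_target(self, sc_activity, preferred_positions):
        if sc_activity.max() < self.threshold:
            return None                                   # no saccade triggered
        weights = sc_activity / sc_activity.sum()
        return float(preferred_positions @ weights)       # decoded target (deg)
```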

Figure 1. (A) Sketch of the neural network architecture. Blue blocks and lines represent the sensory module; red blocks and lines denote the oculomotor module. R, retina; V1, primary visual cortex; E, extrastriate visual cortex; SC, Superior Colliculus; A, auditory area; FP, saccade-related frontoparietal areas (δ denotes a pure delay); SG, Brainstem Saccade Generator. g(t) is the current gaze position (resulting from the oculomotor module); θg is the target gaze position decoded from the SC activity. p is the position of the external (visual or spatially coincident audiovisual) stimulus in head-centered coordinates, and p − g(t) is the stimulus position in retinotopic coordinates. W^{H,Q} denotes the inter-area synapses from neurons in area Q to neurons in area H. (B) Exemplary pattern of basal (i.e., pre-training) inter-area synapses. Here, the synapses W^{SC,R} from the retina to the SC are depicted, limited to about one hemifield (−10° to +90°); the same pattern holds for the remaining, not shown, positions. The x-axis reports the position (j, in deg) of the pre-synaptic neuron in area R and the y-axis the position (i, in deg) of the post-synaptic neuron in area SC. The color at each intersection (j, i) codes the strength of the synapse from the pre-synaptic neuron j in area R to the post-synaptic neuron i in the SC. Similar patterns hold for all other inter-area synapses within the sensory module. Consistent with the following figures, the color scale ranges from 0 to the maximum value reachable through training (W^{SC,R}_max). W^{SC,R}_0 denotes the central weight of the pre-training Gaussian pattern of the synapses. (C) Schematic picture of the eye-centered topological organization of the neurons in each area. In case (1), with the gaze at 0°, the stimulus induces an activation bubble centered on the neuron with preferred retinal position = 45° in a given area; in case (2), with the gaze displaced to 30°, the stimulus induces an activation bubble centered on the neuron with preferred retinal position = 45° − 30° = 15°.
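A minimal sketch of the basal synaptic pattern in panel (B) and of the coordinate convention in panel (C), assuming a Gaussian weight profile centered on aligned preferred positions; the values of W^{SC,R}_0 and of the spatial spread are placeholders, not the published model parameters.

```python
import numpy as np

# Panel (B): basal (pre-training) retina -> SC synapses. Each post-synaptic SC
# neuron i receives a Gaussian weight profile centered on the retinal neuron j
# with the same preferred position. W0 and sigma are placeholder values.
positions = np.arange(-10, 91)                       # preferred positions (deg)
W0_sc_r, sigma = 0.1, 2.0                            # central weight, spread (deg)
J, I = np.meshgrid(positions, positions)             # grids of pre (j) and post (i) positions
W_sc_r = W0_sc_r * np.exp(-0.5 * ((I - J) / sigma) ** 2)

# Panel (C): eye-centered topology. A stimulus at head-centered position p is
# mapped onto the retinotopic axis by subtracting the current gaze g(t):
# with p = 45 deg and g = 0 deg the bubble is centered at 45 deg (case 1);
# with g = 30 deg it is centered at 45 - 30 = 15 deg (case 2).
p = 45.0
for g in (0.0, 30.0):
    print(f"gaze {g:>4.0f} deg -> retinotopic position {p - g:.0f} deg")
```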
