Somatovisual processing in the deep layers of the human superior colliculus

Poster No:


Submission Type:

Abstract Submission 


Kevin Sitek1, Qureshi Asma1, Francesko Molla2, Gisela Hagberg2, Jung Hwan Kim1, Klaus Scheffler3, Marc Himmelbach4, David Ress1


1Baylor College of Medicine, Houston, TX, 2Max Planck Institute for Biological Cybernetics, Tübingen, Tübingen, 3Max Planck Institute for Biological Cybernetics, Tübingen, Baden-Württemberg, 4University of Tübingen, Tübingen, Tübingen

First Author:

Kevin Sitek, PhD  
Baylor College of Medicine
Houston, TX


Qureshi Asma, PhD
Baylor College of Medicine
Houston, TX

Francesko Molla
Max Planck Institute for Biological Cybernetics
Tübingen, Tübingen

Gisela Hagberg
Max Planck Institute for Biological Cybernetics
Tübingen, Tübingen

Jung Hwan Kim
Baylor College of Medicine
Houston, TX

Klaus Scheffler
Max Planck Institute for Biological Cybernetics
Tübingen, Baden-Württemberg

Marc Himmelbach
University of Tübingen
Tübingen, Tübingen

David Ress
Baylor College of Medicine
Houston, TX


The deep layers of the superior colliculus (SC) integrate sensory information from multiple modalities to create a coherent sensory representation of the world [1,2]. However, knowledge of the human SC is limited by the technical challenge of imaging small structures deep within the cranium [3]. Advances in ultra-high field MRI enable imaging with greater signal-to-noise ratio in smaller voxels, allowing us to probe functional responses within the SC [4]. To understand how the human SC integrates information across somatosensory and visual modalities, we used functional MRI (fMRI) at 9.4T during an integration task.


We collected fMRI from 5 individuals at 9.4T using a 16-channel transmit/31-channel receive array [5]. Participants performed a somatovisual integration task in which air puffs delivered to their fingers cued them to attend (but not saccade) to a quadrant of the visual field. Participants were asked to count the number of "+" signs that appeared in dot patterns in the cued quadrant while ignoring "X" and other random patterns. Single air puffs were presented in continuous alternation to the index and ring fingers; a random double air puff cued visual attention to the upper (via index finger stimulation) or lower (via ring finger stimulation) visual field. Stimulation alternated between the left and right hands (cuing the left and right visual fields) every 15 seconds, enabling sinusoidal data analysis.
In one participant, a second session used a visually cued paradigm with no tactile stimulation, allowing us to compare visual-only to somatovisual collicular processing.
Functional images (point-spread-function-corrected EPI) were collected with 1-mm isotropic voxels across 26 slices covering the colliculi and most of early visual cortex (TR = 1.25 s). T1-weighted anatomical images were acquired with an MP2RAGE sequence (0.6-mm isotropic voxels).
Brain regions were initially segmented from the T1-weighted images using FreeSurfer, followed by manual adjustment. Next, a level-set depth-mapping approach was used to compute unique associations (streamlines) from the collicular surface to the cerebral aqueduct, enabling quantification of BOLD responses as a function of collicular depth.
Functional data were processed using a variant of the MrVista package. We corrected the data for slice timing and motion, then fit a sinusoid (with frequency matching the left-right stimulus alternation) to each voxel's time series, extracting response amplitude, phase, and coherence. Using the depth streamlines, we then averaged responses at superficial (0.6–1.8 mm) and deep (3.5–5.5 mm) levels.
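The sinusoidal analysis above can be sketched as follows. This is a minimal illustration of a phase-encoded (coherence) analysis in the style of MrVista's correlation analysis, not the actual pipeline used here; the function name and toy parameters (30-s stimulus cycle from the 15-s left/right alternation, TR = 1.25 s, hence 24 TRs per cycle) are illustrative assumptions.

```python
import numpy as np

def sinusoid_fit(ts, n_cycles):
    """Amplitude, phase, and coherence at the task frequency.

    ts       : 1-D voxel time series, one sample per TR
    n_cycles : number of full stimulus cycles in the run
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()               # remove the DC component
    spec = np.fft.rfft(ts)
    amps = np.abs(spec)
    amp = amps[n_cycles]              # amplitude at the stimulus frequency
    phase = np.angle(spec[n_cycles])  # response phase (radians)
    # Coherence: task-frequency amplitude relative to total spectral energy
    coh = amp / np.sqrt(np.sum(amps[1:] ** 2))
    return amp, phase, coh

# Toy example: 15-s half-cycle -> 30-s cycle; TR = 1.25 s -> 24 TRs/cycle
tr, cycle_trs, n_cycles = 1.25, 24, 8
rng = np.random.default_rng(0)
t = np.arange(cycle_trs * n_cycles) * tr
ts = 2.0 * np.sin(2 * np.pi * t / (cycle_trs * tr)) + rng.normal(0, 0.5, t.size)
amp, phase, coh = sinusoid_fit(ts, n_cycles)
```

A voxel driven by the left/right alternation yields high coherence, with phase indicating which hemifield (and hence which hand) drove the response.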


We found strong lateralization of BOLD responses in the SC of all participants, with increased collicular activity contralateral to the attended visual hemifield. Activation was widespread in rostral SC at multiple depths. In rostral SC at superficial depths, BOLD responses were strongly lateralized, contralateral to the attended visual stimulus. In caudal SC, where deep somatosensory processing is expected for forelimb stimulation [1], we saw activation in at least one SC (Figure 1, bottom). Indeed, compared to the visual-only task, the deep caudal SC response was significantly stronger (p < 0.01) in the somatovisual integration task (Figure 2).
Supporting Image: Sitek_Figure1.png
Supporting Image: Sitek_Figure2.png


Using high-resolution fMRI, we identified regions of the SC that respond during somatovisual integration. These correspond to the deep layers of the SC, which are believed to map multisensory information onto a visuotopic representation. Consistent with this, a visual-only version of the task produced much weaker responses in deep caudolateral SC while maintaining strong responses in rostral superficial SC, corresponding to the predominantly visual layers that represent the visual stimulus. Overall, we found that ultra-high field fMRI is sensitive to somatosensory integration in the deep layers of the human superior colliculus, which until now had been accessible only in animal models.

Neuroanatomy, Physiology, Metabolism and Neurotransmission:

Subcortical Structures 1

Perception, Attention and Motor Behavior:

Attention: Visual
Perception: Auditory/Vestibular
Perception: Multisensory and Crossmodal 2



1|2 Indicates the priority used for review

My abstract is being submitted as a Software Demonstration.


Please indicate below if your study was a "resting state" or "task-activation" study.


Healthy subjects only or patients (note that patient studies may also involve healthy subjects):

Healthy subjects

Was any human subjects research approved by the relevant Institutional Review Board or ethics panel? NOTE: Any human subjects studies without IRB approval will be automatically rejected.


Was any animal research approved by the relevant IACUC or other animal research panel? NOTE: Any animal studies without IACUC approval will be automatically rejected.

Not applicable

Please indicate which methods were used in your research:

Functional MRI
Structural MRI

For human MRI, what field strength scanner do you use?

If Other, please list  -   9.4T

Which processing packages did you use for your study?

FreeSurfer
Other, Please list  -   MrVista

Provide references using author date format

Drager, U. C., & Hubel, D. H. (1975). Physiology of visual cells in mouse superior colliculus and correlation with somatosensory and auditory input. Nature, 253(5488), 203–204.
Stein, B. E., Magalhaes-Castro, B., & Kruger, L. (1976). Relationship between visual and tactile representations in cat superior colliculus. Journal of Neurophysiology, 39, 401–419.
Himmelbach, M., Linzenbold, W., & Ilg, U. J. (2013). Dissociation of reach-related and visual signals in the human superior colliculus. NeuroImage, 82, 61–67.
Loureiro, J. R., Hagberg, G. E., Ethofer, T., Erb, M., Bause, J., Ehses, P., ... & Himmelbach, M. (2017). Depth-dependence of visual signals in the human superior colliculus at 9.4 T. Human Brain Mapping, 38(1), 574–587.
Shajan, G., Kozlov, M., Hoffmann, J., Turner, R., Scheffler, K., & Pohmann, R. (2014). A 16-channel dual-row transmit array in combination with a 31-element receive array for human brain imaging at 9.4 T. Magnetic Resonance in Medicine, 71(2), 870–879.
Drager, U. C., & Hubel, D. H. (1975). Responses to visual stimulation and relationship between visual, auditory, and somatosensory inputs in mouse superior colliculus. Journal of Neurophysiology, 38(3), 690–713.