Paper: PS-2A.38
Session: Poster Session 2A
Location: Symphony/Overture
Session Time: Friday, September 7, 17:15 - 19:15
Presentation Time: Friday, September 7, 17:15 - 19:15
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Auditory letter-name processing elicits crossmodal representations in blind listeners
Authors: Santani Teng, Smith-Kettlewell Eye Research Institute, United States; Verena Sommer, Max Planck Institute for Human Development, Germany; Radoslaw Cichy, Free University of Berlin, Germany; Dimitrios Pantazis, Aude Oliva, Massachusetts Institute of Technology, United States
Abstract: As incoming stimuli travel through our sensory pipelines, they are processed in multiple formats, spanning levels of abstraction as well as sensory domains. The spatiotemporal and representational dynamics of this processing cascade remain especially unclear in nonvisual modalities and atypical populations; in particular, a stimulus representation may either facilitate or suppress its multisensory analogue. Here, we presented auditory alphabetic letter names to blind listeners with no visual letter experience, as well as to sighted listeners with no braille (tactile) reading experience. The brain responses of both groups distinguished letter identity along similar time courses and with similar accuracies. However, only blind listeners' brain signals correlated with a low-level model of braille characters, while no corresponding correlation was found between sighted listeners' brain signals and a neural network model of low-level visual letter features. The results illustrate that visual experience modulates both the extent and the nature of multisensory processing and, more generally, of object representation.