Paper: | PS-1A.28 |
Session: | Poster Session 1A |
Location: | Symphony/Overture |
Session Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation: | Poster |
Publication: | 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: | Temporal dynamics underlying sound discrimination in the human brain |
DOI: | https://doi.org/10.32470/CCN.2018.1090-0 |
Authors: | Matthew Lowe, Santani Teng, Yalda Mohsenzadeh, MIT, United States; Ian Charest, University of Birmingham, United Kingdom; Dimitrios Pantazis, Aude Oliva, MIT, United States |
Abstract: | The ability to orient and respond swiftly to our surroundings requires the detection and identification of sounds within moments. Broad descriptors of sounds, such as living versus non-living sources, can be discriminated from neural activity within the first 100 ms of perception, yet when are distinct sound categories (e.g., animals) and individual sounds (e.g., goat) represented and distinguished in the brain across time? To investigate this question, we used magnetoencephalography (MEG) and multivariate analyses of neural activity to examine the time course of auditory discrimination for individual sounds (e.g., goat) and sound categories (animals, objects, people, spaces). Our results reveal a strikingly early signal for sound selectivity, emerging within 80 ms after stimulus onset for both individual sounds and sound categories. Sound categories showed more diffuse generalization across time, and neural responses to human voices were especially pronounced and distinctive compared with other sound categories. These results illuminate the rapid and parallel emergence of sound identity and category information in the brain, and provide evidence that these representations evolve dynamically across time in distinct ways. |