Technical Program

Paper Detail

Paper: PS-1B.27
Session: Poster Session 1B
Location: Symphony/Overture
Session Time: Thursday, September 6, 18:45 - 20:45
Presentation Time: Thursday, September 6, 18:45 - 20:45
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Phoneme-level processing in low-frequency cortical responses to speech explained by acoustic features
Authors: Christoph Daube, Robin A. A. Ince, University of Glasgow, United Kingdom; Joachim Gross, Westfälische Wilhelms-Universität, Germany
Abstract: Linear encoding models constructed to explain human cortical responses to speech have the potential to provide insights into the mechanisms of speech comprehension. It has been shown that combining annotated linguistic features with acoustic features of the speech signal can consistently improve the prediction of brain responses. Here we aim to replicate these effects in source-level magnetoencephalography (MEG) data and ask whether the contribution made by linguistic features could be explained by more comprehensive models considering acoustic features only. We thus compare the predictive performance of several acoustic feature spaces of varying dimensionality with that of an annotated linguistic feature space. While we replicate the effect of increased performance when combining annotated features with spectrograms over spectrograms alone, we also obtain similar increases with Gabor-filtered spectrograms, and even stronger increases with the combination of spectrograms and their temporal gradients. We then find that the predictions of this best acoustic model are highly redundant with those of the annotated feature space. We conclude that annotated feature spaces serve well as benchmarks. However, we stress that for an understanding of the computations underlying cortical responses to speech, models specifying transformations of the acoustic input are necessary.
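The core comparison in the abstract — a linear encoding model fit on a spectrogram feature space versus one fit on the spectrogram plus its temporal gradient — can be illustrated with a minimal sketch. This is not the authors' pipeline: it uses synthetic white noise in place of speech, a synthetic "response" in place of MEG data, and ridge regression with a simple train/test split in place of their evaluation; scipy and scikit-learn are assumed to be available.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 8000
audio = rng.standard_normal(fs * 10)  # 10 s of noise as a stand-in for speech

# Spectrogram feature space (time x frequency), log-compressed
f, t, S = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
S = np.log(S + 1e-10).T

# Temporal-gradient feature space: finite difference of the spectrogram over time
dS = np.gradient(S, axis=0)

# Synthetic "cortical response" that depends on both feature spaces
w1 = rng.standard_normal(S.shape[1])
w2 = rng.standard_normal(S.shape[1])
y = S @ w1 + dS @ w2 + 0.5 * rng.standard_normal(S.shape[0])

def held_out_corr(X, y):
    """Fit a ridge encoding model and score it by held-out correlation."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    pred = Ridge(alpha=1.0).fit(Xtr, ytr).predict(Xte)
    return np.corrcoef(pred, yte)[0, 1]

r_spec = held_out_corr(S, y)                  # spectrogram only
r_both = held_out_corr(np.hstack([S, dS]), y)  # spectrogram + temporal gradient
print(f"spectrogram only:    r = {r_spec:.2f}")
print(f"+ temporal gradient: r = {r_both:.2f}")
```

Because the synthetic response depends on the gradient by construction, the combined feature space predicts it better, mirroring the direction of the reported effect; a real analysis would use cross-validation that respects the temporal structure of the recording rather than a shuffled split.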