Paper: PS-2A.33
Session: Poster Session 2A
Location: Symphony/Overture
Session Time: Friday, September 7, 17:15 - 19:15
Presentation Time: Friday, September 7, 17:15 - 19:15
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Representational dynamics in the human ventral stream captured in deep recurrent neural nets
DOI: https://doi.org/10.32470/CCN.2018.1190-0
Authors: Tim C Kietzmann, Courtney J Spoerer, University of Cambridge, United Kingdom; Lynn Sörensen, University of Amsterdam, Netherlands; Olaf Hauk, University of Cambridge, United Kingdom; Radoslaw M Cichy, Freie Universität Berlin, Germany; Nikolaus Kriegeskorte, Columbia University, United States
Abstract: Feedforward models of visual processing provide human-level object-recognition performance and state-of-the-art predictions of temporally averaged neural responses. However, the primate visual system processes information through dynamic recurrent signaling. Here we characterize and model the representational dynamics of visual processing along multiple areas of the human ventral stream by combining source-reconstructed magnetoencephalography data with deep learning. Our analyses of the empirical data revealed neural responses that traverse distinct encoding schemes across time and space, in line with signatures of recurrent signaling. Next, we estimated the ability of different deep network architectures to capture the neural dynamics by using neural representational trajectories as space- and time-varying target functions. Feedforward models with units that ramp up their activity over time predicted nonlinear representational dynamics but failed to account for the neural effects. Recurrent models of matched parametric complexity explained the held-out data significantly better. We then optimized the recurrent networks for a classification objective only. While they performed significantly better than random networks, the variance explained fell short of the architecture's capacity. This paves the way for a search for additional objectives that the ventral stream may optimize, including category-orthogonal objectives, noise, occlusion, manipulability, and semantics.
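The core contrast in the abstract is that a recurrent network reuses the same weights at every timestep, so the representation of a fixed stimulus evolves over time and its representational geometry can traverse distinct encoding schemes. A minimal sketch of that idea, using representational dissimilarity matrices (RDMs) on a toy recurrent layer (all dimensions, names, and the random-weight setup are illustrative assumptions, not the paper's architecture or data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 8 stimuli, 16 input features, 16 hidden units, 5 timesteps.
n_stim, n_in, n_hid, T = 8, 16, 16, 5
X = rng.standard_normal((n_stim, n_in))

W_in = rng.standard_normal((n_in, n_hid)) / np.sqrt(n_in)     # feedforward weights
W_rec = rng.standard_normal((n_hid, n_hid)) / np.sqrt(n_hid)  # lateral (recurrent) weights

def rdm(patterns):
    """RDM: 1 - Pearson correlation between the activity patterns of each stimulus pair."""
    return 1.0 - np.corrcoef(patterns)

# Recurrent model: the same W_in and W_rec are applied at every timestep,
# so each stimulus's hidden representation evolves dynamically.
h = np.zeros((n_stim, n_hid))
rdms = []
for t in range(T):
    h = np.tanh(X @ W_in + h @ W_rec)
    rdms.append(rdm(h))

# Under recurrence, the representational geometry changes across timesteps,
# which is the kind of time-varying target the paper fits models to.
drift = np.linalg.norm(rdms[-1] - rdms[0])
print(f"RDM change from t=0 to t={T-1}: {drift:.3f}")
```

A parameter-matched feedforward "ramping" control, by contrast, would scale a fixed pattern over time (e.g. `h_t = g(t) * tanh(X @ W_in)`), changing response magnitude but leaving the correlation-based RDM essentially static, which is one way to see why such models can fail to capture the observed dynamics.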