Paper: GS-6.1
Session: Contributed Talks VI
Location: Ormandy
Session Time: Saturday, September 8, 09:50 - 10:30
Presentation Time: Saturday, September 8, 09:50 - 10:10
Presentation: Oral
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Distinct Computational Models of Reading Correspond to Distinct but Similar Neural Activation Patterns
DOI: https://doi.org/10.32470/CCN.2018.1184-0
Authors: William Graves, Rutgers University -- Newark, United States; Amulya Bidar Nataraj, Rutgers Business School, United States
Abstract: A long-standing debate in cognitive computational neuroscience concerns the merits of modeling cognition with brain-inspired artificial neural networks using distributed representations versus models using symbolic representations. Using established examples of each type of model from the domain of reading, we directly evaluated each model's ability to capture correspondences among stimuli that match the neural correspondences among those stimuli. Specifically, we focused on the internal feature representations situated between model inputs and outputs in a feed-forward distributed model of reading and in the symbolic dual-route model of reading. Word representations from the models were vectorized, and pairwise correlations were calculated among 464 words. To test for brain areas where activation-based word representations correlated with model-based representations, a searchlight analysis was performed across the whole left-hemisphere cortex, and specifically within an atlas-defined region of interest in the fusiform gyrus. Both models showed similar correspondence with activation in anterior lateral temporal and fusiform regions, while only the distributed model correlated with activation in language-related cortex of the inferior frontal gyrus. Overall, these results suggest that both modeling approaches capture neurally relevant information. The distributed model, however, may capture more task-relevant information for reading aloud.
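The comparison described in the abstract follows the general logic of representational similarity analysis: pairwise correlations among word representations from a model are compared against pairwise correlations among activation patterns for the same words. A minimal sketch of that logic is below; it is not the authors' pipeline (no searchlight, and the arrays are random stand-ins, with toy feature and voxel counts chosen only for illustration), but it shows how a model-based and an activation-based similarity structure can be correlated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: rows are words, columns are model features or
# voxel activations. The paper uses 464 words; feature/voxel counts here
# are arbitrary illustrative values.
n_words, n_feat, n_vox = 464, 50, 30
model_reprs = rng.standard_normal((n_words, n_feat))  # model internal vectors
brain_reprs = rng.standard_normal((n_words, n_vox))   # activation patterns

def pairwise_corr(X):
    """Pairwise Pearson correlations among rows (words), upper triangle only."""
    C = np.corrcoef(X)                 # n_words x n_words similarity matrix
    iu = np.triu_indices_from(C, k=1)  # unique word pairs
    return C[iu]

# Correlate the model-based and activation-based word-pair similarity
# structures; a higher value means the model better captures the neural
# correspondences among the words.
model_rsm = pairwise_corr(model_reprs)
brain_rsm = pairwise_corr(brain_reprs)
rsa_score = np.corrcoef(model_rsm, brain_rsm)[0, 1]
print(f"model-brain representational similarity: {rsa_score:.3f}")
```

In a searchlight version of this comparison, `brain_reprs` would be recomputed for each small neighborhood of voxels and the correlation mapped across cortex.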