Paper: | PS-2A.2 |
Session: | Poster Session 2A |
Location: | Symphony/Overture |
Session Time: | Friday, September 7, 17:15 - 19:15 |
Presentation Time: | Friday, September 7, 17:15 - 19:15 |
Presentation: | Poster |
Publication: | 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: | A public fMRI dataset of 5000 scenes: a resource for human vision science |
DOI: | https://doi.org/10.32470/CCN.2018.1140-0 |
Authors: | Nadine Chang, John Pyles, Abhinav Gupta, Michael Tarr, Carnegie Mellon University, United States; Elissa Aminoff, Fordham University, United States |
Abstract: | Vision science - particularly machine vision - is being revolutionized by large-scale datasets. State-of-the-art artificial vision models critically depend on large-scale datasets to achieve high performance. In contrast, although large-scale learning models (e.g., AlexNet) have been applied to human neuroimaging data, the image sets used in neural studies typically contain far fewer images. The small size of these datasets also translates to limited image diversity. Here we dramatically increase the image dataset size deployed in an fMRI study of visual scene processing: over 5,000 discrete image stimuli were presented to each of four participants. We believe this boost in dataset size will better connect the field of computer vision to human neuroscience. To further enhance this connection and increase image overlap with computer vision datasets, we include images from two standard artificial learning datasets in our stimuli: 2,000 images from COCO and two images per category from ImageNet (∼2,000 images). Also included are 1,000 hand-curated scene images from 250 categories. The scale advantage of our dataset and the use of a slow event-related design enable, for the first time, joint computer vision and fMRI analyses that span a significant and diverse region of image space using high-performing models. |