Paper: PS-1A.27
Session: Poster Session 1A
Location: Symphony/Overture
Session Time: Thursday, September 6, 16:30 - 18:30
Presentation Time: Thursday, September 6, 16:30 - 18:30
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Deep Neural Networks Represent Semantic Category in Object Images Independently from Low-level Shape
DOI: https://doi.org/10.32470/CCN.2018.1081-0
Authors: Astrid Zeman, J Brendan Ritchie, Stefania Bracci, Hans Op de Beeck, KU Leuven, Belgium
Abstract:
Deep Neural Networks (DNNs) categorize object images with accuracy that can match, or even surpass, human performance. In natural images, category is often confounded with shape information; it is therefore possible that DNNs rely heavily on visual shape, rather than semantics, to discriminate between categories. Using two datasets that explicitly dissociate shape from category, we quantify the extent to which DNNs represent semantic information independently of shape. One dataset defines shape as a high-level property, namely low versus high aspect ratio. The second dataset defines shape as nine different types that best represent low-level, retinotopic shape. We find that DNNs encode semantic information independently of low-level shape, an effect that peaks at the final fully connected layer in multiple DNN architectures. The final layer of multiple DNNs represents high-level shape at the same level of correlation as category. This work suggests that DNNs are able to bridge the semantic gap by representing category independently of low-level shape.
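As a rough illustration of the kind of analysis the abstract describes, the sketch below correlates a DNN layer's representational dissimilarity matrix (RDM) with binary model RDMs for semantic category and for shape type. The random activations, label counts, and function names are illustrative assumptions for the sketch, not the authors' code or stimuli.

```python
# Minimal RSA-style sketch: how strongly does one DNN layer's geometry
# track category vs. shape? All data here are randomly generated stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_images = 54
# Hypothetical per-image labels: semantic category and low-level shape type.
category = rng.integers(0, 6, size=n_images)   # assumed 6 categories
shape = rng.integers(0, 9, size=n_images)      # 9 shape types, as in the abstract

def model_rdm(labels):
    """Binary model RDM: 0 if two images share the label, 1 otherwise."""
    return pdist(labels[:, None], metric=lambda a, b: float(a[0] != b[0]))

def layer_rdm(activations):
    """Neural RDM: correlation distance between image activation patterns."""
    return pdist(activations, metric="correlation")

# Stand-in for activations extracted from one DNN layer (images x units);
# a real analysis would use recorded layer activations for each stimulus.
activations = rng.normal(size=(n_images, 4096))

rdm = layer_rdm(activations)
r_cat, _ = spearmanr(rdm, model_rdm(category))
r_shape, _ = spearmanr(rdm, model_rdm(shape))
print(f"layer ~ category: {r_cat:.3f}, layer ~ shape: {r_shape:.3f}")
```

Repeating this per layer would trace where category information peaks across the network; to claim category coding independent of shape, as the paper does, one would additionally control for the shape RDM (e.g., via partial correlation) rather than report the raw correlations shown here.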