Paper: | PS-1A.40 |
Session: | Poster Session 1A |
Location: | Symphony/Overture |
Session Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation: | Poster |
Publication: | 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: | Learning Intermediate Features of Object Affordances with a Convolutional Neural Network |
DOI: | https://doi.org/10.32470/CCN.2018.1134-0 |
Authors: | Aria Wang, Michael Tarr, Carnegie Mellon University, United States |
Abstract: | Our ability to interact with the world around us relies on inferring which actions objects afford -- often referred to as affordances. The neural mechanisms of object-action associations are realized in the visuomotor pathway, where information about both visual properties and actions is integrated into common representations. However, explicating these mechanisms is particularly challenging in the case of affordances because there is rarely a one-to-one mapping between visual features and inferred actions. To better understand the nature of affordances, we trained a deep convolutional neural network (CNN) to recognize affordances from images and to learn the underlying features, or dimensions, of affordances. These features form a compositional structure for the general representation of affordances, which can then be tested against human neural data. We view this representational analysis as a first step towards a more formal account of how humans perceive and interact with the environment. |
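The pipeline the abstract describes -- a CNN whose intermediate feature maps are taken as candidate affordance dimensions, feeding a multi-label readout over possible actions -- can be sketched in miniature. This is an illustrative toy only, not the authors' architecture: the image size, filter counts, affordance labels, and all weights below are hypothetical placeholders, and a real model would be trained rather than randomly initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical input: a 32x32 single-channel image, and 5 candidate
# affordance labels (e.g. "graspable", "sittable", ...). These numbers
# are illustrative, not taken from the paper.
image = rng.standard_normal((32, 32))
n_affordances = 5

def conv2d(x, kernels):
    """Valid 2-D convolution of one channel with a bank of kernels."""
    kh, kw = kernels.shape[1:]
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((kernels.shape[0], oh, ow))
    for k, w in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

# One convolutional layer: 8 random 3x3 filters followed by ReLU.
# The resulting feature maps play the role of the "intermediate
# features" that would be compared against human neural data.
filters = rng.standard_normal((8, 3, 3)) * 0.1
features = np.maximum(conv2d(image, filters), 0.0)   # shape (8, 30, 30)

# Global average pooling, then a linear readout with a sigmoid per
# label: affordance recognition is multi-label (an object can afford
# several actions at once), so sigmoids replace a softmax.
pooled = features.mean(axis=(1, 2))                  # shape (8,)
W = rng.standard_normal((n_affordances, 8)) * 0.1
probs = 1.0 / (1.0 + np.exp(-(W @ pooled)))          # one prob per affordance

print(probs.shape)
```

In a trained version of this sketch, `pooled` (or the full `features` tensor) would be the representation whose dimensions are analyzed for compositional structure and tested against neural responses.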