Paper: | GS-1.2 |
Session: | Contributed Talks I |
Location: | Ormandy |
Session Time: | Thursday, September 6, 11:10 - 12:00 |
Presentation Time: | Thursday, September 6, 11:35 - 12:00 |
Presentation: | Oral |
Publication: | 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: | Neurocomputational Modeling of Human Physical Scene Understanding |
DOI: | https://doi.org/10.32470/CCN.2018.1091-0 |
Authors: | Ilker Yildirim, Kevin Smith, Mario Belledonne, Jiajun Wu, Joshua Tenenbaum; MIT, United States |
Abstract: | Human scene understanding involves not just localizing objects, but also inferring latent attributes that affect how the scene might unfold, such as the masses of the objects within it. These attributes can sometimes only be inferred from the dynamics of a scene, yet people flexibly integrate this information to update their inferences. Here we propose a neurally plausible Efficient Physical Inference model that can generate and update inferences from videos. This model makes inferences over the inputs to a generative model of physics and graphics, using an LSTM-based recognition network to efficiently approximate rational probabilistic conditioning. We find not only that this model rapidly and accurately recovers latent object information, but also that its inferences evolve with additional information in a way similar to human judgments. The model provides a testable hypothesis about the population-level activity in brain regions underlying physical reasoning. |