Paper: RS-2A.4
Session: Late Breaking Research 2A
Location: Late-Breaking Research Area
Session Time: Friday, September 7, 17:15 - 19:15
Presentation Time: Friday, September 7, 17:15 - 19:15
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Can neural computation be compressed enough for us to understand it?
Authors: Timothy Lillicrap, Google DeepMind, United Kingdom; Konrad Kording, University of Pennsylvania, United States
Abstract: The mainstream view of computational systems neuroscience demands an understanding of the whole brain: how the nervous system converts stimuli and internal states into complex behaviors. It also demands that this understanding can be communicated to scientists. However, informed by the fact that we cannot well approximate neural networks that play Go, or identify images, with human-digestible equations, we conjecture that brain networks cannot be made human-understandable. If true, we can at best divide neural computation into compact, human-understandable principles and a huge set of parameters that we cannot hope to communicate. Because neural computation is shaped by vast amounts of information from the environment, it is unlikely that we could capture the learned aspects in principles. We therefore argue that anatomy and learning dynamics are particularly interesting principles: while we cannot understand the learned parameters, since they reflect a complex world, we can meaningfully understand how they come about.
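
Illustrative aside (not part of the paper): a minimal Python sketch of the abstract's central contrast between a compact, communicable learning rule and the parameters it produces. The network size, task, and data below are arbitrary assumptions chosen only to make the asymmetry concrete.

import numpy as np

rng = np.random.default_rng(0)

# "Principle": a plain gradient-descent learning rule -- a few lines of code
# that can be communicated in full.
def train(X, y, hidden=256, lr=0.1, steps=2000):
    W1 = rng.normal(0, 0.1, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.1, (hidden, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)                 # forward pass
        err = h @ W2 - y                    # squared-error gradient
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
    return W1, W2

# Toy "environment": noisy XOR-like data standing in for the information
# from the world that shapes the learned parameters.
X = rng.integers(0, 2, (512, 2)).astype(float)
y = (X[:, :1] != X[:, 1:]).astype(float)
X += rng.normal(0, 0.05, X.shape)

W1, W2 = train(X, y)

# The rule above fits in ~10 lines; the parameters it yields do not reduce
# to compact prose or equations.
print(f"learning rule: ~10 lines; learned parameters: {W1.size + W2.size}")

The point of the sketch is the asymmetry the abstract names: the architecture and learning dynamics fit in a paragraph, while the learned weights are only meaningful relative to the data that shaped them, and at brain scale that parameter set would be vastly larger than anything communicable.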