Paper: | PS-2A.11 |
Session: | Poster Session 2A |
Location: | Symphony/Overture |
Session Time: | Friday, September 7, 17:15 - 19:15 |
Presentation Time: | Friday, September 7, 17:15 - 19:15 |
Presentation: | Poster |
Publication: | 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: | Activation alignment: exploring the use of approximate activity gradients in multilayer networks |
DOI: | https://doi.org/10.32470/CCN.2018.1251-0 |
Authors: | Thomas Mesnard, Montreal Institute for Learning Algorithms, Canada; Blake Richards, University of Toronto Scarborough, Canada |
Abstract: | Thanks to the backpropagation-of-error algorithm, deep learning has significantly advanced the state of the art in many domains of machine learning. However, because backpropagation relies on assumptions that cannot be met in the brain, it remains unclear how similarly efficient algorithms for credit assignment in hierarchical networks could be implemented biologically. In this paper, we examine one specific biologically implausible assumption of backpropagation that has not yet been resolved: the need for precise knowledge of the derivative of the forward activation function during the backward pass. We show that with a simple, drastic approximation of the true derivative, learning still performs well, even slightly better than standard backpropagation, and the approximation appears to act as a regularizer. Such an approximation would also be far easier for real neurons to implement. This work therefore brings us a step closer to understanding how the brain could perform credit assignment in deep structures. |
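The core idea in the abstract, replacing the exact activation derivative in the backward pass with a crude approximation, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not state which approximation they use, so here the exact tanh derivative is swapped for a constant 1 (a hypothetical "drastic" stand-in) in a one-hidden-layer network on a toy regression task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (purely illustrative, not from the paper)
X = rng.standard_normal((200, 4))
y = np.sin(X.sum(axis=1, keepdims=True))

def train(use_true_derivative, steps=500, lr=0.05):
    """Train a one-hidden-layer tanh MLP with (approximate) backprop.

    With use_true_derivative=False, the exact tanh'(z) in the backward
    pass is replaced by a constant 1 -- an assumed, drastic stand-in for
    whatever approximation the paper actually studies.
    Returns the final mean-squared error on the training data.
    """
    rng_w = np.random.default_rng(1)  # fixed init for reproducibility
    w1 = rng_w.standard_normal((4, 16)) * 0.3
    w2 = rng_w.standard_normal((16, 1)) * 0.3
    for _ in range(steps):
        z = X @ w1               # hidden pre-activations
        h = np.tanh(z)           # hidden activations
        out = h @ w2             # linear readout
        err = out - y            # dL/d(out) for 0.5 * MSE
        if use_true_derivative:
            d = 1.0 - h ** 2             # exact tanh'(z)
        else:
            d = np.ones_like(z)          # drastic approximation
        delta_h = (err @ w2.T) * d       # backpropagated hidden error
        w2 -= lr * h.T @ err / len(X)
        w1 -= lr * X.T @ delta_h / len(X)
    out = np.tanh(X @ w1) @ w2
    return float(np.mean((out - y) ** 2))
```

Comparing `train(True)` with `train(False)` shows that learning can still proceed when the backward pass never computes the true derivative, which is the phenomenon the paper investigates; the specific choice of approximation here is only an assumption for illustration.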