Paper: PS-2A.26
Session: Poster Session 2A
Location: Symphony/Overture
Session Time: Friday, September 7, 17:15 - 19:15
Presentation Time: Friday, September 7, 17:15 - 19:15
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Learning Simple Computations in Dynamical Systems by Example
DOI: https://doi.org/10.32470/CCN.2018.1225-0
Authors: Jason Kim, Danielle Bassett, University of Pennsylvania, United States
Abstract: The ability to store, represent, and manipulate information is a crucial element of information-processing systems. While computers are carefully engineered and manufactured to perform mathematical operations on data, neurobiological systems robustly provide similar functions despite substantially greater variability in physical brain connectivity across species and development. In addition, neural systems can recognize largely unstructured patterns of sensory inputs that cannot always be neatly represented as discrete, static pieces of information. Here we present a dynamical system that can represent chaotic attractors and learn a simple translation operation by example. Specifically, we train a sparse, randomly connected reservoir computer to evolve along two translated chaotic Lorenz attractors with different initial conditions embedded in three dimensions. During training, we apply a fourth input that takes a unique constant value for each attractor. We demonstrate that, by driving the trained reservoir with new values of this fourth input, the reservoir extrapolates the translation of the Lorenz attractor. Together, our results provide a simple but powerful mechanism by which a general dynamical system can learn to manipulate internal representations of complex information.
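As a concrete illustration of the kind of setup the abstract describes, the sketch below trains a small echo state network on two translated Lorenz trajectories, each paired with a constant fourth (context) input, using a ridge-regression readout, and then drives the closed-loop reservoir with an unseen context value. This is not the authors' implementation; the reservoir size, connectivity density, spectral radius, translation offsets, context values, integration step, and ridge penalty are all illustrative assumptions.

# Minimal echo-state-network sketch of the training setup described above.
# Not the authors' code; all parameter values are illustrative assumptions.
import numpy as np

def lorenz_trajectory(x0, n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with forward Euler from initial state x0."""
    traj = np.empty((n_steps, 3))
    x = np.array(x0, dtype=float)
    for t in range(n_steps):
        dx = np.array([sigma * (x[1] - x[0]),
                       x[0] * (rho - x[2]) - x[1],
                       x[0] * x[1] - beta * x[2]])
        x = x + dt * dx
        traj[t] = x
    return traj

rng = np.random.default_rng(0)
N = 500            # reservoir size (assumed)
density = 0.05     # sparse random connectivity (assumed)

# Sparse random reservoir, rescaled to a spectral radius below 1.
W = rng.normal(size=(N, N)) * (rng.random((N, N)) < density)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(N, 4))   # 3 state dimensions + 1 context input

def run_reservoir(inputs):
    """Drive the tanh reservoir with a (T, 4) input sequence; return its states."""
    r = np.zeros(N)
    states = np.empty((len(inputs), N))
    for t, u in enumerate(inputs):
        r = np.tanh(W @ r + W_in @ u)
        states[t] = r
    return states

# Two translated Lorenz attractors, each paired with a constant context value.
T = 5000
contexts = [-1.0, 1.0]                                       # assumed training contexts
shifts = [np.array([-10.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])]  # assumed translations

X_res, Y = [], []
for c, shift in zip(contexts, shifts):
    traj = lorenz_trajectory(rng.normal(size=3), T) + shift
    u = np.hstack([traj, np.full((T, 1), c)])
    states = run_reservoir(u)
    X_res.append(states[:-1])     # reservoir state at time t
    Y.append(traj[1:])            # target: next point on the attractor
X_res, Y = np.vstack(X_res), np.vstack(Y)

# Ridge-regression readout mapping reservoir states to the 3-D trajectory.
lam = 1e-4
W_out = np.linalg.solve(X_res.T @ X_res + lam * np.eye(N), X_res.T @ Y).T

# Extrapolation test: close the loop and drive with an unseen context value.
c_new = 2.0
x, r, out = np.zeros(3), np.zeros(N), []
for _ in range(2000):
    r = np.tanh(W @ r + W_in @ np.append(x, c_new))
    x = W_out @ r
    out.append(x)
print("mean of generated trajectory:", np.mean(out, axis=0))

Under this sketch's assumptions, a successful extrapolation would show the generated trajectory's mean shifted roughly proportionally to the new context value along the translation axis, mirroring the translation extrapolation reported in the abstract.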