Paper: | PS-1A.42 |
Session: | Poster Session 1A |
Location: | Symphony/Overture |
Session Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation Time: | Thursday, September 6, 16:30 - 18:30 |
Presentation: | Poster |
Publication: | 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: | A Neural Microcircuit Model for a Scalable Scale-invariant Representation of Time |
DOI: | https://doi.org/10.32470/CCN.2018.1097-0 |
Authors: | Yue Liu, Zoran Tiganj, Michael Hasselmo, Marc Howard, Boston University, United States |
Abstract: | Scale-invariant timing has been observed in a wide range of behavioral experiments. The firing properties of recently described "time cells" provide a possible neural substrate for scale-invariant behavior. Earlier neural circuit models do not produce scale-invariant neural sequences. In this paper we present a biologically detailed network model based on an earlier mathematical algorithm. The simulations incorporate exponentially decaying persistent firing maintained by the calcium-activated nonspecific (CAN) cationic current and a network structure given by the inverse Laplace transform to generate time cells with scale-invariant firing rates. This model provides the first biologically detailed neural circuit for generating scale-invariant time cells. The circuit that implements the inverse Laplace transform consists merely of off-center/on-surround receptive fields. Critically, rescaling temporal sequences can be accomplished simply via cortical gain control (changing the slope of the f-I curve). Because of the generality of the Laplace transform and the flexibility of this neural mechanism, the proposed architecture could contribute to many neural computations over functions. |
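
The following is a minimal, illustrative Python sketch (not the authors' code) of the mechanism the abstract describes: a bank of exponentially decaying units encodes the Laplace transform of the input, and an approximate inverse Laplace transform (Post's formula, realized here as a k-th derivative across the decay-rate axis, i.e., an off-center/on-surround weighting of neighboring units) yields sequentially firing time cells whose tuning widths scale with their peak times. The function name simulate_time_cells and the parameters alpha, k, and the decay-rate range are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (assumed implementation, not the authors' code) of a
# scale-invariant time representation built from a Laplace transform of the
# input and an approximate inverse, as described in the abstract.

from math import factorial

import numpy as np


def simulate_time_cells(T=10.0, dt=0.001, n_units=60, k=8, alpha=1.0):
    """Deliver a brief input at t = 0 and read out time-cell activity.

    alpha uniformly rescales the decay rates and stands in for cortical gain
    control: alpha > 1 compresses the sequence, alpha < 1 stretches it.
    """
    # Logarithmically spaced decay rates give a scale-invariant tiling of time.
    s = alpha * np.logspace(0.0, 1.5, n_units)

    n_steps = int(T / dt)
    times = np.arange(n_steps) * dt
    F = np.zeros(n_units)                     # Laplace-domain units (persistent firing)
    time_cells = np.zeros((n_steps, n_units - 2 * k))

    for i in range(n_steps):
        inp = 1.0 / dt if i == 0 else 0.0     # delta-like input pulse at t = 0
        # Exponentially decaying persistent firing: dF/dt = -s * F + input.
        F += dt * (-s * F + inp)

        # Approximate inverse Laplace transform (Post's formula): the k-th
        # derivative of F with respect to s, computed from neighboring units,
        # i.e., an off-center/on-surround weighting across the bank.
        dF = F.copy()
        for _ in range(k):
            dF = np.gradient(dF, s)
        time_cells[i] = ((-1) ** k / factorial(k)) * s[k:-k] ** (k + 1) * dF[k:-k]

    return times, time_cells


if __name__ == "__main__":
    t, cells = simulate_time_cells(alpha=1.0)
    t2, cells2 = simulate_time_cells(alpha=2.0)
    # Peak times should scale by 1/alpha: doubling the gain halves every peak.
    print("peaks (alpha=1):", np.round(t[np.argmax(cells, axis=0)][::10], 2))
    print("peaks (alpha=2):", np.round(t2[np.argmax(cells2, axis=0)][::10], 2))
```

In this idealized setting each unit peaks near t = k/s, so doubling alpha should halve every peak time while leaving the relative shapes of the tuning curves unchanged, which is the signature of the scale-invariant rescaling by gain control that the abstract highlights.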