Paper: | RS-2A.1 |
Session: | Late Breaking Research 2A |
Location: | Late-Breaking Research Area |
Session Time: | Friday, September 7, 17:15 - 19:15 |
Presentation Time: | Friday, September 7, 17:15 - 19:15 |
Presentation: | Poster |
Publication: | 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania |
Paper Title: | A rational account of human memory search |
Authors: | Qiong Zhang, John Anderson, Carnegie Mellon University, United States |
Abstract: |
Performing everyday tasks requires the ability to search through and retrieve past memories. A central paradigm for studying human memory search is the semantic fluency task, in which participants are asked to retrieve as many items as possible from a category. Observed responses tend to be clustered semantically. To understand when the mind decides to switch from one cluster to the next, recent work has proposed two competing mechanisms. Under the first switching mechanism, people make a strategic decision to switch away from a depleted patch based on the marginal value theorem, similar to optimal foraging in a spatial environment. The second mechanism demonstrates that similar behavioral patterns can emerge from a random walk on a semantic network. In the current work, instead of comparing competing switching mechanisms against observed human data, we carry out a rational analysis examining what the optimal patch-switching policy would be under the framework of reinforcement learning. The reinforcement learning agent, a Deep Q-Network (DQN), is built upon the random walk model and allows strategic switches based on features of the local semantic patch. After learning from rewards, the agent's resulting policy gives rise to a third switching mechanism, which outperforms the previous two. |
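
The abstract describes the agent only at a high level. Below is a minimal sketch of how such a stay-or-switch DQN could look; the patch features, network sizes, and reward scheme are illustrative assumptions, not the authors' actual implementation.

    # Hypothetical sketch of a switching agent: at each retrieval step it
    # observes features of the current semantic patch and chooses between
    # "stay" (keep random-walking within the patch) or "switch" (jump to a
    # new patch). All feature names and hyperparameters are assumptions.
    import random
    import numpy as np
    import torch
    import torch.nn as nn

    N_FEATURES = 3   # e.g. similarity to last item, items retrieved from patch, time in patch (assumed)
    N_ACTIONS = 2    # 0 = stay in current patch, 1 = switch to a new patch

    class QNetwork(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_FEATURES, 32), nn.ReLU(),
                nn.Linear(32, N_ACTIONS),
            )

        def forward(self, x):
            return self.net(x)

    q_net = QNetwork()
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma, epsilon = 0.95, 0.1

    def choose_action(features):
        """Epsilon-greedy choice between staying and switching."""
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            q_values = q_net(torch.tensor(features, dtype=torch.float32))
        return int(q_values.argmax())

    def td_update(features, action, reward, next_features, done):
        """One-step temporal-difference update of the Q-network."""
        q_pred = q_net(torch.tensor(features, dtype=torch.float32))[action]
        with torch.no_grad():
            q_next = q_net(torch.tensor(next_features, dtype=torch.float32)).max()
        target = reward + (0.0 if done else gamma * q_next)
        loss = (q_pred - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Illustrative usage: reward of +1 for each newly retrieved item (assumed).
    state = np.array([0.8, 1.0, 1.0], dtype=np.float32)
    action = choose_action(state)
    next_state = np.array([0.6, 2.0, 2.0], dtype=np.float32)
    td_update(state, action, reward=1.0, next_features=next_state, done=False)

In this sketch, the learned Q-values over the two actions play the role of the third switching mechanism described above: the agent switches whenever the expected return of leaving the patch exceeds that of staying.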