Paper: RS-1B.3
Session: Late Breaking Research 1B
Location: Late-Breaking Research Area
Session Time: Thursday, September 6, 18:45 - 20:45
Presentation Time: Thursday, September 6, 18:45 - 20:45
Presentation: Poster
Publication: 2018 Conference on Cognitive Computational Neuroscience, 5-8 September 2018, Philadelphia, Pennsylvania
Paper Title: Automatically Composing Representation Transformations as a Means for Generalization
Authors: Michael Chang, Abhishek Gupta, Sergey Levine, Thomas Griffiths, University of California, Berkeley, United States
Abstract: Learning to generalize in heterogeneous multitask settings is a difficult challenge in machine learning because the learner must discover regularities that cover a combinatorially large space of problems from only limited data. Recent work in cognitive science has framed this challenge as one of program induction, recasting the problem of generalization as one of learning algorithmic procedures. While these approaches focus on searching through programs defined over a set of primitive operations, it is an open question where these primitives come from. To tackle this question, our approach investigates how to learn algorithmic procedures over learned representation transformations. The main idea is to formulate the construction of a computation graph as a sequential decision-making problem, in which a controller selects functions that iteratively transform the input to the output; this allows the controller to be trained with policy optimization algorithms and the functions with backpropagation. Experiments on a variety of multilingual arithmetic problems demonstrate that our method discovers the hierarchical decomposition of a problem into its subproblems, generalizes out of distribution to unseen problem classes, and extrapolates to harder versions of the same problem, yielding a 10-fold reduction in sample complexity compared to a monolithic recurrent neural network.
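The following is a minimal sketch (in PyTorch, not the authors' released code) of the formulation described in the abstract: a controller samples which learned function module to apply at each step, the controller is trained with a REINFORCE-style policy-gradient update, and the modules are trained by backpropagation through the task loss. The module architecture, the HALT action, and the reward definition are illustrative assumptions, not details taken from the paper.

# Minimal sketch (illustrative assumptions throughout) of composing learned
# representation transformations with a controller.
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, N_MODULES, MAX_STEPS = 16, 4, 5

# Learned representation transformations: each maps a state to a new state
# and is trained end-to-end with backpropagation through the task loss.
modules = nn.ModuleList(
    [nn.Sequential(nn.Linear(DIM, DIM), nn.ReLU()) for _ in range(N_MODULES)]
)

# Controller: a policy over which module to apply next; index N_MODULES is
# a hypothetical HALT action.  Trained with a policy-gradient update.
controller = nn.Linear(DIM, N_MODULES + 1)

def run_episode(x):
    """Iteratively transform x; return the final state and choice log-probs."""
    log_probs = []
    for _ in range(MAX_STEPS):
        dist = torch.distributions.Categorical(logits=controller(x))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        if action.item() == N_MODULES:   # controller chose to halt
            break
        x = modules[action.item()](x)    # apply the selected transformation
    return x, log_probs

# One hypothetical training step on a dummy regression target.
opt = torch.optim.Adam(
    list(modules.parameters()) + list(controller.parameters()), lr=1e-3
)
x, target = torch.randn(1, DIM), torch.randn(1, DIM)
out, log_probs = run_episode(x)
task_loss = F.mse_loss(out, target)          # gradients train the modules
reward = -task_loss.detach()                 # scalar reward for the controller
policy_loss = -reward * torch.stack(log_probs).sum()  # REINFORCE-style term
opt.zero_grad()
(task_loss + policy_loss).backward()
opt.step()

Note that the single backward pass serves both learners: the task loss reaches the modules through the chain of applied transformations, while the policy-gradient term reaches the controller through the sampled action log-probabilities, matching the abstract's split between backpropagation and policy optimization.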