We address the problem of teaching an RNN to approximate list-processing algorithms given a small number of input-output training examples. Our approach is to generalize the idea of parametricity from programming language theory to formulate a semantic property that distinguishes common algorithms from arbitrary non-algorithmic functions. This characterization leads naturally to a learned data augmentation scheme that encourages RNNs to learn algorithmic behavior and enables small-sample learning in a variety of list-processing tasks.

}, author = {Owen Lewis and Katherine Hermann} } @mastersthesis {4189, title = {Structured learning and inference with neural networks and generative models}, year = {2018}, author = {Owen Lewis} } @inbook {2286, title = {Object and Scene Perception}, booktitle = {From Neuron to Cognition via Computational Neuroscience}, year = {2016}, publisher = {The MIT Press}, organization = {The MIT Press}, chapter = {17}, address = {Cambridge, MA, USA}, abstract = {This textbook presents a wide range of subjects in neuroscience from a computational perspective. It offers a comprehensive, integrated introduction to core topics, using computational tools to trace a path from neurons and circuits to behavior and cognition. Moreover, the chapters show how computational neuroscience{\textemdash}methods for modeling the causal interactions underlying neural systems{\textemdash}complements empirical research in advancing the understanding of brain and behavior.

The chapters{\textemdash}all by leaders in the field, and carefully integrated by the editors{\textemdash}cover such subjects as action and motor control; neuroplasticity, neuromodulation, and reinforcement learning; vision; and language{\textemdash}the core of human cognition.

The book can be used for advanced undergraduate or graduate level courses. It presents all necessary background in neuroscience beyond basic facts about neurons and synapses and general ideas about the structure and function of the human brain. Students should be familiar with differential equations and probability theory, and be able to pick up the basics of programming in MATLAB and/or Python. Slides, exercises, and other ancillary materials are freely available online, and many of the models described in the chapters are documented in the brain operation database, BODB (which is also described in a book chapter).

}, isbn = {9780262034968}, url = {https://mitpress.mit.edu/neuron-cognition}, author = {Owen Lewis and Tomaso Poggio} } @conference {1754, title = {Metareasoning in Symbolic Domains}, booktitle = {NIPS Workshop | Bounded Optimality and Rational Metareasoning}, year = {2015}, month = {12/2015}, abstract = {Many AI problems, such as planning, grammar learning, program induction, and theory discovery, require searching in symbolic domains. Most models perform this search by evaluating a sequence of candidate solutions, generated in order by some heuristic. Human reasoning, though, is not limited to sequential trial and error. In particular, humans are able to reason *about* what the solution to a particular problem should look like, before comparing candidates against the data. In a program synthesis task, for instance, a human might first determine that the task at hand should be solved by a tail-recursive algorithm, before filling in the algorithm{\textquoteright}s details.

Reasoning in this way about solution structure confers at least two computational advantages. First, a given structure subsumes a potentially large collection of primitive solutions, and exploiting the constraints present in the structure{\textquoteright}s definition makes it possible to evaluate the collection in substantially less time than it would take to evaluate each member in turn. For example, a programmer might quickly conclude that a given algorithm cannot be implemented without recursion, without having to consider all possible non-recursive solutions. Second, it is often possible to estimate ahead of time the cost of evaluating different structures, allowing one to prioritize those that can be treated cheaply. In planning a route through an unfamiliar city, for example, one might first consider possibilities that use the subway exclusively, excluding for the moment ones that involve bus trips as well: if a successful subway-only solution can be found, one avoids the potentially exponentially more difficult bus-and-subway search problem.

Here, we consider a family of toy problems [1], in which an agent is given a balance scale, and is required to find a lighter counterfeit coin in a collection of genuine coins using at most some prescribed number of weighings. We develop a language for expressing solution structure that places restrictions on a set of *programs*, and use recent program synthesis techniques to search for a solution, encoded as a program, subject to hypothesized constraints on the program structure.