%0 Conference Paper %B ICLR 2019 %D 2019 %T Data for free: Fewer-shot algorithm learning with parametricity data augmentation %A Owen Lewis %A Katherine Hermann %X

We address the problem of teaching an RNN to approximate list-processing algorithms given a small number of input-output training examples. Our approach is to generalize the idea of parametricity from programming language theory to formulate a semantic property that distinguishes common algorithms from arbitrary non-algorithmic functions. This characterization leads naturally to a learned data augmentation scheme that encourages RNNs to learn algorithmic behavior and enables small-sample learning in a variety of list-processing tasks.
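A minimal sketch of the idea behind the abstract (illustrative only; the paper describes a learned augmentation scheme, whereas this uses random relabelings, and the function names and vocabulary are assumptions): a parametric list-processing function such as reverse commutes with any relabeling of element values, so applying the same bijection to both sides of a training pair yields new valid training pairs.

```python
# Sketch of parametricity-driven data augmentation (hypothetical, not the
# authors' code). A parametric function f satisfies
#   f(map(g, xs)) == map(g, f(xs))
# for any bijection g over element values, so relabeled copies of an
# input-output pair are also valid demonstrations of f.
import random

def augment(pairs, vocab, n_copies=3, seed=0):
    """Create extra (input, output) pairs by applying one random bijection
    over `vocab` consistently to both sides of each example."""
    rng = random.Random(seed)
    out = []
    for xs, ys in pairs:
        for _ in range(n_copies):
            shuffled = vocab[:]
            rng.shuffle(shuffled)
            g = dict(zip(vocab, shuffled))  # a random relabeling g
            out.append(([g[x] for x in xs], [g[y] for y in ys]))
    return out

# Two demonstrations of `reverse`; each spawns three relabeled variants.
vocab = list(range(10))
pairs = [([1, 2, 3], [3, 2, 1]), ([4, 5], [5, 4])]
for xs, ys in augment(pairs, vocab):
    assert list(reversed(xs)) == ys  # augmented pairs still demonstrate reverse
```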

%B ICLR 2019 %8 04/2019 %G eng %0 Thesis %D 2018 %T Structured learning and inference with neural networks and generative models %A Owen Lewis %G eng %0 Book Section %B From Neuron to Cognition via Computational Neuroscience %D 2016 %T Object and Scene Perception %A Owen Lewis %A Tomaso Poggio %X

Overview

This textbook presents a wide range of subjects in neuroscience from a computational perspective. It offers a comprehensive, integrated introduction to core topics, using computational tools to trace a path from neurons and circuits to behavior and cognition. Moreover, the chapters show how computational neuroscience—methods for modeling the causal interactions underlying neural systems—complements empirical research in advancing the understanding of brain and behavior.

The chapters—all by leaders in the field, and carefully integrated by the editors—cover such subjects as action and motor control; neuroplasticity, neuromodulation, and reinforcement learning; vision; and language—the core of human cognition.

The book can be used for advanced undergraduate or graduate level courses. It presents all necessary background in neuroscience beyond basic facts about neurons and synapses and general ideas about the structure and function of the human brain. Students should be familiar with differential equations and probability theory, and be able to pick up the basics of programming in MATLAB and/or Python. Slides, exercises, and other ancillary materials are freely available online, and many of the models described in the chapters are documented in the brain operation database, BODB (which is also described in a book chapter).

Available now through MIT Press - https://mitpress.mit.edu/neuron-cognition

%B From Neuron to Cognition via Computational Neuroscience %I The MIT Press %C Cambridge, MA, USA %@ 9780262034968 %G eng %U https://mitpress.mit.edu/neuron-cognition %& 17 %0 Conference Paper %B NIPS Workshop | Bounded Optimality and Rational Metareasoning %D 2015 %T Metareasoning in Symbolic Domains %A Kevin Ellis %A Owen Lewis %X

Many AI problems, such as planning, grammar learning, program induction, and theory discovery, require searching in symbolic domains. Most models perform this search by evaluating a sequence of candidate solutions, generated in order by some heuristic. Human reasoning, though, is not limited to sequential trial and error. In particular, humans are able to reason about what the solution to a particular problem should look like, before comparing candidates against the data. In a program synthesis task, for instance, a human might first determine that the task at hand should be solved by a tail-recursive algorithm, before filling in the algorithm's details.

Reasoning in this way about solution structure confers at least two computational advantages. First, a given structure subsumes a potentially large collection of primitive solutions, and exploiting the constraints present in the structure's definition makes it possible to evaluate the collection in substantially less time than it would take to evaluate each in turn. For example, a programmer might quickly conclude that a given algorithm cannot be implemented without recursion, without having to consider all possible non-recursive solutions. Second, it is often possible to estimate ahead of time the cost of evaluating different structures, making it possible to prioritize those that can be treated cheaply. In planning a route through an unfamiliar city, for example, one might first consider possibilities that use the subway exclusively, excluding for the moment ones that also involve bus trips: if a subway-only solution can be found, one avoids the (potentially) exponentially more difficult bus-and-subway search problem.

Here, we consider a family of toy problems [1], in which an agent is given a balance scale and must find a lighter counterfeit coin among a collection of genuine coins using at most a prescribed number of weighings. We develop a language for expressing solution structure that places restrictions on a set of programs, and use recent program synthesis techniques to search for a solution, encoded as a program, subject to hypothesized constraints on the program's structure.
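As a concrete illustration of the kind of structured solution the abstract refers to (a hypothetical sketch, not the paper's synthesis system; the function names and the particular strategy are assumptions), the code below hand-writes one candidate strategy whose structure is "weigh two equal thirds of the remaining suspects" and verifies it exhaustively against every possible counterfeit within a weighing budget. A synthesis system would instead search over programs of this shape rather than writing one by hand; note that the structural constraint alone already bounds the cost at ceil(log3(n)) weighings, before any candidate is simulated.

```python
# Hypothetical sketch of the balance-scale puzzle: a hand-coded ternary-split
# strategy, checked against every possible position of the lighter counterfeit.

def ternary_search(coins, weigh):
    """Find the single lighter coin using a weigh(left, right) oracle that
    returns -1 / 1 / 0 when the left / right pan is lighter / the pans balance."""
    suspects = list(coins)
    weighings = 0
    while len(suspects) > 1:
        third = (len(suspects) + 2) // 3
        left, right = suspects[:third], suspects[third:2 * third]
        rest = suspects[2 * third:]
        outcome = weigh(left, right)
        weighings += 1
        suspects = left if outcome == -1 else right if outcome == 1 else rest
    return suspects[0], weighings

# Verify the strategy for 9 coins and a budget of 2 weighings.
n, budget = 9, 2
coins = list(range(n))
for fake in coins:
    oracle = lambda L, R, fake=fake: -1 if fake in L else (1 if fake in R else 0)
    found, used = ternary_search(coins, oracle)
    assert found == fake and used <= budget
print(f"ternary strategy solves all {n} cases within {budget} weighings")
```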

%B NIPS Workshop | Bounded Optimality and Rational Metareasoning %8 12/2015 %G eng %U https://sites.google.com/site/boundedoptimalityworkshop/