Neural Information Processing Systems (NIPS) 2015 Symposium on Brains, Minds, and Machines

December 10th, 2015 | Palais des Congrès de Montréal, Canada

Read Gabriel Kreiman's review of the event

About the symposium

Today's science, tomorrow's engineering: we will discuss current results in the scientific understanding of intelligence and how these results enable new approaches to replicating intelligence in engineered systems.

Understanding intelligence and the brain requires theories at different levels, ranging from the biophysics of single neurons to algorithms, computations, and a theory of learning. In this symposium, we aim to bring together researchers from machine learning, artificial intelligence, neuroscience, and cognitive science to present and discuss state-of-the-art research focused on understanding intelligence at these different levels.

Central questions of the symposium include how intelligence is grounded in computation, how these computations are implemented in neural systems, how intelligence can be described via unifying mathematical theories, and how we can build intelligent machines based on these principles.

Our core goal is to develop a science of intelligence, which means understanding human intelligence and its basis in the circuits of the brain and the biophysics of neurons. We also believe that the engineering of tomorrow will need the science of today, just as the basic research of Hubel and Wiesel in the 1960s was the foundation for today's deep learning architectures.

Symposium Program

Tomaso Poggio
Director, Center for Brains, Minds and Machines
McGovern Institute, Brain and Cognitive Sciences Department, CSAIL, MIT

Brains, Minds and Machines:
Today’s Science, Tomorrow’s Engineering

The mission of CBMM is to make progress on the greatest problem in science: human intelligence. A new field is emerging that brings together computer scientists, cognitive scientists, and neuroscientists in a close collaboration dedicated to developing a computationally centered understanding of human intelligence and to establishing an engineering practice based on that understanding. I will describe the Turing++ Questions, their scientific role, and their potential impact on the engineering of tomorrow.

Christof Koch
President and Chief Scientific Officer
Allen Institute for Brain Science

The Neuroscience of Intelligence

Yesterday’s scientific research, starting with Hubel and Wiesel’s Nobel Prize-winning work on the circuitry underlying visual processing in cortex, gave rise to today’s deep machine learning networks. Likewise, today’s research into the neuronal basis of high-level cognition and intelligence in Homo sapiens should help with the future engineering of human-level AI. This talk will highlight what is known about the neuronal basis of intelligence and will describe an ongoing large-scale project to fully characterize the basic switching elements and their interconnections in the mouse and human neocortex.

Gabriel Kreiman
Associate Professor, Harvard Medical School

The Roles of Recurrent and Feedback Computations in Cortex

Recurrent connections are abundant throughout the brain, yet their functional roles remain poorly understood, and they are notoriously absent from the successful body of work on deep feed-forward architectures. In this talk, I will take inspiration from neurobiology to suggest possible computations that could be instantiated by recurrent connections. As a paradigmatic example, I will consider the problem of pattern completion, whereby we extrapolate and make inferences from partial information. Following Marr’s three-level description of visual processing, I will present behavioral, physiological, and computational evidence demonstrating how recurrent connections can help solve the problem of pattern completion.
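As a concrete, classical illustration of the kind of computation the abstract alludes to, the following minimal Python sketch shows a Hopfield-style recurrent network completing a stored pattern from partial input. The Hebbian storage rule, pattern sizes, and update scheme are illustrative assumptions of this write-up, not material from the talk.

import numpy as np

# Illustrative assumption: a classic Hopfield-style recurrent network,
# not code from the talk. Recurrent dynamics complete a partial pattern.
rng = np.random.default_rng(0)
n, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n))

# Hebbian weights: each stored pattern becomes an attractor of the dynamics.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

# Corrupt half of one pattern, then let the recurrent updates fill it in.
state = patterns[0].copy()
state[n // 2:] = rng.choice([-1, 1], size=n // 2)

for _ in range(10):              # synchronous sign-threshold updates
    state = np.sign(W @ state)
    state[state == 0] = 1        # break ties deterministically

print((state == patterns[0]).mean())  # fraction of units recovered, typically 1.0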

Andrew Saxe
Swartz Postdoctoral Fellow in Theoretical Neuroscience
Center for Brain Science, Harvard University

Hallmarks of Deep Learning in the Brain

Anatomically, the brain is deep. Understanding the ramifications of depth for learning in the brain requires a clear theory of deep learning. I develop the theory of gradient descent learning in deep linear neural networks, which gives exact quantitative answers to fundamental questions such as how learning speed scales with depth, how unsupervised pretraining speeds learning, and how internal representations change across a deep network. Several key hallmarks of deep learning are consistent with behavioral and neural observations. The theory can be further specialized for specific experimental paradigms. Taking perceptual learning as an example, I show that a deep learning theory accounts for neural tuning changes across the cortical hierarchy and predicts behavioral performance transfer to untrained tasks as a function of task precision, restricted position training, and learning time. Together, these findings suggest that depth may be a key factor constraining learning dynamics in the brain. A better scientific understanding should eventually contribute to engineering advances, and I discuss one example from this work: a class of scaled, orthogonal initializations that permit rapid training of very deep nonlinear networks. Joint work with Surya Ganguli and Jay McClelland.
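For the engineering example at the end of the abstract, the sketch below shows one common way to construct a scaled, orthogonal initialization in NumPy. The QR-based construction and the depth-20 norm check are illustrative assumptions, not code from the underlying work.

import numpy as np

def orthogonal_init(shape, gain=1.0, rng=None):
    """Draw a random (semi-)orthogonal weight matrix, scaled by `gain`."""
    rng = np.random.default_rng() if rng is None else rng
    rows, cols = shape
    # QR decomposition of a Gaussian matrix yields orthonormal columns.
    a = rng.standard_normal((max(rows, cols), min(rows, cols)))
    q, r = np.linalg.qr(a)
    q *= np.sign(np.diag(r))     # fix signs so Q is uniformly distributed
    if rows < cols:
        q = q.T
    return gain * q[:rows, :cols]

# Products of orthogonal matrices preserve norms, so forward activations
# (and, symmetrically, backpropagated gradients in the linear regime)
# neither explode nor vanish with depth.
rng = np.random.default_rng(0)
layers = [orthogonal_init((256, 256), rng=rng) for _ in range(20)]
x = rng.standard_normal(256)
norm_in = np.linalg.norm(x)
for W in layers:
    x = W @ x
print(norm_in, np.linalg.norm(x))  # norms agree after 20 layers

The gain parameter sets how the network sits relative to the boundary between vanishing and exploding signals; choosing it appropriately for a given nonlinearity is part of what such theories make precise.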

Surya Ganguli
Assistant Professor, Department of Applied Physics, Stanford University

Towards Glimpses of a New Science of Brains, Minds and Machines:
Weaving Together Physics, Computer Science, and Neurobiology

Our neural circuits exploit the laws of physics to perform computations in ways that are fundamentally different from traditional computers designed by these same neural circuits. To eradicate this irony, we must develop a new science of brains, minds and machines that seamlessly weaves together physics, computation and neurobiology to both elucidate the design principles governing neural systems, and instantiate these principles in physical devices. We will discuss several glimpses in such a direction, including: (1) understanding the speed with which both infants and deep neural circuits learn hierarchical structure, (2) exploiting the geometry of high-dimensional error surfaces to speed up learning, (3) exploiting ideas from non-equilibrium statistical mechanics to circumvent credit-assignment and mixing-time problems to learn very deep stochastic generative models, and (4) delineating fundamental theoretical limits on the energy, speed and accuracy of communication by any physically implementable device.

Demis Hassabis
Co-Founder & CEO, DeepMind
Vice President of Engineering, Google

Neuroscience and the Quest for AI

How systems neuroscience can help in the quest for Artificial General Intelligence

Joshua Tenenbaum
Professor, Department of Brain and Cognitive Sciences, MIT

Building Machines That Learn like Humans

What is the essence of human intelligence? What makes any human child smarter than any artificial intelligence system that has ever been built? Recent advances in machine learning and computer vision are extremely impressive as engineering accomplishments, but they are far from approaching learning and perception the way humans do. I will talk about this gap, highlighting the difference between a view of intelligence as pattern recognition, where the goal is to find invariant features for classification, and intelligence as causal modeling, where the goal is to build and reason with generative models of the world's causal structure. I will talk about the ways cognitive scientists are beginning to reverse-engineer human scene understanding and concept learning using methods from probabilistic programs and program induction, often complemented by deep learning, nonparametric Bayes, and other more conventional machine learning approaches. I hope to convince you that a deeper conversation between these fields can benefit us all, laying the foundations for more human-like approaches to artificial intelligence as well as a better understanding of human minds and brains in computational terms.
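As a toy illustration of the causal-modeling view (an assumption of this write-up, not an example from the talk), the following Python sketch defines a small generative model of a scene as a probabilistic program and inverts it by rejection sampling, inferring a posterior over a hidden cause from a partial observation.

import random

def scene():
    """A toy generative model: hidden causes produce an observable summary."""
    n_objects = random.randint(1, 3)               # hidden cause
    occluded = n_objects > 1 and random.random() < 0.3
    visible = n_objects - (1 if occluded else 0)   # what we get to observe
    return {"n_objects": n_objects, "visible": visible}

def infer_n_objects(observed_visible, n_samples=20_000):
    """Analysis by synthesis: run the model forward, keep consistent samples."""
    kept = [s["n_objects"]
            for s in (scene() for _ in range(n_samples))
            if s["visible"] == observed_visible]
    return {k: round(kept.count(k) / len(kept), 3) for k in sorted(set(kept))}

# Seeing two objects is consistent with two unoccluded objects or three
# partly occluded ones; the posterior weighs these causal explanations.
print(infer_n_objects(observed_visible=2))  # roughly {2: 0.7, 3: 0.3}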

Panel Discussion

Including all speakers and the following panelists:

Gary Marcus

Director, NYU Center for Language and Music
Geometric Intelligence

Terrence Sejnowski

Howard Hughes Medical Institute Investigator, Francis Crick Chair
Salk Institute for Biological Studies