Seminars

Brains, Minds and Machines Seminar Series: The Integrated Information Theory of Consciousness

Sep 23, 2014 - 4:00 pm
Dr. Christof Koch, Chief Scientific Officer - Allen Institute for Brain Science
Venue: MIT: McGovern Institute Singleton Auditorium, 46-3002
Address: 43 Vassar Street, MIT Bldg 46, Cambridge, MA 02139, United States
Speaker/s: Dr. Christof Koch, Chief Scientific Officer - Allen Institute for Brain Science

Abstract:

The science of consciousness has made great strides by focusing on the behavioral and neuronal correlates of experience. However, such correlates are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, pre-term infants, non-mammalian species, and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need a theory of consciousness – one that says what experience is and what type of physical systems can have it. Giulio Tononi’s Integrated Information Theory (IIT) does so by starting from conscious experience itself via five phenomenological axioms of existence, composition, information, integration, and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience, and a calculus to evaluate whether or not a particular system of mechanisms is conscious and of what. Moreover, IIT can explain a range of clinical and laboratory findings, makes a number of testable predictions, and extrapolates to a number of unusual conditions. In sharp contrast with widespread functionalist beliefs, IIT implies that digital computers, even if their behavior were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.
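
The quantitative core of IIT is the claim that consciousness corresponds to information a system generates as a whole, over and above what its parts generate separately. The toy sketch below is our illustration of that flavor only, not Tononi's actual Φ calculus (which operates on cause-effect repertoires and a distance metric over them): for a small binary system with a known transition matrix, it compares the temporal information carried by the whole against the best any bipartition of the units can recover.

```python
# Toy proxy for "integration": whole-system temporal mutual information
# minus the best sum of part-wise mutual informations over bipartitions.
# NOT the IIT 3.0 calculus; a pedagogical simplification.
import itertools
import numpy as np

def mutual_info(joint):
    # Mutual information in bits of a 2-D joint distribution.
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa * pb)[nz])).sum())

def project(joint, bits, n):
    # Marginalize the joint over (state_t, state_t+1) onto a subset of units.
    k = len(bits)
    sub = np.zeros((2 ** k, 2 ** k))
    for s in range(2 ** n):
        for t in range(2 ** n):
            a = sum(((s >> b) & 1) << i for i, b in enumerate(bits))
            c = sum(((t >> b) & 1) << i for i, b in enumerate(bits))
            sub[a, c] += joint[s, t]
    return sub

def integration(T, n):
    # Positive score: the whole carries information no bipartition captures.
    p0 = np.full(2 ** n, 1 / 2 ** n)          # uniform perturbation of inputs
    joint = p0[:, None] * T
    whole = mutual_info(joint)
    best_parts = 0.0
    units = range(n)
    for r in range(1, n // 2 + 1):
        for part in itertools.combinations(units, r):
            rest = [u for u in units if u not in part]
            mi = mutual_info(project(joint, list(part), n)) + \
                 mutual_info(project(joint, rest, n))
            best_parts = max(best_parts, mi)
    return whole - best_parts

# Example: two units that each copy the *other* unit's previous state.
def copy_other():
    T = np.zeros((4, 4))
    for s in range(4):
        a, b = s & 1, (s >> 1) & 1
        T[s, (a << 1) | b] = 1.0   # unit0 <- old unit1, unit1 <- old unit0
    return T

print(integration(copy_other(), 2))  # 2.0 > 0: parts alone miss the swap
```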

Bio:

Born in the American Midwest, Christof Koch grew up in Holland, Germany, Canada, and Morocco. He studied Physics and Philosophy at the University of Tübingen in Germany and was awarded his Ph.D. in Biophysics in 1982. After four years at MIT, Christof was a Professor of Biology and Engineering at the California Institute of Technology in Pasadena, California, from 1986 until 2013. In 2011, he became the Chief Scientific Officer at the Allen Institute for Brain Science in Seattle, where he leads a ten-year, high-throughput effort of several hundred scientists building brain observatories to catalogue, map, analyze, and understand the cerebral cortex in humans and mice. He loves books, dogs, climbing, biking, and long-distance running.

Christof has authored more than 300 scientific papers and articles, eight patents, and five books about the biophysics of nerve cells, attention, visual perception, and the brain basis of consciousness. He has worked extensively with neurosurgeons, neuroscientists, physicists, computer scientists, and philosophers, and is an engaging and frequent public speaker. Together with his long-time collaborator Francis Crick, Christof pioneered the scientific study of consciousness. His latest book is Consciousness: Confessions of a Romantic Reductionist.

Organizer: Tomaso Poggio

CBMM Weekly Research Meeting: Parsing Objects and Scenes in Two- and Three-Dimensions

May 16, 2014 - 4:00 pm
Alan L. Yuille
Venue: MIT: McGovern Institute Singleton Auditorium, 46-3002
Address: 43 Vassar Street, MIT Bldg 46, Cambridge, MA 02139, United States
Speaker/s: Alan L. Yuille, Professor & Investigator, UCLA

Topic: Progress on CBMM Challenge

Abstract:

We continue the series of weekly discussions and reports on each CBMM challenge question, describing progress and problems of ongoing work at CBMM.

Thrust 5 is focused on models for the CBMM challenge that can answer CBMM challenge questions while being consistent with human behavior and neural data. This talk presents three recent studies on detecting and parsing objects and scenes and discusses how they contribute to the CBMM challenge. We first address the “what?” problem of detecting animals and animal parts (in a newly labelled dataset) and show the advantages of part-sharing (X. Chen et al. 2014). Next, within the same “what?” problem, we describe an approach to parse humans and estimate their three-dimensional structure from single images (C. Chen et al., CVPR 2014). Finally, we describe “psychophysics in the wild” for rapid detection of objects in complex scenes in a newly labelled dataset (Y. Li et al., CVPR 2014). We conclude by discussing how these approaches should be extended to meet the CBMM challenge and how they connect to other efforts at CBMM.
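
As a rough illustration of why part-sharing helps (a hypothetical toy, not the hierarchical compositional models of X. Chen et al. 2014; the templates and object definitions below are invented): when several object models reuse the same part templates, each shared part is scored against the image once and its response map cached, so detection cost grows with the number of distinct parts rather than with objects times parts.

```python
# Minimal part-sharing sketch: shared part templates are evaluated once
# per image; every object model combines cached part responses.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# Hypothetical 5x5 part templates; two objects reuse several of them.
parts = {name: rng.random((5, 5)) for name in ["head", "leg", "torso", "wing"]}
objects = {
    "horse": ["head", "leg", "leg", "torso"],
    "bird":  ["head", "wing", "torso"],
}

def score_part(img, template):
    # Dense cross-correlation of one part template with the image.
    H, W = img.shape
    h, w = template.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+h, j:j+w] * template).sum()
    return out

# With sharing: each distinct part is scored once...
part_maps = {name: score_part(image, t) for name, t in parts.items()}
# ...and each object naively sums the max response of its cached parts.
object_scores = {obj: sum(part_maps[p].max() for p in plist)
                 for obj, plist in objects.items()}
print(object_scores)  # 4 part evaluations instead of 7 (4 + 3)
```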

Special Seminar: Computational diversity and the mesoscale organization of the neocortex

Apr 22, 2014 - 4:00 pm
Gary Marcus
Venue: MIT: McGovern Institute Singleton Auditorium, 46-3002
Address: 43 Vassar Street, MIT Bldg 46, Cambridge, MA 02139, United States
Speaker/s: Gary Marcus, Professor of Psychology at NYU and Visiting Cognitive Scientist at the Allen Institute for Brain Science, with Adam Marblestone, Harvard University, and Tom Dean, Google

Abstract:

The human neocortex participates in a wide range of tasks, yet superficially appears to adhere to a relatively uniform six-layered architecture throughout its extent. For that reason, much research has been devoted to characterizing a single “canonical” cortical computation, repeated massively throughout the cortex, with differences between areas presumed to arise from their inputs and outputs rather than from “intrinsic” properties. There is as yet no consensus, however, about what such a canonical computation might be, little evidence that uniform systems can capture abstract and symbolic computation (e.g., language), and little contact between proposals for a single canonical circuit and complexities such as differential gene expression across the cortex or the diversity of neuron and synapse types. Here, we evaluate and synthesize diverse evidence for a different way of thinking about neocortical architecture, which we believe to be more compatible with evolutionary and developmental biology, as well as with the inherent diversity of cortical functions. In this conception, the cortex is composed of an array of reconfigurable computational blocks, each capable of performing a variety of distinct operations, and possibly evolved through duplication and divergence. The computation performed by each block depends on its internal configuration. Area-specific specialization arises as a function of differing configurations of the local logic blocks, area-specific long-range axonal projection patterns, and area-specific properties of the input. This view provides a possible framework for integrating detailed knowledge of cortical microcircuitry with computational characterizations.
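
The “reconfigurable block” picture is loosely analogous to an FPGA: identical blocks, distinct local configurations. The sketch below is our own hypothetical rendering of the metaphor (the operation names, wiring, and parameters are invented for illustration, not a model from the talk): two “areas” built from the same block type compute differently purely through configuration and projection weights.

```python
# Hypothetical "logic block" sketch: fixed interface, configurable
# computation; area-specific function = configuration + wiring.
import numpy as np

OPERATIONS = {
    "gain":      lambda x, w: w @ x,                   # linear relay
    "threshold": lambda x, w: np.maximum(w @ x, 0.0),  # rectification
    "normalize": lambda x, w: (w @ x) / (1e-6 + np.abs(x).sum()),
    "winner":    lambda x, w: (w @ x == (w @ x).max()).astype(float),
}

class Block:
    """One reconfigurable block: same interface, different operation."""
    def __init__(self, config, n_in, n_out, rng):
        self.op = OPERATIONS[config]
        self.w = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
    def __call__(self, x):
        return self.op(x, self.w)

rng = np.random.default_rng(1)
# Identical block types, area-specific configurations.
visual_area = [Block("threshold", 8, 8, rng), Block("normalize", 8, 8, rng)]
decision_area = [Block("gain", 8, 8, rng), Block("winner", 8, 8, rng)]

x = rng.standard_normal(8)
for area in (visual_area, decision_area):
    y = x
    for block in area:      # chain the blocks within one area
        y = block(y)
    print(y)                # same input, area-specific output
```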

Biography:

Gary Marcus, Professor of Psychology at NYU and Visiting Cognitive Scientist at the Allen Institute for Brain Science, is the author of four books, including the New York Times bestseller Guitar Zero. He frequently blogs for The New Yorker and is co-editor of the forthcoming book The Future of the Brain: Essays by the World’s Leading Neuroscientists. His research on language, evolution, computation, and cognitive development has been published widely in leading journals such as Science and Nature.

This talk is part of the Brains, Minds & Machines Seminar Series 2013-2014.

Special Seminar: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies

Mar 6, 2014 - 11:45 am
Alexander V. Terekhov
Venue: Harvard University: Northwest Bldg, Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Alexander V. Terekhov, Postdoc, Institute for Intelligent Systems and Robotics, Paris Descartes University (Paris 5)

Abstract:

The brain sitting inside its bony cavity sends and receives myriads of sensory inputs and outputs. A problem that must be solved either in ontogeny or phylogeny is how to extract the particular characteristics within this “blooming buzzing confusion” that signal the existence and nature of physical space, with structured objects immersed in it, among them the agent’s body. The idea that spatial knowledge must be extracted from the sensorimotor flow in order to underlie perception has been considered by a number of thinkers, including Helmholtz, Poincaré, Nicod, and Gibson. However, little work has considered how this could actually be done by organisms without a priori knowledge of the nature of their sensors and effectors. Here we show how an agent with arbitrary sensors will naturally discover spatial knowledge from the undifferentiated sensorimotor flow. The method first involves tabulating sensorimotor contingencies, that is, the laws linking sensory and motor variables. Second, further laws are created linking these sensorimotor contingencies together. The method works without any prior knowledge about the structure of the agent’s sensors, body, or world. We show that the extracted laws endow the agent with basic spatial knowledge, manifesting itself through perceptual shape constancy and the ability to do path integration. We further show that the agent’s ability to learn all spatial dimensions depends on its ability to move in all those dimensions, rather than on possessing a sensor of that dimensionality. This latter result suggests, for example, that three-dimensional space can be learned despite the fact that the retinas are two-dimensional. We conclude by showing how the acquired spatial knowledge paves the way to building the notion of an object.
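
A drastically simplified sketch of the dimensionality claim (our toy, not the authors' contingency-tabulation algorithm; the sensor model and parameters are invented): an agent with 40 arbitrary, uninterpreted sensors wanders in a 2-D world, and local PCA on the visited sensory states recovers the dimensionality of the motion space rather than of the sensor array.

```python
# Toy: sensory states generated by 2-D motion lie, locally, on a
# ~2-D manifold in 40-D sensor space, whatever the sensors are.
import numpy as np

rng = np.random.default_rng(0)
D_SPACE, D_SENSORS = 2, 40

# Smooth sensor map, unknown to the agent: position -> 40 readings.
centers = rng.uniform(0, 1, (D_SENSORS, D_SPACE))
def sense(pos):
    return np.exp(-np.sum((pos - centers) ** 2, axis=1) / 0.05)

# The agent wanders randomly and tabulates its sensory states.
pos = np.array([0.5, 0.5])
states = []
for _ in range(2000):
    pos = np.clip(pos + rng.normal(0, 0.03, D_SPACE), 0, 1)
    states.append(sense(pos))
X = np.array(states)

# Local PCA around one visited state: only 2 motion degrees of
# freedom generated the nearby cloud, so ~2 components dominate.
x0 = X[1000]
nearest = X[np.argsort(((X - x0) ** 2).sum(axis=1))[:50]]
eigvals = np.linalg.svd(nearest - nearest.mean(0), compute_uv=False) ** 2
print(eigvals[:5] / eigvals.sum())  # variance concentrates on ~2 components
```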

Biography:

Alexander Terekhov was born in Moscow, Russia in 1981. He received his B.S. (2003) and Ph.D. (2007) in Applied Mathematics from Moscow State University. His early work focused mainly on biomechanics and the control of human movements. As a postdoc at Penn State (2007-2008), he identified uniqueness conditions for the inverse optimization problem. This result was used to develop an algorithm for cost-function identification, which was applied to various human motor activities in healthy populations as well as in patients. In 2008 he switched to engineering and joined Movicom Ltd in Moscow to develop a system for aerial coverage of sporting events, which was used at the 2014 Winter Olympics. In 2009 he made a sharp turn in his career, switching to perception studies: first haptics, where he contributed to psychophysical and neurophysiological studies, and later sensorimotor theory in general. He is one of the main developers and popularizers of the formal sensorimotor theory of perception. Currently he works as a postdoctoral fellow at Paris Descartes University, where he studies how naive agents (biological or artificial) can learn such fundamental perceptual notions as ‘space’, ‘body’, ‘object’, ‘tool’, and ‘color’.

Organizer: Kenneth Blum

Special Seminar: What is the information content of an algorithm?

Nov 7, 2013 - 3:00 pm
Joachim M. Buhmann
Venue: MIT: Ray and Maria Stata Center - Star Conference Room, 32-D463
Address: 32 Vassar Street, MIT Bldg 32, Cambridge, MA 02139, United States
Speaker/s: Joachim M. Buhmann, Machine Learning Laboratory in the Department of Computer Science at ETH Zurich

Abstract:

Algorithms are exposed to randomness in the input or noise during the computation. How well can they preserve the information in the data with respect to the output space? Algorithms, especially in machine learning, are required to generalize over input fluctuations or randomization during execution. This talk elaborates a new framework to measure the “informativeness” of algorithmic procedures and their “stability” against noise. An algorithm is considered to be a noisy channel which is characterized by a generalization capacity (GC). The generalization capacity objectively ranks different algorithms for the same data processing task based on the bit rate of their respective capacities. The problem of grouping data is used to demonstrate this validation principle for clustering algorithms, e.g., k-means, pairwise clustering, normalized cut, adaptive ratio cut, and dominant set clustering. Our new validation approach selects the most informative clustering algorithm, which filters out the maximal number of stable, task-related bits relative to the underlying hypothesis class. The concept also enables us to measure how many bits are extracted by sorting algorithms when the input, and thereby the pairwise comparisons, are subject to fluctuations.
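
A stability-flavored proxy for this idea can be sketched in a few lines (a rough illustration, not Buhmann's actual generalization-capacity computation, which is defined information-theoretically over hypothesis sets): cluster two independently perturbed copies of the same data and measure, in bits, how much the two solutions agree on the common points; stable, task-related structure survives the noise.

```python
# Stability proxy: bits of agreement between clusterings of two
# noisy copies of the same data, evaluated on the unperturbed points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
base = np.concatenate([rng.normal(m, 0.5, (100, 2))
                       for m in ([0, 0], [4, 0], [0, 4])])  # 3 true clusters

def stable_bits(k, noise=0.3, trials=10):
    bits = []
    for _ in range(trials):
        a = KMeans(n_clusters=k, n_init=10).fit(base + rng.normal(0, noise, base.shape))
        b = KMeans(n_clusters=k, n_init=10).fit(base + rng.normal(0, noise, base.shape))
        la, lb = a.predict(base), b.predict(base)
        bits.append(mutual_info_score(la, lb) / np.log(2))  # nats -> bits
    return np.mean(bits)

for k in (2, 3, 5, 8):
    # The 3 real clusters are worth ~log2(3) ≈ 1.6 stable bits;
    # splits beyond k = 3 are noise-driven and add few stable bits.
    print(k, round(stable_bits(k), 2))
```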

Biography:

Joachim M. Buhmann leads the Machine Learning Laboratory in the Department of Computer Science at ETH Zurich, where he has been a full professor of Information Science and Engineering since October 2003. He studied physics at the Technical University of Munich and obtained his PhD in Theoretical Physics. As a postdoc and research assistant professor, he spent 1988-92 at the University of Southern California, Los Angeles, and at the Lawrence Livermore National Laboratory. He held a professorship for applied computer science at the University of Bonn, Germany, from 1992 to 2003. His research interests span the areas of pattern recognition and data analysis, including machine learning, statistical learning theory, and information theory. Application areas of his research include image analysis, medical imaging, acoustic processing, and bioinformatics. Currently, he serves as president of the German Pattern Recognition Society.

This talk is part of the Brains, Minds & Machines Seminar Series 2013-2014.

Organizers: Tomaso Poggio, Lorenzo Rosasco

Special Seminar: Understanding the building blocks of neural computation: Insights from connectomics and theory

Oct 10, 2013 - 3:30 pm
Dmitri “Mitya” Chklovskii
Venue: MIT: McGovern Institute Singleton Auditorium, 46-3002
Speaker/s: Dmitri “Mitya” Chklovskii, Janelia Farm, HHMI

Abstract:

Animal behaviour arises from computations in neuronal circuits, but our understanding of these computations has been frustrated by the lack of detailed synaptic connection maps, or connectomes. For example, despite intensive investigations over half a century, the neuronal implementation of local motion detection in the insect visual system remains elusive. We developed a semi-automated pipeline using electron microscopy to reconstruct a connectome, containing 379 neurons and 8,637 chemical synaptic contacts, within the Drosophila optic medulla. By matching reconstructed neurons to examples from light microscopy, we assigned neurons to cell types and assembled a connectome of the repeating module of the medulla. Within this module, we identified cell types constituting a motion detection circuit, and showed that the connections onto individual motion-sensitive neurons in this circuit were consistent with their direction selectivity. Our identification of cell types involved in motion detection allowed targeting of extremely demanding electrophysiological recordings by other labs. Preliminary results from such recordings are consistent with a correlation-based motion detector. This demonstrates that connectomes can provide key insights into neuronal computations.
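
The classic correlation-based motion detector the abstract alludes to is the Hassenstein-Reichardt correlator. Below is a minimal sketch of that textbook model (not the specific medulla circuit reconstructed in the talk): each pathway delays one input and multiplies it with its neighbor, and subtracting the mirror-image pathway yields a direction-selective output.

```python
# Hassenstein-Reichardt correlator: delay-and-correlate two neighboring
# inputs, subtract the mirror pathway -> sign encodes motion direction.
import numpy as np

def hassenstein_reichardt(left, right, delay):
    d_left = np.roll(left, delay)    # delayed copy of the left input
    d_right = np.roll(right, delay)  # delayed copy of the right input
    return d_left * right - d_right * left

# A bright bar drifting rightward: the right sensor sees the left
# sensor's signal 'delay' samples later.
t = np.arange(200)
delay = 5
left = np.exp(-((t - 80) ** 2) / 50.0)
right = np.roll(left, delay)

rightward = hassenstein_reichardt(left, right, delay).sum()
leftward = hassenstein_reichardt(right, left, delay).sum()
print(rightward > 0 > leftward)  # True: opposite signs for opposite motion
```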

Organizer: Tomaso Poggio
