Talks

Brains, Minds and Machines Seminar Series: Body-Brain Interface: Neuroanatomical and Functional Insights from the Primate Insular Cortex

Oct 26, 2018 - 4:00 pm
Venue: Singleton Auditorium (MIT 46-3002)
Address: 43 Vassar Street, Cambridge MA 02139
Speaker/s: Henry Evrard, Head of Research Group CIN Functional and Comparative Neuroanatomy, Werner Reichardt Center for Integrative Neuroscience, Max Planck Institute for Biological Cybernetics

Abstract: Interoception substantiates embodied feelings and shapes cognitive processes, including perceptual awareness. My lab combines architectonics, tract-tracing, electrophysiology, direct electrical stimulation fMRI (DES-fMRI), neural event triggered fMRI (NET-fMRI) and optogenetics in the macaque monkey to examine the neuroanatomical and functional organization of the insular cortex, one of the key central interfaces between bodily and brain states. Our anatomical examination revealed that the insular cortex is organized according to a refined and highly consistent modular Bauplan in which architectonics and hodology overlap perfectly. Hodological and functional examinations suggest that the insula contains a granular-to-dysgranular-to-agranular processing flow in which interoceptive afferents are progressively integrated with self-agency and socially relevant activities from other parts of the brain, culminating in an ultimate representation of instantaneous physiological states in the anterior insula. The anterior insula contains distinct areas, each with specific projections. One of these areas specifically contains the atypical spindle-shaped von Economo neuron (VEN). A relatively high proportion of VENs project to distant preautonomic midbrain regions. Recording and stimulation in the 'VEN area' confirmed the connection with these regions and highlighted prominent functional relations to high-order cortical areas, supporting the idea that the VEN area could serve as a hub for the simultaneous interoceptive shaping of polymodal perceptual experience and high-order regulation of bodily states.

Organizer: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: What information dynamics can tell us about ... brains

Jul 24, 2018 - 11:00 am
Venue: Singleton Auditorium (MIT 46-3002)
Address: MIT Brain and Cognitive Sciences Complex (MIT Bldg 46), 43 Vassar St., Cambridge MA 02139
Speaker/s: Dr. Joseph T. Lizier, The University of Sydney

Abstract: 

The space-time dynamics of interactions in neural systems are often described using the terminology of information processing, or computation, in particular with reference to information being stored, transferred and modified in these systems. In this talk, we describe an information-theoretic framework -- information dynamics -- that we have used to quantify each of these operations on information, and their dynamics in space and time. Not only does this framework quantitatively align with natural qualitative descriptions of neural information processing, it also provides multiple complementary perspectives on how, where and why a system is exhibiting complexity. We will review the application of this framework in computational neuroscience, describing what it can reveal, and indeed has revealed, in this domain. First, we discuss examples of characterising behavioural regimes and responses in terms of information processing, including under different neural conditions and around critical states. Next, we show how the space-time dynamics of information storage, transfer and modification directly reveal how distributed computation is implemented in a system, highlighting information-processing hot-spots and emergent computational structures, and providing evidence for conjectures on neural information processing such as predictive coding theory. Finally, via applications to several models of dynamical networks and human brain images, we demonstrate how information dynamics relates the structure of complex networks to their function, and how it can invert such analysis to infer structure from dynamics.
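As a rough illustration of the central quantity the framework builds on, the sketch below (my illustration, not the speaker's code; his group's JIDT toolkit provides properly validated estimators) computes a plug-in estimate of transfer entropy with history length 1 for two short binary sequences in which one sequence drives the other with a one-step lag.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(X -> Y), history length 1,
    for two equal-length binary sequences; result is in bits."""
    n = len(y) - 1
    triple = Counter((y[t + 1], y[t], x[t]) for t in range(n))
    pair_yy = Counter((y[t + 1], y[t]) for t in range(n))
    pair_yx = Counter((y[t], x[t]) for t in range(n))
    single_y = Counter(y[t] for t in range(n))
    te = 0.0
    for (y1, y0, x0), c in triple.items():
        p = c / n
        # log ratio of conditionals: p(y1 | y0, x0) / p(y1 | y0)
        te += p * log2((c / pair_yx[(y0, x0)])
                       / (pair_yy[(y1, y0)] / single_y[y0]))
    return te

# X is copied into Y with a one-step lag, so X's past fully
# resolves Y's future and the transfer entropy is large.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
y = [0] + x[:-1]
print(transfer_entropy(x, y))
```

Estimators like this underpin the "information transfer" component of the framework; the storage and modification components are quantified analogously from the same joint distributions.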


This event is organized by the CBMM Trainee Leadership Council.

Organizer: Wiktor Młynarski
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Transformative Generative Models

Jul 2, 2018 - 4:00 pm
Venue: Singleton Auditorium (MIT 46-3002)
Address: Brain and Cognitive Sciences Complex (MIT Bldg. 46), 43 Vassar St., Cambridge MA 02139
Speaker/s: Prof. Lior Wolf, Tel Aviv University and Facebook AI Research

Abstract: Generative models are constantly improving, thanks to recent contributions in adversarial training, unsupervised learning, and autoregressive models. In this talk, I will describe new generative models in computer vision, voice synthesis, and music.

In music, I will describe the first music translation method to produce convincing results (https://arxiv.org/abs/1805.07848).

In voice synthesis, I will discuss the current state of multi-speaker text-to-speech (https://arxiv.org/abs/1802.06984).

Organizer: Tomaso Poggio
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Psychophysics of Cephalopod Camouflage: Life as GIM in a GAN.

May 21, 2018 - 4:00 pm
Venue: MIT Building 46-3002 (Singleton Auditorium)
Speaker/s: Jonathan Miller, Associate Professor, Physics and Biology Unit, Okinawa Institute of Science and Technology Graduate University (OIST)

Abstract: By a quirk of evolution, camouflaging octopus and cuttlefish report their visual perceptions by modulating their skin color and 3D texture on time scales of seconds or minutes to match their surroundings (they are generative image modelers). Their survival demands that predators perceive them as visual noise, whereas the survival of a predator demands that it detect them as signal, in a feedback loop that has evolved over millions of years (the generalized adversarial network).
 
Whereas the mechanical and physiological mechanisms of this camouflage have been studied intensively over the last few decades, with steady progress, my research group seeks instead to elucidate the computation underlying it. Following the phenomenological tradition of Helmholtz, who discovered the RGB basis of human color perception over one hundred years before its physics and physiology emerged, we couple experimental, computational, and theoretical methods to characterize the input/output transfer function of the eye-to-skin mapping, in the first instance by identifying its fixed points.

Organizer: Andrzej Banburski
Organizer Email: kappa666@mit.edu

CBMM Special Seminar: How genes encode neural circuits

Apr 12, 2018 - 4:30 pm
Venue: MIT 46-6011, Simons Center Conference Room
Address: 6th Floor of the Brain and Cognitive Sciences Complex, 43 Vassar St., Cambridge MA 02139
Speaker/s: Marge Livingstone, CBMM, Harvard Medical School

Host: Tomaso Poggio

This talk is open to the CBMM Community only

Organizer: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Is a Turing test for intelligence equivalent to a Turing test for consciousness?

Apr 13, 2018 - 4:30 pm
Venue: MIT 54-100, MIT Green Building
Address: MIT 54-100, MIT Green Building, access via 21 Ames Street, Cambridge, MA 02139
Speaker/s: Christof Koch, CBMM EAC member, Allen Institute for Brain Science

Abstract:

Rapid advances in convolutional networks and other machine learning techniques, in combination with large databases and the relentless hardware advances due to Moore's Law, have brought us closer to the day when we will be able to have extended conversations with programmable systems, such as advanced versions of Alexa or Siri, without being able to tell their siren voices from those of humans. This raises the question of the extent to which systems that can pass a non-trivial version of the Turing test will also feel anything, that is, be conscious. I shall argue against this possibility for three reasons. Firstly, intelligent behavior, including speech, is conceptually radically different from subjective experience. Secondly, clinical case studies demonstrate that the neural basis of intelligence, self-monitoring, insight and other higher-order cognitive processes in the frontal regions of the neocortex is distinct from the neural correlates of conscious experience in the posterior cortex. Thirdly, Integrated Information Theory (IIT), a fundamental theory of consciousness, predicts that conventional computers, even though they will be able, at least in principle, to simulate human-level behavior, will not experience anything. Building human-level consciousness requires neuromorphic computer architectures.

Speaker Bio: 

Christof Koch is an American neuroscientist best known for his studies and writings exploring the basis of consciousness. Trained as a physicist, Koch was for 27 years a professor of biology and engineering at the California Institute of Technology. He is now Chief Scientist and President of the Allen Institute for Brain Science in Seattle, leading a ten-year, large-scale, high-throughput effort to build brain observatories to map, analyze and understand the mouse and human cerebral cortex.

On a quest to understand the physical roots of consciousness, he published his first paper on the neural correlates of consciousness with the molecular biologist Francis Crick more than a quarter of a century ago.

He is a frequent public speaker and writes a regular column for Scientific American. Christof is a vegetarian and cyclist who lives in Seattle and loves big dogs, climbing and rowing.

Organizer: Frederico Azevedo, Hector Penagos, Tomaso Poggio
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Learning representations of the visual world

May 4, 2018 - 2:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar Street, Bldg 46, Cambridge MA 02139
Speaker/s: Jon Shlens, Google Brain

Abstract:
Recent advances in machine learning have profoundly influenced our study of computer vision. Successes in this field have demonstrated the expressive power of learning representations directly from visual imagery, both in terms of practical utility and unexpected expressive abilities. In this talk I will discuss several contributions that have helped improve our ability to learn representations of images. First, I will describe recent advances in constructing models for extracting semantic information from images by leveraging transfer learning and meta-learning techniques. Such learned models outperform human-invented architectures and are readily scalable across a range of computational budgets. Second, I will highlight recent efforts focused on the converse problem of synthesizing images through the rich visual vocabulary of painting styles and visual textures. This work permits a unique exploration of visual space and offers a window onto the structure of the learned representation of visual imagery. My hope is that these works will highlight common threads in machine and human vision and point toward opportunities for future research.

Speaker Bio: Jon Shlens has been a senior research scientist at Google since 2010. Prior to joining Google Research, he was a research fellow at the Howard Hughes Medical Institute and a Miller Fellow at UC Berkeley. His research interests include machine perception, statistical signal processing, machine learning and biological neuroscience.

Organizer: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Accelerating Bio Discovery with Machine Learning.

Apr 20, 2018 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar Street, Cambridge MA 02139
Speaker/s: Phil Nelson, Google Research, Google Accelerated Science team

Abstract: Google Accelerated Science is a translational research team that brings Google's technological expertise to the scientific community. Recent advances in machine learning have delivered incredible results in consumer applications (e.g. photo recognition, language translation) and are now beginning to play an important role in the life sciences. Taking examples from active collaborations in the biochemical, biological, and biomedical fields, I will focus on how our team transforms science problems into data problems and applies Google's scaled computation, data-driven engineering, and machine learning to accelerate discovery.

Speaker Bio: Philip Nelson is a Director of Engineering in Google Research. He joined Google in 2008 and was previously responsible for a range of Google applications and geo services. In 2013, he helped found and currently leads the Google Accelerated Science team, which collaborates with academic and commercial scientists to apply Google's knowledge and experience running complex algorithms over large data sets to important scientific problems. Philip graduated from MIT in 1985, where he did award-winning research on hip prosthetics at Harvard Medical School. Before Google, Philip helped found and lead several Silicon Valley start-ups in search (Verity), optimization (Impresse), and genome sequencing (Complete Genomics) and was also an Entrepreneur in Residence at Accel Partners.

Organizer: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Fit without fear: an over-fitting perspective on modern deep and shallow learning

Apr 18, 2018 - 2:00 pm
Venue: Singleton Auditorium (46-3002)
Address: MIT Bldg. 46, 43 Vassar St., Cambridge, MA 02139
Speaker/s: Mikhail Belkin, Ohio State University

Abstract:

A striking feature of modern supervised machine learning is its pervasive over-parametrization. Deep networks contain millions of parameters, often exceeding the number of data points by orders of magnitude. These networks are trained to nearly interpolate the data by driving the training error to zero. Yet, at odds with most theory, they show excellent test performance. It has become accepted wisdom that these properties are special to deep networks and require non-convex analysis to understand.

In this talk I will show that classical (convex) kernel methods do, in fact, exhibit these unusual properties. Moreover, kernel methods provide a competitive practical alternative to deep learning, after we address the non-trivial challenges of scaling to modern big data. I will also present theoretical and empirical results indicating that we are unlikely to make progress on understanding deep learning until we develop a fundamental understanding of classical "shallow" kernel classifiers in the "modern" over-fitted setting. Finally, I will show that the ubiquitously used stochastic gradient descent (SGD) is very effective at driving the training error to zero in the interpolated regime, a finding that sheds light on the effectiveness of modern methods and provides specific guidance for parameter selection.
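The interpolation phenomenon the abstract describes is easy to reproduce. The following sketch (my illustration, not the speaker's code) fits "ridgeless" Laplacian-kernel regression, one of the kernel families this line of work studies, to noisy data: the training error is driven to numerical zero, yet predictions at unseen inputs still track the underlying function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function.
X = np.linspace(-3, 3, 30)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)

def laplacian_kernel(a, b, h=1.0):
    """Laplacian kernel exp(-|a - b| / h) between two 1-D point sets."""
    return np.exp(-np.abs(a[:, None] - b[None, :]) / h)

# "Ridgeless" kernel regression: solve K alpha = y with no regularization,
# so the fitted function passes through every noisy training label.
K = laplacian_kernel(X, X)
alpha = np.linalg.solve(K, y)

train_err = np.max(np.abs(K @ alpha - y))  # ~0: the data are interpolated

# Yet at unseen inputs the interpolant still tracks the clean target.
X_new = np.linspace(-2.5, 2.5, 101)
test_err = np.mean(np.abs(laplacian_kernel(X_new, X) @ alpha - np.sin(X_new)))
print(train_err, test_err)
```

Despite fitting the noise exactly, the test error stays near the noise level rather than blowing up, which is the behavior at issue for both kernel machines and deep networks.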

These results present a perspective and a challenge. Much of the success of modern learning comes into focus when considered from an over-parametrization and interpolation point of view. The next step is to address the basic question of why classifiers in the "modern" interpolated setting generalize so well to unseen data. Kernel methods provide both a compelling set of practical algorithms and an analytical platform for resolving this fundamental issue.

Based on joint work with Siyuan Ma, Raef Bassily, Chaoyue Liu and Soumik Mandal.

Organizer: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: How can the brain efficiently build an understanding of the natural world?

Feb 16, 2018 - 4:00 pm
Venue: Singleton Auditorium (MIT 46-3002)
Address: MIT Bldg. 46, 43 Vassar St., Cambridge, MA 02139
Speaker/s: Ann M. Hermundstad, PhD, Janelia Research Campus

Abstract: The brain exploits the statistical regularities of the natural world. In the visual system, an efficient representation of light intensity begins in the retina, where statistical redundancies are removed via spatiotemporal decorrelation. Much less is known, however, about the efficient representation of complex features in higher visual areas. I will discuss how the central visual system, operating with different goals and under different constraints, makes efficient use of resources to extract meaningful features from complex visual stimuli. I will then highlight how these same principles can be generalized to dynamic situations, where both the environment and the goals of the system are in flux. Together, these principles have implications for understanding a broad range of phenomena across animals and sensory modalities.
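A minimal sketch of the decorrelation idea mentioned above (an illustration of the textbook efficient-coding computation, not the speaker's model): whitening a set of spatially correlated toy signals removes their pairwise redundancy, leaving an identity covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "natural" signals: a running sum makes neighboring channels
# strongly correlated, mimicking the spatial redundancy of natural images.
n_pix, n_samples = 8, 5000
signals = np.cumsum(rng.normal(size=(n_pix, n_samples)), axis=0)

# Decorrelating (whitening) transform built from the eigendecomposition
# of the signal covariance (ZCA whitening).
cov = np.cov(signals)
eigvals, eigvecs = np.linalg.eigh(cov)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T

white = W @ (signals - signals.mean(axis=1, keepdims=True))
print(np.round(np.cov(white), 2))  # ~ identity matrix: redundancy removed
```

The transformed channels carry no second-order redundancy, the property that spatiotemporal decorrelation in the retina is thought to approximate for natural-scene statistics.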

This event is organized by the CBMM Trainee Leadership Council.

Organizer: Wiktor Młynarski, Kelsey Allen, Jiye Kim
Organizer Email: cbmm-contact@mit.edu
