Seminars

CBMM Special Seminar: Transformative Generative Models

Jul 2, 2018 - 4:00 pm
Photo of Prof. Lior Wolf
Venue: Singleton Auditorium (MIT 46-3002)
Address: Brain and Cognitive Sciences Complex (MIT Bldg. 46), 43 Vassar St., Cambridge MA 02139
Speaker/s: Prof. Lior Wolf, Tel Aviv University and Facebook AI Research

Abstract: Generative models are constantly improving, thanks to recent contributions in adversarial training, unsupervised learning, and autoregressive models. In this talk, I will describe new generative models in computer vision, voice synthesis, and music.

In music, I will describe the first music translation method to produce convincing results (https://arxiv.org/abs/1805.07848).

In voice synthesis, I will discuss the current state of multi-speaker text-to-speech (https://arxiv.org/abs/1802.06984).

Organizer: Tomaso Poggio
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Psychophysics of Cephalopod Camouflage: Life as GIM in a GAN

May 21, 2018 - 4:00 pm
Photo of Prof. Jonathan Miller, OIST
Venue: MIT Building 46-3002 (Singleton Auditorium)
Speaker/s: Jonathan Miller, Associate Professor | Physics and Biology Unit, Okinawa Institute of Science and Technology Graduate University (OIST)

Abstract: By a quirk of evolution, camouflaging octopus and cuttlefish report their visual perceptions by modulating their skin color and 3D texture on time scales of seconds or minutes to match their surroundings (they are generative image modelers). Their survival demands that predators perceive them as visual noise, whereas the survival of a predator demands that it detect them as signal, in a feedback loop that has played out over millions of years (the generative adversarial network).
 
Whereas the mechanical and physiological mechanisms of this camouflage have been studied intensively, with steady progress, over the last few decades, my research group seeks instead to elucidate the computation underlying it. Following the phenomenological tradition of Helmholtz, who discovered the RGB basis of human color perception over one hundred years before its physics and physiology emerged, we couple experimental, computational, and theoretical methods to characterize the input/output transfer function of the eye-to-skin mapping, in the first instance by identifying its fixed points.

Organizer: Andrzej Banburski
Organizer Email: kappa666@mit.edu

CBMM Special Seminar: How genes encode neural circuits

Apr 12, 2018 - 4:30 pm
Photo of Prof. Marge Livingstone
Venue: MIT 46-6011, Simons Center Conference Room
Address: 6th Floor of the Brain and Cognitive Sciences Complex, 43 Vassar St., Cambridge MA 02139
Speaker/s: Marge Livingstone, CBMM, Harvard Medical School

Host: Tomaso Poggio

This talk is open to the CBMM Community only

Organizer: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Is a Turing test for intelligence equivalent to a Turing test for consciousness?

Apr 13, 2018 - 4:30 pm
Photo of Dr. Christof Koch
Venue: MIT 54-100, MIT Green Building
Address: Access via 21 Ames Street, Cambridge, MA 02139
Speaker/s: Christof Koch, CBMM EAC member, Allen Institute for Brain Science

Abstract:

Rapid advances in convolutional networks and other machine learning techniques, in combination with large databases and the relentless hardware advances due to Moore’s Law, have brought us closer to the day when we will be able to have extended conversations with programmable systems, such as advanced versions of Alexa or Siri, without being able to tell their siren voices from those of humans. This raises the question of the extent to which systems that can pass a non-trivial version of the Turing test will also feel anything, that is, be conscious. I shall argue against this possibility for three reasons. Firstly, intelligent behavior, including speech, is conceptually radically different from subjective experience. Secondly, clinical case studies demonstrate that the neural basis of intelligence, self-monitoring, insight, and other higher-order cognitive processes in the frontal regions of neocortex is distinct from the neural correlates of conscious experience in the posterior cortex. Thirdly, Integrated Information Theory (IIT), a fundamental theory of consciousness, predicts that conventional computers, even though they will be able, at least in principle, to simulate human-level behavior, will not experience anything. Building human-level consciousness requires neuromorphic computer architectures.

Speaker Bio: 

Christof Koch is an American neuroscientist best known for his studies and writings exploring the basis of consciousness. Trained as a physicist, Koch was for 27 years a professor of biology and engineering at the California Institute of Technology. He is now Chief Scientist and President of the Allen Institute for Brain Science in Seattle, leading a ten-year, large-scale, high-throughput effort to build brain observatories to map, analyze, and understand the mouse and human cerebral cortex.

On a quest to understand the physical roots of consciousness, he published his first paper on the neural correlates of consciousness with the molecular biologist Francis Crick more than a quarter of a century ago.

He is a frequent public speaker and writes a regular column for Scientific American. Christof is a vegetarian and cyclist who lives in Seattle and loves big dogs, climbing and rowing.

Organizer: Frederico Azevedo, Hector Penagos, Tomaso Poggio
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Learning representations of the visual world

May 4, 2018 - 2:00 pm
Google Brain logo
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar Street, Bldg 46, Cambridge MA 02139
Speaker/s: Jon Shlens, Google Brain

Abstract:
Recent advances in machine learning have profoundly influenced our study of computer vision. Successes in this field have demonstrated the expressive power of learning representations directly from visual imagery — both in terms of practical utility and unexpected expressive abilities. In this talk I will discuss several contributions that have helped improve our ability to learn representations of images. First, I will describe recent advances in constructing models for extracting semantic information from images by leveraging transfer learning and meta-learning techniques. Such learned models outperform human-invented architectures and are readily scalable across a range of computational budgets. Second, I will highlight recent efforts focused on the converse problem of synthesizing images through the rich visual vocabulary of painting styles and visual textures. This work permits a unique exploration of visual space and offers a window onto the structure of the learned representation of visual imagery. My hope is that these works will highlight common threads in machine and human vision and point towards opportunities for future research.

Speaker Bio: Jon Shlens has been a senior research scientist at Google since 2010. Prior to joining Google Research, he was a research fellow at the Howard Hughes Medical Institute and a Miller Fellow at UC Berkeley. His research interests include machine perception, statistical signal processing, machine learning, and biological neuroscience.

Organizer: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Accelerating Bio Discovery with Machine Learning

Apr 20, 2018 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar Street, Cambridge MA 02139
Speaker/s: Phil Nelson, Google Research | Google Accelerated Science team

Abstract: Google Accelerated Science is a translational research team that brings Google's technological expertise to the scientific community. Recent advances in machine learning have delivered incredible results in consumer applications (e.g., photo recognition, language translation) and are now beginning to play an important role in the life sciences. Taking examples from active collaborations in the biochemical, biological, and biomedical fields, I will focus on how our team transforms science problems into data problems and applies Google's scaled computation, data-driven engineering, and machine learning to accelerate discovery.

Speaker Bio: Philip Nelson is a Director of Engineering in Google Research. He joined Google in 2008 and was previously responsible for a range of Google applications and geo services. In 2013, he helped found and currently leads the Google Accelerated Science team that collaborates with academic and commercial scientists to apply Google's knowledge and experience running complex algorithms over large data sets to important scientific problems. Philip graduated from MIT in 1985, where he did award-winning research on hip prosthetics at Harvard Medical School. Before Google, Philip helped found and lead several Silicon Valley start-ups in search (Verity), optimization (Impresse), and genome sequencing (Complete Genomics), and was also an Entrepreneur in Residence at Accel Partners.

Organizer: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: Fit without fear: an over-fitting perspective on modern deep and shallow learning

Apr 18, 2018 - 2:00 pm
Photo of Prof. Mikhail Belkin, Ohio State University
Venue: Singleton Auditorium (46-3002)
Address: MIT Bldg. 46, 43 Vassar St., Cambridge, MA 02139
Speaker/s: Mikhail Belkin, Ohio State University

Abstract:

A striking feature of modern supervised machine learning is its pervasive over-parametrization. Deep networks contain millions of parameters, often exceeding the number of data points by orders of magnitude. These networks are trained to nearly interpolate the data by driving the training error to zero. Yet, at odds with most theory, they show excellent test performance. It has become accepted wisdom that these properties are special to deep networks and require non-convex analysis to understand.

In this talk I will show that classical (convex) kernel methods do, in fact, exhibit these unusual properties. Moreover, kernel methods provide a competitive practical alternative to deep learning, after we address the non-trivial challenges of scaling to modern big data. I will also present theoretical and empirical results indicating that we are unlikely to make progress on understanding deep learning until we develop a fundamental understanding of classical "shallow" kernel classifiers in the "modern" over-fitted setting. Finally, I will show that ubiquitously used stochastic gradient descent (SGD) is very effective at driving the training error to zero in the interpolated regime, a finding that sheds light on the effectiveness of modern methods and provides specific guidance for parameter selection.
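The interpolation-without-overfitting phenomenon discussed above can be illustrated in a few lines. The sketch below is my own toy example, not code from the talk: a ridgeless Laplacian-kernel regressor is solved to fit noisy training data exactly (zero training error), yet its predictions on fresh test points remain reasonable.

```python
import numpy as np

# Toy illustration (not from the talk): a ridgeless kernel regressor
# interpolates noisy training data exactly, yet still generalizes
# reasonably to unseen points.

def laplacian_kernel(X, Z, bandwidth=0.5):
    """Pairwise Laplacian kernel matrix between rows of X and Z."""
    dists = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=-1)
    return np.exp(-dists / bandwidth)

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(40, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.normal(size=40)

# Interpolation: solve K @ alpha = y with no ridge penalty, so the
# fitted function passes through every (noisy) training point.
K = laplacian_kernel(X_train, X_train)
alpha = np.linalg.solve(K, y_train)

def predict(X_new):
    return laplacian_kernel(X_new, X_train) @ alpha

# Training error is (numerically) zero; test error stays moderate.
train_err = np.max(np.abs(predict(X_train) - y_train))
X_test = rng.uniform(-3, 3, size=(500, 1))
test_rmse = np.sqrt(np.mean((predict(X_test) - np.sin(X_test[:, 0])) ** 2))
```

Adding a small ridge term (`K + lam * np.eye(len(K))`) would recover ordinary kernel ridge regression; the point of the interpolated regime is that even with no regularization the test error does not explode.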

These results present both a perspective and a challenge. Much of the success of modern learning comes into focus when considered from an over-parametrization and interpolation point of view. The next step is to address the basic question of why classifiers in the "modern" interpolated setting generalize so well to unseen data. Kernel methods provide both a compelling set of practical algorithms and an analytical platform for resolving this fundamental issue.

Based on joint work with Siyuan Ma, Raef Bassily, Chaoyue Liu, and Soumik Mandal.

Organizer: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Special Seminar: How can the brain efficiently build an understanding of the natural world?

Feb 16, 2018 - 4:00 pm
Photo of Dr. Ann M. Hermundstad and AMH lab logo.
Venue: Singleton Auditorium (MIT 46-3002)
Address: MIT Bldg. 46, 43 Vassar St., Cambridge, MA 02139
Speaker/s: Ann M. Hermundstad, PhD, Janelia Research Campus

Abstract: The brain exploits the statistical regularities of the natural world. In the visual system, an efficient representation of light intensity begins in the retina, where statistical redundancies are removed via spatiotemporal decorrelation. Much less is known, however, about the efficient representation of complex features in higher visual areas. I will discuss how the central visual system, operating with different goals and under different constraints, makes efficient use of resources to extract meaningful features from complex visual stimuli. I will then highlight how these same principles can be generalized to dynamic situations, where both the environment and the goals of the system are in flux. Together, these principles have implications for understanding a broad range of phenomena across animals and sensory modalities.
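The decorrelation idea mentioned in the abstract can be caricatured as whitening. The sketch below is my own illustration, not the speaker's code: a linear (ZCA) transform removes the correlations between two overlapping "photoreceptor" channels, leaving an identity covariance.

```python
import numpy as np

# Toy illustration (my own, not from the talk): spatiotemporal
# decorrelation can be caricatured as whitening -- a linear transform
# that removes pairwise correlations from its input.

rng = np.random.default_rng(1)
# Correlated "photoreceptor" signals: nearby channels see similar light.
latent = rng.normal(size=(5000, 2))
mixing = np.array([[1.0, 0.0], [0.9, 0.3]])  # strong channel overlap
x = latent @ mixing.T

# ZCA whitening: W = C^(-1/2) for covariance C, so cov(W x) = identity.
C = np.cov(x, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)
W = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
x_white = x @ W.T

C_white = np.cov(x_white, rowvar=False)  # approximately the identity
```

Because `W` is built from the same sample covariance it is applied to, the whitened covariance equals the identity up to floating-point error.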

This event is organized by the CBMM Trainee Leadership Council.

Organizer: Wiktor Młynarski, Kelsey Allen, Jiye Kim
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: Have We Missed Half of What the Neocortex Does? Allocentric Location as the Basis of Perception

Dec 15, 2017 - 4:30 pm
Photo of Jeff Hawkins
Venue: Singleton Auditorium (MIT 46-3002)
Address: 3rd Floor, MIT Bldg 46, 43 Vassar St., Cambridge MA 02139
Speaker/s: Jeff Hawkins, Co-Founder, Numenta

Please note the change in start time. This talk will be starting at 4:30pm, on Friday, Dec. 15, 2017.

Abstract:  In this talk I will describe a theory that sensory regions of the neocortex process two inputs. One input is the well-known sensory data arriving via thalamic relay cells. We propose the second input is a representation of allocentric location. The allocentric location represents where the sensed feature is relative to the object being sensed, in an object-centric reference frame. As the sensors move, cortical columns learn complete models of objects by integrating sensory features and location representations over time. Lateral projections allow columns to rapidly reach a consensus of what object is being sensed. We propose that the representation of allocentric location is derived locally, in layer 6 of each column, using the same tiling principles as grid cells in the entorhinal cortex. Because individual cortical columns are able to model complete complex objects, cortical regions are far more powerful than currently believed. The inclusion of allocentric location offers the possibility of rapid progress in understanding the function of numerous aspects of cortical anatomy.
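The recognition-by-movement idea in the abstract can be caricatured in a few lines. This is my own toy illustration of the feature-at-location scheme, not code from the papers: an object is modeled as a set of (location, feature) pairs, and each new sensation prunes the candidate set, mirroring how columns converge on a consensus as the sensor moves.

```python
# Toy sketch (my illustration, not the papers' model): an "object" is a
# set of (location, feature) pairs; sensing a feature at an allocentric
# location narrows the set of candidate objects.

objects = {
    "cup":    {(0, "rim"), (1, "handle"), (2, "base")},
    "bottle": {(0, "rim"), (1, "neck"),   (2, "base")},
}

def recognize(observations):
    # Keep every object consistent with all (location, feature) sensations.
    return {name for name, pairs in objects.items()
            if all(obs in pairs for obs in observations)}

# One sensation is ambiguous; a second movement disambiguates.
ambiguous = recognize([(0, "rim")])                 # {'cup', 'bottle'}
resolved = recognize([(0, "rim"), (1, "handle")])   # {'cup'}
```

In the full theory, the pruning happens in parallel across many columns, with lateral projections sharing the candidate sets; this sketch collapses that to a single column.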

I will be discussing material from these two papers. Others can be found at www.Numenta.com/papers.

A Theory of How Columns in the Neocortex Enable Learning the Structure of the World
URL: https://doi.org/10.3389/fncir.2017.00081

Why Neurons Have Thousands of Synapses, A Theory of Sequence Memory in the Neocortex
URL: https://doi.org/10.3389/fncir.2016.00023

 

Speaker Biography:  Jeff Hawkins is a scientist and co-founder at Numenta, an independent research company focused on neocortical theory. His research focuses on how the cortex learns predictive models of the world through sensation and movement. In 2002, he founded the Redwood Neuroscience Institute, where he served as Director for three years. The institute is currently located at U.C. Berkeley. Previously, he co-founded two companies, Palm and Handspring, where he designed products such as the PalmPilot and Treo smartphone. In 2004 he wrote “On Intelligence”, a book about cortical theory.

Hawkins earned his B.S. in electrical engineering from Cornell University in 1979. He was elected to the National Academy of Engineering in 2003.

Organizer: Kathleen Sullivan
Organizer Email: cbmm-contact@mit.edu

CompLang Special Seminar: Ryan Cotterell

Nov 3, 2017 - 4:00 pm
Ryan Cotterell
Venue: McGovern Seminar Room (46-3189)
Address: MIT Bldg 46, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Ryan Cotterell

Title: Probabilistic Typology: Deep Generative Models of Vowel Inventories

Abstract: Linguistic typology studies the range of structures present in human language. The main goal of the field is to discover which sets of possible phenomena are universal, and which are merely frequent. For example, all languages have vowels, while most—but not all—languages have an [u] sound. In this paper we present the first probabilistic treatment of a basic question in phonological typology: What makes a natural vowel inventory? We introduce a series of deep stochastic point processes, and contrast them with previous computational, simulation-based approaches. We provide a comprehensive suite of experiments on over 200 distinct languages.

Bio: I am a fourth year Ph.D. student in the Johns Hopkins Computer Science department affiliated with the Center for Language and Speech Processing, where I am co-advised by Jason Eisner and David Yarowsky. I specialize in Natural Language Processing, Computational Linguistics and Machine Learning, focusing on deep learning and statistical approaches to phonology, morphology, linguistic typology and low-resource languages.

This talk is co-coordinated by CBMM and CompLang.

CompLang is a student-run discussion group on language and computation that takes place at MIT. The aim of the group is to bring together the language community at MIT and nearby, learn about each other's research, and foster cross-laboratory collaborations.

Organizer: Joel Oller
Organizer Email: cbmm-contact@mit.edu
