Science of Intelligence Public Lecture Series

These popular lectures, aimed at a broad audience, cover exciting developments in the study of human intelligence from the perspectives of neuroscience and cognitive science, applications of this research to the creation of intelligent machines, and the significance of this work for society.

CBMM faculty speak at an event launching the MIT Quest for Intelligence, an Institute-wide initiative on human and machine intelligence research, its applications, and its bearing on society. View talks by James DiCarlo, Tomaso Poggio, Laura Schulz, Rebecca Saxe, and Joshua Tenenbaum.

Our brains are wired with specific regions for face-recognition, color perception, language, music, and even for thinking about how other people think. MIT neuroscientist Nancy Kanwisher reveals the techniques used to localize brain activity and to track its development from infancy, in this talk at the Museum of Science, Boston. 

Many artificial intelligence researchers expect AI to outsmart humans at all tasks and jobs within decades, enabling a future where we're restricted only by the laws of physics, not the limits of our intelligence. MIT physicist and AI researcher Max Tegmark separates the real opportunities and threats from the myths, describing the concrete steps we should take today to ensure that AI ends up being the best -- rather than worst -- thing to ever happen to humanity.
(Talk owned by TED.com)

How do babies learn so much from so little so quickly? In a fun, experiment-filled talk, cognitive scientist Laura Schulz shows how our young ones make decisions with a surprisingly strong sense of logic, well before they can talk.
(Talk owned by TED.com)

Brain imaging pioneer Nancy Kanwisher, who uses fMRI scans to see activity in brain regions (often her own), shares what she and her colleagues have learned: The brain is made up of both highly specialized components and general-purpose "machinery."
(Talk owned by TED.com)

Ed Boyden shows how, by inserting genes for light-sensitive proteins into brain cells, he can selectively activate or deactivate specific neurons with fiber-optic implants. With this unprecedented level of control, he's managed to cure mice of analogs of PTSD and certain forms of blindness. On the horizon: neural prosthetics. Session host Juan Enriquez leads a brief post-talk Q&A.
(Talk owned by TED.com)

Sensing the motives and feelings of others is a natural talent for humans. But how do we do it? Here, Rebecca Saxe shares fascinating lab work that uncovers how the brain thinks about other people's thoughts -- and judges their actions.
(Talk owned by TED.com)

Do sleeping rats dream? Matt Wilson explains how a surprise moment in the lab led to groundbreaking discoveries in the correlation between memory and sleep in his talk "Reading A Rat's Mind."
(Talk owned by TED.com)

At MIT, Rebecca Saxe studies human brain development in order to understand how the human mind is built. The challenges and rewards of this research connect her experiences as a scientist and as a mother.
(Talk owned by TED.com)

Nancy Kanwisher presents many short talks on scientific methods used to study the human mind and brain, and some of the exciting discoveries that researchers have made about the structure and function of the brain.

In this invited lecture for the MIT course, Artificial General Intelligence, Josh Tenenbaum talks about how reverse-engineering the human mind and brain provides valuable insights into how we can create an AI that is able to model the world as flexibly and deeply as humans.

James DiCarlo leads a panel discussion and audience Q&A on how the brain and cognitive sciences, merged with the creation of engineered systems, enable a deeper understanding of human intelligence.

Demis Hassabis discusses the capabilities and power of self-learning systems. He illustrates this capability with some of DeepMind's recent breakthroughs and discusses the implications of cutting-edge AI research for scientific and philosophical discovery.

Demis Hassabis talks about cutting-edge AI projects at Google DeepMind inspired by neuroscience and an interest in creating artificial systems with general intelligence, including work leading to the historic AlphaGo match against world champion Go player Lee Sedol.

Max Tegmark and Matt Wilson discuss ethical and social issues that arise with the creation of AI systems, such as transparency and predictability, morality and decision making, liability, and the societal implications of systems with general or "super" intelligence.

Christof Koch reflects on the scientific study of consciousness and presents a theory that captures what conscious experience is and what type of physical systems can have it. This theory can explain a range of clinical and laboratory findings.

Amnon Shashua, Co-Founder and CTO of Mobileye, characterizes the current state and future challenges of autonomous driving, and the critical role of AI and machine learning in Mobileye’s work in this domain.

Amnon Shashua, Co-Founder and CTO of Mobileye and OrCam, describes two key applications of computer vision: driving assistance systems that perform emergency braking to avoid collisions and systems to enhance the lives of the visually impaired.

Baylabs seeks to expand global access to medical imaging with low-cost ultrasound technology for the diagnosis of health conditions such as rheumatic heart disease, enabled by the application of deep learning methods.

Terrence Sejnowski, Salk Institute for Biological Studies
Reflection on the historical evolution of deep learning in Artificial Intelligence, from perceptrons to deep neural networks that play Go, detect thermal updrafts, control a social robot, and analyze complex neural data using methods that are revolutionizing Neuroscience.

Beyond her research on vision in the primate brain, Margaret Livingstone explores how we can use what we know about human visual processing to understand discoveries that artists have made about how we see. View Dr. Livingstone’s talks from the Distinguished Visitors series at the University of Michigan School of Art & Design and the University of Alabama.

Undergraduate Lecture Series

The CBMM Summer Research Program for Undergraduates features research talks on topics related to the Center’s scientific mission, aimed at an undergraduate audience. Additional resources include links to speaker websites and recent articles.

Andrei Barbu

Andrei Barbu, Research Scientist at MIT, discusses using language to understand vision and vision to understand language.

Ed Boyden

Ed Boyden, Professor of Biological Engineering and Brain and Cognitive Sciences at MIT, discusses tools for mapping and repairing the brain.

Elizabeth Spelke

Elizabeth Spelke, Professor of Psychology and Director of the Laboratory for Developmental Studies at Harvard, presents a developmental perspective on brains, minds and machines.

Leyla Isik

Leyla Isik, post-doctoral researcher at MIT and Boston Children's Hospital, explains how to use neural decoding to study object and action recognition in the human brain.

Leyla Isik

Leyla Isik, a post-doctoral researcher at MIT, studies how the human brain recognizes objects and social interactions, using MEG, fMRI, and computational modeling.

Nancy Kanwisher

Nancy Kanwisher, Professor of Brain and Cognitive Sciences at MIT, talks about fMRI and the discovery of functional specificity in the brain.

Sam Gershman

Sam Gershman, Professor of Psychology at Harvard, discusses how we might build machines that learn and think like people.

Winrich Freiwald

Winrich Freiwald, Professor of Neurosciences and Behavior at The Rockefeller University, discusses the neural machinery underlying face recognition.

Josh McDermott

Josh McDermott, Professor of Brain and Cognitive Sciences at MIT, discusses how the human auditory system is able to distinguish multiple sound sources from the complex signal that enters the ear.

Emery Brown

Emery Brown, Professor of Brain and Cognitive Sciences at MIT and Professor of Anesthesia at Harvard, discusses the impact of general anesthesia on brain function and its implications.

Idan Blank

Idan Blank, a post-doctoral researcher at MIT, explains how fMRI works and describes important principles for designing fMRI experiments on functional specialization in the brain.

Carmen Varela

Carmen Varela, a research scientist at MIT, highlights the importance of interdisciplinarity in the study of intelligence, focusing on the hippocampus and its role in the formation of new memories and encoding of spatial information to support navigation.

Roger Levy

Roger Levy, Professor of Brain and Cognitive Sciences at MIT, describes his research on human language that integrates linguistic theory, computational models, psychological experimentation, and use of language datasets.

Michael Halassa

Michael Halassa, Professor of Brain and Cognitive Sciences at MIT, examines the role of the thalamus in gating cortical interactions that underlie the control of attention and cognitive flexibility.

Brains, Minds, and Machines Summer Course Lecture Series

The content of the 2015 Brains, Minds, and Machines Summer Course is published on MIT OpenCourseWare. Since then, new speakers and topics have been presented each year, captured in the lecture videos below. The summer course is aimed at graduate students and postdocs, but many of the lectures are accessible to an undergraduate audience.

Big science, team science, & open science to understand neocortex

Christof Koch, Allen Institute for Brain Science
Overview of the work of the Allen Institute for Brain Science on the creation of highly standardized and extensive databases characterizing and cataloguing the complex structure and function of all neurons constituting the mouse neocortex.

The Allen Institute for Brain Science Resources: Connectivity Atlas, Cell Types Database, Brain Observatory

Lydia Ng, Allen Institute for Brain Science
In-depth overview of public resources created by the Allen Institute for Brain Science, including the Connectivity Atlas, Cell Types Database, and Brain Observatory, describing the nature of the data, how it is presented, and how it can be accessed and used for neuroscience research.

The Sciences of Consciousness: Progress and Problems

Christof Koch, Chief Scientific Officer and President, Allen Institute for Brain Science
Reflection on the history of research on consciousness, what is currently known about the nature of conscious experience, and possible neural correlates of consciousness. Summary of the Integrated Information Theory of Consciousness proposed by Koch and Tononi, which can explain a range of clinical and laboratory findings and offers key predictions for future research.

The frontal and parietal cortex: Eye movements and attention

Jacqueline Gottlieb, The Kavli Institute for Brain Science at Columbia University
Introduction to physiological studies of the control of eye movements and attention in the frontal eye fields and lateral intraparietal cortex of the primate brain, and reflection on how the brain actively allocates attention and assigns value to sources of information.

Image Instance Retrieval: Overview of state-of-the-art

Vijay Chandrasekhar, A*STAR
Overview of current computer vision systems for retrieving similar images from a large database, describing the image processing stages leading to a representation of image features suitable for matching, the application of deep learning methods to the extraction of feature descriptors, and future challenges for large scale image and video retrieval systems.

Question answering for language and vision

Richard Socher, MetaMind (A Salesforce Company)
A model based on dynamic memory networks casts many aspects of natural language processing as question-answering tasks. The model uses gated recurrent units from recurrent neural networks and can also be applied to the task of answering questions about natural images.

Deep Networks for Vision

Alan Yuille, Johns Hopkins University
Introduction to deep networks for visual tasks such as object detection and semantic segmentation; complex models that integrate deep networks with representations of parts and spatial relations, grammars, and top-down attention; and probing deep network internal representations of parts, visual concepts, and occlusions.

Connections between physics and deep learning

Max Tegmark, MIT
The study of deep learning lies at the intersection between AI and machine learning, physics, and neuroscience. Exploring connections between physics and deep learning can yield important insights about the theory and behavior of deep neural networks, such as their expressibility, efficiency, learnability, and robustness.

Neural representations of faces, bodies, and objects in ventral temporal cortex

James Haxby, Dartmouth College
There is substantial evidence that processing in ventral temporal cortex is optimized for face and body perception. Recent research employing a greater range of static and dynamic stimuli highlights the important role of agent action and the complexity of fine category distinctions. Computational analyses of fMRI data using multivariate methods capture fine-grained patterns of object representations in the ventral pathway.

Nancy Kanwisher - James Haxby debate

Nancy Kanwisher, MIT & James Haxby, Dartmouth College
This debate highlights different perspectives on functional specificity in the human brain, addressing how findings from neuroscience and cognitive science are used to characterize neural representations of information from different modalities and cognitive processes, for example, to determine the spatial localization of these representations and how later processing stages decode information from earlier levels.

Computational Models of Cognition: Part 1

Joshua Tenenbaum, MIT
Past work on human intelligence has framed the underlying processes as engines for pattern recognition, prediction, or symbol manipulation. Systems that can reason broadly about the physical and social world must embody models of the world that enable causal inference, prediction, and learning from experience. Such systems might be created by starting with the intelligence of a baby and learning like a child.

Computational Models of Cognition: Part 2

Joshua Tenenbaum, MIT
Elaboration of intuitive physics and intuitive psychology engines based on a probabilistic framework that captures aspects of human reasoning and understanding of the behavior of physical objects, as well as the beliefs, desires, goals, and actions of other agents.

Computational Models of Cognition: Part 3

Joshua Tenenbaum, MIT
Overview of efforts to create neurally plausible models for face recognition, intuitive physics, and intuitive psychology that integrate the probabilistic programming framework with deep neural networks, and preliminary results from empirical studies aimed at identifying brain areas engaged in these computations.

What are you searching for? ... and how do you do it?

Jeremy M. Wolfe, Brigham & Women's Hospital; Harvard Medical School
Introduction to visual search that examines Treisman’s Feature Integration Theory, features that guide shifts of attention during search, challenges for model development such as how to terminate the search process and the temporal differences between search and recognition, and a neural architecture that combines a selective pathway for object recognition and non-selective pathway for visual properties such as texture and gist.

Attention

Robert Desimone, MIT
Overview of the neural basis of attention in primate vision, evidence for its computational role in enhancing the neural encoding of attended objects in areas V4 and IT, models of the underlying neural circuitry, implementation of attention through neural synchrony within and across cortical areas, and the role of the frontal eye fields and ventral prearcuate region of prefrontal cortex in top-down feature-based attentional control.

DES-fMRI: Direct electrical stimulation and fMRI

Nikos K. Logothetis, Max Planck Institute for Biological Cybernetics in Tübingen
Overview of technical advances that enable the integration of direct electrical stimulation of neural tissue with fMRI recordings, and application of this technology to study monosynaptic neural connectivity, network plasticity, and cortico-thalamo-cortical loops in the primate brain.

Neural event triggered fMRI (NET-fMRI)

Nikos K. Logothetis, Max Planck Institute for Biological Cybernetics in Tübingen
Overview of the integration of concurrent physiological multi-site recordings with fMRI imaging, and its application to the study of dynamic connectivity related to system and synaptic memory consolidation in primates.