CBMM10 - A Symposium on Intelligence: Brains, Minds, and Machines

Oct 6, 2023 - 2:30 pm
Venue:  Singleton Auditorium (46-3002)

The dream of understanding the mind and the brain and replicating human intelligence in machines was at the core of several new fields created at MIT during the ’50s and ’60s, including information theory, cybernetics, and Artificial Intelligence. The same dream was at the core of the NSF-funded, multi-institutional Center for Brains, Minds, and Machines (CBMM) and of its integration in the new Quest for Intelligence, which is bridging faculty across all the Schools of the Massachusetts Institute of Technology.

Our symposium will focus on the topic of intelligence – one of the greatest problems in science and engineering and a key to our future as a society. The symposium will look at the past, in particular at the advances achieved by CBMM over the past 10 years. But it will mainly focus on the future, in particular the future of neuroscience (Brains), the future of cognitive science (Minds), the future of AI (Machines), and their synergies.

The goal of the workshop is to celebrate CBMM’s success, and to explore the future of CBMM and Quest, in pursuing the natural science of intelligence and investigating its synergies with AI. Deep learning was inspired by neuroscience and led to a better computational understanding of primate perception. It also led to surprising engineering advances such as AlphaGo, AlphaFold, and LLMs. This symposium aims to take stock of what has been scientifically accomplished via the deep learning framework, to illuminate what still must be accomplished, and to chart next steps by discussing and debating which of the current approaches are likely to achieve those scientific accomplishments.

Pre-registration is required.

For more information, including registration, schedule, and speakers, please visit our event page: https://cbmm.mit.edu/CBMM10

Organizer: Kathleen Sullivan
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: Invariance and equivariance in brains and machines

May 7, 2024 - 4:00 pm
Speaker/s:  Bruno Olshausen, UC Berkeley

Abstract: The goal of building machines that can perceive and act in the world as humans and other animals do has been a focus of AI research efforts for over half a century. Over this same period, neuroscience has sought to achieve a mechanistic understanding of the brain processes underlying perception and action. It stands to reason that these parallel efforts could inform one another. However, recent advances in deep learning and transformers have, for the most part, not translated into new neuroscientific insights, and other than deriving loose inspiration from neuroscience, AI has mostly pursued its own course, which now deviates strongly from the brain. Here I propose an approach to building both invariant and equivariant representations in vision that is rooted in observations of animal behavior and informed by both neurobiological mechanisms (recurrence, dendritic nonlinearities, phase coding) and mathematical principles (group theory, residue numbers). What emerges from this approach is a neural circuit for factorization that can learn about shapes and their transformations from image data, and a model of the grid-cell system based on high-dimensional encodings of residue numbers. These models provide efficient solutions to long-studied problems that are well-suited for implementation in neuromorphic hardware or as a basis for forming hypotheses about visual cortex and entorhinal cortex.
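For readers unfamiliar with the residue numbers the abstract mentions: a residue number system represents an integer by its remainders modulo a set of pairwise-coprime moduli, so that shifting the value shifts each residue independently, with no carries. A minimal illustration of the idea (not code from the talk; the moduli are arbitrary choices):

```python
from math import prod

# Pairwise-coprime moduli; together they represent 0..104 uniquely (3*5*7 = 105).
MODULI = (3, 5, 7)

def encode(x):
    """Residue-number encoding: represent x by its remainders."""
    return tuple(x % m for m in MODULI)

def decode(residues):
    """Recover x from its residues (brute-force search, for clarity;
    the Chinese Remainder Theorem gives a direct formula)."""
    for x in range(prod(MODULI)):
        if encode(x) == residues:
            return x
    raise ValueError("inconsistent residues")

# Incrementing the encoded value shifts every residue independently, mod its
# own modulus -- the carry-free property exploited in the grid-cell model.
assert decode(encode(52)) == 52
assert encode(53) == tuple((r + 1) % m for r, m in zip(encode(52), MODULI))
```

Each modulus here plays the role of one grid-cell module's spatial period; the talk's contribution is embedding such codes in high-dimensional neural representations.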

Bio: Bruno Olshausen is a Professor in the Helen Wills Neuroscience Institute and the School of Optometry, with an affiliated appointment in EECS. He holds B.S. and M.S. degrees in Electrical Engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996 to 2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focusing on building mathematical and computational models of brain function (see http://redwood.berkeley.edu).

Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that can describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: The Debate Over “Understanding” in AI’s Large Language Models

Apr 2, 2024 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker/s: Melanie Mitchell, Santa Fe Institute

Abstract: I will survey a current, heated debate in the AI research community on whether large pre-trained language models can be said to "understand" language—and the physical and social situations language encodes—in any important sense. I will describe arguments that have been made for and against such understanding, and, more generally, will discuss what methods can be used to fairly evaluate understanding and intelligence in AI systems.  I will conclude with key questions for the broader sciences of intelligence that have arisen in light of these discussions. 

Short Bio: Melanie Mitchell is a Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction and analogy-making in artificial intelligence systems. Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her 2009 book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award, and her 2019 book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux) was shortlisted for the 2023 Cosmos Prize for Scientific Writing.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: Bayes in the age of intelligent machines

Mar 12, 2024 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker/s: Tom Griffiths, Princeton University

Abstract: Recent rapid progress in the creation of artificial intelligence (AI) systems has been driven in large part by innovations in architectures and algorithms for developing large scale artificial neural networks. As a consequence, it’s natural to ask what role abstract principles of intelligence — such as Bayes’ rule — might play in developing intelligent machines. In this talk, I will argue that there is a new way in which Bayes can be used in the context of AI, more akin to how it is used in cognitive science: providing an abstract description of how agents should solve certain problems and hence a tool for understanding their behavior. This new role is motivated in large part by the fact that we have succeeded in creating intelligent systems that we do not fully understand, making the problem for the machine learning researcher more closely parallel that of the cognitive scientist. I will talk about how this perspective can help us think about making machines with better informed priors about the world and give us insight into their behavior by directly creating cognitive models of neural networks.
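The abstract principle at the center of the talk is Bayes' rule: a prior over hypotheses, multiplied by the likelihood of the observed data under each hypothesis and renormalized, yields the posterior. A minimal, self-contained sketch of that update (illustrative only; the coin example and probabilities are invented here, not taken from the talk):

```python
def bayes_update(prior, likelihood, data):
    """Posterior P(h | data) ∝ P(data | h) * P(h), normalized over hypotheses."""
    unnorm = {h: prior[h] * likelihood(h, data) for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Which of two coins produced 8 heads in 10 flips?
prior = {"fair": 0.5, "biased": 0.5}  # the biased coin lands heads with p = 0.8

def likelihood(h, data):
    heads, flips = data
    p = 0.5 if h == "fair" else 0.8
    return p ** heads * (1 - p) ** (flips - heads)

posterior = bayes_update(prior, likelihood, (8, 10))
# After the data, the biased coin is the more probable explanation.
assert posterior["biased"] > posterior["fair"]
```

In the role the talk proposes, such a posterior is not a component of the AI system itself but a yardstick: a description of what an ideal agent would infer, against which the behavior of an opaque trained network can be compared.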

Bio: I am interested in developing mathematical models of higher level cognition, and understanding the formal principles that underlie our ability to solve the computational problems we face in everyday life. My current focus is on inductive problems, such as probabilistic reasoning, learning causal relationships, acquiring and using language, and inferring the structure of categories. I try to analyze these aspects of human cognition by comparing human behavior to optimal or "rational" solutions to the underlying computational problems. For inductive problems, this usually means exploring how ideas from artificial intelligence, machine learning, and statistics (particularly Bayesian statistics) connect to human cognition. These interests sometimes lead me into other areas of research such as nonparametric Bayesian statistics and formal models of cultural evolution.

I am the Director of the Computational Cognitive Science Lab at Princeton University.

My friend Brian Christian and I recently wrote a book together about the parallels between the everyday problems that arise in human lives and the problems faced by computers. Algorithms to Live By outlines practical solutions to those problems as well as a different way to think about rational decision-making.

I am also interested in how novel approaches to data collection and analysis - particularly "big data" - can change psychological research; see the Center for Data on the Mind.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: Latent cause inference and mental health

Feb 6, 2024 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker/s: Yael Niv, Princeton University

Abstract: No two events are alike. But still, we learn, which means that we implicitly decide what events are similar enough that experience with one can inform us about what to do in another. Starting from early work by Sam Gershman, we have suggested that this relies on parsing of incoming information into “clusters” according to inferred hidden (latent) causes. In this talk, I will present this theory and illustrate its breadth in explaining human learning. I will then discuss the relevance of latent cause inference to understanding mental health conditions and their treatment.
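The core computational idea in the abstract is that learners group incoming events under inferred latent causes, reusing an existing cause when a new event resembles past ones and positing a new cause when it does not. A toy sketch of that grouping logic (a greedy stand-in invented here for illustration; the actual latent-cause models from the Gershman and Niv line of work are Bayesian, typically built on a Chinese Restaurant Process prior):

```python
def assign_latent_causes(observations, threshold=1.0):
    """Greedy latent-cause assignment for 1-D observations: each event joins
    the nearest existing cluster if it is close enough, else it founds a new
    cluster. Returns one cluster label per observation."""
    means, counts, labels = [], [], []
    for x in observations:
        if means:
            k = min(range(len(means)), key=lambda i: abs(x - means[i]))
            if abs(x - means[k]) <= threshold:
                # Assign to cause k and update its running mean.
                counts[k] += 1
                means[k] += (x - means[k]) / counts[k]
                labels.append(k)
                continue
        # No existing cause explains this event well: infer a new one.
        means.append(x)
        counts.append(1)
        labels.append(len(means) - 1)
    return labels

# Two well-separated groups of events are parsed into two latent causes.
assert assign_latent_causes([0.1, 0.2, 5.0, 5.1, 0.15]) == [0, 0, 1, 1, 0]
```

The threshold here crudely plays the role that the prior over new causes plays in the Bayesian treatment: it controls how dissimilar an event must be before experience with it stops informing the old cluster.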

Research in the Niv lab focuses on the neural and computational processes underlying reinforcement learning and decision-making. We study the ongoing day-to-day processes by which animals and humans learn from trial and error, without explicit instructions, to predict future events and to act upon the environment so as to maximize reward and minimize punishment. In particular, we are interested in how attention and memory processes interact with reinforcement learning to create representations that allow us to learn to solve new tasks so efficiently. 

Our emphasis is on model-based experimentation: we use computational models to define precise hypotheses about data, to design experiments, and to analyze results. In particular, we are interested in normative explanations of behavior: models that offer a principled understanding of why our brain mechanisms use the computational algorithms that they do, and in what sense, if at all, these are optimal. In our hands, the main goal of computational models is not to simulate the system, but rather to understand what high-level computations that system is realizing, and what functionality these computations fulfill.

A new focus of the lab is computational cognitive neuropsychiatry. Here our aim is to use the computational toolkit that we have developed for quantifying dynamical behavioral processes in order to better diagnose, understand, and treat psychiatric illnesses such as depression, OCD, schizophrenia and addiction. This work is done under the auspices of the new Rutgers-Princeton Center for Computational Cognitive Neuropsychiatry.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: Statistical learning in human sensorimotor control

Dec 5, 2023 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker/s: Daniel Wolpert, Columbia University

Abstract: Humans spend a lifetime learning, storing and refining a repertoire of motor memories appropriate for the multitude of tasks we perform. However, it is unknown what principle underlies the way our continuous stream of sensorimotor experience is segmented into separate memories, and how we adapt and use this growing repertoire. I will review our recent work on how humans learn to make skilled movements, focusing on how statistical learning can lead to multi-modal object representations, how we represent the dynamics of objects, the role of context in the expression, updating and creation of motor memories, and how families of objects are learned.

Bio: Daniel Wolpert FMedSci FRS. Daniel qualified as a medical doctor in 1989. He worked with John Stein and Chris Miall in the Physiology Department of Oxford University, where he received his D.Phil. in 1992. He worked as a postdoctoral fellow in the Department of Brain and Cognitive Sciences at MIT in Mike Jordan's group, and in 1995 joined the Sobell Department of Motor Neuroscience, Institute of Neurology as a Lecturer. In 2005 he moved to the University of Cambridge, where he was Professor of Engineering (1875) and a fellow of Trinity College, and from 2013 held the Royal Society Noreen Murray Research Professorship in Neurobiology. In 2018 Daniel joined the Zuckerman Mind Brain and Behavior Institute at Columbia University as Professor of Neuroscience and is vice-chair of the Department of Neuroscience. Daniel retains a part-time position as Director of Research at the Department of Engineering, University of Cambridge.

He was elected a Fellow of the Academy of Medical Sciences in 2004 and a Fellow of the Royal Society in 2012.

He was awarded the Royal Society Francis Crick Prize Lecture (2005), the Minerva Foundation Golden Brain Award (2010), and the Royal Society Ferrier Medal (2020), and gave the Fred Kavli Distinguished International Scientist Lecture at the Society for Neuroscience (2009).

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu
