Seminars

Quest | CBMM Seminar Series: Photographic Image Priors in the Era of Machine Learning

May 9, 2023 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar St. Cambridge, MA 02139
Speaker/s: Eero Simoncelli, Silver Professor; Professor of Neural Science, Mathematics, Data Science and Psychology, NYU

Abstract: Inference problems in machine or biological vision generally rely on knowledge of prior probabilities, such as spectral or sparsity models. In recent years, machine learning has provided dramatic improvements in most of these problems using artificial neural networks, which are typically optimized using nonlinear regression to provide direct solutions for each specific task. As such, the prior probabilities are implicit, and intertwined with the tasks for which they are optimized. I'll describe properties of priors implicitly embedded in denoising networks, and describe methods for drawing samples from them. Extensions of these sampling methods enable the use of the implicit prior to solve any deterministic linear inverse problem, with no additional training, thus extending the power of supervised learning for denoising to a much broader set of problems. The method relies on minimal assumptions, exhibits robust convergence over a wide range of parameter choices, and achieves state-of-the-art levels of unsupervised performance for deblurring, super-resolution, and compressive sensing. It can also be used to examine the perceptual implications of physiological information processing.
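
To make the general idea concrete, here is a minimal illustrative sketch (not the speaker's implementation) of drawing a sample from the prior implicit in a denoiser: the residual of a least-squares Gaussian denoiser is proportional to the gradient of the log density of the noisy image, so repeatedly taking partial denoising steps while the injected noise shrinks performs a coarse-to-fine ascent of that implicit prior. Here denoise(y, sigma) is a hypothetical pretrained denoiser, and the step parameters are illustrative.

import numpy as np

def sample_implicit_prior(denoise, shape, sigma_start=1.0, sigma_end=0.01,
                          h=0.1, beta=0.5, seed=0):
    # denoise(y, sigma) is a hypothetical pretrained denoiser for noisy input y.
    rng = np.random.default_rng(seed)
    y = rng.normal(0.0, sigma_start, size=shape)        # start from pure noise
    sigma = sigma_start
    while sigma > sigma_end:
        d = denoise(y, sigma) - y                        # residual ~ sigma^2 * grad log p(y)
        sigma = np.sqrt(np.mean(d ** 2))                 # effective noise level of the iterate
        gamma = np.sqrt(max((1 - beta * h) ** 2 - (1 - h) ** 2, 0.0)) * sigma
        y = y + h * d + gamma * rng.normal(size=shape)   # partial step plus controlled noise
    return y

Constraining each iterate to agree with a linear measurement (for example, projecting onto images consistent with a blurred or subsampled observation) turns the same loop into a solver for the deterministic linear inverse problems mentioned in the abstract.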

Bio: Eero received a BA in Physics from Harvard (1984), a Certificate of Advanced Study in Mathematics from the University of Cambridge (1986), and an MS and PhD in Electrical Engineering and Computer Science from MIT (1988/1993). He was an assistant professor in the Computer and Information Science Department at the University of Pennsylvania from 1993 to 1996, and then moved to NYU as an assistant professor of Neural Science and Mathematics (later adding Psychology, and most recently, Data Science). Eero received an NSF CAREER award in 1996, an Alfred P. Sloan Research Fellowship in 1998, and became an Investigator of the Howard Hughes Medical Institute in 2000. He was elected a Fellow of the IEEE in 2008, and an associate member of the Canadian Institute for Advanced Research in 2010. He has received two Outstanding Faculty awards from the NYU GSAS Graduate Student Council (2003/2011), two IEEE Best Journal Article awards (2009/2010) and a Sustained Impact Paper award (2016), an Emmy Award from the Academy of Television Arts and Sciences for a method of measuring the perceptual quality of images (2015), and the Golden Brain Award from the Minerva Foundation for fundamental contributions to visual neuroscience (2017). His group studies the representation and analysis of visual images in biological and machine systems.

This will be an in-person only event.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: Improving Deep Reinforcement Learning via Quality Diversity, Open-Ended, and AI-Generating Algorithms

May 2, 2023 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar St. Cambridge, MA 02139
Speaker/s: Jeff Clune, Associate Professor, Computer Science, University of British Columbia; Canada CIFAR AI Chair and Faculty Member, Vector Institute; Senior Research Advisor, DeepMind

Abstract: Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will summarize how they enable robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solved all previously unsolved Atari games, including Montezuma's Revenge and Pitfall, considered by many to be grand challenges of AI research. I will next motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating curricula for robots to learn an expanding set of diverse skills. Finally, I'll argue that an alternate paradigm, AI-generating algorithms (AI-GAs), may be the fastest path to accomplishing our field's grandest ambition of creating general AI, and describe how QD, Open-Ended, and unsupervised pre-training algorithms (e.g. our recent work on video pre-training/VPT) will likely be essential ingredients of AI-GAs.
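
As a rough illustration of the "remember, return, explore" loop at the heart of Go-Explore (a simplified sketch, not the published implementation), the agent keeps an archive of distinct "cells" it has reached, returns to a stored cell by restoring the simulator state, and explores from there, adding any new or better-scoring cells it discovers. The gym-style environment interface (get_state/restore_state/step) and the cell_of downsampling function are hypothetical stand-ins.

import random

def go_explore(env, cell_of, iterations=1000, explore_steps=100):
    obs = env.reset()
    archive = {cell_of(obs): (env.get_state(), 0.0)}          # cell -> (saved state, best return)
    for _ in range(iterations):
        cell = random.choice(list(archive))                   # remember: pick a cell to revisit
        state, ret = archive[cell]
        env.restore_state(state)                              # return: restore the saved state
        for _ in range(explore_steps):                        # explore: act from that frontier
            obs, reward, done, _ = env.step(env.action_space.sample())
            ret += reward
            c = cell_of(obs)
            if c not in archive or ret > archive[c][1]:       # keep new or better cells
                archive[c] = (env.get_state(), ret)
            if done:
                break
    return archive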

Bio: Jeff Clune is an Associate Professor of computer science at the University of British Columbia and a Canada CIFAR AI Chair at the Vector Institute. Jeff focuses on deep learning, including deep reinforcement learning. Previously he was a research manager at OpenAI, a Senior Research Manager and founding member of Uber AI Labs (formed after Uber acquired a startup he helped lead), the Harris Associate Professor in Computer Science at the University of Wyoming, and a Research Scientist at Cornell University. He received degrees from Michigan State University (PhD, master's) and the University of Michigan (bachelor's). More on Jeff's research can be found at http://www.JeffClune.com or on Twitter (@jeffclune). Since 2015, he has won the Presidential Early Career Award for Scientists and Engineers from the White House, had two papers in Nature and one in PNAS, won an NSF CAREER award, received Outstanding Paper of the Decade and Distinguished Young Investigator awards, and had best paper awards, oral presentations, and invited talks at the top machine learning conferences (NeurIPS, CVPR, ICLR, and ICML). His research is regularly covered in the press, including the New York Times, NPR, NBC, Wired, the BBC, the Economist, Science, Nature, National Geographic, the Atlantic, and the New Scientist.

This will be an in-person only event.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM | Quest Seminar Series: Characterizing complex meaning in the human brain

Apr 25, 2023 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar St. Cambridge, MA 02139
Speaker/s: Leila Wehbe, Carnegie Mellon University

Abstract: Aligning neural network representations with brain activity measurements is a promising approach for studying the brain. However, it is not always clear what the ability to predict brain activity from neural network representations entails. In this talk, I will describe a line of work that utilizes computational controls (control procedures used after data collection) and other procedures to understand how the brain constructs complex meaning. I will describe experiments aimed at studying the representation of the composed meaning of words during language processing, and the representation of high-level visual semantics during visual scene understanding. These experiments shed new light on meaning representation in language and vision. 
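
To illustrate the kind of analysis involved (a generic sketch, not the speaker's pipeline), a voxelwise encoding model predicts brain responses from stimulus features with regularized regression, and a computational control asks whether a feature space of interest improves prediction beyond a control feature space. X_control, X_interest, and Y below are hypothetical arrays of shape (time points x features) and (time points x voxels).

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def encoding_performance(X, Y, alphas=np.logspace(-2, 4, 7), seed=0):
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.25, random_state=seed)
    pred = RidgeCV(alphas=alphas).fit(Xtr, Ytr).predict(Xte)
    # per-voxel correlation between predicted and measured held-out responses
    num = ((pred - pred.mean(0)) * (Yte - Yte.mean(0))).mean(0)
    return num / (pred.std(0) * Yte.std(0) + 1e-12)

# r_control = encoding_performance(X_control, Y)
# r_joint   = encoding_performance(np.hstack([X_control, X_interest]), Y)
# gain      = r_joint - r_control   # prediction unique to the feature space of interest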

Bio: Leila Wehbe is an assistant professor in the Machine Learning Department and the Neuroscience Institute at Carnegie Mellon University. Her work is at the interface of cognitive neuroscience and computer science. It combines naturalistic functional imaging with machine learning both to improve our understanding of the brain and to find insight for building better artificial systems. Previously, she was a postdoctoral researcher at UC Berkeley, working with Jack Gallant. She obtained her PhD from Carnegie Mellon University, where she worked with Tom Mitchell.

This will be an in-person only event.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM | Quest Seminar Series - Eleanor Jack Gibson: A Life in Science

Apr 11, 2023 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar St. Cambridge, MA 02139
Speaker/s: Elizabeth Spelke, Harvard University

Abstract: More than two decades after her death, Eleanor Gibson still may be the best experimental psychologist ever to work in the developmental cognitive sciences, yet her work appears to have been forgotten, or never learned, by many students and investigators today.  Here, drawing on three of Gibson’s autobiographies, together with her published research and a few personal recollections, I aim to paint a portrait of her life and science.  What’s it like to be a gifted and knowledgeable scientist, working in a world that systematically excludes people like oneself, both institutionally and socially?  What institutional actions support such people, both for their benefit and for the benefit of science and its institutions?  In this talk, I focus primarily on Gibson’s thinking and research, but her life and science suggest some answers to these questions and some optimism for the future of our fields.

Bio: Elizabeth Spelke is the Marshall L. Berkman Professor of Psychology at Harvard University and an investigator at the NSF-MIT Center for Brains, Minds and Machines. Her laboratory focuses on the sources of uniquely human cognitive capacities, including capacities for formal mathematics, for constructing and using symbols, and for developing comprehensive taxonomies of objects. She probes the sources of these capacities primarily through behavioral research on human infants and preschool children, focusing on the origins and development of their understanding of objects, actions, people, places, number, and geometry. In collaboration with computational cognitive scientists, she aims to test computational models of infants’ cognitive capacities. In collaboration with economists, she has begun to take her research from the laboratory to the field, where randomized controlled experiments can serve to evaluate interventions, guided by research in cognitive science, that seek to enhance young children’s learning.

This will be an in-person only event.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: Quantifying and Understanding Memorization in Deep Neural Networks

Mar 21, 2023 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker/s: Chiyuan Zhang, Google

Abstract: Deep learning algorithms are well known to have a propensity for fitting the training data very well and memorizing idiosyncratic properties of the training examples. From a scientific perspective, understanding memorization in deep neural networks sheds light on how those models generalize. From a practical perspective, understanding memorization is crucial to addressing privacy and security issues related to deploying models in real-world applications. In this talk, we present a series of studies centered on quantifying memorization in neural language models. We explain why, in many real-world tasks, memorization is necessary for optimal generalization. We also present quantitative studies on memorization, forgetting, and unlearning of both vision and language models, to better understand the behaviors and implications of memorization in those models.
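
One quantitative notion of memorization in this line of work can be phrased counterfactually: how much more likely is a model to get example i right when i was in its training set than when it was held out? The sketch below (a simplified illustration, not the speaker's code) estimates such per-example scores by training many models on random subsets; train(...) and correct(...) are hypothetical stand-ins for the training procedure and a correctness check.

import numpy as np

def memorization_scores(train, correct, dataset, n_models=20, keep_frac=0.7, seed=0):
    rng = np.random.default_rng(seed)
    n = len(dataset)
    hit_in, cnt_in = np.zeros(n), np.zeros(n)      # example included in the training subset
    hit_out, cnt_out = np.zeros(n), np.zeros(n)    # example held out of the training subset
    for _ in range(n_models):
        included = rng.random(n) < keep_frac
        model = train([dataset[i] for i in np.flatnonzero(included)])
        for i, example in enumerate(dataset):
            ok = float(correct(model, example))
            if included[i]:
                hit_in[i] += ok; cnt_in[i] += 1
            else:
                hit_out[i] += ok; cnt_out[i] += 1
    # high score: the model only gets the example right when it was trained on it
    return hit_in / np.maximum(cnt_in, 1) - hit_out / np.maximum(cnt_out, 1)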

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: The neural computations underlying real-world social interaction perception

Feb 7, 2023 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker/s: Leyla Isik, Johns Hopkins University

Bio: Leyla Isik is the Clare Boothe Luce Assistant Professor in the Department of Cognitive Science at Johns Hopkins University. Her research aims to answer the question of how humans extract complex information using a combination of human neuroimaging, intracranial recordings, machine learning, and behavioral techniques. Before joining Johns Hopkins, Isik was a postdoctoral researcher at MIT and Harvard in the Center for Brains, Minds, and Machines working with Nancy Kanwisher and Gabriel Kreiman. Isik completed her PhD at MIT where she was advised by Tomaso Poggio.

Abstract: Humans perceive the world in rich social detail. We effortlessly recognize not only objects and people in our environment, but also social interactions between people. The ability to perceive and understand social interactions is critical for functioning in our social world. We recently identified a brain region that selectively represents others’ social interactions in the posterior superior temporal sulcus (pSTS) across two diverse sets of controlled, animated videos. However, it is unclear how social interactions are processed in the real world where they co-vary with many other sensory and social features. In the first part of my talk, I will discuss new work using naturalistic fMRI movie paradigms and novel machine learning analyses to understand how humans process social interactions in real-world settings. We find that social interactions guide behavioral judgements and are selectively processed in the pSTS, even after controlling for the effects of other perceptual and social information, including faces, voices, and theory of mind. In the second part of my talk, I will discuss the computational implications of social interaction selectivity and present a novel graph neural network model, SocialGNN, that instantiates these insights. SocialGNN reproduces human social interaction judgements in both controlled and natural videos using only visual information, but requires relational, graph structure and processing to do so. Together, this work suggests that social interaction recognition is a core human ability that relies on specialized, structured visual representations.
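
To illustrate what "relational, graph structure and processing" means here (a generic message-passing sketch with untrained, random weights, not the SocialGNN architecture itself), treat each person in a frame as a node carrying visual features, connect people with edges, and let one round of message passing mix information between connected nodes before a graph-level readout produces an interaction judgement.

import numpy as np

rng = np.random.default_rng(0)
d = 16
W_self, W_msg, w_out = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)

def interaction_score(node_feats, edges):
    # node_feats: (n_people, d) visual features per person; edges: list of (i, j) pairs
    messages = np.zeros_like(node_feats)
    for i, j in edges:                                # exchange messages along edges
        messages[i] += node_feats[j] @ W_msg
        messages[j] += node_feats[i] @ W_msg
    h = np.tanh(node_feats @ W_self + messages)       # one round of relational updating
    return float(h.mean(axis=0) @ w_out)              # graph-level interaction readout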

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Quest | CBMM Seminar Series: Maps and Mother Love

Nov 22, 2022 - 4:00 pm
Venue: Virtual
Speaker/s: Margaret Livingstone, Harvard Medical School

Bio: Margaret Livingstone is the Takeda Professor of Neurobiology in the Blavatnik Institute of Neurobiology at Harvard Medical School. Livingstone has long been interested in how tuning properties of individual neurons can be clustered at a gross level in the brain. The lab began by looking at the parallel processing of different kinds of visual information, going back and forth between human psychophysics, anatomical interconnectivity of modules in the primate brain, and single unit receptive-field properties.

This will be a virtual event only.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM | Quest Seminar Series: The Second Face-Processing System

Nov 8, 2022 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Address: 43 Vassar St. Cambridge, MA 02139
Speaker/s: Prof. Winrich Freiwald, The Rockefeller University

Abstract: Current understanding of the neural mechanisms of face processing, and the computational principles they employ, is based primarily on studies of a set of fMRI-identified face areas inside macaque inferotemporal cortex. These face areas contain very high fractions of face cells, occur at reproducible locations, and are directly and selectively interconnected to form a face-processing network. Within this network, a processing hierarchy is implemented, along which an initial representation dominated mostly by image-based features is transformed, in two major steps, into a representation dominated by identity. The clarity of the system's organization and the qualitatively different representations across face areas have facilitated the understanding of the mechanisms and principles of hierarchical information processing. According to the standard view, the system instantiates a subsystem of the general object-processing ventral stream. In my talk, I will describe the discovery and basic properties of a second face-processing system. The second system shares one fundamental organizational feature, differs in many functional properties, and exhibits several surprising features such that its overall organization appears to be almost the inverse of the first system's pattern of organization. I will describe these results, which link to some of the oldest findings on face cells, propose a functional theory of this system's role, and discuss the impact of these findings on our implicit and explicit assumptions about face and object processing systems.

This will be an in-person event only. No Zoom, live stream, or recording will be available.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM | Quest Seminar Series - Reintegrating AI: Skills, Symbols, and the Sensorimotor Dilemma

Oct 18, 2022 - 4:00 pm
Venue: Singleton Auditorium (46-3002)
Speaker/s: Prof. George Konidaris, Brown University

Abstract: AI is, at once, an immensely successful field---generating remarkable ongoing innovation that powers whole industries---and a complete failure. Despite more than 50 years of study, the field has never settled on a widely accepted, or even well-formulated, definition of its primary scientific goal: designing a general intelligence. Instead it consists of siloed subfields studying isolated aspects of intelligence, each of which is important but none of which can reasonably claim to address the problem as a whole. But intelligence is not a collection of loosely related capabilities; AI is not about learning or planning, reasoning or vision, grasping or language---it is about all of these capabilities, and how they work together to generate complex behavior. We cannot hope to make progress towards answering the overarching scientific question without a sincere and sustained effort to reintegrate the field.

My talk will describe the current working hypothesis of the Brown Integrative, General-Purpose AI (bigAI) group, which takes the form of a decision-theoretic model that could plausibly generate the full range of intelligent behavior. Our approach is explicitly structuralist: we aim to understand how to structure an intelligent agent by reintegrating, rather than discarding, existing subfields into an intellectually coherent single model. The model follows from the claim that general intelligence can only coherently be ascribed to a robot, not a computer, and that the resulting interaction with the world can be well modeled as a decision process. Such a robot faces a sensorimotor dilemma: it must necessarily operate in a very rich sensorimotor space---one sufficient to support all the tasks it must solve, but that is therefore vastly overpowered for any single one. A core (but heretofore largely neglected) requirement for general intelligence is therefore the ability to autonomously formulate streamlined, task-specific representations, of the kind that single-task agents are typically assumed to be given. Our model also cleanly incorporates existing techniques developed in robotics, viewing them as innate knowledge about the structure of the world and the robot, and modeling them as the first few layers of a hierarchy of decision processes. Finally, our model suggests that language should ground to decision process formalisms, rather than abstract knowledge bases, text, or video, because such formalisms best characterize the principal task facing both humans and robots.
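
As a toy illustration of the payoff of a streamlined, task-specific decision process (a sketch of the general idea, not the bigAI group's model), suppose an abstraction has already mapped the robot's rich sensorimotor stream onto a handful of symbolic states and skills; the resulting abstract MDP is small enough that simple value iteration solves it. The two-state task and skill names below are hypothetical.

def value_iteration(P, R, gamma=0.95, iters=200):
    # P[s][a]: list of (probability, next_state); R[s][a]: reward for taking a in s.
    # States and actions are task-specific abstractions (e.g. skills), not raw sensor readings.
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: max(R[s][a] + gamma * sum(p * V[s2] for p, s2 in P[s][a]) for a in P[s])
             for s in P}
    return V

# Hypothetical abstract task: get from 'at_door' to 'through_door' via an 'open_and_go' skill.
P = {'at_door': {'open_and_go': [(0.9, 'through_door'), (0.1, 'at_door')],
                 'wait': [(1.0, 'at_door')]},
     'through_door': {'stay': [(1.0, 'through_door')]}}
R = {'at_door': {'open_and_go': 0.0, 'wait': 0.0}, 'through_door': {'stay': 1.0}}
print(value_iteration(P, R)['at_door'] > 0)   # True: the abstract plan reaches the goal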

Speaker Bio: George Konidaris is an Associate Professor of Computer Science and director of the Intelligent Robot Lab at Brown, which forms part of bigAI (Brown Integrative, General AI). He is also the Chief Roboticist of Realtime Robotics, a startup based on his research on robot motion planning. Konidaris focuses on understanding how to design agents that learn abstraction hierarchies that enable fast, goal-oriented planning. He develops and applies techniques from machine learning, reinforcement learning, optimal control and planning to construct well-grounded hierarchies that result in fast planning for common cases, and are robust to uncertainty at every level of control.

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Brains, Minds, and Machines Seminar Series: How fly neurons compute the direction of visual motion

Mar 22, 2022 - 2:00 pm
Venue: Virtual (Zoom)
Speaker/s: Alexander Borst, Max-Planck-Institute of Neurobiology, Martinsried, Germany

This talk will be fully remote via a Zoom Webinar.

Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Only recently, much progress has been made in the fruit fly Drosophila by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
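
A classic textbook model of this neighbour-comparison computation is the Hassenstein-Reichardt correlator, sketched below purely for illustration (it is a conceptual model, not the specific T4/T5 biophysical mechanism discussed in the talk): each subunit multiplies a delayed, low-pass-filtered copy of one photoreceptor's signal with its neighbour's undelayed signal, and subtracting the two mirror-symmetric subunits yields a direction-selective response.

import numpy as np

def lowpass(x, tau, dt=1.0):
    y = np.zeros_like(x)
    for t in range(1, len(x)):
        y[t] = y[t - 1] + dt / tau * (x[t - 1] - y[t - 1])   # first-order filter acts as a delay
    return y

def reichardt(left, right, tau=10.0):
    # net positive output for motion from the 'left' photoreceptor towards the 'right' one
    return lowpass(left, tau) * right - lowpass(right, tau) * left

# A drifting grating reaches the left photoreceptor slightly before the right one.
t = np.arange(500, dtype=float)
left, right = np.sin(0.1 * t), np.sin(0.1 * (t - 5))
print(reichardt(left, right).mean() > 0)   # True: rightward motion gives a positive mean response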

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu
