Meetings

Research Meeting: Computational Feasibility of Artificial Human-Level Intelligence

Oct 29, 2024 - 4:00 pm
Venue:  Room 45-792 Speaker/s:  Eran Malach, Harvard University

Abstract: Modern machine learning models, in particular large language models, are approaching and even surpassing human-level performance on various benchmarks. In this talk, I will discuss the possibilities of, and barriers to, achieving human-level intelligence from a computational learning theory perspective. Specifically, I will talk about how auto-regressive next-token predictors can learn to solve computationally complex tasks. Additionally, I will discuss how generative models can “transcend” their training data, outperforming the experts that generate their data, with specific focus on learning to play chess from game transcripts.

Organizer:  Kathleen Sullivan Organizer Email:  cbmm-contact@mit.edu

Research Meeting: Lorenzo Rosasco

Sep 17, 2024 - 4:00 pm
Venue:  Room 45-792 Speaker/s:  Lorenzo Rosasco, Italian Institute of Technology (IIT), Università degli Studi di Genova

Abstract: Supervised learning is the problem of estimating a function from input and output samples. But how many samples are needed to achieve a prescribed accuracy?

This question can be answered only by restricting the class of problems—for example, considering functions that don’t vary much. But even in this case, we find that the number of samples needed depends exponentially on the dimension of the input—the so-called curse of dimensionality.

Since neural nets seem to learn well with much less data, it is natural to postulate that the underlying problems (functions) have more structure beyond bounded variation. The search for the right notion of “structure” has been quite elusive thus far, and I will discuss some recent results that emphasize the role of sparsity and compositionality.
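The exponential dependence is easy to see numerically. The following is a back-of-the-envelope illustration (my sketch, not material from the talk): approximating a generic smooth (e.g., Lipschitz) function on the unit cube to accuracy eps requires on the order of (1/eps)^d samples, which explodes with the input dimension d.

```python
# Rough illustration of the curse of dimensionality (illustrative, not from the talk):
# covering [0,1]^d at resolution eps needs roughly (1/eps)**d grid points, so a generic
# 1-Lipschitz function needs on that order of samples to be learned to accuracy eps.

def samples_needed(eps: float, d: int) -> int:
    """Order-of-magnitude sample count for accuracy eps in dimension d."""
    return int((1.0 / eps) ** d)

for d in (1, 2, 10):
    print(f"d={d:>2}: ~{samples_needed(0.1, d)} samples")
```

At eps = 0.1 the count goes from 10 samples in one dimension to ten billion in ten dimensions, which is why extra structure beyond bounded variation is needed to explain learning from realistic sample sizes.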

Bio: Lorenzo Rosasco is a professor at the University of Genova, a research affiliate at MIT, and a visiting scientist at the Italian Institute of Technology (IIT). He is a founder and coordinator of the Machine Learning Genova center (MaLGa) and the Laboratory for Computational and Statistical Learning, focusing on the theory, algorithms, and applications of machine learning. He obtained his PhD in 2006 from the University of Genova and was a visiting student at the Center for Biological and Computational Learning at MIT, the Toyota Technological Institute at Chicago (TTI-Chicago), and the Johann Radon Institute for Computational and Applied Mathematics. From 2006 to 2013, he worked as a postdoc and research scientist in the Brain and Cognitive Sciences Department at MIT. He is an ELLIS fellow and serves as the co-director of the "Theory, Algorithms and Computations of Modern Learning Systems" program as well as the ELLIS Genoa unit. Lorenzo has received several awards, including an ERC consolidator grant.

Organizer:  Kathleen Sullivan Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: Navigating the perceptual space with neural perturbations

Feb 27, 2024 - 3:00 pm
Venue:  McGovern Reading Room (46-5165) Speaker/s:  Arash Afraz, Ph.D., Chief of the Unit on Neurons, Circuits and Behavior, Laboratory of Neuropsychology, NIMH, NIH

Abstract: Local perturbation of neural activity in high-level visual cortical areas alters visual perception. Quantitative characterization of these perceptual alterations holds the key to understanding the mapping between patterns of neuronal activity and elements of perception. The complexity and subjectivity of these perceptual alterations make them difficult to study. I introduce a new experimental approach, “Perceptography”, to develop “pictures” of the subjective experience induced by optogenetic cortical stimulation in the inferior temporal cortex of macaque monkeys.

Bio: Dr. Arash Afraz received his MD from Tehran University of Medical Sciences in 2003. In 2005 he joined the Vision Science Laboratory at Harvard and studied spatial constraints of face recognition under the mentorship of Dr. Patrick Cavanagh. Dr. Afraz received his PhD in Psychology from Harvard University in 2009. Immediately afterwards, he joined Dr. James DiCarlo’s group at MIT as a postdoctoral fellow to study the neural underpinnings of face and object recognition. Dr. Afraz started at NIMH as a principal investigator in 2017 to lead the Unit on Neurons, Circuits and Behavior (Afraz group), which studies the neural mechanisms of visual object recognition. The research team is particularly interested in establishing causal links between neural activity in the ventral stream of visual processing in the brain and object recognition behavior. The group combines visual psychophysics with conventional methods of single-unit recording as well as microstimulation, drug microinjection, and optogenetics to bridge the gap between neural activity and visual perception.

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: A Neural Hypothesis for Language

Dec 7, 2023 - 2:30 pm
Venue:  McGovern Seminar Room (46-3189) Speaker/s:  Daniel Mitropolsky, Columbia University

Abstract: How do neurons, in their collective action, beget cognition, as well as intelligence and reasoning? As Richard Axel recently put it, we do not have a logic for the transformation of neural activity into thought and action; he identified discerning this logic as the most important future direction of neuroscience. I will present a mathematical neural model of brain computation called NEMO, whose key ingredients are spiking neurons, random synapses and weights, local inhibition, and Hebbian plasticity (no backpropagation). Concepts are represented by interconnected assemblies of co-firing neurons that emerge organically from the dynamics of the model's equations. It turns out that complex operations can be carried out on these concept representations, such as copying, merging, completion from small subsets, and sequence memorization. NEMO is a neuromorphic computational system that, because of its simplifying assumptions, can be simulated efficiently on modern hardware. I will present how to use NEMO to implement an efficient parser of a small but non-trivial subset of English, and a more recent model of the language organ in the baby brain that learns the meaning of words, and basic syntax, from whole sentences with grounded input. In addition to constituting hypotheses about the logic of the brain, we will discuss how principles from these brain-like models might be used to improve AI, which, despite astounding recent progress, still lags behind humans in several key dimensions such as creativity, handling hard constraints, and energy consumption.
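As a toy illustration of the assembly dynamics the abstract describes (my own sketch, not the speaker's code; all sizes and rates are arbitrary choices): random synapses, a k-winners-take-all cap standing in for local inhibition, and multiplicative Hebbian updates are enough for a stable assembly of co-firing neurons to emerge in response to a repeatedly presented stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 50        # neurons in the area; cap size (local inhibition keeps k winners)
p, beta = 0.05, 0.10   # synapse probability; Hebbian learning rate
rounds = 25

# Random synapses: stimulus -> area and recurrent area -> area (no backpropagation).
W_stim = (rng.random((n, k)) < p).astype(float)   # from k always-firing stimulus neurons
W_rec = (rng.random((n, n)) < p).astype(float)    # within-area recurrence
np.fill_diagonal(W_rec, 0.0)

winners = np.array([], dtype=int)
overlaps = []                                      # overlap of consecutive winner sets
for _ in range(rounds):
    drive = W_stim.sum(axis=1)                     # feedforward drive from the stimulus
    if winners.size:
        drive += W_rec[:, winners].sum(axis=1)     # recurrent drive from last winners
    new = np.argsort(drive)[-k:]                   # k-winners-take-all = local inhibition
    overlaps.append(len(np.intersect1d(new, winners)) / k)
    # Hebbian update: strengthen synapses into the winners from neurons that fired.
    W_stim[new, :] *= 1.0 + beta
    if winners.size:
        W_rec[np.ix_(new, winners)] *= 1.0 + beta
    winners = new

print(f"final overlap between consecutive winner sets: {overlaps[-1]:.2f}")
```

Because winning neurons have their incoming weights multiplied each round, the same subset quickly dominates the competition and the winner set stabilizes, i.e., an assembly forms.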

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

CBMM Weekly Research Meeting: Excitatory local lateral connectivity is sufficient for reproducing cortex-like topography

Nov 7, 2023 - 4:00 pm
Venue:  McGovern Reading Room (46-5165) Speaker/s:  Pouya Bashivan, McGill University

Abstract:

Across the primate neocortex, neurons that perform similar functions tend to be spatially grouped together. How such organization emerges and why have been debated extensively, with various models successfully replicating aspects of cortical topography using cost functions and learning rules designed to induce topographical structures. However, these models often compromise task learning capabilities and rely on strong assumptions about learning in neural circuits. I will introduce two new approaches for training topographically organized neural networks that substantially improve the trade-off between task performance and topography while also simplifying the assumptions about learning in neural circuits required to obtain brain-like topography. In particular, I will show that excitatory local lateral connectivity is sufficient for simulating cortex-like topographical organization without the need for any topography-promoting learning rules or objectives. I will also discuss the implications of this model for the link between topographical organization and robust representations. 
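To make the core claim concrete, here is a minimal sketch (my construction, not the speaker's model): on a 1D grid of units with random feature tuning, a single pass of distance-decaying excitatory lateral mixing already makes neighboring units respond similarly, producing smooth, topography-like structure without any topography-promoting learning rule or objective.

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, n_features = 200, 32

# Random feature tuning for units arranged on a 1D "cortical sheet".
tuning = rng.standard_normal((n_units, n_features))

# Excitatory local lateral kernel: each unit mixes in its neighbors' activity,
# with strength decaying with cortical distance (sigma in grid steps).
dist = np.abs(np.arange(n_units)[:, None] - np.arange(n_units)[None, :])
W = np.exp(-(dist ** 2) / (2 * 3.0 ** 2))
W /= W.sum(axis=1, keepdims=True)

def neighbor_similarity(t):
    """Mean cosine similarity between adjacent units' tuning vectors."""
    a, b = t[:-1], t[1:]
    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return cos.mean()

before = neighbor_similarity(tuning)
after = neighbor_similarity(W @ tuning)   # one pass of excitatory lateral mixing
print(f"neighbor similarity: {before:.2f} -> {after:.2f}")
```

Before mixing, adjacent units are uncorrelated; after mixing, nearby units share tuning, which is the signature of cortex-like topography the talk is about.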

Bio:

Pouya Bashivan is an Assistant Professor in the Department of Physiology at McGill University, a member of the Integrated Program in Neuroscience, and an associate member of the Quebec AI Institute (MILA). Prior to joining McGill University, he was a postdoctoral fellow at MILA working with Drs. Irina Rish and Blake Richards. Prior to that he was a postdoctoral researcher at the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, MIT, working with Professor James DiCarlo. He received his PhD in computer engineering from the University of Memphis in 2016. Before that, Pouya studied control engineering and earned a B.Sc. and an M.Sc. degree in electrical and control engineering from KNT University (Tehran, Iran).

The goal of research in the Bashivan lab is to develop neural network models that leverage memory to solve complex tasks. While we often rely on task-performance measures to find improved neural network models and learning algorithms, we also use neural and behavioral measurements from humans and other animals to evaluate the similarity of these models to biologically evolved brains. We believe that these additional constraints could expedite progress towards engineering a human-level artificially intelligent agent.

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: Using Embodied AI to help answer “why” questions in systems neuroscience

Sep 19, 2023 - 4:00 pm
Venue:  MIBR Reading Room 46-5165 Speaker/s:  Aran Nayebi, ICoN Postdoctoral Fellow at MIT

Abstract:

Deep neural networks trained on high-variation tasks (“goals”) have had immense success as predictive models of the human and non-human primate visual pathways. More specifically, a positive relationship has been observed between model performance on ImageNet categorization and neural predictivity. Past a point, however, improved categorization performance on ImageNet does not yield improved neural predictivity, even between very different architectures. In this talk, I will present two case studies, in rodents and in primates, that demonstrate a more general correspondence between self-supervised learning of visual representations relevant to high-dimensional embodied control and increased gains in neural predictivity.

In the first study, we develop the (currently) most precise model of the mouse visual system, and show that self-supervised, contrastive algorithms outperform supervised approaches in capturing neural response variance across visual areas. By “implanting” these visual networks into a biomechanically realistic rodent body navigating to rewards in a novel maze environment, we observe that the artificial rodent with a contrastively optimized visual system obtains more reward across episodes than its supervised counterpart. The second case study examines mental simulations in primates: we show that self-supervised video foundation models that predict the future state of their environment in latent spaces supporting a wide range of sensorimotor tasks align most closely with human error patterns and macaque frontal cortex neural dynamics. Taken together, our findings suggest that self-supervised learning of visual representations that are reusable for downstream Embodied AI tasks may be a promising way forward to study the evolutionary constraints of neural circuits in multiple species.

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

CBMM | Quest Research Meeting / Town Hall

Sep 13, 2022 - 4:00 pm
Venue:  McGovern Reading Room (fallback Singleton Auditorium 46-3002) Speaker/s:  Profs. Jim DiCarlo (Director, MIT Quest) and Tomaso Poggio (Director, CBMM)

Topic: New Home: CBMM is now part of the MIT Quest for Intelligence Initiative

We will have a combined CBMM | Quest research meeting to discuss CBMM’s recent move to the MIT Quest for Intelligence Initiative (MIT Quest). A reception will be held immediately following the meeting. We hope you will be able to join us.

Organizer:  Kris Brewer Organizer Email:  brew@mit.edu

Research Meeting [Virtual]: "Using language to understand the world and the brain" by Dr. Andrei Barbu and Yen-Ling Kuo

Nov 30, 2021 - 4:00 pm
Venue:  MIBR Seminar Room 46-3189 Address:  MIT Building 46 | Brain and Cognitive Sciences Complex, 43 Vassar Street, Cambridge MA 02139 Speaker/s:  Dr. Andrei Barbu, Research Scientist, InfoLab, CSAIL MIT Yen-Ling Kuo, InfoLab, CSAIL MIT

Please note the change in date and format for this research meeting. The meeting will be held on Tues., Nov. 30, 2021 (previously scheduled for Nov. 23rd) and will take place in a fully remote format via Zoom.

Abstract: Language, and more generally the principle of compositionality, provides a window through which humans generalize their knowledge between radically different settings. We will discuss how compositionality can be incorporated into robotic models to enable them to act rationally in new scenarios while following novel commands. We then apply this principle to creating machines that reason about sequences of actions and about social interactions. Finally, we will present a new dataset and analysis that helps shed light on how language is processed by the brain.

Zoom link: https://mit.zoom.us/j/99978802387?pwd=OVpQTjE0VkUwRmxWUnp5RWJQUVBUdz09

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

Research Meeting: Module 3 research presentation

Nov 2, 2021 - 4:00 pm
Venue:  MIBR Seminar Room 46-3189 Address:  MIT Building 46 | Brain and Cognitive Sciences Complex, 43 Vassar Street, Cambridge MA 02139 Speaker/s:  Vivian Paulun

Title: Wobbling, drooping, bouncing—Visual perception of materials and their properties

Abstract: Visual inference of material properties like mass, compliance, elasticity or fragility is crucial to predicting and interacting with our environment. Yet, it is unclear how the brain achieves this remarkable ability. How materials move, flow, fold or deform depends not only on their internal properties but also on many external factors. For example, the observable behavior of an elastic bouncing object depends not only on its elasticity but also on its initial position and velocity. Estimating elasticity requires disentangling these different contributions to the observed motion. Predicting the future path of the object requires a forward simulation given the estimated latent parameters. I will present a set of experiments in which we investigated how accurately human observers estimate the elasticity of bouncing objects or predict their future path. Furthermore, I will discuss the nature of the visual information observers use as well as the limitations of their internal model.
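As a toy version of the kind of forward model involved (my sketch, not the study's stimuli): with coefficient of restitution e, each bounce rescales rebound speed by e and hence peak height by e², so the latent elasticity can in principle be recovered from the ratio of successive bounce heights.

```python
def bounce_heights(h0: float, e: float, n: int) -> list[float]:
    """Peak heights of the first n bounces of a ball dropped from h0.

    e is the coefficient of restitution (the 'elasticity' latent):
    each bounce rescales rebound speed by e, hence peak height by e**2.
    """
    heights = []
    h = h0
    for _ in range(n):
        h *= e * e
        heights.append(h)
    return heights

def estimate_elasticity(heights: list[float]) -> float:
    """Recover e from the ratio of successive observed bounce heights."""
    return (heights[1] / heights[0]) ** 0.5

obs = bounce_heights(h0=2.0, e=0.8, n=4)
print(estimate_elasticity(obs))  # recovers 0.8 up to float precision
```

The inverse step (estimating e) is trivial here precisely because the toy model removes the external factors the abstract emphasizes; with unknown initial position and velocity, the observer must disentangle those contributions as well.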

The Fall 2021 CBMM Research Meetings will be hosted in a hybrid format. Please see the information included below regarding attending the event either in person or remotely via Zoom.

Guidance for attending in-person:

MIT attendees:
MIT attendees will need to be registered via the MIT COVIDpass system to have access to MIT Building 46.
Please visit URL https://covidpass.mit.edu/ for more information regarding MIT COVIDpass.

Non-MIT attendees:

MIT is currently welcoming visitors to attend talks in person. All visitors to the MIT campus are required to follow MIT COVID19 protocols, see URL https://now.mit.edu/policies/campus-access-and-visitors/.  Specifically, visitors are required to wear a face-covering/mask while indoors and use the new MIT TIM Ticket system for accessing MIT buildings. Per MIT’s event policy, use of the Tim Tickets system is required for all indoor events; for information about this and other current MIT policies, visit MIT Now.

Link to this event's MIT TIM TICKET: https://tim-tickets.atlas-apps.mit.edu/21MMMGhwm1nKfPsd8

To access MIT Bldg. 46 with a TIM Ticket, please enter the building via the McGovern/Main Street entrance - 524 Main Street (on GPS). This entrance is equipped with a QR reader that can read the TIM Ticket. A map of the location of, and an image of, this entrance is available at URL: https://mcgovern.mit.edu/contact-us/

General TIM Ticket information:

A visitor may use a Tim Ticket to access Bldg. 46 any time between 6 a.m. and 6 p.m., M-F

A Tim Ticket is a QR code that serves as a visitor pass. A Tim Ticket, named for MIT’s mascot, Tim the Beaver, is the equivalent of giving someone your key to unlock a building door, without actually giving up your keys.

This system allows MIT to collect basic information about visitors entering MIT buildings while providing MIT hosts a convenient way to invite visitors to safely access our campus.

Information collected by the TIM Ticket:

  • Name

  • Phone number

  • Email address

  • COVID-19 vaccination status (i.e., whether fully vaccinated or exempt)

  • Symptom status and wellness information for the day of visit

The Tim Tickets system can be accessed by invited guests through the MIT Tim Tickets mobile application (available for iOS 13+ or Android 7+) or on the web at visitors.mit.edu.

Visitors must acknowledge and agree to terms for campus access, confirm basic contact information, and submit a brief attestation about health and vaccination status. Visitors should complete these steps at least 30 minutes before scanning into an MIT building.

For more information on the TIM Tickets, please visit https://covidapps.mit.edu/visitors#for-access

Details to attend talk remotely via Zoom:

Zoom link: https://mit.zoom.us/j/95324900421?pwd=bUZzVFo4M2oyVTR3Skd3K1BSSlExZz09

 

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu

Research Meeting: Module 2 research presentation by Drs. Jie Zheng and Mengmi Zhang

Oct 26, 2021 - 4:00 pm
Venue:  MIBR Seminar Room 46-3189 Speaker/s:  Drs. Jie Zheng and Mengmi Zhang. Please note: Dr. Zhang will be presenting remotely via Zoom.

Abstract:

Jie Zheng's presentation:

Title: Neurons that structure memories of ordered experience in humans

Abstract: The process of constructing temporal associations among related events is essential to episodic memory. However, the neural mechanism that accomplishes this function remains unclear. To address this question, we recorded single-unit activity in humans while subjects performed a temporal order memory task. During encoding, subjects watched a series of clips (each clip consisted of 4 events) and were later instructed to retrieve the ordinal information of the event sequences. We found that hippocampal neurons in humans could index specific orders of events with increased neuronal firing (i.e., rate order cells) or clustered spike timing relative to theta phases (i.e., phase order cells), both of which are transferable across different encoding experiences (e.g., different clips). Rate order cells also increased their firing rates when subjects correctly retrieved the temporal information of their preferred ordered events. Phase order cells demonstrated stronger phase precession at event transitions during encoding for clips whose ordinal information was subsequently correctly retrieved. These results not only highlight the critical role of the hippocampus in structuring memories of continuous event sequences but also suggest a potential neural code representing temporal associations among events.

 

Mengmi Zhang's [virtual] presentation:

Title: Visual Search Asymmetry: Deep Nets and Humans Share Similar Inherent Biases

Abstract: Visual search is a ubiquitous and often challenging daily task, exemplified by looking for the car keys at home or a friend in a crowd. An intriguing property of some classical search tasks is an asymmetry such that finding a target A among distractors B can be easier than finding B among A. To elucidate the mechanisms responsible for asymmetry in visual search, we propose a computational model that takes a target and a search image as inputs and produces a sequence of eye movements until the target is found. The model integrates eccentricity-dependent visual recognition with target-dependent top-down cues. We compared the model against human behavior in six paradigmatic search tasks that show asymmetry in humans. Without prior exposure to the stimuli or task-specific training, the model provides a plausible mechanism for search asymmetry. We hypothesized that the polarity of search asymmetry arises from experience with the natural environment. We tested this hypothesis by training the model on an augmented version of ImageNet where the biases of natural images were either removed or reversed. The polarity of search asymmetry disappeared or was altered depending on the training protocol. This study highlights how classical perceptual properties can emerge in neural network models without task-specific training, as a consequence of the statistical properties of the developmental diet fed to the model. This work will be presented at the upcoming NeurIPS 2021 conference. All source code and stimuli are publicly available at this https URL

---

The Fall 2021 CBMM Research Meetings will be hosted in a hybrid format. Please see the information included below regarding attending the event either in person or remotely via Zoom.

Details to attend talk remotely via Zoom:

Zoom connection link: https://mit.zoom.us/j/91580119583?pwd=Z212ZjM3MFNFSHNTYlcyaUJZbjQrQT09

Guidance for attending in-person:

Link to this event's MIT TIM TICKET: https://tim-tickets.atlas-apps.mit.edu/o1T9dFk9TTcDfzKF8

Please see the MIT COVIDpass and TIM Ticket guidance above (under the Nov. 2 meeting entry) for details on accessing MIT Building 46.

Organizer:  Hector Penagos Organizer Email:  cbmm-contact@mit.edu
