Meetings

Research Meeting: Module 2 research presentation by Trenton Bricken and Will Xiao

Oct 19, 2021 - 4:00 pm
Venue: MIBR Seminar Room 46-3189
Address: MIT Building 46 | Brain and Cognitive Sciences Complex, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Trenton Bricken and Will Xiao, Kreiman Lab

Will Xiao's presentation

Title: What you see is what IT gets: Responses in primate visual cortex during natural viewing

Abstract: How does the brain support our ability to see? Studies of primate vision have typically focused on controlled viewing conditions exemplified by the rapid serial visual presentation (RSVP) task, where the subject must hold fixation while images are flashed briefly in randomized order. In contrast, during natural viewing, eyes move frequently, guided by subject-initiated saccades, resulting in a sequence of related sensory input. Thus, natural viewing departs from traditional assumptions of independent and unpredictable visual inputs, leaving it an open question how visual neurons respond in real life.

We recorded responses of inferior temporal (IT) cortex neurons in macaque monkeys freely viewing natural images. We first examined responses of face-selective neurons and found that face neurons responded according to whether individual fixations landed near a face, distinguishing single fixations. Second, we considered repeated fixations on nearly the same location, termed ‘return fixations.’ Responses were more similar during return fixations, again distinguishing individual fixations. Third, computational models could partially explain neuronal responses from an image crop centered on each fixation.

These results shed light on how the IT cortex does (and does not) contribute to our daily visual percept: a stable world despite frequent saccades.

Trenton Bricken's presentation:

Title: Attention Approximates Sparse Distributed Memory

Abstract: While Attention has come to be an important mechanism in deep learning, it emerged out of a heuristic process of trial and error, providing limited intuition for why it works so well. Here, we show that Transformer Attention closely approximates Sparse Distributed Memory (SDM), a biologically plausible associative memory model, under certain data conditions. We confirm that these conditions are satisfied in pre-trained GPT-2 Transformer models. We discuss the implications of the Attention-SDM mapping and provide new computational and biological interpretations of Attention.
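The flavor of the correspondence can be illustrated with a small, self-contained calculation (my own sketch, not taken from the paper): with unit-norm keys and query, the softmax weights used by Transformer Attention coincide with an exponential decay in squared query-key distance, which is the kind of distance-based weighting an SDM read applies (SDM proper uses hard Hamming-radius neighborhoods over binary addresses, which the exponential approximates).

    import numpy as np

    rng = np.random.default_rng(0)
    d, n = 64, 128
    K = rng.standard_normal((n, d))                 # keys / stored addresses
    K /= np.linalg.norm(K, axis=1, keepdims=True)
    V = rng.standard_normal((n, d))                 # values / stored contents
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)                          # query

    beta = 8.0  # inverse temperature (plays the role of the 1/sqrt(d_k) scaling)

    # Transformer-style read: softmax over dot products
    w_attn = np.exp(beta * (K @ q))
    w_attn /= w_attn.sum()

    # Distance-based read: exponential decay in squared Euclidean distance
    w_dist = np.exp(-(beta / 2) * np.sum((K - q) ** 2, axis=1))
    w_dist /= w_dist.sum()

    print(np.allclose(w_attn, w_dist))              # True: identical weightings for unit-norm vectors
    read_vector = w_attn @ V                        # weighted superposition of stored values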

---

The Fall 2021 CBMM Research Meetings will be hosted in a hybrid format. Please see the information below regarding attending the event either in person or remotely via Zoom.

Details to attend talk remotely via Zoom:

Zoom connection link: https://mit.zoom.us/j/95527039951?pwd=T2cvYnRyQ0F6elVKWWdXNVg3UWhaZz09

Guidance for attending in-person:

MIT attendees:
MIT attendees will need to be registered via the MIT COVIDpass system to have access to MIT Building 46.
Please visit https://covidpass.mit.edu/ for more information regarding MIT COVIDpass.

Non-MIT attendees:

MIT is currently welcoming visitors to attend talks in person. All visitors to the MIT campus are required to follow MIT COVID-19 protocols (see https://now.mit.edu/policies/campus-access-and-visitors/). Specifically, visitors are required to wear a face covering/mask while indoors and to use the new MIT TIM Ticket system for accessing MIT buildings. Per MIT’s event policy, use of the Tim Tickets system is required for all indoor events; for information about this and other current MIT policies, visit MIT Now.

Link to this event's MIT TIM TICKET: https://tim-tickets.atlas-apps.mit.edu/LmU6ubyLqvMEGYYh6

To access MIT Bldg. 46 with a TIM Ticket, please enter the building via the McGovern/Main Street entrance at 524 Main Street (on GPS). This entrance is equipped with a QR reader that can read the TIM Ticket. A map showing the location of this entrance, along with an image of it, is available at https://mcgovern.mit.edu/contact-us/

General TIM Ticket information:

A visitor may use a Tim Ticket to access Bldg. 46 any time between 6 a.m. and 6 p.m., Monday through Friday.

A Tim Ticket is a QR code that serves as a visitor pass. Named for MIT’s mascot, Tim the Beaver, it is the equivalent of giving someone your key to unlock a building door without actually giving up your keys.

This system allows MIT to collect basic information about visitors entering MIT buildings while providing MIT hosts a convenient way to invite visitors to safely access our campus.

Information collected by the TIM Ticket:

  • Name
  • Phone number
  • Email address
  • COVID-19 vaccination status (i.e., whether fully vaccinated or exempt)
  • Symptom status and wellness information for the day of visit

The Tim Tickets system can be accessed by invited guests through the MIT Tim Tickets mobile application (available for iOS 13+ or Android 7+) or on the web at visitors.mit.edu.

Visitors must acknowledge and agree to terms for campus access, confirm basic contact information, and submit a brief attestation about health and vaccination status. Visitors should complete these steps at least 30 minutes before scanning into an MIT building.

For more information on TIM Tickets, please visit https://covidapps.mit.edu/visitors#for-access

 

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: "Probing the mechanisms of visual object recognition with reversible chemogenetic modulation of macaque V4 neurons" by Dr. Kohitij Kar

Feb 15, 2022 - 4:00 pm
Venue: MIBR Seminar Room 46-3189
Address: MIT Building 46 | Brain and Cognitive Sciences Complex, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Dr. Kohitij Kar, DiCarlo Lab, MIT

The Spring 2022 CBMM Research Meetings will be hosted in a hybrid format. Please see the information below regarding attending the event either in person or remotely via Zoom.

Please note that MIT requires all attendees, including MIT COVIDpass users, to sign in to the event prior to entering the auditorium.

Abstract: We can computationally approximate a visual object's identity from the distributed neural activity patterns across a series of hierarchically connected brain areas (e.g., V4, IT) in the primate ventral stream. However, testing whether these circuits indeed play a causal role requires targeted neural perturbation strategies that enable discrimination amongst competing models. Here we probed the role of macaque V4 and its primary feedforward target, the inferior temporal (IT) cortex, during object recognition. We combined DREADDs-based chemogenetic inhibition of V4 neurons with large-scale electrophysiology in V4 and IT while simultaneously measuring the monkeys' image-by-image object recognition behavior. Our results provide causal evidence linking the ventral stream hierarchy with core object recognition behavior. In addition to providing a “yes” vs. “no” answer about the involvement of a brain area in a behavior, we demonstrate how our approach allows us to use direct causal perturbation data to discriminate amongst competing mechanistic brain models.

 

Link to attend talk remotely via Zoom:

Zoom link: https://mit.zoom.us/j/97558618808?pwd=b2RTVkZLUmpIL2Y3Szg2TG9RT1BoZz09

Guidance for attending in-person:

MIT attendees:
MIT attendees will need to be registered via the MIT COVIDpass system to have access to MIT Building 46.
Please visit https://covidpass.mit.edu/ for more information regarding MIT COVIDpass.

Non-MIT attendees:

MIT is currently welcoming visitors to attend talks in person. All visitors to the MIT campus are required to follow MIT COVID-19 protocols (see https://now.mit.edu/policies/campus-access-and-visitors/). Specifically, visitors are required to wear a face covering/mask while indoors and to use the new MIT TIM Ticket system for accessing MIT buildings. Per MIT’s event policy, use of the Tim Tickets system is required for all indoor events; for information about this and other current MIT policies, visit MIT Now.

Link to this event's MIT TIM TICKET: https://tim-tickets.atlas-apps.mit.edu/CmoDwauHJkuqMSBo8

To access MIT Bldg. 46 with a TIM Ticket, please enter the building via the McGovern/Main Street entrance at 524 Main Street (on GPS). This entrance is equipped with a QR reader that can read the TIM Ticket. A map showing the location of this entrance, along with an image of it, is available at https://mcgovern.mit.edu/contact-us/

General TIM Ticket information:

A visitor may use a Tim Ticket to access Bldg. 46 any time between 6 a.m. and 6 p.m., Monday through Friday.

A Tim Ticket is a QR code that serves as a visitor pass. Named for MIT’s mascot, Tim the Beaver, it is the equivalent of giving someone your key to unlock a building door without actually giving up your keys.

This system allows MIT to collect basic information about visitors entering MIT buildings while providing MIT hosts a convenient way to invite visitors to safely access our campus.

Information collected by the TIM Ticket:

  • Name
  • Phone number
  • Email address
  • COVID-19 vaccination status (i.e., whether fully vaccinated or exempt)
  • Symptom status and wellness information for the day of visit

The Tim Tickets system can be accessed by invited guests through the MIT Tim Tickets mobile application (available for iOS 13+ or Android 7+) or on the web at visitors.mit.edu.

Visitors must acknowledge and agree to terms for campus access, confirm basic contact information, and submit a brief attestation about health and vaccination status. Visitors should complete these steps at least 30 minutes before scanning into an MIT building.

For more information on TIM Tickets, please visit https://covidapps.mit.edu/visitors#for-access

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Annual Retreat

Aug 19, 2021 - 10:00 am

Tentative agenda (pdf) - updated 8/16/2021

Zoom connection details:
Topic: CBMM 2021 Retreat
Time: Aug 19, 2021 10:00 AM Eastern Time (US and Canada)

Join Zoom Meeting: https://mit.zoom.us/j/99434748270?pwd=Rm5LSjRDcUo0N2xtd2Z0NzlyMkJQQT09
Password: 123167

One tap mobile
+16465588656,,99434748270# US (New York)
+16699006833,,99434748270# US (San Jose)

Meeting ID: 994 3474 8270
US: +1 646 558 8656 or +1 669 900 6833
International Numbers: https://mit.zoom.us/u/ac3RVPJUxS
Join by SIP
99434748270@zoomcrc.com
Join by Skype for Business
https://mit.zoom.us/skype/99434748270

Organizer: Kathleen Sullivan
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Module 2 Research Update

Feb 16, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Mengmi Zhang, Jie Zheng, and Will Xiao (Kreiman Lab)

Host: Prof. Gabriel Kreiman (Children's, Harvard)

Speaker: Mengmi Zhang
Title: The combination of eccentricity, bottom-up, and top-down cues explains conjunction and asymmetric visual search
Abstract: Visual search requires complex interactions between visual processing, eye movements, object recognition, memory, and decision making. Elegant psychophysics experiments have described the task characteristics and stimulus properties that facilitate or slow down visual search behavior. En route towards a quantitative framework that accounts for the mechanisms orchestrating visual search, here we propose an image-computable biologically-inspired computational model that takes a target and a search image as inputs and produces a sequence of eye movements. To compare the model against human behavior, we consider nine foundational experiments that demonstrate two intriguing principles of visual search: (i) asymmetric search costs when looking for a certain object A among distractors B versus the reverse situation of locating B among distractors A; (ii) the increase in search costs associated with feature conjunctions. The proposed computational model has three main components: an eccentricity-dependent visual feature processor learnt through natural image statistics, bottom-up saliency, and target-dependent top-down cues. Without any prior exposure to visual search stimuli or any task-specific training, the model demonstrates the essential properties of search asymmetries and slower reaction time in feature conjunction tasks. Furthermore, the model can generalize to real-world search tasks in complex natural environments. The proposed model unifies previous theoretical frameworks into an image-computable architecture that can be directly and quantitatively compared against psychophysics experiments and can also provide a mechanistic basis that can be evaluated in terms of the underlying neuronal circuits.
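As a rough illustration of how the three components might interact (a sketch of my own, with made-up function and parameter names, not the authors' model), a priority map can be built by weighting target similarity and bottom-up saliency by an eccentricity-dependent falloff around the current fixation:

    import numpy as np

    def next_fixation(target_feat, search_feats, saliency, fixation, ecc_sigma=20.0):
        """Toy priority map combining top-down, bottom-up, and eccentricity cues.

        target_feat  : (C,) feature vector of the search target
        search_feats : (C, H, W) feature maps of the search image
        saliency     : (H, W) bottom-up saliency map
        fixation     : (row, col) current fixation
        """
        C, H, W = search_feats.shape
        # Top-down cue: similarity between target features and each location's features
        top_down = np.tensordot(target_feat, search_feats, axes=([0], [0]))      # (H, W)
        # Eccentricity dependence: influence falls off with distance from fixation
        rr, cc = np.mgrid[0:H, 0:W]
        ecc = np.sqrt((rr - fixation[0]) ** 2 + (cc - fixation[1]) ** 2)
        ecc_weight = np.exp(-ecc ** 2 / (2 * ecc_sigma ** 2))
        priority = ecc_weight * (top_down + saliency)
        return np.unravel_index(np.argmax(priority), priority.shape)             # next fixation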

Speaker: Jie Zheng
Title: Neurons detect cognitive boundaries to structure episodic memories in humans
Abstract: While experience is continuous, memories are organized as discrete events. Cognitive boundaries are thought to segment experience and structure memory, but how this process is implemented remains unclear. We recorded the activity of single neurons in the human medial temporal lobe during the formation and retrieval of memories with complex narratives. Neurons responded to abstract cognitive boundaries between different episodes. Boundary-induced neural state changes during encoding predicted subsequent recognition accuracy but impaired event order memory, mirroring a fundamental behavioral tradeoff between content and time memory. Furthermore, the neural state following boundaries was reinstated during both successful retrieval and false memories. These findings reveal a neuronal substrate for detecting cognitive boundaries that transform experience into mnemonic episodes and structure mental time travel during retrieval.

Speaker: Will Xiao
Title: Adversarial images for the Primate Brain
Abstract: Deep artificial neural networks have been proposed as a model of primate vision. However, these networks are vulnerable to adversarial attacks, whereby introducing minimal noise can fool networks into misclassifying images. Primate vision is thought to be robust to such adversarial images. We evaluated this assumption by designing adversarial images to fool primate vision. To do so, we first trained a model to predict responses of face-selective neurons in macaque inferior temporal cortex. Next, we modified images, such as human faces, to match their model-predicted neuronal responses to a target category, such as monkey faces, with a small budget for pixel value change. These adversarial images elicited neuronal responses similar to the target category. Remarkably, the same images fooled monkeys and humans at the behavioral level. These results call for closer inspection of the adversarial sensitivity of primate vision, and show that a model of visual neuron activity can be used to specifically direct primate behavior.
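The image-modification step can be sketched as a small projected-gradient loop (illustrative only; names such as response_model and eps are assumptions, and the authors' actual optimization may differ):

    import torch

    def make_adversarial(image, response_model, target_response, eps=8/255, steps=50, lr=1e-2):
        """Nudge an image, within an L-infinity budget `eps`, so a model of neuronal
        responses predicts the target category's response pattern.

        image           : (1, 3, H, W) tensor in [0, 1]
        response_model  : maps images to predicted neuronal responses
        target_response : desired response vector (e.g., mean response to monkey faces)
        """
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            pred = response_model(torch.clamp(image + delta, 0.0, 1.0))
            loss = torch.nn.functional.mse_loss(pred, target_response)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)   # enforce the small pixel-change budget
        return torch.clamp(image + delta, 0.0, 1.0).detach()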

 

Zoom link: https://mit.zoom.us/j/92930528137?pwd=Z2ZiQnl3RjFGYzAvdXpZbkppaG5Mdz09

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Modular learning and reasoning on ARC

Feb 9, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Dr. Andrzej Banburski and Simon Alford (Poggio Lab)

Host: Dr. Hector Penagos (MIT)

 

Abstract: Current machine learning algorithms are highly specialized to whatever it is they are meant to do — e.g., playing chess, picking up objects, or object recognition. How can we extend this to a system that could solve a wide range of problems? We argue that this can be achieved by a modular system — one that can adapt to solving different problems by changing only the modules chosen and the order in which those modules are applied to the problem. The recently introduced ARC (Abstraction and Reasoning Corpus) dataset serves as an excellent test of abstract reasoning. Well suited to the modular approach, the tasks depend on a set of inbuilt human Core Knowledge priors. We implement these priors as the modules of a reasoning system and combine them using neural-guided program synthesis. We then discuss our ongoing efforts extending execution-guided program synthesis to a bidirectional search algorithm via function inverse semantics.
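As a toy illustration of the modular idea (my own sketch, not the speakers' implementation), one can treat a handful of grid primitives as modules and search over short compositions of them for a program consistent with a task's training examples; the talk's system replaces this brute-force enumeration with neural guidance and bidirectional (inverse-semantics) search.

    from itertools import product
    import numpy as np

    PRIMITIVES = {
        "identity":  lambda g: g,
        "flip_h":    lambda g: np.fliplr(g),
        "flip_v":    lambda g: np.flipud(g),
        "rot90":     lambda g: np.rot90(g),
        "transpose": lambda g: g.T,
    }

    def synthesize(examples, max_depth=3):
        """examples: list of (input_grid, output_grid) numpy arrays."""
        for depth in range(1, max_depth + 1):
            for names in product(PRIMITIVES, repeat=depth):
                def program(g, names=names):
                    for n in names:
                        g = PRIMITIVES[n](g)
                    return g
                if all(np.array_equal(program(i), o) for i, o in examples):
                    return names   # first composition consistent with all examples
        return None

    examples = [(np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]))]
    print(synthesize(examples))    # e.g., ('flip_h',)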

 

Zoom link: https://mit.zoom.us/j/99905020719?pwd=aGJJZlhXR00vY1N5Qkl6Rm4wKzh5Zz09

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: What Do Our Models Learn? by Prof. Aleksander Mądry

Nov 24, 2020 - 4:00 pm
Speaker/s:  Prof. Aleksander Mądry, CSAIL, MIT

Abstract: Large-scale vision benchmarks have driven, and often even defined, progress in machine learning. However, these benchmarks are merely proxies for the real-world tasks we actually care about. How well do our benchmarks capture such tasks?

In this talk, I will discuss the alignment between our benchmark-driven ML paradigm and the real-world use cases that motivate it. First, we will explore examples of biases in the ImageNet dataset, and how state-of-the-art models exploit them. We will then demonstrate how these biases arise as a result of design choices in the data collection and curation processes.

Throughout, we illustrate how one can leverage relatively standard tools (e.g., crowdsourcing, image processing) to quantify the biases that we observe.

Based on joint work with Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Jacob Steinhardt, Dimitris Tsipras, and Kai Xiao.

Speaker bio: Prof. Mądry is the Director of the MIT Center for Deployable Machine Learning, the Faculty Lead of the CSAIL-MSR Trustworthy and Robust AI Collaboration, a Professor of Computer Science in the MIT EECS Department, and a member of both CSAIL and the Theory of Computation group.

His research spans algorithmic graph theory, optimization, and machine learning. In particular, he has a strong interest in building on existing machine learning techniques to forge a decision-making toolkit that is reliable and well-understood enough to be safely and responsibly deployed in the real world.

Research lab website: http://madry-lab.ml/

 

This research meeting will be hosted remotely via Zoom.

Zoom link: https://mit.zoom.us/j/97149851234?pwd=czZGbWFhYU41MXlLUzh4ZVZVUmt1Zz09
Passcode: 644798

 

 

 

 

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Katharina Dobs (Kanwisher, Module 3)

Jun 11, 2019 - 4:00 pm
Address: Harvard NW Building, Room 243
Speaker/s: Katharina Dobs & Ratan Murty

Murty talk title: Does face selectivity arise without visual experience with faces in the human brain?

Dobs talk title: Testing functional segregation of face and object processing in deep convolutional neural networks

Organizers: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Duncan Stothers, Will Xiao, and Nimrod Shaham

May 14, 2019 - 4:00 pm
Venue: Harvard NW Building, Room 243
Address: 52 Oxford St, Cambridge, MA 02138
Speaker/s: Duncan Stothers, Will Xiao, Nimrod Shaham

Duncan Stothers-

Title: Turing's Child Machine: A Deep Learning Model of Neural Development

Abstract:

Turing recognized development’s connection to intelligence when he proposed engineering a ‘child machine’ that becomes intelligent through a developmental process, instead of top-down hand-designing intelligence into an ‘adult machine’. We now know from neurobiology that the most important developmental process is the ‘critical period’, where the architecture (equivalently, connectome or topology) expands in a random way and then prunes itself down based on activity. The computational role of this process is unknown, but we know it is connected to intelligence because deprivation during this period has permanent negative effects later in life. Further, the fact that the connectome changes during this period through ‘architecture learning’, in addition to the synaptic weights changing through ‘synaptic weight learning’, sets it apart from deep learning AI research, where the architecture is hand designed, stays fixed during learning, and only ‘synaptic weight learning’ takes place. To understand development’s connection to biological and artificial intelligence, we model the critical period by adding random expansion and activity-based pruning steps to deep neural network training. Results suggest the critical period acts as an unsupervised architecture search process that finds exponentially small architectures that generalize well. Resultant architectures from this process also show similarities to hand-designed ones.
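A minimal grow-then-prune sketch of this recipe (my own illustration under assumed choices, e.g., magnitude-based pruning as a stand-in for activity-based pruning):

    import torch
    import torch.nn as nn

    def expand_then_prune(layer: nn.Linear, keep_fraction: float = 0.1) -> torch.Tensor:
        """Return a binary mask keeping only the largest-magnitude weights."""
        w = layer.weight.detach().abs()
        k = max(1, int(keep_fraction * w.numel()))
        threshold = w.flatten().topk(k).values.min()
        return (w >= threshold).float()

    layer = nn.Linear(512, 512)          # "expanded" random connectivity
    # ... train the network here; connections that carry signal grow in magnitude ...
    mask = expand_then_prune(layer, keep_fraction=0.05)
    with torch.no_grad():
        layer.weight.mul_(mask)          # prune: zero out the rest of the connectome
    print(f"{int(mask.sum())} of {mask.numel()} connections retained")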

 

Will Xiao-

Title: Uncovering preferred stimuli of visual neurons using generative neural networks

Abstract:

What information do neurons represent? This is a central question in neuroscience. Ever since Hubel and Wiesel discovered that neurons in primary visual cortex (V1) respond preferentially to bars of certain orientations, investigators have searched for preferred stimuli to reveal information encoded by neurons, leading to the discovery of cortical neurons that respond to specific motion directions (Hubel, 1959), color (Michael, 1978), binocular disparity (Barlow et al., 1967), curvature (Pasupathy & Connor, 1999), complex shapes such as hands or faces (Desimone et al., 1984; Gross et al., 1972), and even variations across faces (Chang & Tsao, 2017).

However, the classic approach for defining preferred stimuli depends on using a set of hand-picked stimuli, limiting possible answers to stimulus properties chosen by the investigator. Instead, we wanted to develop a method that is as general and free of investigator bias as possible. To that end, we used a generative deep neural network (Dosovitskiy & Brox, 2016) as a vast and diverse hypothesis space. A genetic algorithm guided by neuronal preferences searched this space for stimuli.

We evolved images to maximize firing rates of neurons in macaque inferior temporal cortex and V1. Evolved images often evoked higher firing rates than the best of thousands of natural images. Furthermore, evolved images revealed neuronal selective properties that were sometimes consistent with existing theories but sometimes also unexpected.

This generative evolutionary approach complements classical methods for defining neuronal selectivities, serving as an independent test and a hypothesis-generating tool. Moreover, the approach has the potential for uncovering internal representations in any modality that can be captured by generative neural networks.
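In outline, the evolutionary loop can be sketched as follows (an illustration of my own; generator and measure_firing_rate are hypothetical stand-ins for the generative network and the neuronal recording, and the selection/mutation scheme is simplified):

    import numpy as np

    def evolve_preferred_stimulus(generator, measure_firing_rate,
                                  latent_dim=4096, pop_size=40,
                                  n_generations=100, sigma=0.3, seed=0):
        rng = np.random.default_rng(seed)
        population = rng.standard_normal((pop_size, latent_dim))   # latent codes
        for _ in range(n_generations):
            images = [generator(z) for z in population]
            fitness = np.array([measure_firing_rate(img) for img in images])
            # Keep the top half, then refill the population with mutated copies
            parents = population[np.argsort(fitness)[-pop_size // 2:]]
            children = parents + sigma * rng.standard_normal(parents.shape)
            population = np.concatenate([parents, children])
        best = population[np.argmax([measure_firing_rate(generator(z)) for z in population])]
        return generator(best)    # the evolved (putatively preferred) image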

 

Nimrod Shaham-

Title: Continual learning and replay in a sparse forgetful Hopfield model

Abstract:

The brain has a remarkable ability to deal with an endless, continuous stream of information, while storing new memories and learning to perform new tasks. This is done without losing previously learned knowledge, which can be retained over timescales on the order of the animal’s life. In contrast, current artificial neural network models suffer from limited capacity (associative memory network models) and acute loss of performance in previously learned tasks after learning new ones (deep neural networks). Overcoming this limitation, known as catastrophic interference, is one of the main challenges in machine learning and theoretical neuroscience.

Here, we study a recurrent neural network that continually learns and stores sparse patterns of activity, while forgetting old ones (a palimpsestic model). Time-dependent forgetting is incorporated as a decay of old memories’ contributions to the weight matrix. We calculate the forgetting rate required to avoid catastrophic interference, and find the optimal decay rate that gives the maximal number of retrievable memories. Then, we introduce replay to the system, in the form of reappearance of previously stored patterns, and calculate how different replay schedules extend the time for which a memory remains retrievable. Our model reveals in a tractable and illuminating way how a recurrent neural network can learn continuously and store selected information over lifelong timescales.
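The storage rule being described can be sketched as follows (a toy version of my own with assumed parameters; the talk derives the forgetting and decay rates analytically): each new sparse pattern is added to the weight matrix while older contributions decay geometrically, so the retrieval signal is strong for recent memories and fades for older ones.

    import numpy as np

    rng = np.random.default_rng(0)
    N, f, decay, T = 500, 0.05, 0.98, 200      # neurons, sparsity, decay factor, number of patterns

    patterns = (rng.random((T, N)) < f).astype(float)
    W = np.zeros((N, N))
    for xi in patterns:                         # continual storage with forgetting
        W = decay * W + np.outer(xi - f, xi - f) / N
    np.fill_diagonal(W, 0.0)

    def retrieval_signal(xi):
        """Mean input to the pattern's active units minus mean input to its silent units."""
        h = W @ (xi - f)
        return h[xi == 1].mean() - h[xi == 0].mean()

    print("recent memory:", retrieval_signal(patterns[-1]))   # strong separation
    print("oldest memory:", retrieval_signal(patterns[0]))    # contribution has decayed toward zero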

Organizers: Daniel Zysman, Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

CBMM Research Meeting: Neural Processing of Object Manifolds

Jun 1, 2018 - 4:00 pm
Venue: McGovern Reading Room (46-5165)
Address: Brain and Cognitive Sciences Complex (MIT Bldg 46), 43 Vassar St., Cambridge MA 02139. The McGovern Reading Room (46-5165) is located on the fifth floor of the McGovern Institute for Brain Research at MIT (Main St. side of the building).
Speaker/s: Dr. SueYeon Chung (MIT, BCS Fellow in Computation)

Object manifolds arise when a neural population responds to an ensemble of sensory signals associated with different physical features (e.g., orientation, pose, scale, location, and intensity) of the same perceptual object. Object recognition and discrimination require classifying the manifolds in a manner that is insensitive to variability within a manifold. How neuronal systems give rise to invariant object classification and recognition is a fundamental problem in brain theory as well as in machine learning.

We studied the ability of a readout network to classify objects from their perceptual manifold representations. We developed a statistical mechanical theory for the linear classification of manifolds with arbitrary geometries. We show how special anchor points on the manifolds can be used to define novel geometrical measures of radius and dimension which can explain the linear separability of manifolds of various geometries. Theoretical predictions are corroborated by numerical simulations using recently developed algorithms to compute maximum margin solutions for manifold dichotomies.
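As a toy illustration of the separability question (my own sketch, not the paper's theory): sample point clouds from two synthetic "object manifolds" and test whether a max-margin linear readout separates every point of one from every point of the other.

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    D, n_points, radius = 100, 50, 0.5
    prototypes = rng.standard_normal((2, D))           # one prototype per object
    t = rng.standard_normal((2, n_points, D))           # within-manifold variability
    t /= np.linalg.norm(t, axis=2, keepdims=True)
    X = np.concatenate([prototypes[i] + radius * t[i] for i in range(2)])
    y = np.repeat([0, 1], n_points)

    clf = LinearSVC(C=1e6).fit(X, y)                     # approximate max-margin readout
    print("manifolds linearly separable:", clf.score(X, y) == 1.0)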

Our theory and its extensions provide a powerful and rich framework for applying statistical mechanics of linear classification to data arising from perceptual neuronal responses as well as to artificial deep networks trained for object recognition tasks. We demonstrate results from applying our method to both neuronal networks and deep networks for visual object recognition tasks.

Exciting future work lies ahead as manifold representations of the sensory world are ubiquitous in both biological and artificial neural systems. Questions for future work include: How do neural manifold representations reformat in biological sensory hierarchies? Could we characterize dynamical neural manifolds for complex sequential stimuli and behaviors? How do neural manifold representations evolve during learning? Can neural manifold separability be used as a design principle for artificial deep networks?

Organizers: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu
