Weekly Research Meetings

Research Meeting: "Parallel systems for social and spatial reasoning within the brain's apex network"

Apr 27, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Dr. Ben Deen (Rockefeller University)

Host: Prof. Winrich Freiwald (Rockefeller University)

Abstract: What is the cognitive and neural architecture of core reasoning systems for understanding people and places? In this talk, we will outline a novel theoretical framework, arguing that internal models of people and places are implemented by two systems that are separate but parallel, both in cognitive structure and neural machinery. Both of these systems are anatomically positioned at the apex of the cortical hierarchy, and both interact closely with the medial temporal lobe declarative memory system, to update models of specific familiar people and places based on experience. Next, we test foundational predictions of this framework with a human fMRI experiment. Participants were scanned on tasks involving visual perception, semantic judgment, and episodic simulation of close familiar people and places. Across the three tasks, conditions involving familiar people and places elicited responses in distinct but parallel networks of association cortex, including zones within medial prefrontal cortex, medial parietal cortex, and the temporo-parietal junction. Lastly, we address the question of how these systems emerged in evolution. By assessing fMRI responses in nonhuman primates viewing images of familiar and unfamiliar animals and objects, we identify subregions of medial prefrontal cortex with a similar profile of functional response and anatomical organization to human social reasoning areas. These results indicate that the cognitive and neural architecture supporting human social understanding may have emerged by a modification of existing cortical systems for spatial cognition and long-term memory.

Zoom link: https://mit.zoom.us/j/96168764797?pwd=bExnNjZ6THFMLzArcHB4TzlNaFBNZz09

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: "Learning language like children and connecting linguistics to neuroscience," by Dr. Andrei Barbu (InfoLab, CSAIL)

Apr 6, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Dr. Andrei Barbu, InfoLab, CSAIL

Host: Prof. Boris Katz (CSAIL, MIT)

Abstract: Children acquire language from very little data by observing and interacting with other agents and their environment. We demonstrate how, by combining methods from robotics, vision, and NLP with a compositional approach, we can create a semantic parser that acquires language with no direct supervision: just captioned videos and access to a physical simulator. Language that describes social situations is often overlooked; to fill this gap, we develop a simulator that supports both physical and social interactions. Current models in NLP, despite seeing orders of magnitude more data than children, routinely make mistakes related to physical and social interactions; this approach may help fill those gaps.

We will also discuss a new dataset and methodology for running large-scale experiments in the neuroscience of language: experiments on the scale of those performed with artificial language models in the NLP community. Being able to investigate the relationship between multiple linguistic concepts on the same neural data may reveal how parts of the language network relate to one another. We start by identifying parts of the language network which compute and predict the part of speech of an overheard word.
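The talk does not specify the decoding method; as a rough, hypothetical illustration of this kind of analysis, the sketch below tests whether per-word neural features predict part of speech with a cross-validated linear decoder. All data and variable names here are placeholders, not the talk's actual dataset.

    # Hypothetical sketch: does activity in a candidate region of the language
    # network predict the part of speech of each overheard word?
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_words, n_channels = 500, 64
    features = rng.normal(size=(n_words, n_channels))  # neural features per word
    pos_labels = rng.integers(0, 2, size=n_words)      # e.g., 0 = noun, 1 = verb

    decoder = LogisticRegression(max_iter=1000)
    scores = cross_val_score(decoder, features, pos_labels, cv=5)
    print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
    # Accuracy reliably above chance (0.5 here) would suggest the region
    # carries part-of-speech information.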

Zoom link: https://mit.zoom.us/j/93605238695?pwd=ejBFcWxoUHNIejBEQ3NXdGQrS0dpQT09

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: "Invariant representation of physical stability in the human brain" by Pramod R.T. (Kanwisher Lab)

Mar 16, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Dr. Pramod R.T., Kanwisher Lab

Host: Prof. Nancy Kanwisher (MIT)

Abstract: Successful engagement with the world requires the ability to predict what will happen next. Although some of our predictions are related to social situations concerning other people and what they will think and do, many of our predictions concern the physical world around us. We see not just a wineglass near the edge of the table but a wineglass about to smash on the floor; not just a plastic chair but one that can (or cannot) support our weight; not just a cup filled to the brim with coffee, but a cup at risk of spilling over and scalding our hands. The most basic prediction we make about the physical world is whether it is stable, and hence unlikely to change in the near future, or unstable, and likely to change. In this talk, I will present our recent work where we asked if judgements of physical stability are supported by the kinds of representations that have proven highly effective at visual object recognition in both machines and brains, or if determining the physical stability of natural scenes may require running a simulation in our head to see what, if anything, will happen next.

This Research Meeting is being hosted remotely via Zoom.

Zoom link: https://mit.zoom.us/j/94275354817?pwd=TzJ0bmlDc0FURHJ4OHhGT0FYeXQ5Zz09

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Module 2 Research Update

Feb 16, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Mengmi Zhang, Jie Zheng, and Will Xiao (Kreiman Lab)

Host: Prof. Gabriel Kreiman (Boston Children's Hospital, Harvard)

Speaker: Mengmi Zhang
Title: The combination of eccentricity, bottom-up, and top-down cues explains conjunction and asymmetric visual search
Abstract: Visual search requires complex interactions between visual processing, eye movements, object recognition, memory, and decision making. Elegant psychophysics experiments have described the task characteristics and stimulus properties that facilitate or slow down visual search behavior. En route towards a quantitative framework that accounts for the mechanisms orchestrating visual search, we propose an image-computable, biologically inspired computational model that takes a target and a search image as inputs and produces a sequence of eye movements. To compare the model against human behavior, we consider nine foundational experiments that demonstrate two intriguing principles of visual search: (i) asymmetric search costs when looking for an object A among distractors B versus the reverse situation of locating B among distractors A; (ii) the increase in search costs associated with feature conjunctions. The proposed model has three main components: an eccentricity-dependent visual feature processor learned from natural image statistics, bottom-up saliency, and target-dependent top-down cues. Without any prior exposure to visual search stimuli or any task-specific training, the model demonstrates the essential properties of search asymmetries and slower reaction times in feature conjunction tasks. Furthermore, the model generalizes to real-world search tasks in complex natural environments. It unifies previous theoretical frameworks into an image-computable architecture that can be directly and quantitatively compared against psychophysics experiments, and provides a mechanistic basis that can be evaluated in terms of the underlying neuronal circuits.
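As a rough, hypothetical illustration of the model's three-component structure (not the authors' implementation), the sketch below combines an eccentricity weighting, a bottom-up saliency map, and a top-down target map into a priority map and reads out a sequence of fixations; the maps, fall-off constant, and inhibition-of-return rule are all illustrative stand-ins.

    import numpy as np

    def eccentricity_weight(h, w, fixation):
        ys, xs = np.mgrid[0:h, 0:w]
        dist = np.hypot(ys - fixation[0], xs - fixation[1])
        return 1.0 / (1.0 + dist / 20.0)     # acuity falls off with eccentricity

    def scanpath(bottom_up, top_down, start, n_fixations=5):
        h, w = bottom_up.shape
        fix, visited = start, np.zeros((h, w), dtype=bool)
        path = []
        for _ in range(n_fixations):
            priority = eccentricity_weight(h, w, fix) * (bottom_up + top_down)
            priority[visited] = -np.inf      # crude inhibition of return
            fix = np.unravel_index(np.argmax(priority), priority.shape)
            visited[fix] = True
            path.append(fix)
        return path

    rng = np.random.default_rng(1)
    bottom_up = rng.random((100, 100))   # stand-in for a learned saliency map
    top_down = rng.random((100, 100))    # stand-in for target-feature similarity
    print(scanpath(bottom_up, top_down, start=(50, 50)))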

Speaker: Jie Zheng
Title: Neurons detect cognitive boundaries to structure episodic memories in humans
Abstract: While experience is continuous, memories are organized as discrete events. Cognitive boundaries are thought to segment experience and structure memory, but how this process is implemented remains unclear. We recorded the activity of single neurons in the human medial temporal lobe during the formation and retrieval of memories with complex narratives. Neurons responded to abstract cognitive boundaries between different episodes. Boundary-induced neural state changes during encoding predicted subsequent recognition accuracy but impaired event order memory, mirroring a fundamental behavioral tradeoff between content and time memory. Furthermore, the neural state following boundaries was reinstated during both successful retrieval and false memories. These findings reveal a neuronal substrate for detecting cognitive boundaries that transform experience into mnemonic episodes and structure mental time travel during retrieval.

Speaker: Will Xiao
Title: Adversarial images for the primate brain
Abstract: Deep artificial neural networks have been proposed as a model of primate vision. However, these networks are vulnerable to adversarial attacks, whereby introducing minimal noise can fool networks into misclassifying images. Primate vision is thought to be robust to such adversarial images. We evaluated this assumption by designing adversarial images to fool primate vision. To do so, we first trained a model to predict responses of face-selective neurons in macaque inferior temporal cortex. Next, we modified images, such as human faces, to match their model-predicted neuronal responses to a target category, such as monkey faces, with a small budget for pixel value change. These adversarial images elicited neuronal responses similar to the target category. Remarkably, the same images fooled monkeys and humans at the behavioral level. These results call for closer inspection of the adversarial sensitivity of primate vision, and show that a model of visual neuron activity can be used to specifically direct primate behavior.
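As a hedged sketch of the core optimization described above (not the study's code or its trained response predictor), one can perturb an image within a small pixel budget so that a differentiable stand-in model's predicted responses approach a target response pattern; the toy linear "response model" and all hyperparameters below are assumptions for illustration.

    import torch

    def adversarial_image(image, response_model, target_response,
                          epsilon=8 / 255, steps=50, lr=0.01):
        delta = torch.zeros_like(image, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            pred = response_model(image + delta)           # predicted neural responses
            loss = ((pred - target_response) ** 2).mean()  # match target pattern
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():                          # enforce pixel budget
                delta.clamp_(-epsilon, epsilon)
                delta.copy_((image + delta).clamp(0, 1) - image)
        return (image + delta).detach()

    # toy usage with a random linear stand-in for the neuronal response model
    torch.manual_seed(0)
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    img = torch.rand(1, 3, 32, 32)
    target = model(torch.rand(1, 3, 32, 32)).detach()  # e.g., a target-category response
    adv = adversarial_image(img, model, target)
    print(float((adv - img).abs().max()))              # stays within epsilon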


Zoom link: https://mit.zoom.us/j/92930528137?pwd=Z2ZiQnl3RjFGYzAvdXpZbkppaG5Mdz09

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Modular learning and reasoning on ARC

Feb 9, 2021 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Dr. Andrzej Banburski and Simon Alford (Poggio Lab)

Host: Dr. Hector Penagos (MIT)

Abstract: Current machine learning algorithms are highly specialized to whatever they are meant to do, e.g., playing chess, picking up objects, or recognizing objects. How can we extend this to a system that can solve a wide range of problems? We argue that this can be achieved by a modular system, one that adapts to different problems by changing only the modules chosen and the order in which they are applied. The recently introduced ARC (Abstraction and Reasoning Corpus) dataset serves as an excellent test of abstract reasoning. The tasks depend on a set of inbuilt human Core Knowledge priors, making them well suited to the modular approach. We implement these priors as the modules of a reasoning system and combine them using neural-guided program synthesis. We then discuss our ongoing efforts extending execution-guided program synthesis to a bidirectional search algorithm via function inverse semantics.
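The authors' neural-guided synthesizer is not reproduced here; as a toy-scale illustration of the underlying idea, the sketch below treats a few Core Knowledge-style grid transforms as modules and enumerates short compositions until one is consistent with all training pairs. A learned model would normally prioritize this search rather than brute-forcing it; the module set is a hypothetical simplification.

    import itertools
    import numpy as np

    MODULES = {
        "flip_h": lambda g: np.fliplr(g),
        "flip_v": lambda g: np.flipud(g),
        "rot90": lambda g: np.rot90(g),
        "transpose": lambda g: g.T,
    }

    def synthesize(train_pairs, max_depth=3):
        """Return the first module sequence consistent with all train pairs."""
        for depth in range(1, max_depth + 1):
            for names in itertools.product(MODULES, repeat=depth):
                ok = True
                for inp, out in train_pairs:
                    g = inp
                    for name in names:
                        g = MODULES[name](g)
                    if g.shape != out.shape or not np.array_equal(g, out):
                        ok = False
                        break
                if ok:
                    return names
        return None

    # toy task: the hidden rule is "rotate 90 degrees"
    inp = np.array([[1, 2], [3, 4]])
    print(synthesize([(inp, np.rot90(inp))]))   # ('rot90',)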

Zoom link: https://mit.zoom.us/j/99905020719?pwd=aGJJZlhXR00vY1N5Qkl6Rm4wKzh5Zz09

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: What Do Our Models Learn? by Prof. Aleksander Mądry

Nov 24, 2020 - 4:00 pm
Photo of Prof. Aleksander Mądry
Speaker/s: Prof. Aleksander Mądry, CSAIL, MIT

Abstract: Large-scale vision benchmarks have driven, and often even defined, progress in machine learning. However, these benchmarks are merely proxies for the real-world tasks we actually care about. How well do our benchmarks capture such tasks?

In this talk, I will discuss the alignment between our benchmark-driven ML paradigm and the real-world use cases that motivate it. First, we will explore examples of biases in the ImageNet dataset, and how state-of-the-art models exploit them. We will then demonstrate how these biases arise as a result of design choices in the data collection and curation processes.

Throughout, we illustrate how one can leverage relatively standard tools (e.g., crowdsourcing, image processing) to quantify the biases that we observe.

Based on joint work with Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Jacob Steinhardt, Dimitris Tsipras, and Kai Xiao.

Speaker bio: Prof. Mądry is the Director of the MIT Center for Deployable Machine Learning, the Faculty Lead of the CSAIL-MSR Trustworthy and Robust AI Collaboration, a Professor of Computer Science in the MIT EECS Department, and a member of both CSAIL and the Theory of Computation group.

His research spans algorithmic graph theory, optimization, and machine learning. In particular, he has a strong interest in building on existing machine learning techniques to forge a decision-making toolkit that is reliable and well-understood enough to be safely and responsibly deployed in the real world.

Research lab website: http://madry-lab.ml/

This research meeting will be hosted remotely via Zoom.

Zoom link: https://mit.zoom.us/j/97149851234?pwd=czZGbWFhYU41MXlLUzh4ZVZVUmt1Zz09
Passcode: 644798

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: A deep generative model for isometric embedding to quantitative data analysis

Nov 10, 2020 - 6:00 pm
Speaker/s: Dr. Keizo Kato, Fujitsu Laboratories Ltd., and Dr. Akira Nakagawa, Artificial Intelligence Laboratory, Fujitsu Laboratories Ltd.

Please note the change in start time: this research meeting will start at 6:00 pm EST.

Abstract: To analyze high-dimensional and complex real-world data, deep generative models such as the variational autoencoder (VAE) embed data in a low-dimensional latent space and learn a probabilistic model in that space. However, they struggle to accurately reproduce the probability distribution function (PDF) of the input space from that of the latent space. If the embedding were isometric, this issue could be solved, because the relation between the PDFs would become tractable. To achieve this isometric property, we propose a Rate-Distortion Optimization guided autoencoder inspired by orthonormal transform coding. We show that our method has the following properties: (i) the Jacobian matrix between the input space and a Euclidean latent space forms a constantly scaled orthonormal system, enabling isometric data embedding; (ii) the relations between inner products, distances, and PDFs in the two spaces become tractable, e.g., proportional. Thanks to these properties, our method outperforms state-of-the-art methods in unsupervised anomaly detection on four public datasets.

Furthermore, we show that a VAE can be mapped to an implicit isometric embedding with a scale factor derived from the posterior parameter. By interpreting the VAE as a non-linearly scaled isometric embedding, we provide a quantitative understanding of VAE properties. From this analysis, we find that previous discussions of the rate-distortion trade-off in beta-VAE have been inconsistent with the rate-distortion theory of transform coding.

Our method and analysis will help the development of quantitatively interpretable deep generative models. It is time to be free of the stress of interpreting the behavior of VAEs.
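The paper's exact rate-distortion objective is not reproduced here; as a loose illustration of the isometric-embedding idea, the sketch below trains a toy autoencoder with a penalty that encourages latent pairwise distances to match input pairwise distances up to a constant scale. The architecture, random stand-in data, and penalty weight are all assumptions.

    import torch

    encoder = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                                  torch.nn.Linear(64, 2))
    decoder = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                                  torch.nn.Linear(64, 20))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

    def isometry_penalty(x, z, scale=1.0):
        dx = torch.cdist(x, x)          # pairwise distances in input space
        dz = torch.cdist(z, z)          # pairwise distances in latent space
        return ((dz - scale * dx) ** 2).mean()

    for step in range(200):
        x = torch.randn(128, 20)        # stand-in for a real data batch
        z = encoder(x)
        recon = decoder(z)
        loss = ((recon - x) ** 2).mean() + 0.1 * isometry_penalty(x, z)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final loss: {loss.item():.3f}")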

This research meeting will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/97502771611?pwd=QkZ3cGkvQTM4OEt4N0R5Qmg5Q1Q0Zz09 

Passcode: 762312

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: "Successes and Failures of Neural Network Models of Hearing" by Prof. Josh McDermott

Oct 20, 2020 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Prof. Josh McDermott, Laboratory for Computational Audition, MIT

Speaker biography:

Josh McDermott obtained his PhD from MIT in 2006 and returned in January 2013 as an Assistant Professor in the Department of Brain and Cognitive Sciences, moving from Oxford University, where he was a visiting scientist during 2012. Prior to that, he was a research associate at New York University (2009-2012) and a postdoctoral fellow at the University of Minnesota (2007-2008). Dr. McDermott is the recipient of a Marshall Scholarship, a James S. McDonnell Foundation Scholar Award, and an NSF CAREER Award.

Dr. McDermott studies sound and hearing using tools from experimental psychology, engineering, and neuroscience. He seeks to understand how humans derive information from sound, and in particular how they succeed in real-world conditions that cause even the most powerful state-of-the-art computer algorithms to fail, for instance in recognizing speech amid background noise. He aims to use the contrast between biological and machine hearing systems to reveal the workings of biological hearing, to improve prosthetic devices for aiding those with hearing impairment, and to design better computer algorithms for analyzing sound. Research in his lab will explore how humans recognize real-world sound sources, segregate particular sounds from the mixture that enters the ear (the cocktail party problem), and remember and/or attend to particular sounds of interest. He also studies music perception and cognition.

Lab website: Laboratory for Computational Audition (McDermott Lab)

This research meeting will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/94850095309?pwd=Rkgrb1NMWXFTWjZ4ZUVpeUZGcVFldz09 

Passcode: 127615

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: "Emergence of Structure in Neural Network Learning" by Dr. Brian Cheung

Sep 22, 2020 - 4:00 pm
Venue: Hosted via Zoom
Speaker/s: Dr. Brian Cheung, BCS Computational Fellow, MIT

Abstract: Learning is one of the hallmarks of human intelligence. It marks a level of flexibility and adaptation to new information that no artificial model has achieved at this point. This remarkable ability to learn makes it possible to accomplish a multitude of cognitive tasks without requiring a multitude of information from any single task. As a new BCS Fellow in Computation, I will describe emergent phenomena that occur during learning for neural network models. First, I will discuss how learning well-defined tasks can lead to the emergence of structured representations complementary to the original task. This emergent structure appears at multiple levels within these models. From semantic factors of variation occurring in the hidden units of an autoencoder to physical structure appearing at the sensory input of an attention model, learning seems to influence all parts of a model. Then I will introduce current and future work that aims to endow neural networks with greater flexibility and adaptation in learning over types of data more akin to what naturally intelligent models experience in the real world.

This research meeting will be hosted remotely via Zoom.

Zoom Webinar link: https://mit.zoom.us/j/92359755680?pwd=STNKU2x0S0RXSGthMXhtcmNndEgrUT09 

Passcode: 832098

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Virtual Research Meeting: Hossein Mobahi

Jun 9, 2020 - 2:00 pm
Venue: Zoom
Speaker/s: Hossein Mobahi, Google Research

Title: Improving Generalization Performance by Self-Training and Self-Distillation

Abstract: In supervised learning we often seek a model which minimizes (to epsilon optimality) a loss function over a training set, possibly subject to some (implicit or explicit) regularization. Suppose you train a model this way and read out the predictions it makes over the training inputs, which may slightly differ from the training targets due to the epsilon optimality. Now suppose you treat these predictions as new target values and retrain another model from scratch using them instead of the original targets. Surprisingly, the second model can often outperform the original model in terms of accuracy on the test set. In fact, we may repeat this loop a few times, each time seeing an increase in generalization performance. This might sound strange, as such a supervised self-training process (aka self-distillation) receives no new information about the task and evolves solely by retraining itself. In this talk, I argue that this self-training process induces additional regularization, which gets amplified in each round of retraining. I will rigorously characterize these regularization effects when learning a function in a Hilbert space; this setting relates to neural networks of infinite width. I will conclude by discussing some open problems in the area of self-training and self-distillation.
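As a minimal illustration of this loop (with toy data and illustrative hyperparameters, not the talk's setup), the sketch below repeatedly refits a kernel ridge regressor (a Hilbert-space learner of the kind the talk analyzes) on its own training predictions:

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    X_tr = rng.uniform(-3, 3, size=(200, 1))
    X_te = rng.uniform(-3, 3, size=(200, 1))
    targets = np.sin(X_tr[:, 0]) + rng.normal(scale=0.3, size=200)  # noisy labels

    for round_ in range(4):
        model = KernelRidge(kernel="rbf", alpha=0.1, gamma=1.0)
        model.fit(X_tr, targets)
        test_mse = np.mean((model.predict(X_te) - np.sin(X_te[:, 0])) ** 2)
        print(f"round {round_}: test MSE = {test_mse:.4f}")
        targets = model.predict(X_tr)  # self-distillation: predictions become targets

Each round keeps the same data and regularizer; only the targets change, which is why any gain must come from the amplified regularization the talk describes.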

Link to Talk: https://mit.zoom.us/j/95694089706?pwd=TnplQVRTbWMxZmJKdS84NXRoY3k2QT09

Password: brains

Organizer: Jean Lawrence
Organizer Email: cbmm-contact@mit.edu
