Weekly Research Meetings

Research Meeting: Making a Science from the Computer Vision Zoo

Mar 26, 2019 - 4:00 pm
Venue: Harvard NW Building, Room 243
Address: 52 Oxford Street, Cambridge, MA 02138
Speaker/s: Xavier Boix Bosch

Abstract: Recent progress in computer vision has raised new, unresolved questions about the emergent properties of vision algorithms. Understanding the emergent behavior of computer vision algorithms can fuel the engineering of computer vision and help us understand biological intelligence. In this talk, I will discuss three recent contributions that advance our understanding of the generalization capabilities of deep neural networks: i) a failure mode shared with humans, in which a network's object recognition accuracy on natural images drops sharply due to small changes in the visible region; ii) theoretical and empirical results on the generalization capabilities of state-of-the-art networks for image segmentation; and iii) the correspondence between individual units in deep neural networks and neurons in the brain.

Organizers: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Hippocampal Remapping as Learned Clustering of Experiences (Honi Sanders)

Feb 6, 2019 - 4:00 pm
Venue: MIT 46-6011
Address: MIT Bldg 46, Rm 6011, 43 Vassar St, Cambridge MA 02139
Speaker/s: Honi Sanders (Wilson + Gershman labs)

Abstract: The place cells of the hippocampus create distinct maps of each context, a process known as hippocampal remapping. Past work has asked which environmental features determine which map is used, but no consistent answer has emerged. This approach, however, has overlooked that context identification is part of a larger learning process.

To address the question of which features determine which map is used, we must explicitly confront the complexities that make context learning intrinsically difficult. The brain does not know a priori which features of the environment will be relevant, nor does it have direct access to context identity labels. Fundamentally, this corresponds to an unsupervised clustering problem: the brain receives a stream of experiences and must cluster them in a data-driven manner.

Our results emphasize that learning plays a large role in hippocampal remapping. Formalizing context learning as a clustering problem allows us to capture a range of experimental results that have not previously been explained by a single theoretical framework. The model also yields novel predictions, such as the effect of variability during training, and novel analyses, such as characterization of animal-to-animal variability.
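To make the clustering framing concrete, here is a minimal illustrative sketch, not the speaker's actual model: each incoming experience vector is assigned to the nearest existing cluster, or spawns a new cluster when it lies far from all of them. The distance-threshold rule and the `threshold` value are assumptions made purely for illustration.

```python
import numpy as np

def cluster_experiences(experiences, threshold=1.5):
    """Assign each experience vector in a stream to a context cluster.

    A new cluster is created when an experience lies farther than
    `threshold` from every existing cluster centroid; otherwise the
    nearest centroid is updated incrementally (a running mean).
    """
    centroids, counts, labels = [], [], []
    for x in experiences:
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - c) for c in centroids]
        if not dists or min(dists) > threshold:
            # Experience is unlike any known context: open a new cluster.
            centroids.append(x.copy())
            counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            # Fold the experience into the nearest existing cluster.
            k = int(np.argmin(dists))
            counts[k] += 1
            centroids[k] += (x - centroids[k]) / counts[k]
            labels.append(k)
    return labels

# Two noisy "contexts": experiences drawn around (0, 0) and around (5, 5).
rng = np.random.default_rng(0)
stream = np.vstack([
    rng.normal([0, 0], 0.2, size=(10, 2)),
    rng.normal([5, 5], 0.2, size=(10, 2)),
])
labels = cluster_experiences(stream)
print(labels)
```

The sketch recovers the two underlying contexts without ever seeing context labels, which is the sense in which context identification here is unsupervised.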

Organizers: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

CBMM Research Meeting: From agents, to actions, to interactions: multiple brain networks for perceiving social scenes

Sep 14, 2018 - 4:00 pm
Venue: Harvard, Northwest Building, Rm 243
Address: 52 Oxford St, Cambridge MA 02138
Speaker/s: Julia Sliwa, The Rockefeller University

Abstract: Our brain continuously decodes the complex visual scenes unfolding in front of us: both the nature of material entities, such as individuals and objects, and their immaterial interactions. I will briefly discuss individual recognition in monkeys and then turn to interaction processing. Interactions are fundamental in that they reveal hidden properties of intentional agents, such as their thoughts and feelings, and of objects, such as their mass or material. Where and how interaction analyses are implemented in the brain is unknown. Using whole-brain functional magnetic resonance imaging in macaque monkeys, we discovered a network centered in the medial and ventrolateral prefrontal cortex that is exclusively engaged in social interaction analysis. Two additional networks, a parieto-premotor and a temporal one, exhibited a preference for both social and physical interactions, which, in the temporal lobe, mapped onto a fine-grained pattern of object, body, and face selectivity. The extent and location of a dedicated system for social interaction analysis in monkeys suggest that this function is an evolutionary forerunner of human mind-reading capabilities.

Organizers: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Integration of high-throughput robotics with software for biological design and experimental planning

Nov 9, 2018 - 4:00 pm
Venue: MIT 46-5165
Address: MIT Bldg 46-5165, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Dr. Marilene Pavan, Boston University

Abstract: A fledgling biofoundry is taking shape within the new Biological Design Center at Boston University. The mission of the DAMP (Design, Automation, Manufacturing, and Prototyping) Laboratory is to develop novel biological systems using formal representations of protocols and experiments for the specify-design-build-test cycle. The ultimate goal is to produce faster, more scalable, and reproducible research results. Spearheaded by Dr. Douglas Densmore, an associate professor in the Department of Electrical and Computer Engineering, the facility is uniquely software-focused and plans to integrate high-throughput robotics with software for biological design and experimental planning. This approach is designed to meet the growing demand for standards and scalability across academia and industry in the field of synthetic biology.

Speaker biography: Marilene Pavan is the Scientific Manager of the DAMP Lab (damplab.org) and Lab Manager of the CIDAR (Cross-disciplinary Integration of Design Automation Research) Lab at Boston University, working on synthetic biology and automation projects, mainly within the Living Computing Project (livingcomputing.org). Her roles and expertise include, but are not limited to: automation of genetic circuit assembly, cell-free systems, evaluation and implementation of synthetic biology tools, microfluidics, science outreach programs (stempathways.org), iGEM mentorship, lab management, and training of new students.

Previously, she worked on metabolic engineering, molecular biology, and synthetic biology as part of the research staff at leading companies including Monsanto S.A. and Braskem S.A. (both in Brazil), and at JBEI (jbei.org) in California.

Organizers: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

CBMM Research Meeting: The neural basis of human social interaction perception

Jul 13, 2018 - 4:00 pm
Venue: McGovern Reading Room (46-5165)
Address: 43 Vassar St., Cambridge MA 02139 (5th floor of the McGovern Institute for Brain Research at MIT, on the Main St. side of the building)
Speaker/s: Dr. Leyla Isik, Kanwisher Lab (MIT) and Kreiman Lab (Children's Hospital Boston, Harvard Medical School)

Abstract:

Social interaction perception is a crucial part of the human visual experience that develops early in infancy and is shared with other primates. However, it remains largely unknown how humans compute information about others' social interactions from visual input. In the first part of my talk, I will present work identifying a neural correlate of social interaction perception in the human brain (Isik et al., 2017). Specifically, we observe a strong univariate response in the posterior superior temporal sulcus (pSTS) to stimuli depicting social interactions between two agents, compared with (i) pairs of agents not interacting with each other, (ii) physical interactions between inanimate objects, and (iii) individual animate agents pursuing goals and interacting with inanimate objects. This region may underlie our ability to understand the structure of our social world and navigate within it.

This work underscores the importance of social interaction perception but leaves unanswered the question of how quickly and automatically it occurs. Is social interaction detection a rapid, feedforward perceptual process, or a slower post-perceptual inference? To answer this question, we used magnetoencephalography (MEG) decoding to ask when the human brain detects third-party social interactions. In particular, subjects in the MEG viewed snapshots of real-world scenes containing a pair of people who were either engaged in a social interaction or acting independently. We could read out the presence versus absence of a social interaction from subjects' MEG data extremely quickly, as early as 150 ms after stimulus onset. This decoding latency is very similar to previously reported decoding latencies of primarily feedforward visual processes, such as invariant object recognition. Importantly, this decoding does not seem to be based on low-level image properties: these images are not decodable based on pixel intensity or the output of a V1-like model, and the social interaction decoding we observed occurs considerably later than the decoding of low-level image identity we observed in the same subjects. These results suggest that the detection of social interactions is a rapid feedforward perceptual process, rather than a slow post-perceptual inference.
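The time-resolved decoding logic described above can be sketched with synthetic data; this is not the study's actual pipeline, sensor counts, or classifier, only an illustration of the general technique: train and cross-validate a separate classifier at each timepoint, and read off when accuracy first rises above chance. The nearest-class-mean classifier and all numeric parameters here are assumptions for illustration.

```python
import numpy as np

def nearest_mean_accuracy(X_t, y, n_folds=5):
    """Cross-validated accuracy of a nearest-class-mean classifier
    on single-timepoint sensor patterns X_t (trials x sensors)."""
    idx = np.arange(len(y))
    correct = 0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        m0 = X_t[train][y[train] == 0].mean(axis=0)   # class-0 mean pattern
        m1 = X_t[train][y[train] == 1].mean(axis=0)   # class-1 mean pattern
        d0 = np.linalg.norm(X_t[test] - m0, axis=1)
        d1 = np.linalg.norm(X_t[test] - m1, axis=1)
        correct += np.sum((d1 < d0) == (y[test] == 1))
    return correct / len(y)

# Synthetic MEG-like data: trials x sensors x timepoints, with a
# class-dependent signal injected from timepoint 30 onward to mimic
# decodable information emerging some time after stimulus onset.
rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 100, 32, 60
y = rng.integers(0, 2, n_trials)              # interaction present vs. absent
X = rng.normal(0.0, 1.0, (n_trials, n_sensors, n_times))
X[:, :8, 30:] += y[:, None, None] * 1.0       # signal only in the late window

# One decoder per timepoint: accuracy hovers near chance (0.5) before
# the signal appears and rises well above chance afterward.
accuracy = np.array([nearest_mean_accuracy(X[:, :, t], y) for t in range(n_times)])
print(accuracy[:30].mean().round(2), accuracy[30:].mean().round(2))
```

The earliest timepoint at which accuracy reliably exceeds chance plays the role of the 150 ms latency reported in the talk: it marks when the decodable information first becomes available in the recorded signal.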

Organizers: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu
