Weekly Research Meetings

CBMM Weekly Research Meeting: Scale-invariant representation of space, time, and number

May 5, 2015 - 4:00 pm
Venue: Harvard University: Northwest Bldg., Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Marc Howard (BU)
Host: Sam Gershman

Abstract: The Weber-Fechner law is a foundational rule of psychophysics that applies to many sensory dimensions. Biologically, the Weber-Fechner law can be implemented by a set of cells with receptive fields supporting a logarithmic scale. Psychologically, we have the ability to preferentially access subsets of these scales, for instance by directing attention to a particular part of retinal space or a particular range of frequencies. We propose that many forms of memory utilize an analogous scale-invariant representation for time. However, it is computationally non-trivial to construct and update a Weber-Fechner timeline. A scale-invariant timeline could be constructed by updating the Laplace transform of history in real time and then approximating the inverse Laplace transform with lateral inhibition circuits. This formalism can be adapted to compute scale-invariant representations of other variables, such as spatial location or number, whose time derivative is available. We review neurophysiological evidence consistent with this view, including data from hippocampal place cells and time cells. Given the ubiquity of Weber-Fechner scale representations, this raises the question of how the brain might compute with Weber-Fechner registers. We suggest that many of the elemental operations necessary for flexible cognitive computation could be readily accomplished in the Laplace domain.
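
For context, here is a minimal numerical sketch of such a timeline, assuming a bank of leaky integrators indexed by decay rate s (so that dF/dt = -s F + f(t) maintains the Laplace transform of the input's history) and Post's formula for the approximate inverse. The grid of rates, the pulse input, and the order k are illustrative choices, not details from the talk.

```python
import math
import numpy as np

# A bank of leaky integrators, indexed by decay rate s, maintains the
# Laplace transform of the input's history: dF/dt = -s*F + f(t).
k = 4                                # order of the approximate inverse
s = np.logspace(-1.5, 1.0, 200)      # log-spaced rates (Weber-Fechner grid)
F = np.zeros_like(s)
dt = 0.01

# Present a brief input pulse at t = 0, then let 10 s of history elapse.
for t in np.arange(0.0, 10.0, dt):
    f_t = 1.0 if t < 0.1 else 0.0
    F += dt * (-s * F + f_t)

# Post's approximate inverse Laplace transform: the k-th derivative of F
# with respect to s estimates the input as it was tau_star = k/s seconds
# ago, yielding a log-compressed, scale-invariant timeline.
dF = F.copy()
for _ in range(k):
    dF = np.gradient(dF, s)          # finite differences on the s grid
f_tilde = ((-1) ** k / math.factorial(k)) * s ** (k + 1) * dF
tau_star = k / s                     # internal time axis, log-spaced
```

In the biological proposal, the derivative with respect to s is computed by lateral inhibition across the cell bank rather than by finite differences.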

CBMM Weekly Research Meeting: Infants' Understanding of Social Actions

Apr 7, 2015 - 4:15 pm
Venue: MIT
Address: Room 46-3310, Cambridge, MA 02139, United States
Speaker/s: Lindsey Powell (CBMM Thrust 1, CBMM Thrust 4)

Topic: Infants' Understanding of Social Actions

Abstract: Intentional human actions fall into at least two partially separable classes: actions aimed at interacting with objects and actions aimed at interacting with people. The principles by which these two types of actions are effective vary substantially, and thus the means by which people recognize object- vs. socially-directed actions, and the inferences they make when observing them, are likely to differ as well. Research with both infants and adults suggests that they use a principle of rational efficiency with respect to physical constraints to identify actions aimed at object-based goals. In contrast, socially meaningful actions (including gestures, vocalizations, and ritual behaviors) are typically inefficient as means toward physical outcomes. Instead, they gain effectiveness by being shared by, and thus mutually interpretable to, both the actor and the social partner toward whom the action is directed. Building on past work showing that infants expect members of social groups to engage in the same behaviors, I will present several completed studies supporting the conclusion that infants expect such shared behaviors only (a) when the actions are not efficient means toward external changes in the world, and (b) when infants have previously seen two social partners both engage in the action. These results suggest that even in the first year of life infants have some understanding of the principles by which social actions work, as well as an expectation that social actions will be non-overlapping with object-directed actions. I will also present ongoing and future work exploring (1) an early-developing gender difference in the tendency to interpret actions as social or physically causal and (2) how infants might learn about actions that have both physical and social functions. Finally, I will discuss the importance of these results for understanding early social learning, a key component of the development of human intelligence.

CBMM Weekly Research Meeting: Implementing Probabilistic Graphical Models with Chemical Reaction Networks

Mar 31, 2015 - 4:00 pm
Venue: Harvard University: Northwest Bldg., Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Ryan Adams, Harvard University

Abstract:

Recent work on molecular programming has explored new possibilities for computational abstractions with biomolecules, including logic gates, neural networks, and linear systems. In the future, such abstractions might enable nanoscale devices that can sense and control the world at a molecular scale. Just as in macroscale robotics, it is critical that such devices can learn about their environment and reason under uncertainty. At this small scale, systems are often modeled as chemical reaction networks. I will describe a procedure by which arbitrary probabilistic graphical models, represented as factor graphs over discrete random variables, can be compiled into chemical reaction networks that implement inference. I will show how marginalization based on sum-product message passing can be implemented in terms of reactions between chemical species whose concentrations represent probabilities. The steady-state concentrations of these species correspond to the marginal distributions of the random variables in the graph. As with standard sum-product inference, this procedure yields exact results for tree-structured graphs and approximate solutions for loopy graphs.

This is joint work with Nils Napp.
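
As a point of reference, here is a minimal software sketch of the marginalization such a reaction network implements, with species concentrations standing in for unnormalized message values; the factor values are made up for illustration.

```python
import numpy as np

# Sum-product marginalization on a tiny tree-structured factor graph
# over two binary variables A and B. In the chemical implementation,
# each message value would be the concentration of a chemical species.
phi_A  = np.array([0.2, 0.8])        # unary factor over A
phi_AB = np.array([[0.9, 0.1],       # pairwise factor over (A, B)
                   [0.3, 0.7]])

# Message from the pairwise factor to B:
#   m(b) = sum_a phi_A(a) * phi_AB(a, b)
m_to_B = phi_A @ phi_AB

# The marginal of B is the normalized incoming message (exact on trees);
# in the reaction network this is the steady-state concentration profile.
p_B = m_to_B / m_to_B.sum()
print(p_B)                           # [0.42 0.58]
```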

CBMM Weekly Research Meeting: Tools for Brain-Wide Mapping of the Computations of Intelligence

Mar 10, 2015 - 4:00 pm
Venue: Harvard University: Northwest Bldg., Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Ed Boyden, CBMM Thrust 2: Circuits for Intelligence

Abstract:
Ideally we would have maps of the molecular and anatomical circuitry of the brain, as well as of its dynamic activity, with sufficient detail to reveal how brain circuits generate the computations that support intelligent behavior. Our group is working on three new approaches to address this need.

First, we have developed a fundamentally new super-resolution light microscopy technology that is faster, on a per-voxel basis, than any other super-resolution technology. We anticipate that our new microscopy method, and the improved versions we are currently working on, will enable imaging of molecular and anatomical information throughout entire brain circuits, and perhaps even entire brains.

Second, we have adapted for neuroscience the technology of plenoptic, or light-field, microscopy, which enables single-shot 3-D images to be acquired without moving parts and can therefore be used to record high-speed movies of neural activity (Nature Methods 11:727-730). We are continuing to improve such microscopes, to the point where they may be useful for imaging the entire mammalian cortex.

Finally, we are working to establish the world's smallest mammal, the Etruscan shrew, as a model system in visual neuroscience. The Etruscan shrew has a small brain, with a six-layer cortex just a few hundred microns thick, and a visual cortex containing perhaps just 75,000 neurons, fewer than the larval zebrafish. It is small enough that entire molecular and anatomical maps, as well as dynamic activity maps, of the visual cortex might be feasible with the above tools in the near future. We will seek to answer the CBMM challenge questions in the context of the Etruscan shrew visual system.

CBMM Weekly Research Meeting: Thinking in patterns: representations in the neural basis of theory of mind

Feb 24, 2015 - 4:00 pm
Venue: Harvard University: Northwest Bldg., Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Jorie Koster-Hale, CBMM Thrust 4 (MIT, Saxe Lab), Moral Psychology Lab (Harvard U.)

Topic: Thinking in patterns: representations in the neural basis of theory of mind

Abstract: Social life depends on understanding other people's behavior: why they do the things they do, and what they are likely to do next. These actions are just the observable consequences of an unobservable, internal causal structure: the person's intentions, beliefs, and goals. A cornerstone of the human capacity for social cognition is the ability to reason about these invisible causes: having a "theory of mind." A remarkable body of evidence has demonstrated that social cognition reliably and selectively recruits a specific group of brain regions. Building on prior work, which has for the most part focused on where in the brain mental state reasoning occurs, the research presented here investigates how neural populations encode the concepts underlying mental state inference.

I demonstrate that functional neuroimaging can detect behaviorally relevant features of mental state representations within the cortical regions that support social cognition, in three domains: intention (the relationship between beliefs and action), knowledge source (the relationship between beliefs and perceptual evidence), and emotion (the relationship between beliefs and feelings). I argue that these features are abstract, continuous, and related to human behavior. This work provides a key next step in understanding the neural basis of social cognition by demonstrating that it is possible to find abstract features of mental state inferences inside "social" brain regions, and by taking a first step toward characterizing their content and format.
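
For readers unfamiliar with this style of analysis, the sketch below illustrates the standard multivoxel decoding logic: train a cross-validated classifier on a region's voxel patterns and test whether it predicts a mental-state feature above chance. The data are synthetic stand-ins, not the study's recordings, and the condition labels are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
# Simulated ROI voxel patterns, one row per trial.
X = rng.normal(size=(n_trials, n_voxels))
# Hypothetical binary feature, e.g. intentional vs. accidental harm.
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :10] += 0.5                # plant weak feature information

# Above-chance cross-validated accuracy implies the spatial pattern in
# the region carries information about the feature, over and above the
# region's overall activation level.
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
print(scores.mean())
```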

*Talk was rescheduled to March 10th* CBMM Weekly Research Meeting: Tools for Brain-Wide Mapping of the Computations of Intelligence

Feb 10, 2015 - 4:00 pm
Venue: Harvard University: Northwest Bldg., Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Ed Boyden, CBMM Thrust 2: Circuits for Intelligence

*Talk was rescheduled to March 10th*

Topic: Progress on the CBMM challenge questions: What is there? What’s happening now? And why?

Abstract: See the rescheduled March 10 entry above for the full abstract.

Organizer:  Gabriel Kreiman

CBMM Weekly Research Meeting: Demystifying depth: Learning dynamics in deep linear neural networks

Nov 4, 2014 - 4:00 pm
Venue: Harvard University: Northwest Bldg., Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Andrew Saxe

Abstract:

Humans and other organisms show an incredibly sophisticated ability to learn about their environments during their lifetimes. This learning is thought to alter the strength of connections between neurons in the brain, but we still do not understand the principles linking synaptic changes at the neural level to behavioral changes at the psychological level. Part of the difficulty stems from depth: the brain has a deep, many-layered structure that substantially complicates the learning process. To understand the specific impact of depth, I develop the theory of gradient descent learning in deep linear neural networks. Despite their linearity, the learning problem in these networks remains nonconvex and exhibits rich nonlinear learning dynamics. I give new exact solutions to the dynamics that quantitatively answer fundamental theoretical questions, such as how learning speed scales with depth. These solutions revise the basic conceptual picture underlying deep learning systems, both engineered and biological, with ramifications for a variety of phenomena. In this talk I will highlight two consequences at different levels of detail. First, the theory suggests that depth influences the size and timing of receptive field changes in visual perceptual learning. Second, by considering data drawn from structured probabilistic graphical models, the theory reveals that only deep (not shallow) networks undergo quasi-stage-like transitions during learning, reminiscent of those found in infant semantic development. These applications span levels of analysis from single neurons to cognitive psychology, demonstrating the potential of deep linear networks to connect detailed changes in neuronal networks to changes in high-level behavior and cognition.
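
As a toy illustration of these dynamics (not the talk's exact solutions), one can run gradient descent on a two-layer linear map whose target has well-separated singular values: starting from a small random initialization, the modes switch on one at a time in stage-like sweeps, strongest first. Dimensions, learning rate, and initialization scale below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
# Target linear map with well-separated singular values (5.0, 2.0, 0.5).
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
V, _ = np.linalg.qr(rng.normal(size=(d, d)))
S = np.diag([5.0, 2.0, 0.5] + [0.0] * (d - 3))
W_target = U @ S @ V.T

eps, lr = 1e-3, 0.02                 # small init; modest learning rate
W1 = eps * rng.normal(size=(d, d))
W2 = eps * rng.normal(size=(d, d))

for step in range(1001):
    E = W_target - W2 @ W1           # error in the composite map
    W1 += lr * (W2.T @ E)            # gradient descent on squared error
    W2 += lr * (E @ W1.T)
    if step % 100 == 0:
        sv = np.linalg.svd(W2 @ W1, compute_uv=False)[:3]
        print(step, np.round(sv, 2)) # modes turn on one at a time
```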

CBMM Weekly Research Meeting: Using computational models to predict neural responses in higher visual cortex

Oct 21, 2014 - 4:00 pm
Venue: Harvard University: Northwest Bldg., Room 243
Address: 52 Oxford Street, Harvard University Northwest Building, Cambridge, MA 02138
Speaker/s: Dr. Dan Yamins

Abstract:
The ventral visual stream underlies key human visual object recognition abilities. However, neural encoding in the higher areas of the ventral stream remains poorly understood. Here, we describe a modeling approach that yields a quantitatively accurate model of inferior temporal (IT) cortex, the highest ventral cortical area. Our key idea is to leverage recent advances in high-performance computing to optimize neural networks for object recognition performance, and then use these high-performing networks as the basis of neural models. We found that, across a wide class of Hierarchical Convolutional Neural Networks (HCNNs), there is a strong correlation between a model’s categorization performance and its ability to predict IT neural response data.

Pursuing this idea further, we then identified an HCNN that matches human performance on a range of recognition tasks. Critically, even though we did not constrain this model to match neural data, its top output layer turns out to be highly predictive of IT spiking responses to complex naturalistic images at both the single-site and population levels. The model's intermediate layers are highly predictive of neural responses in V4, a mid-level visual area that provides the dominant cortical input to IT. Moreover, its lower layers are highly predictive of voxel responses from fMRI data in V1 and V2. These results show that performance optimization, applied in a biologically appropriate model class, can be used to build quantitative predictive models of neural processing.
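
A hypothetical sketch of the predictivity analysis behind claims like these: regress recorded responses on a model layer's features and correlate held-out predictions with the data. The arrays below are synthetic placeholders, not the study's images or recordings.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_images, n_features, n_sites = 300, 512, 100
# Stand-in model-layer activations for each image.
layer_features = rng.normal(size=(n_images, n_features))
# Fake "neural" responses: a linear readout of the features plus noise.
W = 0.1 * rng.normal(size=(n_features, n_sites))
neural = layer_features @ W + rng.normal(size=(n_images, n_sites))

# Cross-validated linear regression from layer features to responses.
pred = cross_val_predict(Ridge(alpha=1.0), layer_features, neural, cv=5)
# Per-site predictivity: correlation of predicted vs. observed responses.
r = [np.corrcoef(pred[:, i], neural[:, i])[0, 1] for i in range(n_sites)]
print(np.median(r))
```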

If time allows, I will also discuss recent (experimental and modeling) extensions of these results to tasks outside of categorization, shedding light on how cortex jointly represents the categorical and non-categorical visual properties that together underlie general scene parsing.

CBMM Weekly Research Meeting: Explaining human-level visual recognition as deep inverse graphics

Oct 14, 2014 - 4:00 pm
Venue: MIT: McGovern Institute Seminar Room, 46-3189
Address: 43 Vassar Street, MIT Bldg 46, Cambridge, MA 02139, United States
Speaker/s: Ilker Yildirim and Tejas Kulkarni

Research Thrust: Development of Intelligence, CBMM Thrust 1

Abstract:

In recent years, there has been remarkable progress in computational vision, driven by powerful feed-forward architectures that build classifiers for individual scene elements and learn features automatically from data. In comparison to humans, however, these architectures struggle in the presence of occlusion, variability in pose and lighting, and extreme variation in scale. These difficulties support the common intuition that closing the gap between computational models of vision and human performance will require architectures that go beyond bottom-up computational elements. Building on this observation, we systematically evaluated humans and computational models of different architectures on a variant of the Visual Turing task for face analysis. We tested people's invariance to facial identity under lighting and pose variability in a same/different judgment task, and we tested common bottom-up architectures on the same task. We also developed and tested an inverse graphics model of face perception, which integrates a Convolutional Neural Network (CNN) with a top-down generative model. We found a gap of about 20% between people and the best-performing feed-forward model. Our inverse graphics model, on the other hand, achieved human-level recognition performance (both perform at 78%). More importantly, our model accounts for people's judgments beyond just achieving equal performance: the latent variables inferred by the inverse graphics model capture the variability in subjects' responses much better than the best-performing baseline model does. Our experiment suggests that charting people's performance on such challenging and highly relevant tasks can lead to fruitful combinations of top-down generative models with bottom-up computational pipelines. We hope that such computational and behavioral insights will lead to new ways of investigating the neural bases of generative models of vision in the brain.
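
A highly simplified sketch of the analysis-by-synthesis idea at the heart of inverse graphics follows: infer latent scene variables (identity, pose, lighting) by proposing changes and scoring the rendered result against features of the observed image. The "renderer" here is a stand-in linear map rather than a graphics engine, and every setting is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
d_latent, d_feat = 5, 50
R = rng.normal(size=(d_feat, d_latent))   # stand-in "renderer"
sigma = 0.1                               # observation noise scale

z_true = rng.normal(size=d_latent)        # e.g. identity, pose, lighting
observed = R @ z_true + sigma * rng.normal(size=d_feat)  # "image features"

def log_likelihood(z):
    # How well the rendered features of z match the observed features.
    return -0.5 * np.sum((observed - R @ z) ** 2) / sigma ** 2

# Metropolis-Hastings over the latents: a top-down generative model
# corrected by bottom-up comparison with the observation.
z = np.zeros(d_latent)
for _ in range(5000):
    z_prop = z + 0.05 * rng.normal(size=d_latent)
    if np.log(rng.uniform()) < log_likelihood(z_prop) - log_likelihood(z):
        z = z_prop
print(np.round(z - z_true, 2))            # residual inference error
```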
