Weekly Research Meetings

CBMM Research Meeting: Neural Processing of Object Manifolds

Jun 1, 2018 - 4:00 pm
Venue:  McGovern Reading Room (46-5165)
Address:  Brain and Cognitive Sciences Complex (MIT Bldg 46), 43 Vassar St., Cambridge, MA 02139. The McGovern Reading Room (46-5165) is located on the fifth floor of the McGovern Institute for Brain Research at MIT (Main St. side of the building).
Speaker/s:  Dr. SueYeon Chung (MIT, BCS Fellow in Computation)

Object manifolds arise when a neural population responds to an ensemble of sensory signals associated with different physical features (e.g., orientation, pose, scale, location, and intensity) of the same perceptual object. Object recognition and discrimination require classifying the manifolds in a manner that is insensitive to variability within a manifold. How neuronal systems give rise to invariant object classification and recognition is a fundamental problem in brain theory as well as in machine learning.

We studied the ability of a readout network to classify objects from their perceptual manifold representations. We developed a statistical mechanical theory for the linear classification of manifolds with arbitrary geometries. We show how special anchor points on the manifolds can be used to define novel geometrical measures of radius and dimension which can explain the linear separability of manifolds of various geometries. Theoretical predictions are corroborated by numerical simulations using recently developed algorithms to compute maximum margin solutions for manifold dichotomies.
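
To make the notion of manifold separability concrete, here is a minimal numerical sketch (not the authors' algorithm; all sizes and parameters are illustrative): random point-cloud "manifolds" receive random binary labels, and a classic perceptron checks whether the resulting dichotomy is linearly separable. Well below capacity almost every dichotomy separates; far above it almost none do.

```python
import numpy as np

def perceptron_separable(X, y, max_passes=300):
    """Classic perceptron: True iff it finds a hyperplane through the origin
    separating the labeled points within max_passes sweeps over the data."""
    w = np.zeros(X.shape[1])
    for _ in range(max_passes):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:
                w += yi * xi
                mistakes += 1
        if mistakes == 0:
            return True
    return False

def separable_fraction(P, N, M=5, radius=0.5, trials=6, seed=0):
    """Fraction of random manifold dichotomies (P point-cloud manifolds of
    M points each in N dimensions) that are linearly separable."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        centers = rng.standard_normal((P, N))
        pts = centers[:, None, :] + radius * rng.standard_normal((P, M, N)) / np.sqrt(N)
        labels = rng.choice([-1.0, 1.0], size=P)   # one label per manifold
        X = pts.reshape(P * M, N)
        y = np.repeat(labels, M)                   # every point inherits its manifold's label
        hits += perceptron_separable(X, y)
    return hits / trials

# Few manifolds relative to the dimension: dichotomies separate;
# far above capacity they do not.
easy = separable_fraction(P=4, N=30)
hard = separable_fraction(P=60, N=20)
print(easy, hard)
```

The interesting quantity in the theory is where this transition sits as a function of manifold radius and dimension; the sketch only exhibits the two extremes.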

Our theory and its extensions provide a powerful and rich framework for applying statistical mechanics of linear classification to data arising from perceptual neuronal responses as well as to artificial deep networks trained for object recognition tasks. We demonstrate results from applying our method to both neuronal networks and deep networks for visual object recognition tasks.

Exciting future work lies ahead, as manifold representations of the sensory world are ubiquitous in both biological and artificial neural systems. Questions for future work include: How do neural manifold representations reformat in biological sensory hierarchies? Could we characterize dynamical neural manifolds for complex sequential stimuli and behaviors? How do neural manifold representations evolve during learning? Can neural manifold separability be used as a design principle for artificial deep networks?

Organizers:  Frederico Azevedo, Hector Penagos
Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: Nancy Lynch

May 11, 2018 - 4:00 pm
Venue:  MIT-46-5165 (MIBR Reading Room)
Speaker/s:  Prof. Nancy Lynch, MIT CSAIL

Title: An Algorithmic Theory of Brain Networks

This talk will describe my recent work with Cameron Musco and Merav Parter on studying neural networks from the perspective of the field of Distributed Algorithms. In our project, we aim both to obtain interesting, elegant theoretical results and to draw relevant biological conclusions.

We base our work on simple Stochastic Spiking Neural Network (SNN) models, in which probabilistic neural components are organized into weighted directed graphs and execute in a synchronized fashion. Our model captures the spiking behavior observed in real neural networks and reflects the widely accepted notion that spike responses, and neural computation in general, are inherently stochastic. In most of our work so far, we have considered static networks, but the model would also allow us to consider learning by means of weight adjustments.
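
As a toy illustration of this class of model (a simplified sketch, not the authors' exact definitions), one synchronous update of a stochastic spiking network can be written as:

```python
import math
import random

def snn_step(state, W, bias, rng):
    """One synchronous update of a toy stochastic spiking network.

    state: 0/1 firing pattern at time t; W[i][j]: weight of edge i -> j
    (positive = excitatory, negative = inhibitory); bias[j]: intrinsic drive.
    Each neuron j fires at t+1 with probability sigmoid(potential_j),
    reflecting the idea that spike responses are inherently stochastic.
    """
    n = len(state)
    out = []
    for j in range(n):
        pot = bias[j] + sum(W[i][j] * state[i] for i in range(n))
        p = 1.0 / (1.0 + math.exp(-pot))
        out.append(1 if rng.random() < p else 0)
    return out

rng = random.Random(0)
# Two neurons with self-excitation and mutual inhibition (weights invented):
W = [[5.0, -5.0],
     [-5.0, 5.0]]
print(snn_step([1, 0], W, [0.0, 0.0], rng))
```

With these weights, an active neuron tends to stay active and keep the other silent, but only probabilistically, which is exactly the behavior the symmetry-breaking results below exploit.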

Specifically, we consider the implementation of various algorithmic primitives using stochastic SNNs. We first consider a basic symmetry-breaking task that has been well studied in the computational neuroscience community: the Winner-Take-All (WTA) problem. WTA is believed to serve as a basic building block for many other tasks, such as learning, pattern recognition, and clustering. In a simple version of the problem, we are given neurons with identical firing rates, and want to select a distinguished one. Our main contribution is the explicit construction of a simple and efficient WTA network containing only two inhibitory neurons; our construction uses the stochastic behavior of SNNs in an essential way. We give a complete proof of correctness and analysis of convergence time, using distributed algorithms proof methods. In related results, we give an optimization of the simple two-inhibitor network that achieves better convergence time at the cost of more inhibitory neurons. We also give lower bound results that show inherent limitations on the convergence time achievable with small numbers of inhibitory neurons.
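
The flavor of stochastic symmetry breaking can be conveyed with a deliberately crude caricature. This is not the two-inhibitor construction from the talk, and the drop-out probability is an invented parameter; it only shows why randomness lets identical neurons elect a single winner:

```python
import random

def wta(n=10, p_drop=0.5, seed=1, max_steps=100_000):
    """Caricature of stochastic winner-take-all: while more than one neuron
    is active, inhibition makes each active neuron drop out independently
    with probability p_drop; if all would drop, inhibition is released and
    the previous set survives. Randomness breaks the symmetry of identical
    firing rates, leaving one winner in O(log n) expected rounds."""
    rng = random.Random(seed)
    active = set(range(n))
    steps = 0
    while len(active) > 1 and steps < max_steps:
        survivors = {i for i in active if rng.random() >= p_drop}
        if survivors:
            active = survivors
        steps += 1
    return active, steps

winner, steps = wta()
print(winner, steps)
```

The construction in the talk achieves this with an explicit spiking network and proves the convergence-time bounds rigorously; the sketch above only demonstrates the underlying probabilistic mechanism.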

We also consider the use of stochastic behavior in neural algorithms for Similarity Testing. In this problem, the network is supposed to distinguish, with high reliability, between input vectors that are identical and input vectors that are significantly different. We construct a compact stochastic network that solves the Similarity Testing problem, based on randomly sampling positions in the vectors. At the heart of our solution is the design of a compact and fast-converging neural Random Access Memory (neuro-RAM) indexing mechanism.
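
The random-sampling idea behind Similarity Testing can be stated as a tiny randomized algorithm (a sketch of the principle only, not of the neuro-RAM circuit; `eps` and `delta` are illustrative parameters):

```python
import math
import random

def similar(x, y, eps=0.1, delta=1e-3, seed=0):
    """Randomized similarity test: declare the vectors 'same' unless some
    randomly sampled position differs. If x and y differ in at least an
    eps fraction of positions, choosing k samples with (1-eps)**k <= delta
    makes a false 'same' verdict occur with probability at most delta."""
    rng = random.Random(seed)
    n = len(x)
    k = math.ceil(math.log(delta) / math.log(1.0 - eps))
    return all(x[i] == y[i] for i in (rng.randrange(n) for _ in range(k)))

x = [0] * 1000
z = [1] * 500 + [0] * 500
print(similar(x, list(x)), similar(x, z))
```

The neural construction replaces the `x[i]` lookup with the neuro-RAM indexing mechanism, which is where most of the design effort lies.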

In this talk, I will describe our SNN model and our work on Winner-Take-All, in some detail.  I will also summarize our work on Similarity Testing, discuss some important general issues such as compositionality, and suggest directions for future work.


Organizers:  Hector Penagos, Kathleen Sullivan, Frederico Azevedo
Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: A computational perspective of the role of Thalamus in cognition

Jun 8, 2018 - 4:00 pm
Venue:  McGovern Reading Room (46-5165)
Address:  43 Vassar St., Cambridge, MA 02139
Speaker/s:  Dr. Nima Dehghani (MIT Physics)

Abstract:  The thalamus has traditionally been considered only a relay that passes inputs on to cortex, with hierarchically organized cortical circuits serially transforming thalamic signals into cognitively relevant representations. Given the absence of local excitatory connections within the thalamus, the notion of a thalamic 'relay' seemed like a reasonable description over the last several decades. Recent advances in experimental approaches and theory provide a broader perspective on the role of the thalamus in cognitively relevant cortical computations, and suggest that only a subset of thalamic circuit motifs fits the relay description. Here, we discuss this perspective and highlight the potential role of the thalamus in the dynamic selection of cortical representations, through a combination of intrinsic thalamic computations and output signals that change cortical network functional parameters. We suggest that, through the contextual modulation of cortical computation, thalamus and cortex jointly optimize the information/cost tradeoff in an emergent fashion. We emphasize that coordinated experimental and theoretical efforts will provide a path to understanding the role of the thalamus in cognition, along with insight into how to augment cognitive capacity in health and disease.

Organizers:  Frederico Azevedo, Hector Penagos
Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: The MIT Quest for Intelligence discussion

Mar 2, 2018 - 4:00 pm
Venue:  McGovern Reading Room (46-5165)
Address:  43 Vassar St., Cambridge, MA 02139. Fifth floor of the McGovern Institute for Brain Research at MIT, on the Main St. side of the building.
Speaker/s:  Discussion will be moderated by Prof. Tomaso Poggio

MIT has announced a new Institute-wide initiative: The MIT Quest for Intelligence.

Prof. Poggio will lead a discussion about the new initiative, the upcoming launch on March 1, 2018, and possible involvement of CBMM community members. 

MIT Quest mission statement:

Forging connections between human and machine intelligence research, its applications, and its bearing on society.

The MIT Quest for Intelligence will advance the science and engineering of both human and machine intelligence. Launched on February 1, 2018, MIT Quest seeks to discover the foundations of human intelligence and drive the development of technological tools that can positively influence virtually every aspect of society.

The Institute’s culture of collaboration will encourage life scientists, computer scientists, social scientists, and engineers to join forces to investigate the societal implications of their work as they pursue hard problems lying beyond the current horizon of intelligence research. By uniting diverse fields and capitalizing on what they can teach each other, we seek to answer the deepest questions about intelligence.

For more information on MIT Quest, please visit https://quest.mit.edu/


Organizers:  Tomaso Poggio, Hector Penagos, Frederico Azevedo
Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: Two Talks from UMass Boston

Dec 1, 2017 - 4:00 pm
Venue:  MIT-46-5165 (MIBR Reading Room)
Speaker/s:  Akram Bayat (UMass Boston, Visual Attention Lab); Wei Ding (UMass Boston, Knowledge Discovery Lab)

Host: Mandana Sassanfar

Akram Bayat: From Motor Control to Scene Perception: Using Machine Learning to Study Human Behavior and Cognition

Abstract:

In this presentation I discuss two parts of my work at UMass Boston, each applying machine learning to an important real-world problem. In the first part, we model human eye movements in order to identify individuals during a reading activity. As an important part of our pattern-recognition process, we extract multiple low-level features from the scan path, including fixation features, saccadic features, pupillary-response features, and spatial reading features.

While capturing eye movements during reading is desirable because reading is a very common task, the text content influences the reading process, making it very challenging to obtain invariant features from eye-movement data. We address this issue with a novel user-identification algorithm that extracts high-level features combining eye movements with syntactic and semantic word relationships in the text. The promising results of our identification method make eye-movement-based identification an excellent approach for applications such as personalized user interfaces.
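
A minimal sketch of an identification pipeline of this kind, with entirely hypothetical feature choices and a nearest-centroid classifier standing in for the actual method:

```python
import numpy as np

def gaze_features(fix_durations, sacc_amplitudes):
    """Collapse one reading scan path into a low-level feature vector
    (a stand-in for the fixation/saccadic features described above)."""
    return np.array([np.mean(fix_durations), np.std(fix_durations),
                     np.mean(sacc_amplitudes), np.std(sacc_amplitudes)])

def identify(train, probe):
    """train: {user: list of feature vectors from enrollment sessions}.
    Return the user whose feature centroid is nearest to the probe."""
    centroids = {u: np.mean(np.stack(fs), axis=0) for u, fs in train.items()}
    return min(centroids, key=lambda u: np.linalg.norm(probe - centroids[u]))

# Synthetic example: two readers with different fixation/saccade statistics.
rng = np.random.default_rng(0)
def session(mu_fix_ms, mu_sacc_deg):
    return gaze_features(rng.normal(mu_fix_ms, 20, 50),
                         rng.normal(mu_sacc_deg, 0.5, 50))

train = {"alice": [session(220, 2.0) for _ in range(5)],
         "bob":   [session(320, 4.0) for _ in range(5)]}
print(identify(train, session(225, 2.1)))
```

The actual work replaces these summary statistics with the richer, text-aware features described above, which is what makes the identification invariant to content.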

The second part of my work focuses on scene perception and object recognition using deep convolutional neural networks. We investigate to what extent computer-vision systems for scene classification and object recognition resemble human mechanisms of scene perception. Employing global properties for scene classification, scene grammar, and top-down control of visual attention for object detection are three methodologies that we evaluate in humans and deep convolutional networks. We also evaluate the performance of deep object-recognition networks (e.g., Faster R-CNN) under various conditions of image filtering in the frequency domain and compare it with the human visual system in terms of internal representation. We then show that fine-tuning Faster R-CNN on filtered data improves network performance over a range of spatial frequencies.
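
Frequency-domain filtering of input images of the kind used to probe such networks can be sketched as follows (the circular mask and cutoff are illustrative assumptions, not the study's exact protocol):

```python
import numpy as np

def lowpass(img, cutoff):
    """Zero out all spatial frequencies farther than `cutoff` (in cycles per
    image) from the DC component of a 2-D grayscale image."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = np.hypot(yy - h // 2, xx - w // 2) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# A constant image contains only the DC component, so it passes through unchanged:
img = np.ones((8, 8))
print(np.allclose(lowpass(img, 2), img))
```

A high-pass variant simply inverts the mask; sweeping the cutoff produces the "range of spatial frequencies" over which network and human performance can be compared.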


Wei Ding: REND: A Reinforced Network-Based Model for Clustering Sparse Data with Application to Cancer Subtype Discovery

Abstract:

We will discuss a new algorithm, called Reinforced Network-Based Model for Clustering Sparse Data (REND), for finding unknown groups of similar data objects in a sparse and largely non-overlapping feature space where a network structure among features can be observed. REND is an autoencoder neural-network alternative to non-negative matrix factorization (NMF). NMF has made significant advances in various clustering tasks with great practical success. Using neural networks instead of NMF allows non-negative model variants with multi-layered, arbitrarily non-linear structures, which is much needed to handle the nonlinearity of complex real data. However, standard neural networks cannot achieve their full potential when the data is sparse and the sample size is orders of magnitude smaller than the dimension of the feature space. To address these issues, we present a model consisting of integrated layers of reinforced network smoothing and a sparse autoencoder. The architecture of the hidden layers incorporates existing network dependencies in the feature space. The reinforced network layers smooth sparse data over the network structure. Most importantly, through backpropagation, the weights of the reinforced smoothing layers are simultaneously constrained by the remaining sparse-autoencoder layers, which set the target values equal to the inputs. Our approach integrates physically meaningful feature dependencies into the model design and efficiently clusters sparse data through integrated smoothing and sparse-autoencoder learning. Empirical results demonstrate that REND achieves improved accuracy and renders physically meaningful clustering results.
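
The smoothing-over-a-feature-network idea can be illustrated with a generic random-walk propagation (a sketch in the spirit of the description above; the exact REND layer and its learned weights are not reproduced):

```python
import numpy as np

def network_smooth(X, A, alpha=0.5, iters=20):
    """Diffuse sparse sample-by-feature data X over a feature-feature
    network A. Each iteration mixes every feature's value with its network
    neighbors, while alpha anchors the result to the observed data."""
    deg = np.maximum(A.sum(axis=0, keepdims=True), 1e-12)
    W = A / deg                      # column-normalized adjacency
    S = X.astype(float).copy()
    for _ in range(iters):
        S = alpha * X + (1.0 - alpha) * S @ W
    return S

# A single sample with one nonzero feature on a 4-feature chain graph:
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0, 0.0, 0.0]])
S = network_smooth(X, A)
print(S.round(3))
```

The single observed value spreads to network neighbors with decaying weight, so samples that hit different but nearby features become comparable. In REND this smoothing is a trainable layer whose weights are jointly constrained, via backpropagation, by the sparse-autoencoder layers stacked on top of it.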

Speaker Bio:

Wei Ding received her Ph.D. in Computer Science from the University of Houston in 2008. She is an Associate Professor of Computer Science at the University of Massachusetts Boston. Her research interests include data mining, machine learning, artificial intelligence, and computational semantics, with applications to health sciences, astronomy, geosciences, and environmental sciences. She has published more than 122 refereed research papers and 1 book, and holds 2 patents. She is an Associate Editor of the ACM Transactions on Knowledge Discovery from Data (TKDD) and Knowledge and Information Systems (KAIS), and an editorial board member of the Journal of Information Systems Education (JISE), the Journal of Big Data, and the Social Network Analysis and Mining journal. Her research projects are sponsored by NSF, NIH, NASA, and DOE. She is an IEEE Senior Member and an ACM Senior Member.

Organizer:  Joel Oller
Organizer Email:  cbmm-contact@mit.edu

CBMM Research Meeting: Panel discussion with Niko Kriegeskorte - Deep networks, the brain and AI

Oct 27, 2017 - 4:00 pm
Venue:  MIT 46-3189 (MIBR Seminar Room)
Speaker/s:  Niko Kriegeskorte
Host/Moderator:  Josh Tenenbaum
Panelists (CBMM faculty):  Tomaso Poggio, Nancy Kanwisher, James DiCarlo, Josh McDermott, Sam Gershman, ...

Questions that the panel will focus on include:

- What does it mean to "understand" the brain, and especially the ventral stream, in the era of deep networks?  Can we come up with better, richer ways of comparing representations in models and brains, which are more revealing of how brains actually compute?

- How should we understand the top-down and recurrent processing that occurs in the visual system, and its interaction with feedforward, bottom-up processing?  What are the best ways to compare integrated models of bottom-up, top-down, and recurrent processing to neural data, in humans and nonhuman primates?

- To what extent can we understand vision as inverting a generative model?  Where is the generative model in the brain, and how is it used, in online perception, learning, or imagination?

- How can we scale the successes we've had in understanding object recognition in the brain to other aspects of vision, other perceptual modalities, and beyond perception?


Organizer:  Joel Oller
Organizer Email:  cbmm-contact@mit.edu
