Weekly Research Meetings

CBMM Virtual Research Meeting: Noga Zaslavsky

May 19, 2020 - 2:00 pm
Address: https://mit.zoom.us/j/99968215057?pwd=dVJsRzFXcFVYNzZnSUY1d05lcDVRdz09
Password: compress
Speaker/s: Noga Zaslavsky

 

Title: Efficient compression and linguistic meaning in humans and machines

 

Abstract: In this talk, I will argue that efficient compression may provide a fundamental principle underlying the human capacity to communicate and reason about meaning, and may help to inform machines with similar linguistic abilities. I will first address this at the population level, showing that pressure for efficient compression may drive the evolution of word meanings across languages, and may give rise to human-like semantic representations in artificial neural networks trained for vision. I will then address this at the agent level, where local context-dependent interactions influence the meaning of utterances. I will show that efficient compression may give rise to human pragmatic reasoning in reference games, suggesting a novel and principled approach to informing machine learning systems with pragmatic skills.
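For readers who want the formal backdrop: in this line of work, "efficient compression" is typically made precise as an Information Bottleneck tradeoff between the complexity of a lexicon and the accuracy it affords. The sketch below uses assumed notation and is only a rough summary of that kind of objective, not material from the talk.

    % Information Bottleneck view of efficient compression (assumed notation):
    % a speaker maps meanings M to words W via an encoder q(w|m), trading off the
    % complexity of the lexicon against how informative words are about the
    % listener's reconstruction U of the intended meaning.
    \min_{q(w \mid m)} \; F_\beta[q] \;=\; I_q(M;W) \;-\; \beta\, I_q(W;U), \qquad \beta \ge 1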

 

https://mit.zoom.us/j/99968215057?pwd=dVJsRzFXcFVYNzZnSUY1d05lcDVRdz09

Password: compress

Organizer: Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

CBMM Virtual Research Meeting: Max Tegmark

May 5, 2020 - 4:00 pm
Venue: Zoom
Address: https://mit.zoom.us/j/94413961955?pwd=Ni9TeSt3a2xpajkraGlJanJkOERBQT09
Speaker/s: Max Tegmark, MIT

 

Title: AI for physics & physics for AI

 

Abstract: After briefly reviewing how machine learning is becoming ever more widely used in physics, I explore how ideas and methods from physics can help improve machine learning, focusing on automated discovery of mathematical formulas from data. I present a method for unsupervised learning of equations of motion for objects in raw and optionally distorted unlabeled video. I also describe progress on symbolic regression, i.e., finding a symbolic expression that matches data from an unknown function. Although this problem is likely to be NP-hard in general, functions of practical interest often exhibit symmetries, separability, compositionality and other simplifying properties. In this spirit, we have developed a recursive multidimensional symbolic regression algorithm that combines neural network fitting with a suite of physics-inspired techniques that discover and exploit these simplifying properties, enabling a significant improvement over state-of-the-art performance.
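As a rough illustration of how such simplifying properties can be detected numerically, the snippet below checks a fitted model for additive separability before any symbolic search is attempted. This is a hypothetical sketch written for this summary, not the algorithm presented in the talk.

    # Hypothetical sketch: test whether f(x1, x2) is approximately additively separable,
    # i.e. f(x1, x2) ~ g(x1) + h(x2), using the identity
    # f(a, c) + f(b, d) - f(a, d) - f(b, c) = 0 for separable functions.
    import numpy as np

    def separability_score(f, n_trials=1000, low=-1.0, high=1.0, seed=0):
        """Mean absolute violation of additive separability on random point pairs."""
        rng = np.random.default_rng(seed)
        a, b = rng.uniform(low, high, (2, n_trials))
        c, d = rng.uniform(low, high, (2, n_trials))
        return np.mean(np.abs(f(a, c) + f(b, d) - f(a, d) - f(b, c)))

    # Stand-ins for a fitted neural network:
    print(separability_score(lambda x, y: np.sin(x) + y**2))  # ~0: separable
    print(separability_score(lambda x, y: np.sin(x * y)))     # clearly > 0: not separable

A low score licenses splitting the regression problem into two lower-dimensional ones, which is the kind of simplification the abstract describes.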

Link for talk: https://mit.zoom.us/j/94413961955?pwd=Ni9TeSt3a2xpajkraGlJanJkOERBQT09

Password included in announcement email

Organizer: Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Youssef Mroueh

Apr 28, 2020 - 2:00 pm
Venue: Zoom
Address: https://mit.zoom.us/j/94030585358?pwd=bWVwaXQ5RE5NNC9mbU5JT0UzT1lzZz09
Speaker/s: Youssef Mroueh, MIT-IBM Watson AI Lab

 

 

Title: Sobolev Independence Criterion: Non-Linear Feature Selection with False Discovery Control

 

Abstract: In this talk I will show how learning gradients helps us design new non-linear algorithms for feature selection and black-box sampling, and helps us understand neural style transfer. In the first part of the talk, I will present the Sobolev Independence Criterion (SIC), which relates to saliency-based methods in deep learning. SIC is an interpretable dependency measure that gives rise to feature importance scores. Sparsity-inducing gradient penalties are crucial regularizers for the SIC objective and for promoting the desired non-linear sparsity. SIC can subsequently be used for feature selection and false discovery rate control.
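The central ingredient described above, a sparsity-inducing penalty on input gradients that doubles as a feature-importance score, can be sketched roughly as follows. This is an illustrative PyTorch snippet written for this summary (the objective it is attached to is a placeholder), not the authors' code; see the paper linked below for the actual method.

    # Illustrative sketch: group-sparse penalty on input gradients, yielding
    # per-feature importance scores (not the paper's implementation).
    import torch
    import torch.nn as nn

    def grad_feature_penalty(model, x):
        """Return (penalty, per-feature importance) from input-gradient magnitudes."""
        x = x.clone().requires_grad_(True)
        out = model(x).sum()
        grads, = torch.autograd.grad(out, x, create_graph=True)
        per_feature = grads.pow(2).mean(dim=0).sqrt()  # RMS gradient per input feature
        return per_feature.sum(), per_feature.detach()

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    x = torch.randn(64, 10)
    penalty, importance = grad_feature_penalty(model, x)
    loss = model(x).mean() + 0.1 * penalty  # placeholder fit term + sparsity penalty
    loss.backward()
    print(importance)  # features with near-zero scores are candidates for removal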

 

Paper: http://papers.nips.cc/paper/9147-sobolev-independence-criterion.pdf
Joint work with Tom Sercu, Mattia Rigotti, Inkit Padhi, and Cicero Dos Santos

 

Bio: Youssef Mroueh is a research staff member at IBM Research and a principal investigator in the MIT-IBM Watson AI Lab. He received his PhD in computer science in February 2015 from MIT CSAIL, where he was advised by Professors Tomaso Poggio and Lorenzo Rosasco. In 2011, he obtained his engineering diploma from École Polytechnique, Paris, France, and a Master of Science in applied mathematics from École des Mines de Paris. He is interested in deep learning, machine learning, statistical learning theory, and computer vision, and he conducts modeling and algorithmic research in multimodal deep learning.

 

Link to Talk: https://mit.zoom.us/j/94030585358?pwd=bWVwaXQ5RE5NNC9mbU5JT0UzT1lzZz09

Password for talk in email announcement

Organizer: Jean Lawrence, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Virtual Panel Discussion: Stability of overparametrized learning models

Apr 14, 2020 - 2:00 pm
Venue: Zoom
Address: Join Zoom Meeting https://mit.zoom.us/j/603597866
Speaker/s: Tomaso Poggio (CBMM), Mikhail Belkin (Ohio State University), Constantinos Daskalakis (CSAIL), Gil Strang (Mathematics), and Lorenzo Rosasco (University of Genova)

 

Abstract:

Developing theoretical foundations for learning is a key step towards understanding intelligence. Supervised learning is a paradigm in which natural or artificial networks learn a functional relationship from a set of n input-output training examples. A main challenge for the theory is to determine conditions under which a learning algorithm will be able to predict well on new inputs after training on a finite training set, i.e. generalization. In classical learning theory, this was accomplished by appropriately restricting the space of functions represented by the networks (the hypothesis space), characterizing a regime in which the number of training examples (n) is greater than the number of parameters to be learned (d). Here we will discuss the regime in which networks remain overparametrized, i.e. d > n as n grows, and in which the hypothesis space is not fixed. Our panel discussion will center on key stability properties of general algorithms, rather than the hypothesis space, that are necessary to achieve learnability.
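To make the topic concrete, one standard formalization of the stability property in question is uniform stability, sketched below in assumed notation (this is background, not a statement from the panel).

    % An algorithm A is beta-uniformly stable if replacing any single training example
    % in S (yielding S^{(i)}) changes its loss on every test point z by at most beta:
    \bigl|\, \ell(A(S), z) - \ell(A(S^{(i)}), z) \,\bigr| \;\le\; \beta
    \quad \text{for all } S, S^{(i)} \text{ differing in one example, and all } z.
    % Such bounds control generalization without referencing the size of the
    % hypothesis space, which is what makes them attractive when d > n.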

Join Zoom Meeting
https://mit.zoom.us/j/603597866

Organizer: Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Virtual Research Meeting: Predictive maps in the brain (Zoom)

Apr 7, 2020 - 2:00 pm
Venue: Zoom
Address: https://harvard.zoom.us/j/301130600
Speaker/s: Sam Gershman, Harvard/CBMM

 

Abstract: In this talk, I will present a theory of reinforcement learning that falls in between "model-based" and "model-free" approaches. The key idea is to represent a "predictive map" of the environment, which can then be used to efficiently compute values. I show how such a map explains many aspects of the hippocampal representation of space, and the map's eigendecomposition reveals latent structure resembling entorhinal grid cells. I will then present evidence, using a novel revaluation task, that humans employ such a predictive map to solve reinforcement learning tasks. Finally, I will discuss the role of dopamine error signals in learning the predictive map.
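A common formalization of such a predictive map is the successor representation. The snippet below is an assumed, minimal sketch (not code from the talk) showing why revaluation becomes cheap once the map is learned.

    # Minimal successor-representation sketch (assumed formalization, not talk code).
    import numpy as np

    n_states, gamma = 5, 0.9
    T = np.roll(np.eye(n_states), 1, axis=1)  # deterministic ring of states under a fixed policy

    # Predictive map: expected discounted future occupancy of each state.
    M = np.linalg.inv(np.eye(n_states) - gamma * T)

    # Values are a linear readout of the map from one-step rewards,
    # so changing rewards (revaluation) does not require relearning M.
    r = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
    V = M @ r
    print(V)

    # Eigenvectors of the map give periodic basis functions over the state space,
    # the structure the abstract relates to entorhinal grid cells.
    eigvals, eigvecs = np.linalg.eig(M)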

Link to Meeting: https://harvard.zoom.us/j/301130600

Organizer: Jean Lawrence, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Tiago Marques

Feb 11, 2020 - 4:00 pm
Venue: MIT 46-5165
Address: MIT Bldg 46-5165, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Tiago Marques

Abstract: Object recognition relies on the hierarchical processing of visual information along the primate ventral stream. Artificial neural networks (ANNs) recently achieved unprecedented accuracy in predicting neuronal responses in different cortical areas and primate behavior. In this talk, I will present an extension of this approach, in which hundreds of different hierarchical models were tested to quantitatively assess how well they explain primate primary visual cortex (V1) across a wide range of experimentally characterized functional properties. We found that, for some ANNs, individual artificial neurons in early and intermediate layers have functional properties that are remarkably similar to their biological counterparts, and that the distributions of these properties over all neurons approximately match the corresponding distributions in primate V1. Still, none of the candidate models was able to account for all the functional properties, suggesting that current network architectures might not be capable of fully explaining primate V1 at the single-neuron level. Since some ANNs have “V1 areas” that more precisely approximate primate V1 than others, we investigated whether a more brain-like V1 model also leads to better models of object recognition behavior. Indeed, over a set of 48 ANN models optimized for object recognition, V1 similarity was positively correlated with behavioral predictivity. This result supports the widespread view that the complex visual representations required for object recognition are derived from low-level functional properties, but it also demonstrates, for the first time, that working to build better models of low-level vision has tangible payoffs in explaining complex visual behaviors. Moreover, the set of functional V1 benchmarks presented here can be used as a gradient to search for better models of V1, which will likely result in better models of the primate ventral stream.
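As a rough illustration of what a per-property benchmark of this kind could look like (hypothetical property name and scoring choice, not the actual benchmark code), one can compare the distribution of a functional property across model units against the recorded V1 distribution and aggregate such similarities into a V1 score.

    # Hypothetical sketch: per-property similarity between model units and V1 neurons.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    v1_bandwidth = rng.gamma(shape=4.0, scale=10.0, size=300)     # stand-in for recorded V1 data
    model_bandwidth = rng.gamma(shape=3.0, scale=12.0, size=500)  # stand-in for model units

    # One possible per-property score: 1 minus the Kolmogorov-Smirnov distance.
    ks_stat, _ = ks_2samp(v1_bandwidth, model_bandwidth)
    print(f"orientation-bandwidth similarity: {1.0 - ks_stat:.2f}")

    # A model's overall V1 score would average such similarities over all characterized
    # properties, and can then be correlated with its behavioral predictivity.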

Organizer: Jean Lawrence
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Language as a scaffold for learning

Dec 17, 2019 - 4:00 pm
Venue: MIT 46-5165
Address: MIT Bldg 46-5165, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Jacob Andreas

Title: Language as a scaffold for learning

 

Abstract:

 

Research on constructing and evaluating machine learning models is driven almost exclusively by examples. We specify the behavior of sentiment classifiers with labeled documents, guide learning of robot policies by assigning scores to rollouts, and interpret learned image representations by retrieving salient training images. Humans are able to learn from richer sources of supervision, and in the real world this supervision often takes the form of natural language: we learn word meanings from dictionaries and policies from cookbooks; we show understanding by explaining rather than demonstrating.

This talk will explore two ways of leveraging language data to train and interpret machine learning models: using linguistic supervision to structure policy search and few-shot learning, and using representation translation to generate textual explanations of learned models.

Organizer: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu

Research Meeting: Module 3 - Nick Watters

Nov 26, 2019 - 4:00 pm
Venue: MIT 46-5165
Address: MIT Bldg 46-5165, 43 Vassar Street, Cambridge MA 02139
Speaker/s: Nick Watters (Tenenbaum Lab)

Title:  Unsupervised Learning and Structured Representations in Neural Networks

 

Abstract:

Sample efficiency, transfer, and flexibility are hallmarks of biological intelligence and long-standing challenges for artificial learning systems. Core to these capacities is the reuse of structured knowledge. One form of knowledge reuse is compositionality, the ability to represent data as a combination of primitives and recombine primitives into novel composites. Compositionality arises in many forms, such as feature compositionality (e.g. pink elephant), object/relationship compositionality (e.g. elephant on an iceberg), and the structure of natural language.

 

This talk will summarize a few recent approaches to learning compositional representations without supervision. It will focus primarily on techniques that learn factorized representations of features, objects, and relations in visual scenes and will briefly touch on the use of such representations for sample-efficient model-based reinforcement learning.  There will also be some discussion about connections to neuroscience.
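One concrete instance of the techniques in this space is a beta-VAE-style objective, which pressures an unsupervised latent code toward factorized features. The snippet below is an illustrative sketch of that objective, not material from the talk.

    # Illustrative beta-VAE sketch: reconstruction loss plus a weighted KL term
    # that encourages a factorized (feature-compositional) latent code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyVAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=10):
            super().__init__()
            self.enc = nn.Linear(x_dim, 2 * z_dim)  # outputs mean and log-variance
            self.dec = nn.Linear(z_dim, x_dim)

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            return self.dec(z), mu, logvar

    def beta_vae_loss(x, recon, mu, logvar, beta=4.0):
        recon_loss = F.mse_loss(recon, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon_loss + beta * kl  # beta > 1 trades reconstruction for disentanglement

    model = TinyVAE()
    x = torch.rand(32, 784)
    recon, mu, logvar = model(x)
    beta_vae_loss(x, recon, mu, logvar).backward()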

Organizer: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu
