Research Meeting: Language as a scaffold for learning
Abstract:
Research on constructing and evaluating machine learning models is driven
almost exclusively by examples. We specify the behavior of sentiment classifiers
with labeled documents, guide learning of robot policies by assigning scores to
rollouts, and interpret learned image representations by retrieving salient
training images. Humans are able to learn from richer sources of supervision,
and in the real world this supervision often takes the form of natural language:
we learn word meanings from dictionaries and policies from cookbooks; we show
understanding by explaining rather than demonstrating.
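As a concrete illustration of the example-driven paradigm described above, here is a minimal sketch of a sentiment classifier whose behavior is specified entirely by labeled documents. The toy corpus and labels are invented for illustration; this is not code from the talk.

```python
# Illustrative sketch only: the model's entire notion of "sentiment"
# comes from a handful of (document, label) pairs.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "a delightful, moving film",        # positive
    "warm and funny throughout",        # positive
    "a tedious, lifeless mess",         # negative
    "flat characters and a dull plot",  # negative
]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Fit a bag-of-words logistic regression: the specification of the task
# lives entirely in the labeled examples, nowhere else.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(docs, labels)

print(clf.predict(["a dull but moving film"]))
```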
This talk will explore two ways of leveraging language data to train and
interpret machine learning models: using linguistic supervision to structure
policy search and few-shot learning; and representation translation to generate
textual explanations of learned models.
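One possible reading of "linguistic supervision to structure few-shot learning" is sketched below: a natural-language hint narrows the hypothesis space before a few labeled examples select among the remaining candidates. This is a toy illustration of the general idea, not the speaker's method; the helper names and the keyword-based hint format are invented for this sketch.

```python
# Hypothetical sketch: language as a scaffold for few-shot learning.
# A hint restricts hypotheses to simple keyword detectors; a few
# labeled examples then pick the best-fitting one.

def hypotheses_from_hint(hint: str):
    """Map a (toy) language hint to candidate classifiers, one per
    word: each labels a string positive iff it contains that word."""
    keywords = [w.strip(".,") for w in hint.lower().split()]
    return [(kw, lambda s, kw=kw: kw in s.lower()) for kw in keywords]

def few_shot_fit(hint, examples):
    """Select the hint-derived hypothesis that agrees with the most
    of a small set of (text, label) examples."""
    candidates = hypotheses_from_hint(hint)
    return max(candidates,
               key=lambda c: sum(c[1](x) == y for x, y in examples))

hint = "reviews that praise acting are positive"
examples = [("the acting was superb", True),
            ("dreadful pacing", False)]
kw, clf = few_shot_fit(hint, examples)
print(kw, clf("great acting, weak script"))  # -> acting True
```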
Organizers: Hector Penagos, Frederico Azevedo
Organizer Email: cbmm-contact@mit.edu