Title: Language as a scaffold for learning
Research on constructing and evaluating machine learning models is driven
almost exclusively by examples. We specify the behavior of sentiment classifiers
with labeled documents, guide learning of robot policies by assigning scores to
rollouts, and interpret learned image representations by retrieving salient
training images. Humans, by contrast, can learn from richer sources of
supervision, and in the real world this supervision often takes the form of
natural language: we learn word meanings from dictionaries and policies from
cookbooks; we demonstrate understanding by explaining rather than by doing.
This talk will explore two ways of leveraging language data to train and
interpret machine learning models: using linguistic supervision to structure
policy search and few-shot learning, and using representation translation to
generate textual explanations of learned models.
MIT Bldg 46-5165, 43 Vassar Street, Cambridge MA 02139