Nick Watters (Tenenbaum Lab)
Title: Unsupervised Learning and Structured Representations in Neural Networks
Sample efficiency, transfer, and flexibility are hallmarks of biological intelligence and long-standing challenges for artificial learning systems. Core to these capacities is the reuse of structured knowledge. One form of knowledge reuse is compositionality: the ability to represent data as a combination of primitives and to recombine those primitives into novel composites. Compositionality takes many forms, such as feature compositionality (e.g., a pink elephant), object and relation compositionality (e.g., an elephant on an iceberg), and the compositional structure of natural language.
This talk will summarize several recent approaches to learning compositional representations without supervision. It will focus primarily on techniques that learn factorized representations of features, objects, and relations in visual scenes, and it will briefly touch on the use of such representations for sample-efficient model-based reinforcement learning. There will also be some discussion of connections to neuroscience.
MIT Bldg 46-5165, 43 Vassar Street, Cambridge MA 02139