Brains, Minds + Machines Seminar Series: Calibrating Generative Models: The Probabilistic Chomsky-Schützenberger Hierarchy

Oct 29, 2019 - 4:00 pm
Venue: Star Seminar Room (Stata D463)
Address: Stata D463, Building 32, 32 Vassar Street, Cambridge, MA 02139
Speaker/s: Thomas Icard, Stanford

Abstract: How might we assess the expressive capacity of different classes of probabilistic generative models? The subject of this talk is an approach that appeals to machines of increasing strength (finite-state, recursive, etc.) or, equivalently, to probabilistic grammars of increasing complexity, giving rise to a probabilistic version of the familiar Chomsky hierarchy. Many common probabilistic models (hidden Markov models, generative neural networks, probabilistic programming languages, etc.) naturally fit into the hierarchy. The aim of the talk is to give as comprehensive a picture as possible of the landscape of distributions that can be expressed at each level of the hierarchy. Of special interest is what this pattern of results might mean for cognitive modeling.
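
To make the notion of a probabilistic grammar concrete, here is a minimal sketch (not from the talk; the grammar and its rule probabilities are invented for illustration) of sampling from a toy probabilistic context-free grammar in Python:

```python
import random

# A toy probabilistic context-free grammar (invented for illustration).
# Each nonterminal maps to a list of (probability, right-hand side) pairs;
# the probabilities for each nonterminal sum to 1.
PCFG = {
    "S": [(0.7, ["a", "S", "b"]),   # recursive rule: builds a^n ... b^n
          (0.3, [])],               # stopping rule: the empty string
}

def sample(symbol="S"):
    """Sample a string from the PCFG by top-down expansion."""
    if symbol not in PCFG:          # terminal symbol: emit it as-is
        return [symbol]
    r, acc = random.random(), 0.0
    for prob, rhs in PCFG[symbol]:  # pick a rule in proportion to its probability
        acc += prob
        if r <= acc:
            return [tok for sym in rhs for tok in sample(sym)]
    return []

if __name__ == "__main__":
    for _ in range(5):
        print("".join(sample()) or "(empty)")
```

This toy grammar places a geometric distribution over the strings a^n b^n, a language no finite-state machine generates; separations of exactly this kind are what the probabilistic hierarchy tracks.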

Organizers: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu

Brains, Minds + Machines Seminar Series: A distributional point of view on hierarchy

Sep 17, 2019 - 4:00 pm
Venue: MIT Building 46-3002 (Singleton Auditorium)
Speaker/s: Maia Fraser, Assistant Professor, University of Ottawa

Abstract: Hierarchical learning is found widely in biological organisms, and there are several compelling arguments for the advantages of this structure. Modularity (reusable components) and function approximation are two for which theoretical support is readily available. Other, more statistical, arguments are surely also relevant; in particular, there is a sense in which "hierarchy reduces generalization error." In this talk, I will bolster this claim from a distributional point of view and show how it gives rise to deep vs. shallow regret bounds in semi-supervised learning that can also be carried over to some reinforcement learning settings. In both paradigms the argument deals with partial observation, namely partially labeled data and partially observed states, respectively, and the useful representations that can be learned from them. Examples include manifold learning and group-invariant features.
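
As a toy illustration of the last example, group-invariant features, the following sketch (not from the talk; all names are invented) averages a nonlinear feature map over the orbit of an input under the cyclic-shift group, which makes the resulting representation shift-invariant by construction:

```python
import numpy as np

def shift_orbit(x):
    """The orbit of a 1-D signal x under the cyclic-shift group."""
    return np.stack([np.roll(x, k) for k in range(len(x))])

def group_average(x, phi):
    """Average the feature map phi over the orbit of x.

    Shifting x only permutes its orbit, so the averaged feature
    vector is invariant to cyclic shifts by construction.
    """
    return np.mean([phi(z) for z in shift_orbit(x)], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=8)
    phi = lambda z: np.maximum(z, 0.0)      # a simple nonlinear feature map
    f1 = group_average(x, phi)
    f2 = group_average(np.roll(x, 3), phi)  # the same signal, shifted
    print(np.allclose(f1, f2))              # True: the features are invariant
```

Group averaging is the simplest way to build such invariant representations; the learned representations discussed in the talk are more general, but the invariance principle is the same.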

Organizers: Frederico Azevedo, Hector Penagos
Organizer Email: cbmm-contact@mit.edu
