A distributional point of view on hierarchy

Date Posted:  September 18, 2019
Date Recorded:  September 17, 2019
Speaker(s):  Maia Fraser
  • Brains, Minds and Machines Seminar Series
Description: 

Maia Fraser, Assistant Professor, University of Ottawa

Abstract: Hierarchical learning is found widely in biological organisms, and there are several compelling arguments for the advantages of this structure. Modularity (reusable components) and function approximation are two for which theoretical support is readily available. Other, more statistical, arguments are surely also relevant; in particular, there is a sense that "hierarchy reduces generalization error." In this talk, I will bolster this claim from a distributional point of view and show how it gives rise to deep vs. shallow regret bounds in semi-supervised learning that can also be carried over to some reinforcement learning settings. In both paradigms, the argument deals with partial observation, namely partially labeled data and partially observed states, respectively, and the useful representations that can be learned from them. Examples include manifold learning and group-invariant features.