Stability of overparametrized learning models

Date Posted:  April 15, 2020
Date Recorded:  April 15, 2020
CBMM Speaker(s):  Tomaso Poggio, Lorenzo Rosasco
Speaker(s):  Mikhail Belkin, Constantinos Daskalakis, Gil Strang
Description: 

A panel discussion featuring Tomaso Poggio (CBMM), Mikhail Belkin (Ohio State University), Constantinos Daskalakis (MIT CSAIL), Gil Strang (MIT Mathematics), and Lorenzo Rosasco (University of Genova).

Abstract: Developing theoretical foundations for learning is a key step towards understanding intelligence. Supervised learning is a paradigm in which natural or artificial networks learn a functional relationship from a set of n input-output training examples. A main challenge for the theory is to determine the conditions under which a learning algorithm, after training on a finite training set, will predict well on new inputs, i.e., generalize. In classical learning theory, this was accomplished by appropriately restricting the space of functions represented by the networks (the hypothesis space), characterizing a regime in which the number of training examples (n) is greater than the number of parameters to be learned (d). Here we will discuss the regime in which networks remain overparametrized, i.e., d > n even as n grows, and in which the hypothesis space is not fixed. Our panel discussion will center on the key stability properties of the learning algorithm itself, rather than restrictions on the hypothesis space, that are necessary to achieve learnability.
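For a concrete picture of the d > n regime mentioned in the abstract, the sketch below (not part of the panel materials) fits an overparametrized linear model to noisy data using the minimum-norm least-squares solution, which is also the solution gradient descent on the square loss reaches from zero initialization. The choices n = 50, d = 500, and the noise level are illustrative assumptions, not values used by the panelists.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 50, 500                               # n training examples, d > n parameters
w_star = rng.normal(size=d) / np.sqrt(d)     # ground-truth weights, unit-scale signal
X_train = rng.normal(size=(n, d))
y_train = X_train @ w_star + 0.1 * rng.normal(size=n)   # noisy labels

# Minimum-norm interpolating solution: among the infinitely many weight
# vectors satisfying X_train @ w == y_train, pick the one of smallest L2 norm.
# Gradient descent from zero initialization converges to this same solution.
w_hat = np.linalg.pinv(X_train) @ y_train

# Training error is essentially zero: the overparametrized model interpolates
# the noisy labels exactly.
print("train MSE:", np.mean((X_train @ w_hat - y_train) ** 2))

# Test error stays bounded (and below the trivial zero predictor) even though
# the fit passed exactly through noisy labels.
X_test = rng.normal(size=(10_000, d))
y_test = X_test @ w_star
print("test MSE: ", np.mean((X_test @ w_hat - y_test) ** 2))
print("null MSE: ", np.mean(y_test ** 2))    # predicting 0 everywhere, for reference
```

The point of the sketch is only that exact interpolation of noisy data need not make prediction error blow up; which stability properties of the algorithm make such behavior possible is the question the panel takes up.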