ML seminar: Is Learning Compatible with (Over)fitting to the Training Data?

Prof. Sasha Rakhlin, November 14, 2018, 4:30 pm to 5:30 pm
Speaker: Sasha Rakhlin, MIT (LIDS, CBMM)

Prof. Rakhlin will present next week's ML seminar talk at CSAIL, MIT Bldg 32.

Abstract: We revisit the basic question: can a learning method be successful if it perfectly fits (interpolates/memorizes) the data? The question is motivated by the good out-of-sample performance of "overparametrized" deep neural networks that have the capacity to fit training data exactly, even if labels are randomized. The conventional wisdom in Statistics and ML is to regularize the solution and avoid data interpolation. We challenge this wisdom and propose several interpolation methods that work well, both in theory and in practice. In particular, we present a study of kernel "ridgeless" regression and describe a new phenomenon of implicit regularization, even in the absence of an explicit bias-variance trade-off. We will discuss the nature of successful learning with interpolation, both in regression and classification.
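For concreteness, here is a minimal sketch of the object named in the abstract, kernel "ridgeless" regression: the minimum-norm interpolant obtained as the ridge penalty goes to zero. The data, kernel choice, and bandwidth below are illustrative assumptions, not details from the talk:

```python
import numpy as np

def rbf_kernel(X1, X2, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two point sets."""
    sq = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    return np.exp(-sq / (2.0 * bandwidth**2))

rng = np.random.default_rng(0)

# Toy 1-D regression data with noisy labels (illustrative only).
X_train = rng.uniform(-3.0, 3.0, size=(30, 1))
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(30)

# "Ridgeless" fit: the lambda -> 0 limit of kernel ridge regression,
# i.e. the minimum-norm interpolant alpha = K^+ y (pseudoinverse,
# which also handles a singular kernel matrix).
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.pinv(K) @ y_train

# The fit interpolates the training labels (up to numerical error)...
print("max train residual:", np.abs(K @ alpha - y_train).max())

# ...yet can still predict sensibly at held-out points.
X_test = np.linspace(-3.0, 3.0, 200)[:, None]
y_pred = rbf_kernel(X_test, X_train) @ alpha
```

Despite fitting the noisy labels exactly, the minimum-norm solution can generalize; the talk's theme is when and why such interpolation succeeds.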

Details

Date: November 14, 2018
Time: 4:30 pm to 5:30 pm
Venue: Stata Bldg, 32-155