Tomaso Poggio: Learning as the Prototypical Inverse Problem

Topics: overview of learning tasks and methods; ill-posedness and regularization; basic concepts and notation.

- Supervised learning: given a training set of labeled examples drawn from a probability distribution, find a function that predicts the outputs from the inputs.
- Noise and sampling issues.
- The goal is to make predictions about future data (generalization).
- A loss function measures the error between actual and predicted values; examples of loss functions for regression and binary classification.
- The expected risk measures the loss averaged over the unknown distribution; the empirical risk serves as a computable proxy for the expected risk (see the worked definitions after this list).
- Hypothesis space: the space of functions or models to search (e.g. linear functions, polynomials, RBFs, Sobolev spaces).
- Minimizing the empirical risk.
- A learning algorithm should generalize and be well-posed, e.g. stable.
- Regularization is the classical way to restore well-posedness and ensure generalization; Tikhonov regularization (a code sketch follows the definitions below).
- Intelligent behavior optimizes under constraints that are critical to problem solving and generalization.
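
The risk definitions above admit a compact statement; the following is a brief sketch in standard notation (the symbols S, mu, V, lambda, and H are assumed here for illustration, not taken from the lecture itself):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Given a training set $S = \{(x_1, y_1), \dots, (x_n, y_n)\}$ drawn i.i.d.\
from an unknown distribution $\mu(x, y)$, fix a loss function $V(f(x), y)$,
e.g.\ the square loss $V(f(x), y) = (f(x) - y)^2$ for regression or the
0--1 loss $V(f(x), y) = \mathbf{1}[\operatorname{sign} f(x) \neq y]$ for
binary classification. The \emph{expected risk} of a hypothesis $f$ is
\[
  I[f] = \int V(f(x), y) \, d\mu(x, y),
\]
which cannot be computed because $\mu$ is unknown; the \emph{empirical risk}
\[
  I_S[f] = \frac{1}{n} \sum_{i=1}^{n} V(f(x_i), y_i)
\]
serves as its proxy. Tikhonov regularization restores well-posedness by
minimizing, over a hypothesis space $\mathcal{H}$,
\[
  \min_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} V(f(x_i), y_i)
    + \lambda \, \|f\|_{\mathcal{H}}^2,
\]
where $\lambda > 0$ trades fit to the data against a smoothness constraint.
\end{document}
```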
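
And a minimal numerical sketch of Tikhonov regularization with the square loss over an RBF hypothesis space, assuming numpy; the kernel choice, function names, and parameter values are illustrative, not taken from the lecture:

```python
# Tikhonov-regularized least squares with a Gaussian (RBF) kernel.
# A sketch under assumed notation, not a definitive implementation.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix: K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def fit_rls(X, y, lam=0.1, sigma=1.0):
    """Minimize (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    By the representer theorem f(x) = sum_i c_i K(x, x_i), and the
    coefficients solve the linear system (K + lam * n * I) c = y;
    lam > 0 keeps the system well-conditioned (well-posedness/stability)."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, c, X_new, sigma=1.0):
    """Evaluate the learned function on new inputs."""
    return rbf_kernel(X_new, X_train, sigma) @ c

# Noisy samples of an unknown target function (the generalization setting):
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
c = fit_rls(X, y, lam=0.01, sigma=0.5)
X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
print(predict(X, c, X_test, sigma=0.5))  # predictions on future data
```

Setting lam to zero recovers unregularized empirical risk minimization, which for this hypothesis space is ill-posed: the solution interpolates the noise and is unstable under small perturbations of the training set.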