Class 02: The Learning Problem and Regularization
Instructor: Lorenzo Rosasco
Description
We formalize the problem of learning from examples in the framework of statistical learning theory and introduce key terms and concepts such as loss functions, empirical and excess risk, generalization error, and consistency. We briefly describe foundational results and introduce the concepts of hypothesis space and regularization.
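As a minimal illustration of two of these concepts, empirical risk and regularization, here is a short Python sketch (not part of the class material; data and parameter values are hypothetical) of regularized empirical risk minimization with the square loss, i.e. ridge regression in closed form:

```python
import numpy as np

# Hypothetical synthetic data: n examples in d dimensions, noisy linear labels.
rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(n)

def empirical_risk(w, X, y):
    """Empirical risk under the square loss: (1/n) * sum_i (w . x_i - y_i)^2."""
    return np.mean((X @ w - y) ** 2)

def ridge(X, y, lam):
    """Minimizer of the Tikhonov-regularized empirical risk
    (1/n)||Xw - y||^2 + lam ||w||^2, via the normal equations."""
    n, d = X.shape
    return np.linalg.solve(X.T @ X / n + lam * np.eye(d), X.T @ y / n)

w_hat = ridge(X, y, lam=0.1)
print("empirical risk of regularized solution:", empirical_risk(w_hat, X, y))
```

The regularization parameter `lam` trades off the fit to the data (empirical risk) against the norm of the solution; the course develops why this trade-off controls generalization.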
Slides
Class Reference Material
L. Rosasco, T. Poggio, Machine Learning: a Regularization Approach, MIT 9.520 Lecture Notes, Manuscript, Dec. 2017
Chapter 1 - Statistical Learning Theory
Note: The course notes, circulated as a book draft, are the reference material for this class. Related and older material can be accessed through previous years' offerings of the course.
Further Reading
- F. Cucker and S. Smale, On the mathematical foundations of learning, Bulletin of the American Mathematical Society, 2002.
- T. Evgeniou, M. Pontil and T. Poggio, Regularization networks and support vector machines, Advances in Computational Mathematics, 2000.
- S. Villa, L. Rosasco and T. Poggio, On learnability, complexity and stability, "Empirical Inference, Festschrift in Honor of Vladimir N. Vapnik." Springer-Verlag, Chapter 7, 2013.
- V. Vapnik, An overview of statistical learning theory, IEEE Transactions on Neural Networks, 10(5), 1999.