An analysis of training and generalization errors in shallow and deep networks

Title: An analysis of training and generalization errors in shallow and deep networks
Publication Type: CBMM Memos
Year of Publication: 2018
Authors: Mhaskar, H., Poggio, T.
Date Published: 02/2018
Keywords: deep learning, generalization error, interpolatory approximation
Abstract

An open problem around deep networks is the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we explain this phenomenon when each unit evaluates a trigonometric polynomial. It is well understood in the theory of function approximation that approximation by trigonometric polynomials is a “role model” for many other processes of approximation, and it has inspired many theoretical constructions in the context of approximation by neural and RBF networks as well. In this paper, we argue that the maximum loss functional is necessary to measure the generalization error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error, and on how much error to expect at which test data. An interesting feature of our new method is that the variance in the training data is no longer an insurmountable lower bound on the generalization error.
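The abstract's two technical ingredients, interpolation by trigonometric polynomials and measuring error in the maximum (sup-norm) rather than an averaged loss, can be illustrated with a minimal sketch. The Python snippet below is an illustration under assumptions, not the authors' construction: the target function, the degree n, and the grids are arbitrary choices made here for demonstration. It interpolates 2n+1 equispaced samples exactly with a trigonometric polynomial of degree n, then reports both the mean-squared and the maximum test error on a dense grid.

```python
import numpy as np

# Illustrative sketch only (not the paper's construction): fit a univariate
# trigonometric polynomial of degree n to 2n+1 equally spaced samples, which
# interpolates the training data exactly, then compare the mean-squared loss
# with the maximum (sup-norm) loss on a dense test grid.

def target(x):
    return np.exp(np.sin(x))  # a smooth 2*pi-periodic target (an assumption)

n = 8                                    # trigonometric degree (an assumption)
m = 2 * n + 1                            # samples needed for exact interpolation
x_train = 2 * np.pi * np.arange(m) / m
y_train = target(x_train)

def design(x):
    # Design matrix in the basis {1, cos(kx), sin(kx)}, k = 1..n
    cols = [np.ones_like(x)]
    for k in range(1, n + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    return np.column_stack(cols)

# Square, well-conditioned system at equispaced nodes: the least-squares
# solution is the unique interpolant, so the training error is ~ machine eps.
coef, *_ = np.linalg.lstsq(design(x_train), y_train, rcond=None)

x_test = np.linspace(0, 2 * np.pi, 2000)
residual = design(x_test) @ coef - target(x_test)

print("training error (max):", np.max(np.abs(design(x_train) @ coef - y_train)))
print("test error (mean sq):", np.mean(residual ** 2))
print("test error (max)    :", np.max(np.abs(residual)))
```

The point of printing both test errors is that an averaged loss can look small while the worst-case error at particular test points is much larger; the sup-norm makes that worst case visible.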

arXiv: arXiv:1802.06266

DSpace@MIT: http://hdl.handle.net/1721.1/113843

CBMM Memo No: 076

CBMM Relationship: 

  • CBMM Funded