This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. We analyze this phenomenon in the case of regression problems when each unit evaluates a periodic activation function. We argue that, in order to take full advantage of the compositional structure, the minimal expected value of the square loss is an inappropriate measure of the generalization error when approximating compositional functions. Instead, we measure the generalization error in the sense of maximum loss and, sometimes, as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at any given test point.

%B Neural Networks %V 121 %P 229 - 241 %8 01/2020 %G eng %U https://www.sciencedirect.com/science/article/abs/pii/S0893608019302552 %! Neural Networks %R 10.1016/j.neunet.2019.08.028 %0 Generic %D 2019 %T An analysis of training and generalization errors in shallow and deep networks %A Hrushikesh Mhaskar %A Tomaso Poggio %K deep learning %K generalization error %K interpolatory approximation %X This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. We analyze this phenomenon in the case of regression problems when each unit evaluates a periodic activation function. We argue that, in order to take full advantage of the compositional structure, the minimal expected value of the square loss is an inappropriate measure of the generalization error when approximating compositional functions. Instead, we measure the generalization error in the sense of maximum loss and, sometimes, as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at any given test point.

%8 05/2019 %1 https://arxiv.org/abs/1802.06266

%2 https://hdl.handle.net/1721.1/121183
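The distinction the abstract draws between the expected square loss and the maximum loss as measures of generalization error can be illustrated numerically. A minimal sketch, where the periodic target and its truncated Fourier approximant are purely illustrative choices, not taken from the paper:

```python
import numpy as np

def target(x):
    return np.abs(np.sin(x))   # periodic target with kinks at multiples of pi

def fourier_approx(x, n_terms):
    # Truncated Fourier series of |sin x|:
    # |sin x| = 2/pi - (4/pi) * sum_{k>=1} cos(2kx) / (4k^2 - 1)
    s = np.full_like(x, 2.0 / np.pi)
    for k in range(1, n_terms + 1):
        s -= (4.0 / np.pi) * np.cos(2.0 * k * x) / (4.0 * k * k - 1.0)
    return s

grid = np.linspace(0.0, 2.0 * np.pi, 2001)
err = target(grid) - fourier_approx(grid, n_terms=8)

l2_error = np.sqrt(np.mean(err ** 2))   # square root of the expected square loss
sup_error = np.max(np.abs(err))         # maximum (worst-case) loss
print(l2_error, sup_error)
```

The sup-norm error concentrates at the kinks and strictly dominates the root-mean-square error, which is why a small expected square loss need not certify uniform accuracy.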

%0 Generic %D 2018 %T An analysis of training and generalization errors in shallow and deep networks %A Hrushikesh Mhaskar %A Tomaso Poggio %K deep learning %K generalization error %K interpolatory approximation %X An open problem around deep networks is the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. In this paper, we explain this phenomenon when each unit evaluates a trigonometric polynomial. It is well understood in the theory of function approximation that approximation by trigonometric polynomials is a “role model” for many other approximation processes, and it has inspired many theoretical constructions, including in the context of approximation by neural and RBF networks. We argue that the maximum loss functional is necessary to measure the generalization error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error, and how much error to expect at any given test point. An interesting feature of our new method is that the variance in the training data is no longer an insurmountable lower bound on the generalization error.

%8 02/2018 %2 http://hdl.handle.net/1721.1/113843
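Zero training error combined with a small maximum error, as in the trigonometric-polynomial setting of the abstract above, can be illustrated with plain trigonometric interpolation. A minimal sketch with an illustrative smooth periodic target (the target and sample sizes are our choices, not the paper's):

```python
import numpy as np

def design(x, degree):
    # Trigonometric polynomial basis: 1, cos(kx), sin(kx) for k = 1..degree.
    cols = [np.ones_like(x)]
    for k in range(1, degree + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    return np.column_stack(cols)

f = lambda x: np.exp(np.sin(x))   # illustrative smooth periodic target

# 21 equally spaced samples on the period; degree 10 gives 21 basis
# functions, i.e. exactly as many parameters as data points.
x_train = np.linspace(0.0, 2.0 * np.pi, 21, endpoint=False)
y_train = f(x_train)
coef, *_ = np.linalg.lstsq(design(x_train, 10), y_train, rcond=None)

# The fit interpolates: the training error is zero up to round-off ...
train_err = np.max(np.abs(design(x_train, 10) @ coef - y_train))

# ... and the maximum error over a fine grid, a proxy for the
# generalization error in the sup-norm sense, is also small.
x_test = np.linspace(0.0, 2.0 * np.pi, 1000)
test_err = np.max(np.abs(design(x_test, 10) @ coef - f(x_test)))
print(train_err, test_err)
```

Because the target is smooth and periodic, perfect fitting of the samples does not force a large worst-case error between them, which is the flavor of result the abstract describes.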

%0 Journal Article %J Bulletin of the Polish Academy of Sciences: Technical Sciences %D 2018 %T Theory I: Deep networks and the curse of dimensionality %A Tomaso Poggio %A Qianli Liao %K convolutional neural networks %K deep and shallow networks %K deep learning %K function approximation %X We review recent work characterizing the classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks represent a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.

%B Bulletin of the Polish Academy of Sciences: Technical Sciences %V 66 %G eng %N 6 %0 Report %D 2017 %T Fisher-Rao Metric, Geometry, and Complexity of Neural Networks %A Liang, Tengyuan %A Tomaso Poggio %A Alexander Rakhlin %A Stokes, James %K capacity control %K deep learning %K Fisher-Rao metric %K generalization error %K information geometry %K Invariance %K natural gradient %K ReLU activation %K statistical learning theory %X We study the relationship between geometry and capacity measures for deep neural networks from an invariance viewpoint. We introduce a new notion of capacity, the Fisher-Rao norm, that possesses desirable invariance properties and is motivated by Information Geometry. We discover an analytical characterization of the new capacity measure, through which we establish norm-comparison inequalities and further show that the new measure serves as an umbrella for several existing norm-based complexity measures. We discuss upper bounds on the generalization error induced by the proposed measure. Extensive numerical experiments on CIFAR-10 support our theoretical findings. Our theoretical analysis rests on a key structural lemma about partial derivatives of multi-layer rectifier networks.
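The structural lemma about partial derivatives of rectifier networks mentioned in the Fisher-Rao abstract is, for bias-free networks, an instance of Euler's identity for homogeneous functions. A minimal numerical check on an illustrative two-layer network (the specific architecture and the degree-2 form of the identity are assumptions of this sketch; the paper states the lemma for general multi-layer rectifier networks):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-layer bias-free ReLU network f(x) = w2 . relu(W1 @ x).
# As a function of the stacked parameter vector theta, f is homogeneous
# of degree 2, so Euler's identity gives <theta, grad_theta f> = 2 * f.
x = rng.normal(size=3)

def f(theta):
    W1 = theta[:15].reshape(5, 3)
    w2 = theta[15:]
    return w2 @ np.maximum(W1 @ x, 0.0)

theta = rng.normal(size=20)

# Central-difference numerical gradient with respect to all parameters.
eps = 1e-6
grad = np.array([(f(theta + eps * e) - f(theta - eps * e)) / (2.0 * eps)
                 for e in np.eye(theta.size)])

lhs, rhs = theta @ grad, 2.0 * f(theta)
print(lhs, rhs)   # the two values agree up to finite-difference error
```

Identities of this kind tie the parameter vector to the network output regardless of how the parameters are rescaled between layers, which is the invariance the Fisher-Rao norm is built to respect.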

%B arXiv.org %8 11/2017 %G eng %U https://arxiv.org/abs/1711.01530 %0 Journal Article %J International Journal of Automation and Computing %D 2017 %T Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review %A Tomaso Poggio %A Hrushikesh Mhaskar %A Lorenzo Rosasco %A Brando Miranda %A Qianli Liao %K convolutional neural networks %K deep and shallow networks %K deep learning %K function approximation %K Machine Learning %K Neural Networks %X The paper reviews and extends an emerging body of theoretical results on deep learning, including the conditions under which it can be exponentially better than shallow learning. A class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage. Implications of a few key theorems are discussed, together with new results, open problems and conjectures.

%B International Journal of Automation and Computing %P 1-17 %8 03/2017 %G eng %U http://link.springer.com/article/10.1007/s11633-017-1054-2?wt_mc=Internal.Event.1.SEM.ArticleAuthorOnlineFirst %R 10.1007/s11633-017-1054-2
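The exponential gap reviewed above comes from degree-of-approximation rates: a shallow network needs on the order of eps^(-d/m) parameters to approximate a d-variate function of smoothness m to accuracy eps, while a deep network matching a binary-tree compositional structure needs on the order of (d - 1) * eps^(-2/m), since each constituent function is bivariate. A back-of-the-envelope comparison (constants are ignored and the numbers are purely illustrative):

```python
# Back-of-the-envelope parameter counts from the approximation rates:
# shallow networks pay the curse of dimensionality, while deep networks
# matching a binary-tree composition pay only for bivariate pieces.
eps = 0.01   # target accuracy
d = 8        # input dimension
m = 2        # smoothness of each constituent function

shallow_params = eps ** (-d / m)          # ~ eps^(-d/m)
deep_params = (d - 1) * eps ** (-2 / m)   # ~ (d - 1) * eps^(-2/m)

print(shallow_params, deep_params)
```

For these illustrative values the shallow count is about 10^8 against roughly 700 for the compositional deep network, and the gap widens exponentially as d grows.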