This paper is motivated by an open problem concerning deep networks: the apparent absence of overfitting despite large over-parametrization, which allows perfect fitting of the training data. We analyze this phenomenon for regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error in the approximation of compositional functions, if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss and, in some cases, as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and good generalization error. We prove that the solution of a regularization problem is guaranteed to yield good training error as well as good generalization error, and we estimate how much error to expect at which test data.

%8 05/2019
%1 https://arxiv.org/abs/1802.06266

%2 https://hdl.handle.net/1721.1/121183

%0 Journal Article
%J Analysis and Applications
%D 2016
%T Deep vs. shallow networks: An approximation theory perspective
%A Mhaskar, H. N.
%A Tomaso Poggio
%K blessed representation
%K deep and shallow networks
%K Gaussian networks
%K ReLU networks
%X The paper briefly reviews several recent results on hierarchical architectures for learning from examples, which may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of *relative dimension* to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks, but not by shallow ones, to drastically reduce the complexity required for approximation and learning.

%B Analysis and Applications
%V 14
%P 829 - 848
%8 01/2016
%G eng
%U http://www.worldscientific.com/doi/abs/10.1142/S0219530516400042
%N 06
%! Anal. Appl.
%R 10.1142/S0219530516400042