Theory III: Dynamics and Generalization in Deep Networks

Title: Theory III: Dynamics and Generalization in Deep Networks
Publication Type: CBMM Memos
Year of Publication: 2018
Authors: Banburski, A; Liao, Q; Miranda, B; Poggio, T; Rosasco, L; Liang, B; Hidary, J
Date Published: 06/2018
Abstract

We review recent observations on the dynamical systems induced by gradient descent methods used for training deep networks and summarize properties of the solutions they converge to. Recent results illuminate the generalization puzzle in the special case of linear networks for binary classification. They prove that minimization of loss functions such as the logistic, the cross-entropy, and the exponential loss yields asymptotic convergence to the maximum margin solution for linearly separable datasets, independently of the initial conditions. Here we discuss the case of nonlinear multilayer DNNs near zero minima of the empirical loss, under exponential-type losses and the square loss, for several variations of the basic gradient descent algorithm, including a new NMGD (norm-minimizing gradient descent) version that converges to the minimum norm fixed points of the gradient descent iteration. Our main results are:

  • gradient descent algorithms with a weight normalization constraint achieve generalization (see the sketch after this list);
  • the fundamental reason for the effectiveness of existing weight normalization and batch normalization techniques is that they are approximate implementations of maximizing the margin under a unit-norm constraint;
  • without unit-norm constraints, some level of generalization can still be obtained for not-too-deep networks, because the balance of the weights across different layers, if present at initialization, is maintained by the gradient flow.
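The following minimal sketch is our illustration, not code from the memo: it runs gradient descent on the exponential loss for a linearly separable toy dataset, projecting the weight vector back to the unit sphere after every step as a stand-in for the unit-norm (weight normalization) constraint. The dataset, step size, and iteration count are arbitrary assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch (assumptions, not the memo's NMGD algorithm):
# gradient descent on the exponential loss L(w) = mean_i exp(-y_i <w, x_i>)
# for a linearly separable 2D toy problem, with w projected back to the
# unit sphere after each step (a simple unit-norm constraint).
rng = np.random.default_rng(0)

# Two linearly separable clusters with labels +1 / -1.
X_pos = rng.normal(loc=[+2.0, +2.0], scale=0.5, size=(50, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(50), -np.ones(50)])

w = rng.normal(size=2)
w /= np.linalg.norm(w)            # start on the unit sphere
lr = 0.1

for t in range(5000):
    margins = y * (X @ w)                                        # y_i <w, x_i>
    grad = -(y[:, None] * X * np.exp(-margins)[:, None]).mean(axis=0)
    w = w - lr * grad
    w /= np.linalg.norm(w)                                       # unit-norm projection

print("unit-norm weight direction:", w)
print("smallest margin on the data:", (y * (X @ w)).min())
```

Under these assumptions, the constrained direction stabilizes and the smallest margin on the data grows, which is the qualitative behavior the bullet points above describe for gradient descent under a unit-norm constraint.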

In light of these theoretical results, we discuss experimental evidence for the apparent absence of “overfitting”, that is, the observation that the expected classification error does not get worse as the number of parameters increases. Our explanation focuses on the implicit normalization enforced by algorithms such as batch normalization, since controlling the norm of the weights is related to Halpern iterations for minimum-norm solutions, which are equivalent to regularization with a vanishing λ(t).
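As a hedged illustration of the last point, the sketch below (again ours, not from the memo) runs a Halpern iteration w_{k+1} = α_k w_0 + (1 − α_k) T(w_k) with α_k → 0, where T is a gradient-descent step for an underdetermined least-squares problem. Starting from w_0 = 0, the iterates approach the minimum-norm interpolating solution, the same solution that ridge regularization with a vanishing λ(t) selects. The problem sizes, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a Halpern iteration (assumed toy setup):
#   w_{k+1} = alpha_k * w_0 + (1 - alpha_k) * T(w_k),  alpha_k -> 0,
# where T is the gradient-descent map of an underdetermined least-squares
# problem. With w_0 = 0 the iterates track the minimum-norm solution,
# mirroring ridge regularization with a vanishing lambda(t).
rng = np.random.default_rng(1)
n, d = 20, 50                             # fewer equations than unknowns
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

step = 1.0 / np.linalg.norm(A, 2) ** 2    # step <= 1/L keeps T nonexpansive
T = lambda w: w - step * A.T @ (A @ w - b)

w0 = np.zeros(d)
w = w0.copy()
for k in range(20000):
    alpha = 1.0 / (k + 2)                 # vanishing anchoring weight
    w = alpha * w0 + (1 - alpha) * T(w)

w_min_norm = np.linalg.pinv(A) @ b        # reference minimum-norm solution
print("residual ||Aw - b||:", np.linalg.norm(A @ w - b))
print("distance to minimum-norm solution:", np.linalg.norm(w - w_min_norm))
```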

1. This replaces previous versions of Theory IIIa and Theory IIIb.

DSpace@MIT: http://hdl.handle.net/1721.1/116692

Download:

  • CBMM-Memo-090.pdf
  • CBMM Memo 090 v2 (revised on 1/3/2019)
  • CBMM Memo 090 v3 (revised on 2/19/2019)
  • CBMM Memo 90 v4 (revised on 3/3/2019)
  • CBMM Memo 90 v6 (revised on 4/11/2019)

CBMM Memo No: 090

CBMM Relationship: 

  • CBMM Funded