Theory III: Dynamics and Generalization in Deep Networks

Publication Type: CBMM Memos
Year of Publication: 2018
Authors: Poggio, T., Liao, Q., Miranda, B., Banburski, A., Boix, X., Hidary, J.
Date Published: 06/2018
Abstract

We review recent observations on the dynamical systems induced by gradient descent methods used for training deep networks and summarize what is known about the solutions they converge to. Recent results illuminate this question in the special case of linear networks for binary classification: they prove that minimizing loss functions such as the logistic, cross-entropy, and exponential losses yields asymptotic convergence to the maximum-margin solution for linearly separable datasets, independently of the initial conditions. Here we discuss the case of nonlinear multilayer DNNs near zero minima of the empirical loss, under exponential-type losses and the square loss, for several variations of the basic gradient descent algorithm, including a new NMGD (norm-minimizing gradient descent) version that converges to the minimum-norm fixed points of the gradient descent iteration. Our main results are:
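
As a toy illustration of the linear, separable case summarized above (not code from the memo), the numpy sketch below runs plain gradient descent on the exponential loss for a linear classifier: the weight norm grows without bound while the normalized direction w/||w|| stabilizes and its margin on the data keeps improving, consistent with the maximum-margin convergence results. The dataset, learning rate, and iteration counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable Gaussian blobs with labels in {-1, +1}.
X = np.vstack([rng.normal(+2.0, 0.5, size=(50, 2)),
               rng.normal(-2.0, 0.5, size=(50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w = 0.01 * rng.normal(size=2)          # arbitrary small initialization
lr = 0.01

for t in range(1, 200001):
    margins = y * (X @ w)                                  # y_i <w, x_i>
    grad = -(X * (y * np.exp(-margins))[:, None]).sum(0)   # grad of sum_i exp(-y_i <w, x_i>)
    w -= lr * grad
    if t % 50000 == 0:
        d = w / np.linalg.norm(w)
        print(f"iter {t:6d}  ||w|| = {np.linalg.norm(w):7.2f}  "
              f"min margin of w/||w|| = {(y * (X @ d)).min():.4f}")
```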

  • generalization bounds for classification lead to maximizing the margin under a unit-norm constraint on the product of the Frobenius norms of the weights at the different layers;
  • gradient descent algorithms on exponential-type loss functions can achieve this goal with appropriate weight normalization;
  • existing weight normalization and batch normalization techniques can be regarded as approximate implementations of the correct minimization algorithm, which is the fundamental reason for their effectiveness;
  • the control of the norm of the weights is related to regularization with vanishing λ(t) and to Halpern iterations for minimum-norm solutions (see the sketch after this list).
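
To make the last point concrete, here is a small numpy sketch (an illustration under our own assumptions, not the memo's NMGD algorithm) of a Halpern iteration x_{k+1} = β_k x_0 + (1 − β_k) T(x_k) with vanishing β_k, applied to the nonexpansive gradient-descent map T of an underdetermined least-squares problem. Anchoring at x_0 = 0 selects, among all fixed points of T, the one of minimum norm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 10))                 # underdetermined: infinitely many exact solutions
b = rng.normal(size=5)

step = 1.0 / np.linalg.norm(A, 2) ** 2       # step <= 1/L keeps T nonexpansive
T = lambda x: x - step * A.T @ (A @ x - b)   # one gradient step on 0.5 * ||Ax - b||^2

x = rng.normal(size=10)                      # arbitrary starting point
x0 = np.zeros(10)                            # anchor at the origin -> minimum-norm fixed point
for k in range(1, 200001):
    beta = 1.0 / (k + 1)                     # vanishing anchor weight, analogous to lambda(t) -> 0
    x = beta * x0 + (1.0 - beta) * T(x)

x_star = np.linalg.pinv(A) @ b               # reference minimum-norm solution
print("distance to min-norm solution:", np.linalg.norm(x - x_star))
print("residual ||Ax - b||          :", np.linalg.norm(A @ x - b))
```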

Finally, we show and discuss experimental evidence for the apparent absence of “overfitting”, that is, the observation that the expected error does not get worse as the number of parameters increases. Our explanation focuses on the implicit normalization enforced by algorithms such as batch normalization.
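
The scale invariance underlying this implicit normalization is easy to check directly. The snippet below (a minimal sketch with a hand-rolled batch-norm function, not the memo's experimental code) verifies that rescaling the weights feeding into a batch-normalization step leaves the layer's output essentially unchanged, so only the direction of the weights matters.

```python
import numpy as np

def batch_norm(z, eps=1e-5):
    # Normalize each unit's pre-activations over the batch (no learned scale/shift).
    return (z - z.mean(axis=0)) / np.sqrt(z.var(axis=0) + eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))                 # a batch of 32 inputs
W = rng.normal(size=(8, 4))                  # weights of one linear layer

out = batch_norm(X @ W)
out_rescaled = batch_norm(X @ (10.0 * W))    # multiply the weights by 10
print(np.allclose(out, out_rescaled, atol=1e-4))   # True: BN cancels the weight scale
```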

This memo replaces previous versions of Theory IIIa and Theory IIIb.

DSpace@MIT: http://hdl.handle.net/1721.1/116692

CBMM Memo No.: 090

CBMM Relationship: CBMM Funded