| Title | Complexity Control by Gradient Descent in Deep Networks |
| Publication Type | Journal Article |
| Year of Publication | 2020 |
| Authors | Poggio, T., Liao, Q., Banburski, A. |
Overparametrized deep networks predict well despite the lack of explicit complexity control during training, such as an explicit regularization term. For exponential-type loss functions, we solve this puzzle by showing an effective regularization effect of gradient descent in terms of the normalized weights that are relevant for classification.
- CBMM Funded
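
As a rough numerical illustration of the abstract's claim (a minimal sketch, not the authors' code), the snippet below runs gradient descent on the exponential loss for a linear model on synthetic separable data. The unnormalized weight norm ||w|| keeps growing, while the normalized direction w/||w|| stabilizes; it is this normalized solution that matters for classification. The data, learning rate, and step counts are arbitrary choices for the demo, and the linear case is only the simplest setting of the phenomenon the paper analyzes for deep networks.

```python
# Illustrative sketch: implicit complexity control of gradient descent under an
# exponential-type loss. All data below is synthetic and hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable clusters with labels +1 / -1.
X = np.vstack([rng.normal(+2.0, 0.5, (50, 2)),
               rng.normal(-2.0, 0.5, (50, 2))])
y = np.concatenate([np.ones(50), -np.ones(50)])

w = rng.normal(0, 0.1, 2)   # linear model f(x) = <w, x>
lr = 0.01                    # arbitrary step size for the demo

for step in range(1, 200001):
    margins = y * (X @ w)
    # Exponential loss L(w) = mean(exp(-y <w, x>)); its gradient is
    # mean(-y x exp(-y <w, x>)).
    grad = -(X * (y * np.exp(-margins))[:, None]).mean(axis=0)
    w -= lr * grad
    if step % 50000 == 0:
        # ||w|| diverges (slowly), while w / ||w|| converges to a fixed
        # direction: the effective regularization acts on normalized weights.
        norm = np.linalg.norm(w)
        print(f"step {step:6d}  ||w|| = {norm:8.3f}  w/||w|| = {w / norm}")
```

Running the sketch prints a steadily increasing ||w|| alongside a normalized direction that stops changing, which is the behavior the abstract summarizes.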