Title | Theory II: Deep learning and optimization |
Publication Type | Journal Article |
Year of Publication | 2018 |
Authors | Poggio, T, Liao, Q |
Journal | Bulletin of the Polish Academy of Sciences: Technical Sciences |
Volume | 66 |
Issue | 6 |
Abstract | The landscape of the empirical risk of overparametrized deep convolutional neural networks (DCNNs) is characterized with a mix of theory and experiments. In part A we show the existence of a large number of global minimizers with zero empirical error (modulo inconsistent equations). The argument, which relies on Bézout's theorem, is rigorous when the ReLUs are replaced by a polynomial nonlinearity. We show with simulations that the corresponding polynomial network is indistinguishable from the ReLU network. According to Bézout's theorem, the global minimizers are degenerate, unlike the local minima, which in general should be non-degenerate. Furthermore, we experimentally analyze and visualize the landscape of the empirical risk of DCNNs on the CIFAR-10 dataset. Based on the above theoretical and experimental observations, we propose a simple model of the landscape of the empirical risk. In part B, we characterize the optimization properties of stochastic gradient descent (SGD) applied to deep networks. The main claim consists of theoretical and experimental evidence for the following property of SGD: like the classical Langevin equation, SGD concentrates in probability on large-volume, "flat" minima, selecting with high probability degenerate minimizers, which are typically global minimizers. |
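Illustration (not from the paper): a minimal sketch of the part-A comparison, training one overparametrized one-hidden-layer network with ReLU and another with a polynomial surrogate for ReLU, to see that both drive the empirical risk toward zero. The data, layer sizes, learning rate, and the degree-2 polynomial coefficients below are all illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 64, 10, 256          # overparametrized: d*h + h parameters >> n samples
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)     # random targets for the toy empirical risk

def relu(z):
    return np.maximum(z, 0.0)

def poly(z):
    # Hypothetical degree-2 surrogate for ReLU on the range seen in training.
    return 0.25 + 0.5 * z + 0.25 * z ** 2

def train(act, act_grad, steps=5000, lr=1e-2):
    """Full-batch gradient descent on the mean squared empirical risk."""
    W = rng.standard_normal((d, h)) / np.sqrt(d)
    v = rng.standard_normal(h) / np.sqrt(h)
    for _ in range(steps):
        Z = X @ W
        A = act(Z)
        err = A @ v - y
        # Gradients of (1/2n) * ||A v - y||^2 with respect to v and W.
        gv = A.T @ err / n
        gW = X.T @ ((err[:, None] * v[None, :]) * act_grad(Z)) / n
        v -= lr * gv
        W -= lr * gW
    return np.mean((act(X @ W) @ v - y) ** 2)

relu_loss = train(relu, lambda z: (z > 0).astype(float))
poly_loss = train(poly, lambda z: 0.5 + 0.5 * z)   # derivative of poly
print(f"ReLU net empirical risk:       {relu_loss:.2e}")
print(f"Polynomial net empirical risk: {poly_loss:.2e}")
```

In this toy setting both runs should drive the empirical risk far below its starting value, loosely mirroring the abstract's claim that the polynomial network behaves indistinguishably from the ReLU network; the polynomial case is the one where the Bézout-theorem counting of zero-error solutions applies rigorously.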
DOI | 10.24425/bpas.2018.125925 |
CBMM Relationship | CBMM Funded |