Why does deep and cheap learning work so well?

Title: Why does deep and cheap learning work so well?
Publication Type: Journal Article
Year of Publication: 2017
Authors: Lin, H, Tegmark, M
Journal: Journal of Statistical Physics
Volume: 168
Issue: 6
Start Page: 1223
Pagination: 1223–1247
Date Published: 09/2017
Keywords: Artificial neural networks, deep learning, Statistical physics
Abstract

We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through “cheap learning” with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various “no-flattening theorems” showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.
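The 2^n bound in the abstract is tight for n = 2: the paper's “cheap learning” construction multiplies two variables with a single hidden layer of just four neurons, by exploiting the second-order Taylor expansion of a smooth activation and shrinking the inputs so higher-order terms vanish. A minimal sketch of that construction (using softplus as the smooth activation; the scale parameter `lam` and the function name are illustrative choices, not from the paper):

```python
import math

def softplus(u):
    """Smooth activation ln(1 + e^u); its Taylor expansion at 0
    has a nonzero quadratic coefficient sigma2 = softplus''(0)/2 = 1/8."""
    return math.log1p(math.exp(u))

def approx_mul(x, y, lam=1e-3):
    """Approximate x*y with one hidden layer of 2^n = 4 neurons (n = 2).

    The four hidden units see +/-lam*(x+y) and +/-lam*(x-y); constants and
    linear terms cancel in the output sum, leaving 8*sigma2*lam^2*x*y plus
    higher-order corrections that shrink as lam -> 0.
    """
    sigma2 = 0.125  # quadratic Taylor coefficient of softplus at 0
    s = (softplus(lam * (x + y)) + softplus(-lam * (x + y))
         - softplus(lam * (x - y)) - softplus(-lam * (x - y)))
    return s / (8 * sigma2 * lam**2)

print(approx_mul(3.0, -2.0))  # close to -6.0
```

Shrinking `lam` trades truncation error (higher Taylor terms) against floating-point cancellation, so moderate values like 1e-3 work well in double precision.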

URL: https://link.springer.com/article/10.1007/s10955-017-1836-5
DOI: 10.1007/s10955-017-1836-5
Download: 1608.08225.pdf

Research Area: 

CBMM Relationship: 

  • CBMM Related