Learning Functions: When Is Deep Better Than Shallow

Publication Type: CBMM Memos
Year of Publication: 2016
Authors: Mhaskar, H., Liao, Q., Poggio, T.
Abstract

While the universal approximation property holds both for hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same accuracy as shallow networks but with an exponentially smaller number of training parameters and VC-dimension. This theorem settles an old conjecture by Bengio on the role of depth in networks. We then define a general class of scalable, shift-invariant algorithms to show a simple and natural set of requirements that justify deep convolutional networks.
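
In brief, the theorem bounds the complexity N (the number of trainable parameters) needed to approximate a target function to accuracy \epsilon. A sketch of the comparison, with notation paraphrased from the memo: for a generic function of n variables in the Sobolev class W_m^n, a shallow network requires

    N = O\big(\epsilon^{-n/m}\big),

whereas for a compositional function assembled from two-variable constituents in W_m^2, for example

    f(x_1,\dots,x_8) = h_3\big(h_{21}(h_{11}(x_1,x_2),\, h_{12}(x_3,x_4)),\; h_{22}(h_{13}(x_5,x_6),\, h_{14}(x_7,x_8))\big),

a deep network whose architecture mirrors the binary-tree structure of f needs only

    N = O\big((n-1)\,\epsilon^{-2/m}\big),

which is exponentially smaller in n.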

URL: https://arxiv.org/pdf/1603.00988v4.pdf
arXiv: 1603.00988

DSpace@MIT: http://hdl.handle.net/1721.1/101635

CBMM Memo No: 045


CBMM Relationship:
  • CBMM Funded