Deep vs. shallow networks: An approximation theory perspective

Title: Deep vs. shallow networks: An approximation theory perspective
Publication Type: Journal Article
Year of Publication: 2016
Authors: Mhaskar, H., Poggio, T.
Journal: Analysis and Applications
Volume: 14
Issue: 06
Pagination: 829–848
Date Published: 01/2016
ISSN: 0219-5305
Keywords: blessed representation, deep and shallow networks, Gaussian networks, ReLU networks
Abstract:

The paper briefly reviews several recent results on hierarchical architectures for learning from examples that may formally explain the conditions under which Deep Convolutional Neural Networks perform much better in function approximation problems than shallow, one-hidden-layer architectures. The paper announces new results for a non-smooth activation function, the ReLU function, used in present-day neural networks, as well as for Gaussian networks. We propose a new definition of relative dimension to encapsulate different notions of sparsity of a function class that can possibly be exploited by deep networks but not by shallow ones to drastically reduce the complexity required for approximation and learning.
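For context, the headline comparison in the paper (sketched here from the main theorem; the precise function classes, constants, and assumptions are spelled out in the article) contrasts the number of units N(ε) needed to approximate an n-variate function with smoothness m to accuracy ε:

\[
N_{\text{shallow}}(\epsilon) = O\!\left(\epsilon^{-n/m}\right),
\qquad
N_{\text{deep}}(\epsilon) = O\!\left((n-1)\,\epsilon^{-2/m}\right),
\]

where the deep bound applies to compositional functions with a binary-tree structure whose constituent functions each depend on only two variables, and the deep network mirrors that tree.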

URL: http://www.worldscientific.com/doi/abs/10.1142/S0219530516400042
DOI: 10.1142/S0219530516400042
Short Title: Anal. Appl.

Research Area: 

CBMM Relationship: 

  • CBMM Funded