I-theory on depth vs width: hierarchical function composition

Publication Type: CBMM Memos
Year of Publication: 2015
Authors: Poggio, T., Anselmi, F., Rosasco, L.
Date Published: 12/29/2015
Abstract

Deep learning networks with convolution, pooling and subsampling are a special case of hierarchical architectures, which can be represented by trees (such as binary trees). Hierarchical as well as shallow networks can approximate functions of several variables, in particular those that are compositions of low-dimensional functions. We show that the power of a deep network architecture with respect to a shallow network is rather independent of the specific nonlinear operations in the network and depends instead on the behavior of the VC-dimension. A shallow network can approximate compositional functions with the same error as a deep network, but at the cost of a VC-dimension that is exponential, rather than quadratic, in the dimensionality of the function. To complete the argument, we argue that there exist visual computations that are intrinsically compositional. In particular, we prove that recognition invariant to translation cannot be computed by shallow networks in the presence of clutter. Finally, a general framework that includes the compositional case is sketched. The key condition that allows tall, thin networks to be nicer than short, fat networks is that the target input-output function must be sparse in a certain technical sense.
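
As a hedged illustration of the compositional structure the abstract refers to (the constituent functions h below are hypothetical placeholders, not taken from the memo): a function of d = 8 variables that is compositional in the binary-tree sense can be written as

    f(x_1,\dots,x_8) = h_3\bigl( h_{21}( h_{11}(x_1,x_2),\, h_{12}(x_3,x_4) ),\; h_{22}( h_{13}(x_5,x_6),\, h_{14}(x_7,x_8) ) \bigr)

where each constituent h_{ij} depends on only two variables. A deep network whose graph matches this tree only has to approximate each two-dimensional constituent, whereas a shallow network must approximate f directly as a function of all eight variables; this mismatch is the source of the exponential-versus-quadratic gap in VC-dimension stated above.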

DSpace@MIT: http://hdl.handle.net/1721.1/100559

Download: cbmm_memo_041.pdf
CBMM Memo No: 041

CBMM Relationship: 

  • CBMM Funded