Foundations of Deep Learning: Compositional Sparsity of Computable Functions

Title: Foundations of Deep Learning: Compositional Sparsity of Computable Functions
Publication Type: CBMM Memos
Year of Publication: 2022
Authors: Poggio, TA
Date Published: 10/2022
Abstract

Why do deep networks work as well as they do? The main reason is that certain deep architectures -- such as CNNs and transformers -- are ideally suited to exploit a general property of all efficiently computable functions: their compositional sparsity.
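In this framework, a function is compositionally sparse when it can be written as a composition of constituent functions, each depending on only a few of the input variables, so the whole computation forms a graph of low-arity nodes. A minimal Python sketch of the idea (the constituent function g, the binary-tree shape, and the arity of 2 are illustrative assumptions, not taken from the memo):

    # Illustrative sketch: a compositionally sparse function of 8 variables
    # built as a binary tree of 2-ary constituent functions. Each node
    # depends on only 2 inputs, even though f depends on all 8.

    def g(a: float, b: float) -> float:
        """A hypothetical 2-ary constituent function (assumed for illustration)."""
        return (a * b + a + b) / 3.0

    def f_compositional(x: list[float]) -> float:
        """f(x1, ..., x8) computed as a depth-3 binary tree of 2-ary constituents."""
        assert len(x) == 8
        level1 = [g(x[0], x[1]), g(x[2], x[3]), g(x[4], x[5]), g(x[6], x[7])]
        level2 = [g(level1[0], level1[1]), g(level1[2], level1[3])]
        return g(level2[0], level2[1])

    if __name__ == "__main__":
        print(f_compositional([0.1 * i for i in range(1, 9)]))

A deep network whose architecture mirrors this tree only has to approximate each two-variable node, whereas a shallow network approximating f directly faces the full eight-dimensional problem; this is the sense in which deep architectures can be said to be ideally suited to such functions.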

DSpace@MIT: https://hdl.handle.net/1721.1/145776

CBMM Memo No: 138

CBMM Relationship: 

  • CBMM Funded