A Perspective: Sparse Compositionality and Efficiently Computable Intelligence

Title: A Perspective: Sparse Compositionality and Efficiently Computable Intelligence
Publication Type: CBMM Memos
Year of Publication: 2026
Authors: Poggio, T
Date Published: 01/2026
Abstract

We argue that sparse compositionality is a fundamental structural property of functions that can be computed or learned efficiently by digital systems. Any function that is efficiently computable in the Church–Turing sense admits a representation as a composition of low-arity constituent functions arranged in a bounded-fan-in directed acyclic graph, and can therefore be approximated by a deep network with sparse connectivity. This principle provides a unified explanation for why deep learning avoids the curse of dimensionality, why optimization remains tractable despite nonconvexity, and why generalization often exceeds classical capacity-based predictions. Because all artificial intelligence systems are implemented on digital computers, the theory applies fully to machine learning. In contrast, it is not known whether all components of biological intelligence are efficiently computable, even if they are Turing computable in principle. This distinction suggests that some higher cognitive functions, such as language and abstraction, may be more accessible to computational modeling than evolutionarily older processes related to affect, motivation, or internal bodily states. Sparse compositionality thus clarifies both the power and the limits of machine intelligence, and points toward a theory of what can be learned efficiently from data.
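The abstract's central object, a composition of low-arity constituent functions arranged in a bounded-fan-in directed acyclic graph, can be made concrete with a small sketch. The snippet below (illustrative only, not from the memo) evaluates a function of 8 variables built entirely from 2-ary constituents arranged as a binary tree; the constituent function `h` is an arbitrary choice standing in for any low-arity component.

```python
def h(a, b):
    # An arbitrary smooth 2-ary constituent; any low-arity function could
    # play this role in the compositional structure.
    return (a * b + a + b) / 3.0

def compositional(xs):
    """Evaluate a binary-tree composition over a list of 2**k inputs.

    Each level applies the fan-in-2 constituent h to adjacent pairs,
    so an 8-variable function is computed by 4 + 2 + 1 = 7 constituent
    functions of 2 variables each, rather than one monolithic
    8-variable function.
    """
    level = list(xs)
    while len(level) > 1:
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(compositional([1.0] * 8))
```

Because each constituent depends on only a constant number of inputs, approximating such a function reduces to approximating a fixed number of low-dimensional pieces, which is the structural reason sparse deep networks can sidestep the curse of dimensionality for this function class.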

CBMM Relationship: 

  • CBMM Funded