Publications
Deep Learning: Mathematics and Neuroscience. A Sponsored Supplement to Science: Brain-Inspired Intelligent Robotics: The Intersection of Robotics and Neuroscience, 9-12 (2016).
From Associative Memories to Powerful Machines. (2021).
v1.0 (1.01 MB)
v1.3 Section on self-attention added August 6 (3.9 MB)
A Perspective: Sparse Compositionality and Efficiently Computable Intelligence. (2026).
Perspective_SPCOMP-9.pdf (170.23 KB)
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)
Notes on Hierarchical Splines, DCLNs and i-theory. (2015).
CBMM Memo 037 (1.83 MB)
Compositional sparsity of learnable functions. Bulletin of the American Mathematical Society 61, 438-456 (2024).
On efficiently computable functions, deep networks and sparse compositionality. (2025).
Deep_sparse_networks_approximate_efficiently_computable_functions.pdf (223.15 KB)
An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
The History of Neuroscience in Autobiography, Volume 8 (Society for Neuroscience, 2014).
Volume Introduction and Preface (232.8 KB)
TomasoPoggio.pdf (1.43 MB)
Theoretical Issues in Deep Networks. (2019).
CBMM Memo 100 v1 (1.71 MB)
CBMM Memo 100 v3 (8/25/2019) (1.31 MB)
CBMM Memo 100 v4 (11/19/2019) (1008.23 KB)
Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
CBMM Memo 066_1703.09833v2.pdf (5.56 MB)
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
PNASlast.pdf (915.3 KB)
Associative Memory as the Core of Intelligence in Technology and Evolution. (2026).
Review_On_Associative_Memories-14.pdf (245.78 KB)
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf (1.18 MB)
Is Research in Intelligence an Existential Risk? (2014).
Is Research in Intelligence an Existential Risk.pdf (571.42 KB)
Loss landscape: SGD has a better view. (2020).
CBMM-Memo-107.pdf (1.03 MB)
Typos and small edits, ver11 (955.08 KB)
Small edits, corrected Hessian for spurious case (337.19 KB)
Why and when can deep - but not shallow - networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
art:10.1007/s11633-017-1054-2.pdf (1.68 MB)
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Turing_Plus_Questions.pdf (424.91 KB)
Double descent in the condition number. (2019).
Fixing typos, clarifying error in y; best approach is cross-validation (837.18 KB)
Incorporated footnote in text plus other edits (854.05 KB)
Deleted previous discussion on kernel regression and deep nets: it will appear, extended, in a separate paper (795.28 KB)
Correcting a bad typo (261.24 KB)
Deleted plot of condition number of kernel matrix: we cannot get a double descent curve (769.32 KB)
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Proposition 4 has been deleted (2.75 MB)
Compositional Sparsity of Learnable Functions. (2024).
This is an update of the AMS paper (230.72 KB)
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). <https://arxiv.org/abs/2101.00072>
What if... (2015).
What if.pdf (2.09 MB)
How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022).
v1.0 (984.15 KB)
v5.7 Adding in-context learning, etc. (1.16 MB)
I-theory on depth vs width: hierarchical function composition. (2015).
cbmm_memo_041.pdf (1.18 MB)