Publications
Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11 (2020). doi:10.1038/s41467-020-14663-9
s41467-020-14663-9.pdf (431.68 KB)
I-theory on depth vs width: hierarchical function composition. (2015).
cbmm_memo_041.pdf (1.18 MB)
Cervelli menti algoritmi [Brains, Minds, Algorithms]. 272 (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
CBMM Memo 066_1703.09833v2.pdf (5.56 MB)
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Turing_Plus_Questions.pdf (424.91 KB)
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Later version (Proposition 4 has been deleted) (2.75 MB)
What if... (2015).
What if.pdf (2.09 MB)
Computational role of eccentricity-dependent cortical magnification. (2014).
CBMM-Memo-017.pdf (1.04 MB)
From Associative Memories to Powerful Machines. (2021).
v1.0 (1.01 MB)
v1.3, section on self-attention added August 6 (3.9 MB)
Deep Learning: mathematics and neuroscience. (2016).
Deep Learning- mathematics and neuroscience.pdf (1.25 MB)
A Perspective: Sparse Compositionality and Efficiently Computable Intelligence. (2026).
Perspective_SPCOMP-9.pdf (170.23 KB)
Visual Cortex and Deep Networks: Learning Invariant Representations. 136 (The MIT Press, 2016). at <https://mitpress.mit.edu/books/visual-cortex-and-deep-networks>
Notes on Hierarchical Splines, DCLNs and i-theory. (2015).
CBMM Memo 037 (1.83 MB)
An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
Compositional sparsity of learnable functions. Bulletin of the American Mathematical Society 61, 438-456 (2024).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, 775-788 (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)
Deep Learning: Mathematics and Neuroscience. A Sponsored Supplement to Science: Brain-Inspired Intelligent Robotics: The Intersection of Robotics and Neuroscience, 9-12 (2016).
The History of Neuroscience in Autobiography, Volume 8 (Society for Neuroscience, 2014).
Volume Introduction and Preface (232.8 KB)
TomasoPoggio.pdf (1.43 MB)
On efficiently computable functions, deep networks and sparse compositionality. (2025).
Deep_sparse_networks_approximate_efficiently_computable_functions.pdf (223.15 KB)
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 117, 30039-30045 (2020). doi:10.1073/pnas.1907369117
PNASlast.pdf (915.3 KB)
Theoretical Issues in Deep Networks. (2019).
CBMM Memo 100 v1 (1.71 MB)
CBMM Memo 100 v3 (8/25/2019) (1.31 MB)
CBMM Memo 100 v4 (11/19/2019) (1008.23 KB)
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022).
v1.0 (984.15 KB)
v5.7, adding in-context learning, etc. (1.16 MB)
From Marr’s Vision to the Problem of Human Intelligence. (2021).
CBMM-Memo-118.pdf (362.19 KB)