Publications
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Turing_Plus_Questions.pdf (424.91 KB)
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Proposition 4 has been deleted (2.75 MB)
A Perspective: Sparse Compositionality and Efficiently Computable Intelligence. (2026).
Perspective_SPCOMP-9.pdf (170.23 KB)
Loss landscape: SGD has a better view. (2020).
CBMM-Memo-107.pdf (1.03 MB)
Typos and small edits, ver11 (955.08 KB)
Small edits, corrected Hessian for spurious case (337.19 KB)
What if? (2015).
What if.pdf (2.09 MB)
Compositional sparsity of learnable functions. Bulletin of the American Mathematical Society 61, 438-456 (2024).
On efficiently computable functions, deep networks and sparse compositionality. (2025).
Deep_sparse_networks_approximate_efficiently_computable_functions.pdf (223.15 KB)
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66 (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)
I-theory on depth vs width: hierarchical function composition. (2015).
cbmm_memo_041.pdf (1.18 MB)
Deep Learning: mathematics and neuroscience. (2016).
Deep Learning- mathematics and neuroscience.pdf (1.25 MB)
Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
CBMM Memo 066_1703.09833v2.pdf (5.56 MB)
Visual Cortex and Deep Networks: Learning Invariant Representations. 136 (The MIT Press, 2016). at <https://mitpress.mit.edu/books/visual-cortex-and-deep-networks>
Associative Memory as the Core of Intelligence in Technology and Evolution. (2026).
Review_On_Associative_Memories-14.pdf (245.78 KB)
Double descent in the condition number. (2019).
Fixing typos, clarifying error in y; best approach is cross-validation (837.18 KB)
Incorporated footnote in text plus other edits (854.05 KB)
Deleted previous discussion on kernel regression and deep nets: it will appear, extended, in a separate paper (795.28 KB)
Correcting a bad typo (261.24 KB)
Deleted plot of condition number of kernel matrix: we cannot get a double descent curve (769.32 KB)
Deep Learning: Mathematics and Neuroscience. In Brain-Inspired Intelligent Robotics: The Intersection of Robotics and Neuroscience (a sponsored supplement to Science), 9-12 (2016).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66 (2018).
02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf (1.18 MB)
How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022).
v1.0 (984.15 KB)
v5.7: adding in-context learning, etc. (1.16 MB)
Implicit dynamic regularization in deep networks. (2020).
v1.2 (2.29 MB)
v5.9: update on rank (2.43 MB)
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
Original file (584.54 KB)
Corrected typos and details of the "equivalence" between CV stability and expected error for interpolating machines. Added Appendix on SGD. (905.29 KB)
Edited Appendix on SGD. (909.19 KB)
Deleted Appendix. Corrected typos, etc. (880.27 KB)
Added result about square loss and min norm (898.03 KB)
Computational role of eccentricity dependent cortical magnification. (2014).
CBMM-Memo-017.pdf (1.04 MB)
Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11 (2020).
s41467-020-14663-9.pdf (431.68 KB)
From Marr’s Vision to the Problem of Human Intelligence. (2021).
CBMM-Memo-118.pdf (362.19 KB)
Notes on Hierarchical Splines, DCLNs and i-theory. (2015).
CBMM Memo 037 (1.83 MB)
Compositional Sparsity of Learnable Functions. (2024).
This is an update of the AMS paper (230.72 KB)