Publications
Word-level Invariant Representations From Acoustic Waveforms. INTERSPEECH 2014 - 15th Annual Conference of the International Speech Communication Association (ISCA, 2014). at <http://www.isca-speech.org/archive/interspeech_2014/i14_2385.html>
Why and when can deep - but not shallow - networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
What if Eye...? Computationally Recreating Vision Evolution. arXiv (2025). at <https://arxiv.org/abs/2501.15001>
What if... (2015).
Visual Cortex and Deep Networks: Learning Invariant Representations. 136 (The MIT Press, 2016). at <https://mitpress.mit.edu/books/visual-cortex-and-deep-networks>
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
Unsupervised learning of invariant representations with low sample complexity: the magic of sensory cortex or a new framework for machine learning? CBMM Memo No. 001 (2014).
Unsupervised Learning of Invariant Representations in Hierarchical Architectures. arXiv:1311.4158 (2013).
Unsupervised learning of invariant representations. Theoretical Computer Science (2015). doi:10.1016/j.tcs.2015.06.048
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
The History of Neuroscience in Autobiography, Volume 8 (Society for Neuroscience, 2014).
Theory of Deep Learning III: explaining the non-overfitting puzzle. CBMM Memo No. 073 (2017; v4 revised 12/30/2018).
Theory of Deep Learning IIb: Optimization Properties of SGD. CBMM Memo No. 072 (2017).
Theory III: Dynamics and Generalization in Deep Networks. CBMM Memo No. 090 (2018; revised several times, including an updated discussion of convergence to max margin).
Theory II: Landscape of the Empirical Risk in Deep Learning. CBMM Memo No. 066; arXiv:1703.09833 (2017).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, 775-788 (2018).
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? CBMM Memo No. 058 (2016; revised through v6, in which Proposition 4 was deleted).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, 761-774 (2018).
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
Theoretical Issues in Deep Networks. CBMM Memo No. 100 (2019; v4 revised 11/19/2019).