Publications
Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
Original file (584.54 KB)
Corrected typos and details of the "equivalence" between CV stability and expected error for interpolating machines. Added Appendix on SGD. (905.29 KB)
Edited Appendix on SGD. (909.19 KB)
Deleted Appendix. Corrected typos, etc. (880.27 KB)
Added result about square loss and min norm (898.03 KB)
Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning. (2016).
CBMM-Memo-057.pdf (1.27 MB)
Subtasks of Unconstrained Face Recognition. (2014).
Leibo_Liao_Poggio_subtasks_VISAPP_2014.pdf (268.69 KB)
Symmetry Regularization. (2017).
CBMM-Memo-063.pdf (6.1 MB)
System identification of neural systems: If we got it right, would we know? (2022).
CBMM-Memo-136.pdf (1.75 MB)
System Identification of Neural Systems: If We Got It Right, Would We Know? Proceedings of the 40th International Conference on Machine Learning, PMLR 202, 12430-12444 (2023).
han23d.pdf (797.48 KB)
Theoretical Issues in Deep Networks. (2019).
CBMM Memo 100 v1 (1.71 MB)
CBMM Memo 100 v3 (8/25/2019) (1.31 MB)
CBMM Memo 100 v4 (11/19/2019) (1008.23 KB)
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
PNASlast.pdf (915.3 KB)
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66 (2018).
02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf (1.18 MB)
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Proposition 4 has been deleted (2.75 MB)
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66 (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)
Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
CBMM Memo 066_1703.09833v2.pdf (5.56 MB)
Theory III: Dynamics and Generalization in Deep Networks. (2018).
Original and intermediate versions are available upon request (2.67 MB)
CBMM Memo 90 v12.pdf (4.74 MB)
Theory_III_ver44.pdf (updated Hessian) (4.12 MB)
Theory_III_ver48 (Updated discussion of convergence to max margin) (2.56 MB)
Fixed errors and sharpened some proofs (2.45 MB)
Theory of Deep Learning IIb: Optimization Properties of SGD. (2017).
CBMM-Memo-072.pdf (3.66 MB)
Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
CBMM-Memo-073.pdf (2.65 MB)
CBMM Memo 073 v2 (revised 1/15/2018) (2.81 MB)
CBMM Memo 073 v3 (revised 1/30/2018) (2.72 MB)
CBMM Memo 073 v4 (revised 12/30/2018) (575.72 KB)
The History of Neuroscience in Autobiography, Volume 8 (Society for Neuroscience, 2014).
Volume Introduction and Preface (232.8 KB)
TomasoPoggio.pdf (1.43 MB)
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Turing_Plus_Questions.pdf (424.91 KB)
Unsupervised learning of clutter-resistant visual representations from natural videos. (2014).
1409.3879v2.pdf (3.64 MB)
Unsupervised learning of invariant representations. Theoretical Computer Science (2015). doi:10.1016/j.tcs.2015.06.048