Publication

Found 906 results
P
Poggio, T. A. & Xu, M. On efficiently computable functions, deep networks and sparse compositionality. (2025).
Poggio, T. Deep Learning: Mathematics and Neuroscience. A Sponsored Supplement to Science: Brain-Inspired Intelligent Robotics: The Intersection of Robotics and Neuroscience, 9-12 (2016).
Poggio, T., Liao, Q. & Xu, M. Implicit dynamic regularization in deep networks. (2020).
Poggio, T. Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
Poggio, T., Liao, Q. & Banburski, A. Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11, (2020).
Poggio, T. Associative Memory as the Core of Intelligence in Technology and Evolution. (2026).
Poggio, T., Mutch, J. & Isik, L. Computational role of eccentricity dependent cortical magnification. (2014).
Poggio, T. How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022).
Poggio, T. & Liao, Q. Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
Poggio, T. From Marr's Vision to the Problem of Human Intelligence. (2021).
Poggio, T., Rosasco, L., Shashua, A., Cohen, N. & Anselmi, F. Notes on Hierarchical Splines, DCLNs and i-theory. (2015).
Poggio, T. & Liao, Q. Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
Poggio, T. From Associative Memories to Powerful Machines. (2021).
Poggio, T. Is Research in Intelligence an Existential Risk? (2014).
Poggio, T. & Fraser, M. Compositional Sparsity of Learnable Functions. (2024).
Poggio, T. & Liao, Q. Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Poggio, T. & Banburski, A. An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
Poggio, T. & Squire, L. R. The History of Neuroscience in Autobiography, Volume 8 (Society for Neuroscience, 2014).
Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B. & Liao, Q. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
Poggio, T., Banburski, A. & Liao, Q. Theoretical Issues in Deep Networks. (2019).
Poggio, T., Banburski, A. & Liao, Q. Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
Poggio, T. & Magrini, M. Cervelli menti algoritmi. 272 (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
Poggio, T. & Meyers, E. Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B. & Liao, Q. Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
Poggio, T. & Cooper, Y. Loss landscape: SGD has a better view. (2020).