Publications

Found 908 results
P
Pouncy, T. & Gershman, S. J. Inductive biases in theory-based reinforcement learning. Cognitive Psychology 138, 101509 (2022).
Pouncy, T., Tsividis, P. & Gershman, S. J. What Is the Model in Model‐Based Planning? Cognitive Science 45 (2021).
Ponce, C. R. et al. Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences. Cell 177, 1009 (2019).
Poggio, T. & Fraser, M. Compositional Sparsity of Learnable Functions. (2024).
Poggio, T. & Banburski, A. An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
Poggio, T. & Squire, L. R. The History of Neuroscience in Autobiography, Volume 8 (Society for Neuroscience, 2014).
Poggio, T., Banburski, A. & Liao, Q. Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
Poggio, T. & Magrini, M. Cervelli menti algoritmi [Brains, Minds, Algorithms]. 272 (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
Poggio, T., Banburski, A. & Liao, Q. Theoretical Issues in Deep Networks. (2019).
Poggio, T., Anselmi, F. & Rosasco, L. I-theory on depth vs width: hierarchical function composition. (2015).
Poggio, T. Is Research in Intelligence an Existential Risk? (2014).
Poggio, T. & Cooper, Y. Loss landscape: SGD has a better view. (2020).
Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B. & Liao, Q. Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
Poggio, T. & Liao, Q. Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
Poggio, T., Mutch, J. & Isik, L. Computational role of eccentricity dependent cortical magnification. (2014).
Poggio, T. & Meyers, E. Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Poggio, T. A Perspective: Sparse Compositionality and Efficiently Computable Intelligence. (2026).
Poggio, T., Mhaskar, H., Rosasco, L., Miranda, B. & Liao, Q. Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
Poggio, T., Kur, G. & Banburski, A. Double descent in the condition number. (2019).
Poggio, T. & Fraser, M. Compositional sparsity of learnable functions. Bulletin of the American Mathematical Society 61, 438-456 (2024).
Poggio, T. What if.. (2015).
Poggio, T. A. & Xu, M. On efficiently computable functions, deep networks and sparse compositionality. (2025).
Poggio, T., Rosasco, L., Shashua, A., Cohen, N. & Anselmi, F. Notes on Hierarchical Splines, DCLNs and i-theory. (2015).
Poggio, T., Liao, Q. & Xu, M. Implicit dynamic regularization in deep networks. (2020).
Poggio, T. Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
