Publications by Tomaso Poggio
An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
Scale and translation-invariance for novel objects in human vision. Scientific Reports 10, (2020).
Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
An analysis of training and generalization errors in shallow and deep networks. (2019).
Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations (ICLR 2019) (2019).
Deep Recurrent Architectures for Seismic Tomography. 81st EAGE Conference and Exhibition 2019 (2019).
Double descent in the condition number. (2019).
Dynamics & Generalization in Deep Networks -Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Eccentricity Dependent Neural Network with Recurrent Attention for Scale, Translation and Clutter Invariance. Vision Science Society (2019).
Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks. Vision Science Society (2019). doi:10.1167/19.10.28d
Theoretical Issues in Deep Networks. (2019).
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
An analysis of training and generalization errors in shallow and deep networks. (2018).
Biologically-plausible learning algorithms can scale to large datasets. (2018).
Can Deep Neural Networks Do Image Segmentation by Understanding Insideness? (2018).
Classical generalization bounds are surprisingly tight for Deep Networks. (2018).
A fast, invariant representation for human action in the visual system. Journal of Neurophysiology (2018). doi:10.1152/jn.00642.2017
Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403-422 (2018).
Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results. (2018).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Theory III: Dynamics and Generalization in Deep Networks. (2018).