Publication
113 results. Filters: Author is Poggio, Tomaso A
Distribution of Classification Margins: Are All Data Equal?. (2021).
CBMM Memo 115.pdf (9.56 MB)

From Associative Memories to Deep Networks. (2021).
CBMM-Memo-114.pdf (1.01 MB)
A few edits (1.03 MB)

Biologically Inspired Mechanisms for Adversarial Robustness. (2020).
CBMM_Memo_110.pdf (3.14 MB)

Dreaming with ARC. Learning Meets Combinatorial Algorithms workshop at NeurIPS 2020 (2020).
CBMM Memo 113.pdf (1019.64 KB)

For interpolating kernel machines, the minimum norm ERM solution is the most stable. (2020).
CBMM_Memo_108.pdf (1015.14 KB)
Better bound (without inequalities!) (1.03 MB)

Hierarchically Local Tasks and Deep Convolutional Networks. (2020).
CBMM_Memo_109.pdf (2.12 MB)

Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations, (ICLR 2019) (2019).
gk7779.pdf (721.53 KB)

Deep Recurrent Architectures for Seismic Tomography. 81st EAGE Conference and Exhibition 2019 (2019).
Double descent in the condition number. (2019).
Fixing typos, clarifying error in y, best approach is cross-validation (837.18 KB)
Incorporated footnote in text plus other edits (854.05 KB)
Deleted previous discussion on kernel regression and deep nets: it will appear, extended, in a separate paper (795.28 KB)
RevisedPNASV2.pdf (261.24 KB)

Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Eccentricity Dependent Neural Network with Recurrent Attention for Scale, Translation and Clutter Invariance. Vision Science Society (2019).
Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks. Vision Science Society (2019). doi:10.1167/19.10.28d
Theoretical Issues in Deep Networks. (2019).
CBMM Memo 100 v1 (1.71 MB)
CBMM Memo 100 v3 (8/25/2019) (1.31 MB)
CBMM Memo 100 v4 (11/19/2019) (1008.23 KB)

Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
An analysis of training and generalization errors in shallow and deep networks. (2018).
CBMM-Memo-076.pdf (772.61 KB)
CBMM-Memo-076v2.pdf (2.67 MB)

Biologically-plausible learning algorithms can scale to large datasets. (2018).
CBMM-Memo-092.pdf (1.31 MB)

Can Deep Neural Networks Do Image Segmentation by Understanding Insideness?. (2018).
CBMM-Memo-095.pdf (1.96 MB)

Classical generalization bounds are surprisingly tight for Deep Networks. (2018).
CBMM-Memo-091.pdf (1.43 MB)
CBMM-Memo-091-v2.pdf (1.88 MB)

A fast, invariant representation for human action in the visual system. Journal of Neurophysiology (2018). doi:10.1152/jn.00642.2017
Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403 - 422 (2018).
annurev-vision-091517-034103.pdf (1.55 MB)

Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results. (2018).
CBMM-Memo-093.pdf (2.99 MB)

Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf (1.18 MB)

Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)
