Publications

113 results. Filtered by author: Tomaso A. Poggio
2020
Reddy, M. Vuyyuru, Banburski, A., Pant, N. & Poggio, T. Biologically Inspired Mechanisms for Adversarial Robustness. (2020).
Poggio, T. A., Liao, Q. & Banburski, A. Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11, (2020).
Rangamani, A., Rosasco, L. & Poggio, T. For interpolating kernel machines, the minimum norm ERM solution is the most stable. (2020).
Deza, A., Liao, Q., Banburski, A. & Poggio, T. Hierarchically Local Tasks and Deep Convolutional Networks. (2020).
Poggio, T. A. & Liao, Q. Implicit dynamic regularization in deep networks. (2020).
Poggio, T. A. & Cooper, Y. Loss landscape: SGD has a better view. (2020).
Poggio, T. A. Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
2019
Xiao, W., Chen, H., Liao, Q. & Poggio, T. Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations (ICLR 2019) (2019).
Adler, A., Araya-Polo, M. & Poggio, T. Deep Recurrent Architectures for Seismic Tomography. 81st EAGE Conference and Exhibition 2019 (2019).
Poggio, T., Kur, G. & Banburski, A. Double descent in the condition number. (2019).
Banburski, A. et al. Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Han, Y., Roig, G., Geiger, G. & Poggio, T. Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks. Vision Science Society (2019).
Poggio, T., Banburski, A. & Liao, Q. Theoretical Issues in Deep Networks. (2019).
Liao, Q., Banburski, A. & Poggio, T. Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Banburski, A. et al. Weight and Batch Normalization implement Classical Generalization Bounds. ICML (2019).
2018
Mhaskar, H. & Poggio, T. An analysis of training and generalization errors in shallow and deep networks. (2018).
Xiao, W., Chen, H., Liao, Q. & Poggio, T. Biologically-plausible learning algorithms can scale to large datasets. (2018).
Villalobos, K. M. et al. Can Deep Neural Networks Do Image Segmentation by Understanding Insideness? (2018).
Liao, Q., Miranda, B., Hidary, J. & Poggio, T. Classical generalization bounds are surprisingly tight for Deep Networks. (2018).
Isik, L., Tacchetti, A. & Poggio, T. A fast, invariant representation for human action in the visual system. Journal of Neurophysiology (2018). doi:10.1152/jn.00642.2017
Tacchetti, A., Isik, L. & Poggio, T. Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403-422 (2018).
Arend, L. et al. Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results. (2018).
Poggio, T. & Liao, Q. Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Poggio, T. & Liao, Q. Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Banburski, A. et al. Theory III: Dynamics and Generalization in Deep Networks. (2018).