143 results
Filters: Author is Tomaso A. Poggio
Function approximation by deep networks. Communications on Pure & Applied Analysis 19, 4085-4095 (2020).
An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
Scale and translation-invariance for novel objects in human vision. Scientific Reports 10, (2020).
Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations, (ICLR 2019) (2019).
Deep Recurrent Architectures for Seismic Tomography. 81st EAGE Conference and Exhibition 2019 (2019).
Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Eccentricity Dependent Neural Network with Recurrent Attention for Scale, Translation and Clutter Invariance. Vision Science Society (2019).
Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks. Vision Science Society (2019). doi:10.1167/19.10.28d
Theoretical Issues in Deep Networks. (2019).
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
A fast, invariant representation for human action in the visual system. Journal of Neurophysiology (2018). doi:10.1152/jn.00642.2017
Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403-422 (2018).