109 results
Filters: Author is T. Poggio
Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations, (ICLR 2019) (2019).
Deep Recurrent Architectures for Seismic Tomography. 81st EAGE Conference and Exhibition 2019 (2019).
Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks. Vision Science Society (2019).
Theoretical Issues in Deep Networks. (2019).
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
A fast, invariant representation for human action in the visual system. Journal of Neurophysiology (2018). doi:10.1152/jn.00642.2017
Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403-422 (2018).
Single units in a deep neural network functionally correspond with neurons in the brain: preliminary results. (2018).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Compression of Deep Neural Networks for Image Instance Retrieval. (2017). Available at <https://arxiv.org/abs/1701.04923>
Eccentricity Dependent Deep Neural Networks for Modeling Human Vision. Vision Sciences Society (2017).