Export 36 results:
Filters: Author is Qianli Liao
Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11, (2020).
Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations (ICLR 2019) (2019).
Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on the Science of Deep Learning (2019).
Theoretical Issues in Deep Networks. (2019).
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Compression of Deep Neural Networks for Image Instance Retrieval. (2017). at <https://arxiv.org/abs/1701.04923>
Object-Oriented Deep Learning. (2017).
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
How Important Is Weight Symmetry in Backpropagation? Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) (Association for the Advancement of Artificial Intelligence, 2016). at <https://cbmm.mit.edu/sites/default/files/publications/liao-leibo-poggio.pdf>
Learning Functions: When Is Deep Better Than Shallow. (2016). at <https://arxiv.org/pdf/1603.00988v4.pdf>