Filters: Author is Qianli Liao
Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11 (2020).
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 117, 30039-30045 (2020). doi:10.1073/pnas.1907369117
Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations (ICLR) (2019).
Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Theoretical Issues in Deep Networks. (2019).
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66 (2018).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66 (2018).
Compression of Deep Neural Networks for Image Instance Retrieval. arXiv (2017). at <https://arxiv.org/abs/1701.04923>
Object-Oriented Deep Learning. (2017).
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2