31 results. Filter: Author is Qianli Liao.
Biologically-Plausible Learning Algorithms Can Scale to Large Datasets. International Conference on Learning Representations (2019).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Compression of Deep Neural Networks for Image Instance Retrieval. (2017). at <https://arxiv.org/abs/1701.04923>
Object-Oriented Deep Learning. (2017).
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
How Important Is Weight Symmetry in Backpropagation? Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) (Association for the Advancement of Artificial Intelligence, 2016). at <https://cbmm.mit.edu/sites/default/files/publications/liao-leibo-poggio.pdf>
Learning Functions: When Is Deep Better Than Shallow. (2016). at <https://arxiv.org/pdf/1603.00988v4.pdf>
Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning. (2016).
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. PLOS Computational Biology 11, e1004390 (2015).