Filters: Author is Qianli Liao
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
How Important Is Weight Symmetry in Backpropagation? Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) (Association for the Advancement of Artificial Intelligence, 2016). at <https://cbmm.mit.edu/sites/default/files/publications/liao-leibo-poggio.pdf>
Learning Functions: When Is Deep Better Than Shallow. (2016). at <https://arxiv.org/pdf/1603.00988v4.pdf>
Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning. (2016).
View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation. (2016).
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. PLOS Computational Biology 11, e1004390 (2015).
Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines? (2014).
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. (2014). doi:10.1101/004473
Learning invariant representations and applications to face verification. Advances in Neural Information Processing Systems 26 (NIPS 2013). at <http://nips.cc/Conferences/2013/Program/event.php?ID=4074>