Publications
Can a biologically-plausible hierarchy effectively replace face detection, alignment, and recognition pipelines? (2014).
CBMM-Memo-003.pdf (963.66 KB)

The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. (2014). doi:10.1101/004473
CBMM Memo 004_new.pdf (2.25 MB)

Learning invariant representations and applications to face verification. NIPS 2013 (Advances in Neural Information Processing Systems 26, 2014). at <http://nips.cc/Conferences/2013/Program/event.php?ID=4074>
Liao_Leibo_Poggio_NIPS_2013.pdf (687.06 KB)

Subtasks of Unconstrained Face Recognition. (2014).
Leibo_Liao_Poggio_subtasks_VISAPP_2014.pdf (268.69 KB)

How Important Is Weight Symmetry in Backpropagation? (2015).
1510.05067v3.pdf (615.32 KB)

The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. (2015).
modularity_dataset_ver1.tar.gz (36.14 MB)

The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. PLOS Computational Biology 11, e1004390 (2015).
journal.pcbi_.1004390.pdf (2.04 MB)

Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex. (2016).
CBMM Memo No. 047 (1.29 MB)

How Important Is Weight Symmetry in Backpropagation? Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) (Association for the Advancement of Artificial Intelligence, 2016). at <https://cbmm.mit.edu/sites/default/files/publications/liao-leibo-poggio.pdf>
liao-leibo-poggio.pdf (191.91 KB)

Learning Functions: When Is Deep Better Than Shallow. (2016). at <https://arxiv.org/pdf/1603.00988v4.pdf>

Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning. (2016).
CBMM-Memo-057.pdf (1.27 MB)

Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?. (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Proposition 4 has been deleted (2.75 MB)

Compression of Deep Neural Networks for Image Instance Retrieval. (2017). at <https://arxiv.org/abs/1701.04923>
1701.04923.pdf (614.33 KB)

Musings on Deep Learning: Properties of SGD. (2017).
CBMM Memo 067 v2 (revised 7/19/2017) (5.88 MB)
CBMM Memo 067 v3 (revised 9/15/2017) (5.89 MB)
CBMM Memo 067 v4 (revised 12/26/2017) (5.57 MB)

Object-Oriented Deep Learning. (2017).
CBMM-Memo-070.pdf (963.54 KB)

Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
CBMM Memo 066_1703.09833v2.pdf (5.56 MB)

Theory of Deep Learning IIb: Optimization Properties of SGD. (2017).
CBMM-Memo-072.pdf (3.66 MB)

Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
CBMM-Memo-073.pdf (2.65 MB)
CBMM Memo 073 v2 (revised 1/15/2018) (2.81 MB)
CBMM Memo 073 v3 (revised 1/30/2018) (2.72 MB)
CBMM Memo 073 v4 (revised 12/30/2018) (575.72 KB)

View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).

When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).

Why and when can deep - but not shallow - networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
art%3A10.1007%2Fs11633-017-1054-2.pdf (1.68 MB)
