Publications
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. PLOS Computational Biology 11, e1004390 (2015).
journal.pcbi_.1004390.pdf (2.04 MB)
Invariant recognition drives neural representations of action sequences. PLOS Computational Biology 13, e1005859 (2017).
journal.pcbi_.1005859.pdf (9.24 MB)
Computational and Cognitive Neuroscience of Vision 85-104 (Springer, 2017).
Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403-422 (2018).
annurev-vision-091517-034103.pdf (1.55 MB)
Invariant representations for action recognition in the visual system. Vision Sciences Society 15, (2015).
Invariant representations for action recognition in the visual system. Computational and Systems Neuroscience (2015).
I-theory on depth vs width: hierarchical function composition. (2015).
cbmm_memo_041.pdf (1.18 MB)
A Large Video Database for Human Motion Recognition. (2011).
Kuehne_etal_ICCV2011.pdf (433.27 KB)
Empirical Inference 59-69 (Springer Berlin Heidelberg, 2013). doi:10.1007/978-3-642-41136-6_7
Author's Version (147.25 KB)
Learning An Invariant Speech Representation. (2014).
CBMM-Memo-022-1406.3884v1.pdf (1.81 MB)
Learning Functions: When Is Deep Better Than Shallow. (2016). at <https://arxiv.org/pdf/1603.00988v4.pdf>
Learning invariant representations and applications to face verification. NIPS 2013 (Advances in Neural Information Processing Systems 26, 2014). at <http://nips.cc/Conferences/2013/Program/event.php?ID=4074>
Liao_Leibo_Poggio_NIPS_2013.pdf (687.06 KB)
Learning manifolds with k-means and k-flats. Advances in Neural Information Processing Systems 25 (NIPS 2012) (2012). at <https://papers.nips.cc/paper/2012/hash/b20bb95ab626d93fd976af958fbc61ba-Abstract.html>
Learning with a Wasserstein Loss. Advances in Neural Information Processing Systems 28 (NIPS 2015) (2015). at <http://arxiv.org/abs/1506.05439>
Learning with a Wasserstein Loss_1506.05439v2.pdf (2.57 MB)
Learning with Group Invariant Features: A Kernel Perspective. NIPS 2015 (2015). at <https://papers.nips.cc/paper/5798-learning-with-group-invariant-features-a-kernel-perspective>
LearningInvarianceKernel_NIPS2015.pdf (292.18 KB)
Loss landscape: SGD has a better view. (2020).
CBMM-Memo-107.pdf (1.03 MB)
Typos and small edits, ver11 (955.08 KB)
Small edits, corrected Hessian for spurious case (337.19 KB)
Multiplicative Regularization Generalizes Better Than Additive Regularization. (2025).
CBMM Memo 158.pdf (4.8 MB)
Musings on Deep Learning: Properties of SGD. (2017).
CBMM Memo 067 v2 (revised 7/19/2017) (5.88 MB)
CBMM Memo 067 v3 (revised 9/15/2017) (5.89 MB)
CBMM Memo 067 v4 (revised 12/26/2017) (5.57 MB)
Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval. arXiv.org (2016). at <https://arxiv.org/abs/1603.04595>
1603.04595.pdf (2.9 MB)
Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing. PLOS ONE 11, e0150980 (2016).
journal.pone_.0150980.PDF (384.15 KB)