Publications
Filters: Author is Qianli Liao (32 results)
Biologically-Plausible Learning Algorithms Can Scale to Large Datasets. International Conference on Learning Representations (2019).
Theoretical Issues in Deep Networks. (2019).
CBMM Memo 100 v1 (1.71 MB)
CBMM Memo 100 v3 (8/25/2019) (1.31 MB)
CBMM Memo 100 v4 (11/19/2019) (1008.23 KB)

Biologically-plausible learning algorithms can scale to large datasets. (2018).
CBMM-Memo-092.pdf (1.31 MB)

Classical generalization bounds are surprisingly tight for Deep Networks. (2018).
CBMM-Memo-091.pdf (1.43 MB)
CBMM-Memo-091-v2.pdf (1.88 MB)

Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, 761-774 (2018).
02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf (1.18 MB)

Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, 775-788 (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)

Theory III: Dynamics and Generalization in Deep Networks. (2018).
TheoryIII_ver2 (2.67 MB)
TheoryIII_ver11 (4.17 MB)
TheoryIII_ver12 (4.74 MB)
TheoryIII_ver13 (4.75 MB)
TheoryIII_ver14 (3.89 MB)
TheoryIII_ver15 (3.9 MB)
TheoryIII_ver20 (3.91 MB)
TheoryIII_ver22 (4.97 MB)
TheoryIII_ver25 (1.19 MB)
TheoryIII_ver28 (1.17 MB)
TheoryIII_ver29 (1.17 MB)
TheoryIII_ver30 (1.17 MB)
TheoryIII_ver31 (most typos and other errors corrected in main text) (1.18 MB)
TheoryIII_ver35 (more edits; regression note in appendix) (1.56 MB)
TheoryIII_ver39 (look at footnote 5) (2.14 MB)

Compression of Deep Neural Networks for Image Instance Retrieval. (2017). at <https://arxiv.org/abs/1701.04923>
1701.04923.pdf (614.33 KB)

Musings on Deep Learning: Properties of SGD. (2017).
CBMM Memo 067 v2 (revised 7/19/2017) (5.88 MB)
CBMM Memo 067 v3 (revised 9/15/2017) (5.89 MB)
CBMM Memo 067 v4 (revised 12/26/2017) (5.57 MB)

Object-Oriented Deep Learning. (2017).
CBMM-Memo-070.pdf (963.54 KB)

Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
CBMM Memo 066_1703.09833v2.pdf (5.56 MB)

Theory of Deep Learning IIb: Optimization Properties of SGD. (2017).
CBMM-Memo-072.pdf (3.66 MB)

Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
CBMM-Memo-073.pdf (2.65 MB)
CBMM Memo 073 v2 (revised 1/15/2018) (2.81 MB)
CBMM Memo 073 v3 (revised 1/30/2018) (2.72 MB)
CBMM Memo 073 v4 (revised 12/30/2018) (575.72 KB)

View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
art:10.1007/s11633-017-1054-2.pdf (1.68 MB)

Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex. (2016).
CBMM Memo No. 047 (1.29 MB)

How Important Is Weight Symmetry in Backpropagation? Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) (Association for the Advancement of Artificial Intelligence, 2016).
liao-leibo-poggio.pdf (191.91 KB)

Learning Functions: When Is Deep Better Than Shallow. (2016). at <https://arxiv.org/pdf/1603.00988v4.pdf>
Streaming Normalization: Towards Simpler and More Biologically-plausible Normalizations for Online and Recurrent Learning. (2016).
CBMM-Memo-057.pdf (1.27 MB)

Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)

How Important Is Weight Symmetry in Backpropagation? (2015).
1510.05067v3.pdf (615.32 KB)
