Publications (40 results)
Filtered by author: Qianli Liao
Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11, (2020).
s41467-020-14663-9.pdf (431.68 KB)
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
Hierarchically Local Tasks and Deep Convolutional Networks. (2020).
CBMM_Memo_109.pdf (2.12 MB)
Implicit dynamic regularization in deep networks. (2020).
v1.2 (2.29 MB)
v.59 Update on rank (2.43 MB)
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences 201907369 (2020). doi:10.1073/pnas.1907369117
PNASlast.pdf (915.3 KB)
Biologically-plausible learning algorithms can scale to large datasets. International Conference on Learning Representations (ICLR 2019) (2019).
gk7779.pdf (721.53 KB)
Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Theoretical Issues in Deep Networks. (2019).
CBMM Memo 100 v1 (1.71 MB)
CBMM Memo 100 v3 (8/25/2019) (1.31 MB)
CBMM Memo 100 v4 (11/19/2019) (1008.23 KB)
Theories of Deep Learning: Approximation, Optimization and Generalization. TECHCON 2019 (2019).
Biologically-plausible learning algorithms can scale to large datasets. (2018).
CBMM-Memo-092.pdf (1.31 MB)
Classical generalization bounds are surprisingly tight for Deep Networks. (2018).
CBMM-Memo-091.pdf (1.43 MB)
CBMM-Memo-091-v2.pdf (1.88 MB)
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf (1.18 MB)
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
03_775-788_00920_Bpast.No_.66-6_31.12.18_K2.pdf (5.43 MB)
Theory III: Dynamics and Generalization in Deep Networks. (2018).
Original and intermediate versions are available upon request (2.67 MB)
CBMM Memo 90 v12.pdf (4.74 MB)
Theory_III_ver44.pdf Update Hessian (4.12 MB)
Theory_III_ver48 (Updated discussion of convergence to max margin) (2.56 MB)
Fixing errors and sharpening some proofs (2.45 MB)
Compression of Deep Neural Networks for Image Instance Retrieval. (2017). at <https://arxiv.org/abs/1701.04923>
1701.04923.pdf (614.33 KB)
Musings on Deep Learning: Properties of SGD. (2017).
CBMM Memo 067 v2 (revised 7/19/2017) (5.88 MB)
CBMM Memo 067 v3 (revised 9/15/2017) (5.89 MB)
CBMM Memo 067 v4 (revised 12/26/2017) (5.57 MB)
Object-Oriented Deep Learning. (2017).
CBMM-Memo-070.pdf (963.54 KB)
Theory II: Landscape of the Empirical Risk in Deep Learning. (2017).
CBMM Memo 066_1703.09833v2.pdf (5.56 MB)
Theory of Deep Learning IIb: Optimization Properties of SGD. (2017).
CBMM-Memo-072.pdf (3.66 MB)
Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
CBMM-Memo-073.pdf (2.65 MB)
CBMM Memo 073 v2 (revised 1/15/2018) (2.81 MB)
CBMM Memo 073 v3 (revised 1/30/2018) (2.72 MB)
CBMM Memo 073 v4 (revised 12/30/2018) (575.72 KB)
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
art:10.1007/s11633-017-1054-2.pdf (1.68 MB)