Publications
35 results, filtered by author: Lorenzo Rosasco
Scalable Causal Discovery with Score Matching. NeurIPS 2022 (2022). at <https://openreview.net/forum?id=v56PHv_W2A>
For interpolating kernel machines, the minimum norm ERM solution is the most stable. (2020).
Beating SGD Saturation with Tail-Averaging and Minibatching. Neural Information Processing Systems (NeurIPS 2019) (2019).
Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Implicit Regularization of Accelerated Methods in Hilbert Spaces. Neural Information Processing Systems (NeurIPS 2019) (2019).
Theory III: Dynamics and Generalization in Deep Networks. (2018).
Computational and Cognitive Neuroscience of Vision, pp. 85-104 (Springer, 2017).
Symmetry Regularization. (2017).
Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
Holographic Embeddings of Knowledge Graphs. Thirtieth AAAI Conference on Artificial Intelligence (AAAI-16) (2016).
On invariance and selectivity in representation learning. Information and Inference: A Journal of the IMA iaw009 (2016). doi:10.1093/imaiai/iaw009
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality?. (2016).
Deep Convolutional Networks are Hierarchical Kernel Machines. (2015).
Discriminative Template Learning in Group-Convolutional Networks for Invariant Speech Representations. INTERSPEECH-2015 (International Speech Communication Association (ISCA), 2015). at <http://www.isca-speech.org/archive/interspeech_2015/i15_3229.html>
Holographic Embeddings of Knowledge Graphs. (2015).
On Invariance and Selectivity in Representation Learning. (2015). CBMM Memo No. 029.
I-theory on depth vs width: hierarchical function composition. (2015).
Learning with incremental iterative regularization. NIPS 2015 (2015). at <https://papers.nips.cc/paper/6015-learning-with-incremental-iterative-regularization>
Less is More: Nyström Computational Regularization. NIPS 2015 (2015). at <https://papers.nips.cc/paper/5936-less-is-more-nystrom-computational-regularization>
Notes on Hierarchical Splines, DCLNs and i-theory. (2015). CBMM Memo 037.
Unsupervised learning of invariant representations. Theoretical Computer Science (2015). doi:10.1016/j.tcs.2015.06.048
A Deep Representation for Invariance and Music Classification. ICASSP 2014 - 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2014). doi:10.1109/ICASSP.2014.6854954