Publications
154 results, filtered by author: Tomaso Poggio
Compositional Sparsity of Learnable Functions. (2024). CBMM-Memo-145.pdf (1.25 MB)
Compositional sparsity of learnable functions. Bulletin of the American Mathematical Society 61, 438-456 (2024).
For HyperBFs AGOP is a greedy approximation to gradient descent. (2024). CBMM-Memo-148.pdf (1.06 MB)
Formation of Representations in Neural Networks. (2024). CBMM-Memo-150.pdf (4.03 MB)
On Generalization Bounds for Neural Networks with Low Rank Layers. (2024). CBMM-Memo-151.pdf (697.31 KB)
Cervelli menti algoritmi. 272 pp. (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
Dynamics in Deep Classifiers trained with the Square Loss: normalization, low rank, neural collapse and generalization bounds. Research (2023). doi:10.34133/research.0024 research.0024.pdf (4.05 MB)
Feature learning in deep classifiers through Intermediate Neural Collapse. (2023). Feature_Learning_memo.pdf (2.16 MB)
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability. Analysis and Applications 21, 193-215 (2023).
A Homogeneous Transformer Architecture. (2023). CBMM Memo 143 v2 (1.1 MB)
The Janus effects of SGD vs GD: high noise and low rank. (2023). Updated with appendix showing empirically that the main results extend to deep nonlinear networks (2.95 MB); small updates and typo fixes (616.82 KB)
Norm-Based Generalization Bounds for Compositionally Sparse Neural Networks. (2023). Norm-based bounds for convnets.pdf (1.2 MB)
Norm-based Generalization Bounds for Sparse Neural Networks. NeurIPS 2023 (2023). at <https://proceedings.neurips.cc/paper_files/paper/2023/file/8493e190ff1bbe3837eca821190b61ff-Paper-Conference.pdf> NeurIPS-2023-norm-based-generalization-bounds-for-sparse-neural-networks-Paper-Conference.pdf (577.69 KB)
SGD and Weight Decay Provably Induce a Low-Rank Bias in Deep Neural Networks. (2023). Low-rank bias.pdf (2.38 MB)
System Identification of Neural Systems: If We Got It Right, Would We Know?. Proceedings of the 40th International Conference on Machine Learning, PMLR 202, 12430-12444 (2023). han23d.pdf (797.48 KB)
How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022). v1.0 (984.15 KB); v5.7, adding in-context learning, etc. (1.16 MB)
PCA as a defense against some adversaries. (2022). CBMM-Memo-135.pdf (2.58 MB)
Representation Learning in Sensory Cortex: a theory. IEEE Access (2022). doi:10.1109/ACCESS.2022.3208603 Representation_Learning_in_Sensory_Cortex_a_theory.pdf (1.17 MB)
Deep Learning for Seismic Inverse Problems: Toward the Acceleration of Geophysical Analysis Workflows. IEEE Signal Processing Magazine 38, 89-119 (2021).
Distribution of Classification Margins: Are All Data Equal? (2021). CBMM Memo 115.pdf (9.56 MB); arXiv version (23.05 MB)
Dynamics and Neural Collapse in Deep Classifiers trained with the Square Loss. (2021). v1.0 (4.61 MB); v1.4, corrections to generalization section (5.85 MB); v1.7, small edits (22.65 MB)