Publications
On efficiently computable functions, deep networks and sparse compositionality. (2025).
Deep_sparse_networks_approximate_efficiently_computable_functions.pdf (223.15 KB)
The Indoor-Training Effect: unexpected gains from distribution shifts in the transition function. (2025). at <https://arxiv.org/abs/2401.15856>
Multiplicative Regularization Generalizes Better Than Additive Regularization. (2025).
CBMM Memo 158.pdf (4.8 MB)
Position: A Theory of Deep Learning Must Include Compositional Sparsity. (2025).
CBMM Memo 159.pdf (676.35 KB)
What if Eye...? Computationally Recreating Vision Evolution. arXiv (2025). at <https://arxiv.org/abs/2501.15001>
2501.15001v1.pdf (5.2 MB)
Benchmarking Out-of-Distribution Generalization Capabilities of DNN-based Encoding Models for the Ventral Visual Cortex. NeurIPS (2024).
Compositional sparsity of learnable functions. Bulletin of the American Mathematical Society 61, 438-456 (2024).
Compositional Sparsity of Learnable Functions. (2024).
This is an update of the AMS paper (230.72 KB)
For HyperBFs AGOP is a greedy approximation to gradient descent. (2024).
CBMM-Memo-148.pdf (1.06 MB)
Formation of Representations in Neural Networks. (2024).
CBMM-Memo-150.pdf (4.03 MB)
On Generalization Bounds for Neural Networks with Low Rank Layers. (2024).
CBMM-Memo-151.pdf (697.31 KB)
How does the primate brain combine generative and discriminative computations in vision? arXiv (2024). at <https://arxiv.org/abs/2401.06005>
On the Power of Decision Trees in Auto-Regressive Language Modeling. (2024).
CBMM-Memo-149.pdf (2.11 MB)
Self-Assembly of a Biologically Plausible Learning Circuit. (2024).
CBMM-Memo-152.pdf (1.84 MB)
Top-tuning: A study on transfer learning for an efficient alternative to fine tuning for image classification with fast kernel methods. Image and Vision Computing 142, 104894 (2024).
An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE 18, e0268577 (2023).
journal.pone_.0268577.pdf (1.99 MB)
An adversarial collaboration to critically evaluate theories of consciousness. bioRxiv (2023). doi:10.1101/2023.06.23.546249
Catalyzing next-generation Artificial Intelligence through NeuroAI. Nature Communications 14 (2023).
Cervelli menti algoritmi [Brains, Minds, Algorithms]. 272 pp. (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
Dynamics in Deep Classifiers trained with the Square Loss: normalization, low rank, neural collapse and generalization bounds. Research (2023). doi:10.34133/research.0024
research.0024.pdf (4.05 MB)
EEG Entropy in REM Sleep as a Physiologic Biomarker in Early Clinical Stages of Alzheimer's Disease. Journal of Alzheimer's Disease 91, 1557-1572 (2023).