Publications
Compositional Sparsity of Learnable Functions. (2024). CBMM-Memo-145.pdf (1.25 MB)
Dissociable neuronal substrates of visual feature attention and working memory. Neuron 112, 850 - 863.e6 (2024).
A ubiquitous spectrolaminar motif of local field potential power across the primate cortex. Nature Neuroscience 27, 547 - 560 (2024).
An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLOS ONE 18, e0268577 (2023). journal.pone_.0268577.pdf (1.99 MB)
BrainBERT: Self-supervised representation learning for Intracranial Electrodes. International Conference on Learning Representations (2023). at <https://openreview.net/forum?id=xmcYx_reUn6> 985_brainbert_self_supervised_repr.pdf (9.71 MB)
Cervelli menti algoritmi. 272 (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
CNNs reveal the computational implausibility of the expertise hypothesis. iScience 26, 105976 (2023).
Cross-task specificity and within-task invariance of cognitive control processes. Cell Reports 42, 111919 (2023). PIIS2211124722018174.pdf (3.97 MB)
Decoding of human identity by computer vision and neuronal vision. Scientific Reports 13, (2023). s41598-022-26946-w.pdf (1.88 MB)
Dynamics in Deep Classifiers trained with the Square Loss: normalization, low rank, neural collapse and generalization bounds. Research (2023). doi:10.34133/research.0024 research.0024.pdf (4.05 MB)
Emotion prediction as computation over a generative theory of mind. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 381, (2023). houlihan2023computedappraisals.pdf (2.37 MB)
An empirical assay of view-invariant object learning in humans and comparison with baseline image-computable models. bioRxiv (2023). at <https://www.biorxiv.org/content/10.1101/2022.12.31.522402v1>
Feature learning in deep classifiers through Intermediate Neural Collapse. (2023). Feature_Learning_memo.pdf (2.16 MB)
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability. Analysis and Applications 21, 193 - 215 (2023).
Forward learning with top-down feedback: empirical and analytical characterization. arXiv (2023). at <https://arxiv.org/abs/2302.05440>
A Homogeneous Transformer Architecture. (2023). CBMM-Memo-143.pdf (1.07 MB)
Implicit regularization with strongly convex bias: Stability and acceleration. Analysis and Applications 21, 165 - 191 (2023).
Infants and toddlers leverage their understanding of action goals to evaluate agents who help others. Child Development (2023). doi:10.1111/cdev.13895
The Janus effects of SGD vs GD: high noise and low rank. (2023). Updated with an appendix showing empirically that the main results extend to deep nonlinear networks (2.95 MB); small updates and typo fixes (616.82 KB).
Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLOS Biology 21, e3002366 (2023).
Minute-scale periodicity of neuronal firing in the human entorhinal cortex. Cell Reports 42, 113271 (2023). 1-s2.0-S2211124723012834-main.pdf (5.33 MB)
Model metamers reveal divergent invariances between biological and artificial neural networks. Nature Neuroscience (2023). doi:10.1038/s41593-023-01442-0