Publication
Loss landscape: SGD has a better view. (2020).
CBMM-Memo-107.pdf (1.03 MB)
Typos and small edits, ver. 11 (955.08 KB)
Small edits, corrected Hessian for spurious case (337.19 KB)
Compositional Sparsity of Learnable Functions. (2024).
This is an update of the AMS paper (230.72 KB)
Notes on Hierarchical Splines, DCLNs and i-theory. (2015).
CBMM Memo 037 (1.83 MB)
Cervelli menti algoritmi [Brains, Minds, Algorithms]. 272 pp. (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022).
v1.0 (984.15 KB)
v5.7: adds in-context learning, etc. (1.16 MB)
Is Research in Intelligence an Existential Risk? (2014).
Is Research in Intelligence an Existential Risk.pdf (571.42 KB)
From Marr’s Vision to the Problem of Human Intelligence. (2021).
CBMM-Memo-118.pdf (362.19 KB)
Double descent in the condition number. (2019).
Fixing typos, clarifying error in y; best approach is cross-validation (837.18 KB)
Incorporated footnote in text plus other edits (854.05 KB)
Deleted previous discussion on kernel regression and deep nets: it will appear, extended, in a separate paper (795.28 KB)
correcting a bad typo (261.24 KB)
Deleted plot of condition number of kernel matrix: we cannot get a double descent curve (769.32 KB)
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
art:10.1007/s11633-017-1054-2.pdf (1.68 MB)
Implicit dynamic regularization in deep networks. (2020).
v1.2 (2.29 MB)
v.59 Update on rank (2.43 MB)
Stable Foundations for Learning: a framework for learning theory (in both the classical and modern regime). (2020).
Original file (584.54 KB)
Corrected typos and details of the "equivalence" between CV stability and expected error for interpolating machines. Added Appendix on SGD. (905.29 KB)
Edited Appendix on SGD. (909.19 KB)
Deleted Appendix. Corrected typos, etc. (880.27 KB)
Added result about square loss and min norm (898.03 KB)
On Generalization Bounds for Neural Networks with Low Rank Layers. (2024).
CBMM-Memo-151.pdf (697.31 KB)
A Virtual Reality Experimental Approach for Studying How the Brain Implements Attentive Behaviors. Tri-Institute 2019 Gateways to the Laboratory Summer Program (2019).
Spatiotemporal dynamics of neocortical excitation and inhibition during human sleep. Proceedings of the National Academy of Sciences (2012). doi:10.1073/pnas.1109895109
SpatiotemporalDynamic.pdf (2.56 MB)
Eye movements and retinotopic tuning in developmental prosopagnosia. Journal of Vision 19, 7 (2019).
Individual differences in face-looking behavior generalize from the lab to the world. Journal of Vision 16, (2016).
Real World Face Fixations, Journal of Vision article, 2016 (20.25 MB)
How does the primate brain combine generative and discriminative computations in vision?. arXiv (2024). at <https://arxiv.org/abs/2401.06005>
Rapid Physical Predictions from Convolutional Neural Networks. Neural Information Processing Systems, Intuitive Physics Workshop (2016). at <http://phys.csail.mit.edu/papers/9.pdf>
Rapid Physical Predictions - NIPS Physics Workshop Poster (1.47 MB)
Oscillations, neural computations and learning during wake and sleep. Current Opinion in Neurobiology 44C, (2017).
Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017).
Incentives Boost Model-Based Control Across a Range of Severity on Several Psychiatric Constructs. Biological Psychiatry 85, 425 - 433 (2019).
Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset. Interspeech 2021 (2021). doi:10.21437/Interspeech.2021