Publications
Loss landscape: SGD has a better view. (2020).
CBMM-Memo-107.pdf (1.03 MB)
Typos and small edits, ver11 (955.08 KB)
Small edits, corrected Hessian for spurious case (337.19 KB)
I-theory on depth vs width: hierarchical function composition. (2015).
cbmm_memo_041.pdf (1.18 MB)
Cervelli menti algoritmi [Brains, Minds, Algorithms]. 272 p. (Sperling & Kupfer, 2023). at <https://www.sperling.it/libri/cervelli-menti-algoritmi-marco-magrini>
Theory of Deep Learning III: explaining the non-overfitting puzzle. (2017).
CBMM-Memo-073.pdf (2.65 MB)
CBMM Memo 073 v2 (revised 1/15/2018) (2.81 MB)
CBMM Memo 073 v3 (revised 1/30/2018) (2.72 MB)
CBMM Memo 073 v4 (revised 12/30/2018) (575.72 KB)
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
art:10.1007/s11633-017-1054-2.pdf (1.68 MB)
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv (2020). at <https://arxiv.org/abs/2101.00072>
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Turing_Plus_Questions.pdf (424.91 KB)
How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022).
v1.0 (984.15 KB)
v5.7, adding in-context learning, etc. (1.16 MB)
Theory I: Why and When Can Deep Networks Avoid the Curse of Dimensionality? (2016).
CBMM-Memo-058v1.pdf (2.42 MB)
CBMM-Memo-058v5.pdf (2.45 MB)
CBMM-Memo-058-v6.pdf (2.74 MB)
Proposition 4 has been deleted (2.75 MB)
Double descent in the condition number. (2019).
Fixing typos, clarifying error in y; best approach is cross-validation (837.18 KB)
Incorporated footnote in text plus other edits (854.05 KB)
Deleted previous discussion on kernel regression and deep nets: it will appear, extended, in a separate paper (795.28 KB)
Correcting a bad typo (261.24 KB)
Deleted plot of condition number of kernel matrix: we cannot get a double descent curve (769.32 KB)
Computational role of eccentricity dependent cortical magnification. (2014).
CBMM-Memo-017.pdf (1.04 MB)
From Marr’s Vision to the Problem of Human Intelligence. (2021).
CBMM-Memo-118.pdf (362.19 KB)
On Generalization Bounds for Neural Networks with Low Rank Layers. (2024).
CBMM-Memo-151.pdf (697.31 KB)
A Virtual Reality Experimental Approach for Studying How the Brain Implements Attentive Behaviors. Tri-Institute 2019 Gateways to the Laboratory Summer Program (2019).
Spatiotemporal dynamics of neocortical excitation and inhibition during human sleep. Proceedings of the National Academy of Sciences (2012). doi:10.1073/pnas.1109895109
SpatiotemporalDynamic.pdf (2.56 MB)
Eye movements and retinotopic tuning in developmental prosopagnosia. Journal of Vision 19, 7 (2019).
Individual Differences in Face Looking Behavior Generalize from the Lab to the World. Journal of Vision 16 (2016).
Real World Face Fixations, Journal of Vision article, 2016 (20.25 MB)
How does the primate brain combine generative and discriminative computations in vision? arXiv (2024). at <https://arxiv.org/abs/2401.06005>
Rapid Physical Predictions from Convolutional Neural Networks. Neural Information Processing Systems, Intuitive Physics Workshop (2016). at <http://phys.csail.mit.edu/papers/9.pdf>
Rapid Physical Predictions - NIPS Physics Workshop Poster (1.47 MB)
Oscillations, neural computations and learning during wake and sleep. Current Opinion in Neurobiology 44C (2017).
Temporal Grounding Graphs for Language Understanding with Accrued Visual-Linguistic Context. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI 2017) (2017).
Incentives Boost Model-Based Control Across a Range of Severity on Several Psychiatric Constructs. Biological Psychiatry 85, 425-433 (2019).
Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset. Interspeech 2021 (2021). doi:10.21437/Interspeech.2021