Publication
Deep vs. shallow networks: An approximation theory perspective. (2016).
Original submission; see the published version below (960.27 KB)
Deep vs. shallow networks: An approximation theory perspective. Analysis and Applications 14, 829 - 848 (2016).
Discriminative Template Learning in Group-Convolutional Networks for Invariant Speech Representations. INTERSPEECH-2015 (International Speech Communication Association (ISCA), 2015). at <http://www.isca-speech.org/archive/interspeech_2015/i15_3229.html>
Distribution of Classification Margins: Are All Data Equal? (2021).
CBMM Memo 115.pdf (9.56 MB)
arXiv version (23.05 MB)
Do Deep Neural Networks Suffer from Crowding? (2017).
CBMM-Memo-069.pdf (6.47 MB)
Double descent in the condition number. (2019).
Fixing typos, clarifying error in y, best approach is crossvalidation (837.18 KB)
Incorporated footnote in text plus other edits (854.05 KB)
Deleted previous discussion on kernel regression and deep nets: it will appear, extended, in a separate paper (795.28 KB)
Correcting a bad typo (261.24 KB)
Deleted plot of condition number of kernel matrix: we cannot get a double descent curve (769.32 KB)
Dreaming with ARC. Learning Meets Combinatorial Algorithms workshop at NeurIPS 2020 (2020).
CBMM Memo 113.pdf (1019.64 KB)
Dynamics and Neural Collapse in Deep Classifiers trained with the Square Loss. (2021).
v1.0 (4.61 MB)
v1.4: Corrections to generalization section (5.85 MB)
v1.7: Small edits (22.65 MB)
Dynamics & Generalization in Deep Networks - Minimizing the Norm. NAS Sackler Colloquium on Science of Deep Learning (2019).
Dynamics in Deep Classifiers trained with the Square Loss: normalization, low rank, neural collapse and generalization bounds. Research (2023). doi:10.34133/research.0024
research.0024.pdf (4.05 MB)
The dynamics of invariant object recognition in the human visual system. J Neurophysiol 111, 91-102 (2014).
The dynamics of invariant object recognition in the human visual system. (2014). doi:10.7910/DVN/KRUPXZ
Eccentricity Dependent Deep Neural Networks for Modeling Human Vision. Vision Sciences Society (2017).
Eccentricity Dependent Deep Neural Networks: Modeling Invariance in Human Vision. AAAI Spring Symposium Series, Science of Intelligence (2017). at <https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15360>
paper.pdf (963.87 KB)
Eccentricity Dependent Neural Network with Recurrent Attention for Scale, Translation and Clutter Invariance. Vision Sciences Society (2019).
The Effects of Image Distribution and Task on Adversarial Robustness. (2021).
CBMM_Memo_116.pdf (5.44 MB)
Fast, invariant representation for human action in the visual system. (2016). at <http://arxiv.org/abs/1601.01358>
CBMM Memo 042 (3.03 MB)
A fast, invariant representation for human action in the visual system. J Neurophysiol jn.00642.2017 (2017). doi:10.1152/jn.00642.2017
Author's last draft (695.63 KB)
A fast, invariant representation for human action in the visual system. Journal of Neurophysiology (2018). doi:10.1152/jn.00642.2017
Feature learning in deep classifiers through Intermediate Neural Collapse. (2023).
Feature_Learning_memo.pdf (2.16 MB)
Fisher-Rao Metric, Geometry, and Complexity of Neural Networks. arXiv.org (2017). at <https://arxiv.org/abs/1711.01530>
1711.01530.pdf (966.99 KB)
For HyperBFs AGOP is a greedy approximation to gradient descent. (2024).
CBMM-Memo-148.pdf (1.06 MB)
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability. Analysis and Applications 21, 193 - 215 (2023).