Publications
Properties of invariant object recognition in human one-shot learning suggests a hierarchical architecture different from deep convolutional neural networks. Vision Sciences Society (2019).
Representation Learning from Orbit Sets for One-shot Classification. AAAI Spring Symposium Series, Science of Intelligence (2017). at <https://www.aaai.org/ocs/index.php/SSS/SSS17/paper/view/15357>
Word-level Invariant Representations From Acoustic Waveforms. INTERSPEECH 2014 - 15th Annual Conf. of the International Speech Communication Association (International Speech Communication Association (ISCA), 2014). at <http://www.isca-speech.org/archive/interspeech_2014/i14_2385.html>
Evaluating the Adversarial Robustness of a Foveated Texture Transform Module in a CNN. NeurIPS 2021 (2021). at <https://nips.cc/Conferences/2021/Schedule?showEvent=21868>
NSF Science and Technology Centers – The Class of 2013. (2013).
System Identification of Neural Systems: If We Got It Right, Would We Know? Proceedings of the 40th International Conference on Machine Learning, PMLR 202, 12430-12444 (2023).
Unsupervised Learning of Invariant Representations in Hierarchical Architectures. arXiv:1311.4158 (2013). at <https://arxiv.org/abs/1311.4158>
When and Why Are Deep Networks Better Than Shallow Ones? AAAI-17: Thirty-First AAAI Conference on Artificial Intelligence (2017).
CNS (“Cortical Network Simulator”): a GPU-based framework for simulating cortically-organized networks. MIT-CSAIL-TR-2010-013 (2010).
The dynamics of invariant object recognition in the human visual system. (2014). doi:10.7910/DVN/KRUPXZ
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. (2015).
A Large Video Database for Human Motion Recognition. ICCV 2011 - IEEE International Conference on Computer Vision (2011).
An analysis of training and generalization errors in shallow and deep networks. Neural Networks 121, 229-241 (2020).
Complexity Control by Gradient Descent in Deep Networks. Nature Communications 11 (2020). doi:10.1038/s41467-020-14663-9
Compositional sparsity of learnable functions. Bulletin of the American Mathematical Society 61, 438-456 (2024).
Compression of Deep Neural Networks for Image Instance Retrieval. (2017). at <https://arxiv.org/abs/1701.04923>
Deep Learning: Mathematics and Neuroscience. A Sponsored Supplement to Science, Brain-Inspired Intelligent Robotics: The Intersection of Robotics and Neuroscience, 9-12 (2016).
Deep Learning for Seismic Inverse Problems: Toward the Acceleration of Geophysical Analysis Workflows. IEEE Signal Processing Magazine 38, 89-119 (2021).
Deep vs. shallow networks: An approximation theory perspective. Analysis and Applications 14, 829-848 (2016).
Dynamics in Deep Classifiers trained with the Square Loss: normalization, low rank, neural collapse and generalization bounds. Research (2023). doi:10.34133/research.0024