Publications
The dynamics of invariant object recognition in the human visual system. Journal of Neurophysiology 111, 91-102 (2014).
Explicit regularization and implicit bias in deep network classifiers trained with the square loss. arXiv:2101.00072 (2020).
A fast, invariant representation for human action in the visual system. Journal of Neurophysiology (2018). doi:10.1152/jn.00642.2017
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability. Analysis and Applications 21, 193-215 (2023).
Function approximation by deep networks. Communications on Pure & Applied Analysis 19, 4085-4095 (2020).
On invariance and selectivity in representation learning. Information and Inference: A Journal of the IMA iaw009 (2016). doi:10.1093/imaiai/iaw009
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. PLOS Computational Biology 11, e1004390 (2015).
Invariant recognition drives neural representations of action sequences. PLOS Computational Biology 13, e1005859 (2017).
Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403-422 (2018).
Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval. arXiv:1603.04595 (2016).
Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing. PLOS ONE 11, e0150980 (2016).
An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
Pruning Convolutional Neural Networks for Image Instance Retrieval. arXiv:1707.05455 (2017).
Representation Learning in Sensory Cortex: a theory. IEEE Access (2022). doi:10.1109/ACCESS.2022.3208603
Scale and translation-invariance for novel objects in human vision. Scientific Reports 10 (2020). doi:10.1038/s41598-019-57261-6
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences (2020). doi:10.1073/pnas.1907369117
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, 761-774 (2018).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, 775-788 (2018).
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Unsupervised learning of invariant representations. Theoretical Computer Science (2015). doi:10.1016/j.tcs.2015.06.048
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).