Publications
What if.. (2015).
A Science of Intelligence. (2015).
Fisher-Rao Metric, Geometry, and Complexity of Neural Networks. arXiv (2017). at <https://arxiv.org/abs/1711.01530>
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review. International Journal of Automation and Computing 1-17 (2017). doi:10.1007/s11633-017-1054-2
What if Eye..? Computationally Recreating Vision Evolution. arXiv (2025). at <https://arxiv.org/abs/2501.15001>
View-Tolerant Face Recognition and Hebbian Learning Imply Mirror-Symmetric Neural Tuning to Head Orientation. Current Biology 27, 1-6 (2017).
Unsupervised learning of invariant representations. Theoretical Computer Science (2015). doi:10.1016/j.tcs.2015.06.048
Turing++ Questions: A Test for the Science of (Human) Intelligence. AI Magazine 37, 73-77 (2016).
Theory II: Deep learning and optimization. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Theory I: Deep networks and the curse of dimensionality. Bulletin of the Polish Academy of Sciences: Technical Sciences 66, (2018).
Theoretical issues in deep networks. Proceedings of the National Academy of Sciences (2020). doi:10.1073/pnas.1907369117
Scale and translation-invariance for novel objects in human vision. Scientific Reports 10, (2020).
Representation Learning in Sensory Cortex: a theory. IEEE Access (2022). doi:10.1109/ACCESS.2022.3208603
Pruning Convolutional Neural Networks for Image Instance Retrieval. arXiv (2017). at <https://arxiv.org/abs/1707.05455>
An Overview of Some Issues in the Theory of Deep Networks. IEEJ Transactions on Electrical and Electronic Engineering 15, 1560-1571 (2020).
Neural Tuning Size in a Model of Primate Visual Processing Accounts for Three Key Markers of Holistic Face Processing. PLoS ONE 11, e0150980 (2016).
Nested Invariance Pooling and RBM Hashing for Image Instance Retrieval. arXiv (2016). at <https://arxiv.org/abs/1603.04595>
Invariant Recognition Shapes Neural Representations of Visual Input. Annual Review of Vision Science 4, 403-422 (2018).
Invariant recognition drives neural representations of action sequences. PLOS Computational Biology 13, e1005859 (2017).
The Invariance Hypothesis Implies Domain-Specific Regions in Visual Cortex. PLOS Computational Biology 11, e1004390 (2015).
On invariance and selectivity in representation learning. Information and Inference: A Journal of the IMA iaw009 (2016). doi:10.1093/imaiai/iaw009