Adversarially trained neural representations may already be as robust as corresponding biological neural representations. arXiv (2022).
The Aligned Multimodal Movie Treebank: An audio, video, dependency-parse treebank. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (2022).
Aligning Model and Macaque Inferior Temporal Cortex Representations Improves Model-to-Human Behavioral Alignment and Adversarial Robustness. bioRxiv (2022).
Animal-to-Animal Variability in Partial Hippocampal Remapping in Repeated Environments. The Journal of Neuroscience 42, 5268–5280 (2022).
An approximate representation of objects underlies physical reasoning. psyArXiv (2022). at <https://psyarxiv.com/vebu5/>
Artificial intelligence insights into hippocampal processing. Frontiers in Computational Neuroscience 16, (2022).
Brain-like functional specialization emerges spontaneously in deep neural networks. Science Advances 8, (2022).
A computational probe into the behavioral and neural markers of atypical facial emotion processing in autism. The Journal of Neuroscience JN-RM-2229-21 (2022). doi:10.1523/JNEUROSCI.2229-21.2022
Dangerous Ground: One-Year-Old Infants are Sensitive to Peril in Other Agents’ Action Plans. Open Mind 6, 211–231 (2022).
Deep neural network models of sound localization reveal how perception is adapted to real-world environments. Nature Human Behaviour 6, 111–133 (2022).
Do computational models of vision need shape-based representations? Evidence from an individual with intriguing visual perceptions. Cognitive Neuropsychology 1–3 (2022). doi:10.1080/02643294.2022.2041588
Early concepts of intimacy: Young humans use saliva sharing to infer close relationships. Science 375, 311–315 (2022).
On the Efficacy of Co-Attention Transformer Layers in Visual Question Answering. arXiv (2022). doi:10.48550/arXiv.2201.03965
Eight-Month-Old Infants’ Social Evaluations of Agents Who Act on False Beliefs. Proceedings of the Annual Meeting of the Cognitive Science Society 44, (2022).
Error-driven Input Modulation: Solving the Credit Assignment Problem without a Backward Pass. Proceedings of the 39th International Conference on Machine Learning, PMLR 162, 4937–4955 (2022).
Eszopiclone and Zolpidem Produce Opposite Effects on Hippocampal Ripple Density. Frontiers in Pharmacology 12, (2022).
The evolution of color naming reflects pressure for efficiency: Evidence from the recent past. Journal of Language Evolution (2022). doi:10.1093/jole/lzac001
Face neurons encode nonsemantic features. Proceedings of the National Academy of Sciences 119, (2022).
Finding Biological Plausibility for Adversarially Robust Features via Metameric Tasks. International Conference on Learning Representations (ICLR) (2022). at <https://openreview.net/forum?id=yeP_zx9vqNm>
Genome-wide mapping of somatic mutation rates uncovers drivers of cancer. Nature Biotechnology 40, 1634–1643 (2022).
Harmonicity aids hearing in noise. Attention, Perception, & Psychophysics (2022). doi:10.3758/s13414-021-02376-0
A highly selective response to food in human visual cortex revealed by hypothesis-free voxel decomposition. Current Biology 32, 4159–4171.e9 (2022).
How Deep Sparse Networks Avoid the Curse of Dimensionality: Efficiently Computable Functions are Compositionally Sparse. (2022).
On the Implicit Bias Towards Minimal Depth of Deep Neural Networks. arXiv (2022). at <https://arxiv.org/abs/2202.09028>
Incorporating Rich Social Interactions Into MDPs. CBMM Memo 133 (2022).