Using Embodied AI to help answer “why” questions in systems neuroscience
Date Posted:
September 29, 2023
Date Recorded:
September 19, 2023
Speaker(s):
Aran Nayebi
Associated CBMM Pages:
All Captioned Videos
Description:
Abstract:
Deep neural networks trained on high-variation tasks (“goals”) have had immense success as predictive models of the human and non-human primate visual pathways. More specifically, a positive relationship has been observed between model performance on ImageNet categorization and neural predictivity. Past a point, however, improved categorization performance on ImageNet does not yield improved neural predictivity, even across very different architectures. In this talk, I will present two case studies, in rodents and primates, that demonstrate a more general correspondence between self-supervised learning of visual representations relevant to high-dimensional embodied control and gains in neural predictivity.
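As background for the neural predictivity metric referenced above, the sketch below illustrates one common way such scores are computed: regressing held-out neural responses onto model features and summarizing per-neuron fit. The function name, array shapes, ridge penalty, and train/test split are illustrative assumptions, not the talk's exact pipeline.

```python
# Minimal sketch (assumed setup, not the authors' exact method): estimate "neural
# predictivity" by ridge-regressing neural responses onto model features and scoring
# per-neuron correlation on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def neural_predictivity(model_features, neural_responses, alpha=1.0):
    """model_features: (n_stimuli, n_units); neural_responses: (n_stimuli, n_neurons)."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        model_features, neural_responses, test_size=0.25, random_state=0)
    reg = Ridge(alpha=alpha).fit(X_tr, Y_tr)
    Y_hat = reg.predict(X_te)
    # Per-neuron Pearson correlation between predicted and held-out responses.
    r = [np.corrcoef(Y_hat[:, i], Y_te[:, i])[0, 1] for i in range(Y_te.shape[1])]
    return float(np.median(r))
```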
In the first study, we develop the (currently) most precise model of the mouse visual system and show that self-supervised, contrastive algorithms outperform supervised approaches in capturing neural response variance across visual areas. By “implanting” these visual networks into a biomechanically realistic rodent body that navigates to rewards in a novel maze environment, we observe that the artificial rodent with a contrastively optimized visual system obtains more reward across episodes than its supervised counterpart. The second case study examines mental simulation in primates: we show that self-supervised video foundation models that predict the future state of their environment in latent spaces capable of supporting a wide range of sensorimotor tasks align most closely with human error patterns and macaque frontal cortex neural dynamics. Taken together, our findings suggest that self-supervised learning of visual representations that are reusable for downstream Embodied AI tasks may be a promising way forward for studying the evolutionary constraints of neural circuits across multiple species.
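For readers unfamiliar with the contrastive objectives mentioned above, here is a minimal sketch of an InfoNCE-style (SimCLR-like) loss over two augmented views of the same images. It is offered only as an example of this family of self-supervised losses; it is not claimed to be the specific algorithm used in the work discussed.

```python
# Minimal sketch of an InfoNCE-style contrastive loss (illustrative, not the talk's
# exact objective). z1 and z2 are embeddings of two augmented views of the same batch.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings; positives are matching rows across views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)             # (2B, dim)
    sim = z @ z.t() / temperature              # cosine similarity logits
    sim.fill_diagonal_(float('-inf'))          # exclude self-similarity
    B = z1.shape[0]
    # The positive for index i is i + B (and vice versa for the second view).
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)])
    return F.cross_entropy(sim, targets)
```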
Timestamps:
0:00 - Guangyu Robert Yang Introduction
1:17 - Introduction
10:54 - Mouse Visual Cortex as a Task-General, Limited Resource System
29:13 - Reusable Latent Representations for Primate Mental Simulation
51:45 - Heuristics for Interrogating Natural Intelligence
Papers Discussed:
1. A. Nayebi*, N. C. Kong*, C. Zhuang, J. L. Gardner, A. M. Norcia, & D. L. Yamins. Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation. PLOS Computational Biology, 19(10): e1011506, 2023. https://doi.org/10.1371/journal.pcbi.1011506
2. A. Nayebi, R. Rajalingham, M. Jazayeri, & G. R. Yang. Neural foundations of mental simulation: future prediction of latent representations on dynamic scenes. Advances in Neural Information Processing Systems (NeurIPS), 36: 70548–70561, 2023. https://arxiv.org/abs/2305.11772