From infancy, we perceive not just what is where in an image, but the physical and social dynamics unfolding in a three-dimensional scene: we see surfaces that support objects about to fall or that can be moved, people helping or fighting with each other, and walls that limit what we and others can see. These representations of scene layout, intuitive physics, and intuitive psychology constitute what we call the cognitive core: they let us judge what is about to happen and plan effective actions. Module 3 aims to characterize the cognitive core precisely, via computational modeling, psychophysics with adults and children, and neuroscience in both humans and non-humans. We ask: What are the representations and computations that support core cognition? How do core knowledge systems take input from, and provide top-down guidance for, perceptual streams (Module 1) and attentional routines (Module 2)? How are they constructed in childhood and refined by learning throughout life? What are their neural bases, both in terms of large-scale brain architecture and circuit-level computational mechanisms?
“Third-Party Preferences for Imitators in Preverbal Infants”, Open Mind, vol. 2, no. 2, pp. 61-71, 2018.
“The statistical shape of geometric reasoning”, Scientific Reports, vol. 8, no. 1, 2018.
“Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks”, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017.
“Differentiable physics and stable modes for tool-use and manipulation planning”, in Robotics: Science and Systems (RSS), 2018.
“Divergence in the functional organization of human and macaque auditory cortex revealed by fMRI responses to harmonic tones”, Nature Neuroscience, 2019.