From infancy, we perceive not just what is where in an image, but also the physical and social dynamics unfolding in a three-dimensional scene: we see surfaces that support objects about to fall or that can be moved, people helping or fighting with each other, and walls that limit what we and others can see. These representations of scene layout, intuitive physics, and intuitive psychology constitute what we call the cognitive core: they let us judge what is about to happen and plan effective actions. Module 3 aims to characterize the cognitive core precisely, via computational modeling, psychophysics with adults and children, and neuroscience in both humans and non-human animals. We ask: What are the representations and computations that support core cognition? How do core knowledge systems take input from, and provide top-down guidance to, perceptual streams (Module 1) and attentional routines (Module 2)? How are they constructed in childhood and refined by learning throughout life? What are their neural bases, both in terms of large-scale brain architecture and circuit-level computational mechanisms?