Causal and compositional generative models in online perception

Title: Causal and compositional generative models in online perception
Publication Type: Conference Paper
Year of Publication: 2017
Authors: Yildirim, I, Janner, M, Belledonne, M, Wallraven, C, Freiwald, WA, Tenenbaum, JB
Conference Name: 39th Annual Conference of the Cognitive Science Society
Conference Location: London, UK
Abstract

From a quick glance or the touch of an object, our brains map sensory signals to scenes composed of rich and detailed shapes and surfaces. Unlike the standard pattern recognition approaches to perception, we argue that this mapping draws on internal causal and compositional models of the outside physical world, and that such internal models underlie the generalization capacity of human perception. Here, we present a generative model of visual and multisensory perception in which the latent variables encode intrinsic properties of objects such as their shapes and surfaces in addition to their extrinsic properties such as pose and occlusion. These latent variables can be composed in novel ways and are inputs to sensory-specific causal models that output sense-specific signals. We present a novel recognition network that performs efficient inference in the generative model, computing at a speed similar to online perception. We show that our model, but not an alternative baseline model or a lesion of our model, can account for human performance in an occluded face matching task and in a cross-modal visual-to-haptic face matching task.
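The architecture the abstract describes can be sketched schematically: shared latent variables split into intrinsic (shape/surface) and extrinsic (pose, occlusion) parts feed sense-specific causal models, and a recognition network amortizes inference for online-speed perception. The sketch below is illustrative only, not the paper's implementation; all dimensions, variable names, and the random linear maps standing in for the trained causal models and recognition network are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scene(n_shape=5):
    # Hypothetical latent scene: intrinsic object properties plus
    # extrinsic viewing conditions, composable across objects.
    return {
        "shape": rng.normal(size=n_shape),    # intrinsic: shape/surface code
        "pose": rng.uniform(-1, 1, size=3),   # extrinsic: 3D pose
        "occlusion": rng.uniform(0, 0.5),     # extrinsic: occluded fraction
    }

def latents_to_vector(z):
    # Flatten the latent dict into a single 9-dim vector.
    return np.concatenate([z["shape"], z["pose"], [z["occlusion"]]])

# Sense-specific causal models: each maps the SAME latents to a
# modality-specific signal (stand-ins for a graphics renderer and a
# haptic simulator).
W_vis = rng.normal(size=(9, 64))   # latents -> 64-dim "image"
W_hap = rng.normal(size=(9, 16))   # latents -> 16-dim "touch" signal

def render_visual(z):
    return np.tanh(latents_to_vector(z) @ W_vis)

def render_haptic(z):
    # The haptic channel is unaffected by visual occlusion,
    # so that latent is zeroed before rendering.
    v = latents_to_vector(z).copy()
    v[-1] = 0.0
    return np.tanh(v @ W_hap)

# Recognition network: a (here untrained, single linear layer) map from
# the visual signal back to the latents, giving fast feedforward
# inference instead of iterative search in the generative model.
W_rec = rng.normal(size=(64, 9)) * 0.1

def recognize(image):
    return image @ W_rec

# One forward pass through the pipeline.
z = sample_scene()
image = render_visual(z)   # visual causal model
touch = render_haptic(z)   # haptic causal model, same latents
z_hat = recognize(image)   # amortized inference from the image alone
```

In this framing, cross-modal visual-to-haptic matching would amount to recovering `z_hat` from an image and comparing its haptic rendering against candidate touch signals, since both modalities are generated from the same latent scene.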

CBMM Relationship: 

  • CBMM Funded