Causal and compositional generative models in online perception

Title: Causal and compositional generative models in online perception
Publication Type: Conference Proceedings
Year of Publication: 2017
Authors: Yildirim, I, Janner, M, Belledonne, M, Wallraven, C, Freiwald, WA, Tenenbaum, JB
Conference Name: 39th Annual Meeting of the Cognitive Science Society - COGSCI 2017
Date Published: 07/2017
Conference Location: London, UK
Abstract

From a quick glance or the touch of an object, our brains map sensory signals to scenes composed of rich and detailed shapes and surfaces. Unlike standard approaches to perception, we argue that this mapping draws on internal causal and compositional models of the physical world, and that these internal models underlie the generalization capacity of human perception. Here, we present a generative model of visual and multisensory perception in which the latent variables encode intrinsic (e.g., shape) and extrinsic (e.g., occlusion) object properties. Latent variables are inputs to causal models that output sense-specific signals. We present a recognition network that performs efficient inference in the generative model, computing at a speed similar to online perception. We show that our model, but not alternatives, can account for human performance in an occluded face matching task and in a visual-to-haptic face matching task.
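The architecture the abstract describes can be sketched in toy form: latent variables split into intrinsic and extrinsic properties feed sense-specific causal models (vision, touch), and a recognition module maps signals back to latents. This is a minimal illustrative sketch, not the paper's actual model; all function names, the 4-dimensional shape code, the masking-style occlusion, and the hand-built "recognition" inverse are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latents():
    """Sample a toy latent scene: an intrinsic shape code plus an
    extrinsic occlusion level (both hypothetical parameterizations)."""
    return {"shape": rng.normal(size=4), "occlusion": rng.uniform(0.0, 0.5)}

def render_visual(z):
    """Toy causal model for vision: occlusion hides part of the shape signal."""
    signal = z["shape"].copy()
    n_hidden = int(len(signal) * z["occlusion"])
    if n_hidden:
        signal[:n_hidden] = 0.0  # occluded features carry no information
    return signal

def render_haptic(z):
    """Toy causal model for touch: depends on shape only, since occlusion
    here is treated as a vision-specific extrinsic variable."""
    return z["shape"] * 0.5  # stand-in for a sense-specific transform

def recognize(visual, haptic):
    """Stand-in for the recognition network: here we simply invert the
    haptic transform; the real model learns an amortized inverse."""
    return {"shape": haptic / 0.5}

# Forward (causal) pass, then fast recognition back to latents.
z = sample_latents()
v, h = render_visual(z), render_haptic(z)
z_hat = recognize(v, h)
```

The point of the sketch is the division of labor: the causal models define how one scene produces different sense-specific signals, while recognition runs in the opposite direction at perception-like speed, which is what lets the same latent shape estimate support cross-modal (visual-to-haptic) matching.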

URL: https://mindmodeling.org/cogsci2017/papers/0266/index.html

Research Area: 

CBMM Relationship: 

  • CBMM Related