%0 Conference Proceedings
%B Cognitive Science Society
%D 2019
%T Draping an Elephant: Uncovering Children's Reasoning About Cloth-Covered Objects
%A Tomer D Ullman
%A Eliza Kosoy
%A Ilker Yildirim
%A Amir Arsalan Soltani
%A Max Siegel
%A Joshua B. Tenenbaum
%A Elizabeth S Spelke
%K analysis-by-synthesis
%K cloth
%K cognitive development
%K imagination
%K intuitive physics
%K object recognition
%K occlusion
%K perception
%K vision
%X

Humans have an intuitive understanding of physics. They can predict how a physical scene will unfold, and reason about how it came to be. Adults may rely on such a physical representation for visual reasoning and recognition, going beyond visual features and capturing objects in terms of their physical properties. Recently, draped objects were used to examine adult object representations in the absence of many common visual features. In this paper we examine young children's reasoning about draped objects in order to study the development of physical object representations. In addition, we argue that a better understanding of the development of the concept of cloth as a physical entity is worthwhile in and of itself, as it may form a basic ontological category in intuitive physical reasoning, akin to liquids and solids. In two experiments we investigate young children's (ages 3–5) reasoning about cloth-covered objects, and find that they perform significantly above chance (though far from perfectly), indicating a representation of physical objects that can interact dynamically with the world. Children's pattern of success and failure is similar across the two experiments, and we compare it to adult behavior. We find a small effect suggesting that the specific features that make reasoning about certain objects more difficult may carry into adulthood.

%C Montreal, Canada
%8 07/2019
%G eng
%U https://mindmodeling.org/cogsci2019/papers/0506/index.html

%0 Conference Paper
%B 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
%D 2017
%T Synthesizing 3D Shapes via Modeling Multi-view Depth Maps and Silhouettes with Deep Generative Networks
%A Amir Arsalan Soltani
%A Haibin Huang
%A Jiajun Wu
%A Tejas Kulkarni
%A Joshua B. Tenenbaum
%K 2d to 3d
%K 3D generation
%K 3D reconstruction
%K Core object system
%K depth map
%K generative
%K perception
%K silhouette
%X

We study the problem of learning generative models of 3D shapes. Voxels or 3D parts have been widely used as the underlying representations for building complex 3D shapes; however, voxel-based representations suffer from high memory requirements, and part-based models require a large collection of cached or richly parametrized parts. We take an alternative approach: learning a generative model over multi-view depth maps or their corresponding silhouettes, and using a deterministic rendering function to produce 3D shapes from these images. A multi-view representation enables the generation of 3D models with fine details, as 2D depth maps and silhouettes can be modeled at a much higher resolution than 3D voxels. Moreover, our approach naturally supports recovering the underlying 3D representation from depth maps of one or a few viewpoints. Experiments show that our framework can generate 3D shapes with both variation and fine detail. We also demonstrate that our model generalizes out of sample to real-world tasks with occluded objects.

%C Honolulu, HI
%8 07/2017
%G eng
%U http://ieeexplore.ieee.org/document/8099752/
%R 10.1109/CVPR.2017.269