MarrNet: 3D Shape Reconstruction via 2.5D Sketches

Title: MarrNet: 3D Shape Reconstruction via 2.5D Sketches
Publication Type: Conference Proceedings
Year of Publication: 2017
Authors: Wu, J, Wang, Y, Xue, T, Sun, X, Freeman, WT, Tenenbaum, JB
Editors: Guyon, I, Luxburg, UV, Bengio, S, Wallach, H, Fergus, R, Vishwanathan, S, Garnett, R
Conference Name: Advances in Neural Information Processing Systems 30
Pagination: 540–550
Date Published: 12/2017
Conference Location: Long Beach, CA
Abstract

3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations in real images are scarce. Previous work chose to train on synthetic data with ground-truth 3D information, but suffered from domain adaptation issues when tested on real data. In this work, we propose an end-to-end trainable framework that sequentially estimates 2.5D sketches and 3D object shapes. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shapes, 2.5D sketches are much easier to recover from a 2D image, and easier to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, we can easily transfer a model learned on synthetic data to real images, as rendered 2.5D sketches are invariant to object appearance variations in real images, including lighting and texture. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches, making the framework end-to-end trainable on real images while requiring no real-image annotations. Our framework achieves state-of-the-art performance on 3D shape reconstruction.
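Below is a minimal, hedged PyTorch sketch of the two-stage formulation and the reprojection-consistency idea the abstract describes. The toy silhouette-only setup, the network sizes, and the names SketchNet, ShapeNet3D, and project_silhouette are illustrative assumptions, not the paper's architecture; MarrNet also estimates depth and surface normals and uses much deeper encoder-decoder networks.

    # Illustrative sketch only; not the authors' code.
    import torch
    import torch.nn as nn

    class SketchNet(nn.Module):
        """Stage 1: predict a 2.5D sketch (here just a silhouette) from an RGB image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # per-pixel mask in [0, 1]
            )
        def forward(self, image):   # (B, 3, H, W) -> (B, 1, H, W)
            return self.net(image)

    class ShapeNet3D(nn.Module):
        """Stage 2: predict a voxel occupancy grid from the 2.5D sketch."""
        def __init__(self, res=32):
            super().__init__()
            self.res = res
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(res * res, res ** 3), nn.Sigmoid(),  # occupancy in [0, 1]
            )
        def forward(self, sketch):  # (B, 1, res, res) -> (B, res, res, res)
            return self.net(sketch).view(-1, self.res, self.res, self.res)

    def project_silhouette(voxels):
        """Differentiable projection: a pixel is 'occupied' unless every voxel
        along its viewing ray is empty (soft-max via a product)."""
        return 1.0 - torch.prod(1.0 - voxels, dim=-1)  # (B, res, res)

    # Reprojection-consistency loss: the projected 3D shape should match the
    # predicted 2.5D sketch, so no 3D annotation of the real image is needed.
    sketch_net, shape_net = SketchNet(), ShapeNet3D(res=32)
    image = torch.rand(1, 3, 32, 32)                   # stand-in for a real photo
    sketch = sketch_net(image)
    voxels = shape_net(sketch)
    loss = (project_silhouette(voxels) - sketch.squeeze(1)).abs().mean()
    loss.backward()                                    # gradients flow end to end

Because the soft-product projection is differentiable, gradients from the consistency loss reach both stages, which is what makes fine-tuning on unannotated real photos possible in this formulation.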

URL: http://papers.nips.cc/paper/6657-marrnet-3d-shape-reconstruction-via-25d-sketches.pdf

CBMM Relationship: 

  • CBMM Funded