Shape and Material from Sound

Title: Shape and Material from Sound
Publication Type: Conference Proceedings
Year of Publication: 2017
Authors: Zhang, Zhoutong, Li, Q, Huang, Z, Wu, J, Tenenbaum, JB, Freeman, WT
Editor: Guyon, I, Luxburg, UV, Bengio, S, Wallach, H, Fergus, R, Vishwanathan, S, Garnett, R
Conference Name: Advances in Neural Information Processing Systems 30
Pagination: 1278–1288
Date Published: 12/2017
Conference Location: Long Beach, CA
Abstract

What can we infer from hearing an object fall onto the ground? Drawing on knowledge of the physical world, humans can infer rich information from such limited data: the rough shape of the object, its material, the height of the fall, etc. In this paper, we aim to approximate this competency. We first mimic human knowledge of the physical world using a fast, physics-based generative model. We then present an analysis-by-synthesis approach to infer properties of the falling object. We further approximate human past experience by directly mapping audio to object properties using deep learning with self-supervision. We evaluate our method through behavioral studies that compare human predictions with ours on inferring object shape, material, and initial fall height. Results show that our method achieves near-human performance without any annotations.
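The self-supervised component described above can be illustrated with a minimal, hypothetical sketch: a network maps impact audio (as a spectrogram) to latent object properties, and a synthesizer maps those properties back to audio, so the reconstruction error supervises training without labels. The paper's generative model is a physics-based simulator; the toy differentiable synthesizer, the PropertyNet module, and the four-parameter property vector below are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (PyTorch) of self-supervised audio-to-properties learning.
    # All names and parameterizations here are assumptions for illustration.
    import math
    import torch
    import torch.nn as nn

    class PropertyNet(nn.Module):
        """Map a log-spectrogram (1 x F x T) to 4 latent physical properties."""
        def __init__(self, n_props: int = 4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, n_props)

        def forward(self, spec: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(spec).flatten(1))

    def toy_synthesizer(props: torch.Tensor, n_samples: int = 2048) -> torch.Tensor:
        """Stand-in differentiable 'physics': a damped sinusoid whose frequency,
        decay, amplitude, and onset depend on the predicted properties."""
        t = torch.linspace(0.0, 1.0, n_samples, device=props.device)
        freq, decay, amp, delay = [props[:, i : i + 1] for i in range(4)]
        envelope = torch.exp(-nn.functional.softplus(decay) * t)
        return torch.sigmoid(amp) * envelope * torch.sin(
            2 * math.pi * (100 + 900 * torch.sigmoid(freq)) * (t - torch.sigmoid(delay))
        )

    # Self-supervised loop: no annotations, only an audio-reconstruction loss.
    net = PropertyNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(10):
        audio = torch.randn(8, 2048)        # placeholder impact recordings
        spec = torch.randn(8, 1, 64, 32)    # placeholder log-spectrograms
        props = net(spec)
        loss = nn.functional.mse_loss(toy_synthesizer(props), audio)
        opt.zero_grad()
        loss.backward()
        opt.step()

In the paper itself, the generative model is not differentiable end-to-end; inference there is framed as analysis-by-synthesis, with the learned mapping approximating the result of that search.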

URL: http://papers.nips.cc/paper/6727-shape-and-material-from-sound.pdf

CBMM Relationship: 

  • CBMM Funded