Title | Shape and Material from Sound |
Publication Type | Conference Proceedings |
Year of Publication | 2017 |
Authors | Zhang, Z, Li, Q, Huang, Z, Wu, J, Tenenbaum, JB, Freeman, WT
Editor | Guyon, I, Luxburg, UV, Bengio, S, Wallach, H, Fergus, R, Vishwanathan, S, Garnett, R |
Conference Name | Advances in Neural Information Processing Systems 30 |
Pagination | 1278–1288 |
Date Published | 12/2017 |
Conference Location | Long Beach, CA |
Abstract | What can we infer from hearing an object fall onto the ground? Drawing on knowledge of the physical world, humans can infer rich information from such limited data: the rough shape of the object, its material, the height from which it fell, etc. In this paper, we aim to approximate this competency. We first mimic human knowledge of the physical world using a fast physics-based generative model. Then, we present an analysis-by-synthesis approach to infer properties of the falling object. We further approximate humans' past experience by directly mapping audio to object properties using deep learning with self-supervision. We evaluate our method through behavioral studies, comparing human predictions with ours on inferring object shape, material, and initial falling height. Results show that our method achieves near-human performance, without any annotations.
URL | http://papers.nips.cc/paper/6727-shape-and-material-from-sound.pdf |
CBMM Relationship | CBMM Funded
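The abstract describes a learned mapping from audio to object properties, trained with self-supervision (labels come for free from the physics-based generative model that synthesizes the audio, so no human annotation is required). As a rough illustration only, here is a minimal sketch of such a network; this is not the authors' released model, and the layer sizes, class counts (`n_shapes`, `n_materials`), and spectrogram input format are all assumptions:

```python
# Illustrative sketch (assumptions throughout, not the paper's architecture):
# a convolutional encoder over a 1-channel audio spectrogram with separate
# heads predicting shape class, material class, and falling height.
import torch
import torch.nn as nn

class SoundToProperties(nn.Module):
    def __init__(self, n_shapes=14, n_materials=5):  # counts are assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
            nn.Flatten(),
        )
        # Classification heads for shape and material; regression for height.
        self.shape_head = nn.Linear(64, n_shapes)
        self.material_head = nn.Linear(64, n_materials)
        self.height_head = nn.Linear(64, 1)

    def forward(self, spectrogram):
        feat = self.encoder(spectrogram)
        return (self.shape_head(feat),
                self.material_head(feat),
                self.height_head(feat))

# Dummy forward pass on a batch of 8 synthetic spectrograms (128 x 128).
model = SoundToProperties()
dummy = torch.randn(8, 1, 128, 128)
shape_logits, material_logits, height = model(dummy)
print(shape_logits.shape, material_logits.shape, height.shape)
```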