Approaching human 3D shape perception with neurally mappable models

Title: Approaching human 3D shape perception with neurally mappable models
Publication Type: Journal Article
Year of Publication: 2023
Authors: O'Connell, TP, Bonnen, T, Friedman, Y, Tewari, A, Tenenbaum, JB, Sitzmann, V, Kanwisher, N
Journal: arXiv
Date Published: 08/2023
Abstract

Humans effortlessly infer the 3D shape of objects. What computations underlie this ability? Although various computational models have been proposed, none of them capture the human ability to match object shape across viewpoints. Here, we ask whether and how this gap might be closed. We begin with a relatively novel class of computational models, 3D neural fields, which encapsulate the basic principles of classic analysis-by-synthesis in a deep neural network (DNN). First, we find that a 3D Light Field Network (3D-LFN) supports 3D matching judgments well aligned to humans for within-category comparisons, adversarially-defined comparisons that accentuate the 3D failure cases of standard DNN models, and adversarially-defined comparisons for algorithmically generated shapes with no category structure. We then investigate the source of the 3D-LFN's ability to achieve human-aligned performance through a series of computational experiments. Exposure to multiple viewpoints of objects during training and a multi-view learning objective are the primary factors behind model-human alignment; even conventional DNN architectures come much closer to human behavior when trained with multi-view objectives. Finally, we find that while the models trained with multi-view learning objectives are able to partially generalize to new object categories, they fall short of human alignment. This work provides a foundation for understanding human shape inferences within neurally mappable computational architectures.
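For readers unfamiliar with the model class the abstract refers to, the sketch below illustrates the general idea of a light-field-style network trained with a multi-view objective: a network maps a per-object latent code and a camera ray directly to a pixel color, and the same latent code must explain renders of the object from several viewpoints. This is an illustrative toy under stated assumptions, not the authors' 3D-LFN implementation; the latent size, ray parameterization, and placeholder training data are all hypothetical.

```python
# Hypothetical minimal sketch of a light-field-style network with a
# multi-view reconstruction objective (illustrative only; not the paper's code).
import torch
import torch.nn as nn

class TinyLightFieldNet(nn.Module):
    """Maps (object latent, ray) -> RGB directly, with no volumetric ray marching."""
    def __init__(self, latent_dim=64, ray_dim=6, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + ray_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB of the pixel seen along the ray
        )

    def forward(self, latent, rays):
        # latent: (B, latent_dim); rays: (B, R, ray_dim), e.g. a 6D ray parameterization
        B, R, _ = rays.shape
        z = latent.unsqueeze(1).expand(B, R, -1)
        return self.mlp(torch.cat([z, rays], dim=-1))

# Toy multi-view training loop: each object's latent code is optimized jointly
# with the network so that predicted pixels match targets from several viewpoints.
num_objects, rays_per_view, views = 8, 256, 4
latents = nn.Parameter(torch.randn(num_objects, 64) * 0.01)
model = TinyLightFieldNet()
opt = torch.optim.Adam(list(model.parameters()) + [latents], lr=1e-3)

# Placeholder tensors standing in for rays/pixels from posed multi-view renders.
rays = torch.randn(num_objects, views * rays_per_view, 6)
pixels = torch.rand(num_objects, views * rays_per_view, 3)

for step in range(100):
    pred = model(latents, rays)
    loss = ((pred - pixels) ** 2).mean()  # multi-view photometric loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The multi-view objective is the relevant ingredient here: because one latent code must account for images of the same object from multiple viewpoints, the representation is pushed toward view-invariant shape information, which the abstract identifies as the primary factor behind model-human alignment.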

URL: https://arxiv.org/abs/2308.11300

Associated Module: 

CBMM Relationship: 

  • CBMM Funded