Visual Concept-Metaconcept Learning

Title: Visual Concept-Metaconcept Learning
Publication Type: Conference Proceedings
Year of Publication: 2019
Authors: Han, C, Mao, J, Gan, C, Tenenbaum, JB, Wu, J
Conference Name: Neural Information Processing Systems (NeurIPS 2019)
Date Published: 11/2019
Conference Location: Vancouver, Canada

Humans reason with concepts and metaconcepts: we recognize red and blue from visual input; we also understand that they are colors, i.e., that red is an instance of color. In this paper, we propose the visual concept-metaconcept learner (VCML) for joint learning of concepts and metaconcepts from images and associated question-answer pairs. The key is to exploit the bidirectional connection between visual concepts and metaconcepts. Visual representations provide grounding cues for predicting relations between unseen pairs of concepts: knowing that red and blue are instances of color, we generalize to the fact that green is also an instance of color, since they all categorize the hue of objects. Meanwhile, knowledge about metaconcepts empowers visual concept learning from limited, noisy, and even biased data: from just a few examples of purple cubes, we can learn a new color, purple, which resembles the hue of the cubes rather than their shape. Evaluation on both synthetic and real-world datasets validates our claims.

CBMM Relationship: 

  • CBMM Funded