One Shot Learning by Composition of Meaningful Patches

Title: One Shot Learning by Composition of Meaningful Patches
Publication Type: Conference Paper
Year of Publication: 2015
Authors: Wong, A, Yuille, A
Conference Name: International Conference on Computer Vision (ICCV)
Date Published: 12/2015
Conference Location: Santiago, Chile
Abstract
The task of discriminating one object from another is almost trivial for a human being. However, it is computationally taxing for most modern machine learning methods, whereas humans perform it with ease given very few examples to learn from. It has been proposed that this quick grasp of a concept may come from knowledge shared between the new example and previously learned examples. We believe that the key to one-shot learning is the sharing of common parts, as each part holds a great deal of information about how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an object. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on handwritten digits and show that the model generalizes to multiple datasets.
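
The abstract outlines a pipeline of unsupervised patch-dictionary learning followed by patch-based one-shot classification, but gives no implementation details. The following is only a minimal illustrative sketch under loudly stated assumptions: plain k-means stands in for the paper's unsupervised dictionary learning, and a nearest-neighbor comparison of patch histograms stands in for the compositional model. All function names (sample_patches, learn_dictionary, encode, one_shot_classify) and parameter values are hypothetical, not from the paper.

import numpy as np

def sample_patches(images, patch_size=5, per_image=20, seed=0):
    # Sample square patches uniformly at random from each image.
    # images: array of shape (n_images, height, width).
    rng = np.random.default_rng(seed)
    _, h, w = images.shape
    patches = []
    for img in images:
        for _ in range(per_image):
            r = rng.integers(0, h - patch_size + 1)
            c = rng.integers(0, w - patch_size + 1)
            patches.append(img[r:r + patch_size, c:c + patch_size].ravel())
    return np.asarray(patches, dtype=float)

def learn_dictionary(patches, n_atoms=64, n_iters=10, seed=0):
    # Plain k-means: the centroids act as a compact "dictionary" of
    # patch atoms (an assumed stand-in for the paper's unsupervised step).
    rng = np.random.default_rng(seed)
    atoms = patches[rng.choice(len(patches), n_atoms, replace=False)].copy()
    for _ in range(n_iters):
        # Assign each patch to its nearest atom (squared Euclidean distance).
        dists = ((patches[:, None, :] - atoms[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for k in range(n_atoms):
            members = patches[labels == k]
            if len(members):
                atoms[k] = members.mean(axis=0)
    return atoms

def encode(image, atoms, patch_size=5, per_image=50, seed=0):
    # Describe an image by a normalized histogram over its nearest atoms.
    patches = sample_patches(image[None], patch_size, per_image, seed)
    dists = ((patches[:, None, :] - atoms[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(atoms))
    return hist / hist.sum()

def one_shot_classify(query, exemplars, atoms):
    # exemplars: one labelled image per class, as {label: image}.
    # Return the label whose single exemplar has the closest patch histogram.
    q = encode(query, atoms)
    return min(exemplars,
               key=lambda lbl: np.abs(q - encode(exemplars[lbl], atoms)).sum())

On 28x28 digit arrays, for instance, one would learn atoms from patches sampled across unlabelled images, then call one_shot_classify(query, {digit: exemplar}, atoms) with a single exemplar per class. The L1 histogram distance is a deliberately simple choice; the paper's compositional model is richer than this sketch.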

CBMM Relationship: 

  • CBMM Funded