One Shot Learning via Compositions of Meaningful Patches

Title: One Shot Learning via Compositions of Meaningful Patches
Publication Type: Conference Paper
Year of Publication: 2015
Authors: Wong, A, Yuille, A
Conference Name: International Conference on Computer Vision (ICCV)
Abstract

The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas we perform it with ease given very few examples for learning. It has been proposed that this quick grasp of a concept may come from knowledge shared between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts, as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an object. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on handwritten digits and show that this model generalizes to multiple datasets.
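The abstract describes the pipeline only at a high level, so the following is a minimal Python sketch of the general idea of patch-dictionary features for one-shot classification, not the authors' actual model. Here k-means stands in for the paper's unsupervised dictionary learning, a patch-assignment histogram stands in for its compositional model, and all function names and parameters (extract_patches, DICT_SIZE, patch size, etc.) are illustrative assumptions.

```python
# Illustrative sketch only: k-means patch dictionary + nearest-exemplar
# one-shot classification. Not the method proposed in the paper.
import numpy as np
from sklearn.cluster import KMeans

PATCH = 7        # patch side length (assumed)
DICT_SIZE = 64   # number of dictionary entries (assumed)

def extract_patches(img, patch=PATCH, stride=2):
    """Collect overlapping patches from a 2-D grayscale image."""
    h, w = img.shape
    return np.array([
        img[i:i + patch, j:j + patch].ravel()
        for i in range(0, h - patch + 1, stride)
        for j in range(0, w - patch + 1, stride)
    ])

def learn_dictionary(images):
    """Cluster patches pooled from unlabeled images into a compact dictionary."""
    pool = np.vstack([extract_patches(im) for im in images])
    return KMeans(n_clusters=DICT_SIZE, n_init=4, random_state=0).fit(pool)

def features(img, km):
    """Represent an image as a normalized histogram of patch assignments."""
    labels = km.predict(extract_patches(img))
    hist = np.bincount(labels, minlength=DICT_SIZE).astype(float)
    return hist / (hist.sum() + 1e-9)

def one_shot_classify(query, exemplars, km):
    """Assign the query to the class of the nearest single labeled exemplar."""
    q = features(query, km)
    dists = {c: np.linalg.norm(q - features(im, km)) for c, im in exemplars.items()}
    return min(dists, key=dists.get)

# Usage: learn the dictionary without labels, then classify from one example
# per class (images here are random 28x28 arrays standing in for digits).
rng = np.random.default_rng(0)
unlabeled = [rng.random((28, 28)) for _ in range(20)]
km = learn_dictionary(unlabeled)
exemplars = {0: rng.random((28, 28)), 1: rng.random((28, 28))}
print(one_shot_classify(rng.random((28, 28)), exemplars, km))
```

The design choice this mirrors is that a shared patch vocabulary, learned once without supervision, lets a single labeled example per class carry enough information for classification.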

CBMM Relationship: 

  • CBMM Funded