%0 Conference Paper %B International Conference on Computer Vision (ICCV) %D 2015 %T One Shot Learning by Composition of Meaningful Patches %A Alex Wong %A Alan Yuille %X
The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas we perform it with ease given very few examples to learn from. It has been proposed that this quick grasp of a concept may come from knowledge shared between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts, as each part holds an immense amount of information about how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an object. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that the model generalizes across multiple datasets.
%C Santiago, Chile %8 12/2015 %G eng