Unsupervised learning of clutter-resistant visual representations from natural videos.

Title: Unsupervised learning of clutter-resistant visual representations from natural videos
Publication Type: CBMM Memos
Year of Publication: 2014
Authors: Liao, Q., Leibo, J. Z., Poggio, T.
CBMM Memo No: 023
Date Published: 09/2014
Abstract

Populations of neurons in inferotemporal cortex (IT) maintain an explicit code for object identity that also tolerates transformations of object appearance, e.g., position, scale, and viewing angle [1, 2, 3]. Though the learning rules are not known, recent results [4, 5, 6] suggest the operation of an unsupervised temporal-association-based method, e.g., Földiák's trace rule [7]. Such methods exploit the temporal continuity of the visual world by assuming that visual experience over short timescales will tend to have invariant identity content. Thus, by associating representations of frames from nearby times, a representation that tolerates whatever transformations occurred in the video may be achieved. Many previous studies have verified that such rules can work in simple situations without background clutter, but the presence of visual clutter has remained problematic for this approach. Here we show that temporal association based on large class-specific filters (templates) avoids the problem of clutter. Our system learns in an unsupervised way from natural videos gathered from the internet, and is able to perform a difficult unconstrained face recognition task on natural images (Labeled Faces in the Wild [8]).
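
The abstract describes the temporal-association idea only at a high level. As a rough illustration, here is a minimal sketch of a Földiák-style trace rule for a single linear unit; the running-average trace, learning rate, and weight normalization are assumptions made for the sketch, not details taken from the memo:

```python
import numpy as np

def trace_rule_update(w, frames, lr=0.01, delta=0.2):
    """One pass of a Foldiak-style trace rule over a short video clip.

    w      : (n_features,) weight vector of one output unit (updated in place)
    frames : (n_frames, n_features) feature vectors of consecutive frames
    lr     : learning rate (assumed value, not from the memo)
    delta  : trace decay; the trace is a running average of the unit's output
    """
    y_trace = 0.0
    for x in frames:
        y = float(w @ x)                               # response to the current frame
        y_trace = (1.0 - delta) * y_trace + delta * y  # temporally smoothed output
        w += lr * y_trace * x                          # Hebbian update gated by the trace
        w /= np.linalg.norm(w) + 1e-12                 # normalize to keep weights bounded
    return w
```

Because consecutive frames are assumed to share identity content, driving the update with the smoothed trace rather than the instantaneous response pushes the weights toward features that persist across the clip, which is what yields tolerance to the transformations occurring in the video.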

arXiv: arXiv:1409.3879v1

DSpace@MIT: http://hdl.handle.net/1721.1/100187


CBMM Relationship: CBMM Funded