Discriminate-and-Rectify Encoders: Learning from Image Transformation Sets

Publication Type: CBMM Memos
Year of Publication: 2017
Authors: Tacchetti, A, Voinea, S, Evangelopoulos, G
Date Published: 03/2017
Abstract

The complexity of a learning task is increased by transformations in the input space that preserve class identity. Visual object recognition for example is affected by changes in viewpoint, scale, illumination or planar transformations. While drastically altering the visual appearance, these changes are orthogonal to recognition and should not be reflected in the representation or feature encoding used for learning. We introduce a framework for weakly supervised learning of image embeddings that are robust to transformations and selective to the class distribution, using sets of transforming examples (orbit sets), deep parametrizations and a novel orbit-based loss. The proposed loss combines a discriminative, contrastive part for orbits with a reconstruction error that learns to rectify orbit transformations. The learned embeddings are evaluated in distance metric-based tasks, such as one-shot classification under geometric transformations, as well as face verification and retrieval under more realistic visual variability. Our results suggest that orbit sets, suitably computed or observed, can be used for efficient, weakly-supervised learning of semantically relevant image embeddings.
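The abstract describes a loss with two parts: a contrastive term defined over orbit sets (embeddings of transformed versions of the same image are pulled together, embeddings from different orbits pushed apart) and a reconstruction term that learns to rectify orbit transformations. A minimal sketch of such a combined objective, assuming squared Euclidean distances, a margin-based contrastive form, and a canonical "rectified" target per orbit (all illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def orbit_loss(embeddings, orbit_ids, reconstructions, canonicals,
               margin=1.0, alpha=0.5):
    """Illustrative orbit-based loss: contrastive term over orbit
    membership plus a reconstruction error against each orbit's
    canonical (rectified) element. Hyperparameters are placeholders."""
    n = len(embeddings)
    contrastive, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            if orbit_ids[i] == orbit_ids[j]:
                contrastive += d ** 2                      # pull same-orbit pairs together
            else:
                contrastive += max(0.0, margin - d) ** 2   # push different orbits apart
            pairs += 1
    contrastive /= max(pairs, 1)
    # Rectification term: decoded outputs should match the orbit canonical.
    recon = np.mean([np.linalg.norm(r - c) ** 2
                     for r, c in zip(reconstructions, canonicals)])
    return contrastive + alpha * recon
```

In a deep parametrization, `embeddings` and `reconstructions` would be produced by an encoder and decoder respectively, and the loss minimized jointly over both; here they are plain arrays so the objective itself can be inspected in isolation.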

arXiv: arXiv:1703.04775v1

DSpace@MIT: http://hdl.handle.net/1721.1/107446

CBMM Memo No: 062

CBMM Relationship: CBMM Funded