Trading robust representations for sample complexity through self-supervised visual experience

Title: Trading robust representations for sample complexity through self-supervised visual experience
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Tacchetti, A, Voinea, S, Evangelopoulos, G
Editors: Bengio, S, Wallach, H, Larochelle, H, Grauman, K, Cesa-Bianchi, N, Garnett, R
Conference Name: Advances in Neural Information Processing Systems 31
Date Published: 12/2018
Conference Location: Montreal, Canada
Abstract

Learning in small sample regimes is among the most remarkable features of the human perceptual system. This ability is related to robustness to transformations, which is acquired through visual experience in the form of weak- or self-supervision during development. We explore the idea of allowing artificial systems to learn representations of visual stimuli through weak supervision prior to downstream supervised tasks. We introduce a novel loss function for representation learning using unlabeled image sets and video sequences, and experimentally demonstrate that these representations support one-shot learning and reduce the sample complexity of multiple recognition tasks. We establish the existence of a trade-off between the sizes of weakly supervised data sets, automatically obtained from video sequences, and fully supervised data sets. Our results suggest that equivalence sets other than class labels, which are abundant in unlabeled visual experience, can be used for self-supervised learning of semantically relevant image embeddings.

URL: http://papers.nips.cc/paper/8170-trading-robust-representations-for-sample-complexity-through-self-supervised-visual-experience.pdf
Download: trading-robust-representations-for-sample-complexity-through-self-supervised-visual-experience.pdf, NeurIPS2018_Poster.pdf