Title | Trading robust representations for sample complexity through self-supervised visual experience |
Publication Type | Conference Paper |
Year of Publication | 2018 |
Authors | Tacchetti, A, Voinea, S, Evangelopoulos, G |
Editor | Bengio, S, Wallach, H, Larochelle, H, Grauman, K, Cesa-Bianchi, N, Garnett, R |
Conference Name | Advances in Neural Information Processing Systems 31 |
Date Published | 12/2018 |
Conference Location | Montreal, Canada |
Abstract | Learning in small sample regimes is among the most remarkable features of the human perceptual system. This ability is related to robustness to transformations, which is acquired through visual experience in the form of weak- or self-supervision during development. We explore the idea of allowing artificial systems to learn representations of visual stimuli through weak supervision prior to downstream supervised tasks. We introduce a novel loss function for representation learning using unlabeled image sets and video sequences, and experimentally demonstrate that these representations support one-shot learning and reduce the sample complexity of multiple recognition tasks. We establish the existence of a trade-off between the size of weakly supervised data sets, automatically obtained from video sequences, and that of fully supervised data sets. Our results suggest that equivalence sets other than class labels, which are abundant in unlabeled visual experience, can be used for self-supervised learning of semantically relevant image embeddings.
URL | http://papers.nips.cc/paper/8170-trading-robust-representations-for-sample-complexity-through-self-supervised-visual-experience.pdf |