Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images

Title: Training and Evaluating Multimodal Word Embeddings with Large-scale Web Annotated Images
Publication Type: Conference Paper
Year of Publication: 2016
Authors: Mao, J., Xu, J., Jing, Y., Yuille, A.
Conference Name: NIPS 2016
Abstract

In this paper, we focus on training and evaluating effective word embeddings with both text and visual information. More specifically, we introduce a large-scale dataset with 300 million sentences describing over 40 million images crawled and downloaded from publicly available Pins (i.e. images with sentence descriptions uploaded by users) on Pinterest [2]. This dataset is more than 200 times larger than MS COCO [22], the standard large-scale image dataset with sentence descriptions. In addition, we construct an evaluation dataset to directly assess the effectiveness of word embeddings in terms of finding semantically similar or related words and phrases. The word/phrase pairs in this evaluation dataset are collected from the click data of millions of users in an image search system, and thus contain rich semantic relationships. Based on these datasets, we propose and compare several Recurrent Neural Network (RNN) based multimodal (text and image) models. Experiments show that our model benefits from incorporating the visual information into the word embeddings, and that a weight sharing strategy is crucial for learning such multimodal embeddings. The project page is: http://www.stat.ucla.edu/~junhua.mao/multimodal_embedding.html
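To make the idea of a multimodal RNN with weight sharing concrete, below is a minimal sketch (not the authors' exact architecture): an image feature conditions the initial hidden state of an RNN language model, and the output softmax layer reuses (ties) the input word embedding matrix. The GRU cell, layer sizes, and CNN feature dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalRNN(nn.Module):
    """Sketch of an RNN language model conditioned on an image feature.
    The softmax projection shares weights with the input word embedding,
    illustrating the kind of weight sharing the abstract refers to.
    All dimensions below are hypothetical, not taken from the paper."""

    def __init__(self, vocab_size=10000, embed_dim=512, hidden_dim=512, img_dim=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.img_proj = nn.Linear(img_dim, hidden_dim)    # map image feature to initial hidden state
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out_proj = nn.Linear(hidden_dim, embed_dim)  # project hidden state back to embedding space
        # weight sharing: the output (softmax) layer reuses the embedding matrix
        self.decoder = nn.Linear(embed_dim, vocab_size, bias=False)
        self.decoder.weight = self.embed.weight

    def forward(self, word_ids, img_feat):
        # word_ids: (batch, seq_len) token indices; img_feat: (batch, img_dim) CNN feature
        h0 = torch.tanh(self.img_proj(img_feat)).unsqueeze(0)  # (1, batch, hidden_dim)
        emb = self.embed(word_ids)                             # (batch, seq_len, embed_dim)
        out, _ = self.rnn(emb, h0)
        logits = self.decoder(self.out_proj(out))              # (batch, seq_len, vocab_size)
        return logits

if __name__ == "__main__":
    model = MultimodalRNN()
    words = torch.randint(0, 10000, (4, 12))  # a batch of 4 sentences, 12 tokens each
    image = torch.randn(4, 2048)              # precomputed image features
    print(model(words, image).shape)          # torch.Size([4, 12, 10000])
```

Because the decoder weight is the same tensor as the embedding matrix, gradients from predicting the next word (given both text and image context) flow directly into the word embeddings, which is how visual information can shape the learned embedding space.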

Research Area: 

CBMM Relationship: 

  • CBMM Funded