Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset

Title: Spoken ObjectNet: A Bias-Controlled Spoken Caption Dataset
Publication Type: CBMM Memos
Year of Publication: 2021
Authors: Palmer, I, Rouditchenko, A, Barbu, A, Katz, B, Glass, J
Abstract

Visually-grounded spoken language datasets can enable models to learn cross-modal correspondences with very weak supervision. However, modern audio-visual datasets contain biases that undermine the real-world performance of models trained on that data. We introduce Spoken ObjectNet, which is designed to remove some of these biases and provide a way to better evaluate how effectively models will perform in real-world scenarios. This dataset expands upon ObjectNet, which is a bias-controlled image dataset that features similar image classes to those present in ImageNet. We detail our data collection pipeline, which features several methods to improve caption quality, including automated language model checks. Lastly, we show baseline results on image retrieval and audio retrieval tasks. These results show that models trained on other datasets and then evaluated on Spoken ObjectNet tend to perform poorly due to biases in other datasets that the models have learned. We also show evidence that the performance decrease is due to the dataset controls, and not the transfer setting.
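
The memo does not publish its filtering code here, but the Python sketch below illustrates what an automated language model check on transcribed captions can look like: each caption is scored by language-model perplexity, and high-perplexity transcripts are flagged as likely garbled. The choice of GPT-2 and the threshold value are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of a language-model quality check for caption transcripts.
# GPT-2 and the cutoff below are assumptions for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def caption_perplexity(caption: str) -> float:
    """Score a transcribed caption by GPT-2 perplexity (lower = more fluent)."""
    enc = tokenizer(caption, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels yields the mean cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

PPL_THRESHOLD = 500.0  # hypothetical cutoff; would be tuned on held-out captions

def is_plausible(caption: str) -> bool:
    """Flag captions whose perplexity suggests a garbled recording or transcript."""
    return caption_perplexity(caption) < PPL_THRESHOLD

print(is_plausible("A red chair tipped over on a wooden floor."))  # likely True
print(is_plausible("chair chair the of of floor red red red"))     # likely False
```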
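The retrieval baselines are typically reported as recall@K over matched audio-image pairs. As a rough sketch of that standard metric (the memo's exact evaluation protocol is not reproduced here), the snippet below assumes a precomputed similarity matrix between audio captions and images and computes recall@1/5/10 in both retrieval directions.

```python
# Sketch of the standard recall@K evaluation for cross-modal retrieval.
# The similarity matrix is assumed precomputed; pair (i, i) is the true match.
import numpy as np

def recall_at_k(sim: np.ndarray, ks=(1, 5, 10)) -> dict:
    """sim[i, j] = similarity of audio caption i to image j."""
    n = sim.shape[0]
    # Rank of the matching image for each audio query (0 = best).
    order = np.argsort(-sim, axis=1)
    ranks = np.array([np.where(order[i] == i)[0][0] for i in range(n)])
    return {f"R@{k}": float(np.mean(ranks < k)) for k in ks}

rng = np.random.default_rng(0)
sim = rng.standard_normal((100, 100))
sim[np.arange(100), np.arange(100)] += 2.0  # true pairs score higher, on average
print(recall_at_k(sim))    # audio -> image retrieval
print(recall_at_k(sim.T))  # image -> audio retrieval
```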

DSpace@MIT: https://hdl.handle.net/1721.1/141358

Download: CBMM-Memo-128.pdf
CBMM Memo No:  128
CBMM Relationship: 

  • CBMM Funded