Learning Scene Gist with Convolutional Neural Networks to Improve Object Recognition

Publication Type: Journal Article
Year of Publication: 2018
Authors: Wu, K, Wu, E, Kreiman, G
Journal: arXiv (Cornell University)
Volume: arXiv:1803.01967
Date Published: 03/2018

Advances in convolutional neural networks (CNNs) have led to high performance on multiple object recognition tasks. While some approaches use information from the entire scene to propose regions of interest, the task of interpreting a particular region or object is still performed independently of other objects and features in the image. Here we demonstrate that a scene's 'gist' can significantly contribute to how well humans recognize objects. These findings are consistent with the notion that humans foveate on an object and incorporate information from the periphery to aid recognition. We use a biologically inspired two-part convolutional neural network ('GistNet') that models the fovea and periphery to provide a proof-of-principle demonstration that computational object recognition can significantly benefit from the scene's gist as contextual information. Incorporating contextual gist improves our model's accuracy by up to 50% in certain object categories while increasing the original model size by only 5%. The proposed model mirrors our intuition about how the human visual system recognizes objects, suggests specific biologically plausible constraints for improving machine vision, and takes initial steps toward the challenge of scene understanding.
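The two-part fovea/periphery idea can be illustrated with a minimal sketch: extract fine-grained features from a central foveal crop, coarse "gist" features from the whole image, and concatenate the two before classification. This is an illustrative toy using simple average pooling in place of learned CNN features; the function names, grid sizes, and fovea fraction below are assumptions for demonstration, not details from the paper.

```python
import numpy as np

def pooled_features(img, grid=4):
    """Coarse features: average-pool the image over a grid x grid lattice."""
    h, w = img.shape[:2]
    hs, ws = h // grid, w // grid
    return np.array([img[i*hs:(i+1)*hs, j*ws:(j+1)*ws].mean()
                     for i in range(grid) for j in range(grid)])

def two_stream_features(img, fovea_frac=0.25):
    """Concatenate high-resolution foveal features with low-resolution
    whole-scene 'gist' features, mimicking the two-stream idea."""
    h, w = img.shape[:2]
    fh, fw = int(h * fovea_frac), int(w * fovea_frac)
    cy, cx = h // 2, w // 2
    # Foveal stream: a central crop, pooled on a finer grid.
    fovea = img[cy - fh//2:cy + fh//2, cx - fw//2:cx + fw//2]
    fovea_feat = pooled_features(fovea, grid=4)   # 16 fine central features
    # Peripheral stream: the entire scene, pooled very coarsely (the gist).
    gist_feat = pooled_features(img, grid=2)      # 4 coarse context features
    return np.concatenate([fovea_feat, gist_feat])

img = np.random.rand(64, 64)
feat = two_stream_features(img)
print(feat.shape)  # (20,) = 16 foveal + 4 gist features
```

In the actual GistNet, both streams would be learned convolutional feature extractors and the fused vector would feed a classifier; the small gist stream is what accounts for the modest (~5%) growth in model size.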


CBMM Relationship: 

  • CBMM Funded