Hypothesis-driven Online Video Stream Learning with Augmented Memory

Title: Hypothesis-driven Online Video Stream Learning with Augmented Memory
Publication Type: Journal Article
Year of Publication: 2021
Authors: Zhang, M, Badkundri, R, Talbot, MB, Zawar, R, Kreiman, G
Journal: arXiv
Date Published: 04/2021
Abstract

The ability to continuously acquire new knowledge without forgetting previous tasks remains a challenging problem for computer vision systems. Standard continual learning benchmarks focus on learning from static i.i.d. images in an offline setting. Here, we examine a more challenging and realistic online continual learning problem called online stream learning. Like humans, some AI agents have to learn incrementally from a continuous temporal stream of non-repeating data. We propose a novel model, the Hypotheses-driven Augmented Memory Network (HAMN), which efficiently consolidates previous knowledge using an augmented memory matrix of "hypotheses" and replays reconstructed image features to avoid catastrophic forgetting. Compared with pixel-level and generative replay approaches, the advantages of HAMN are two-fold. First, hypothesis-based knowledge consolidation avoids redundant information in the image pixel space and makes memory usage far more efficient. Second, hypotheses in the augmented memory can be re-used for learning new tasks, improving generalization and transfer learning ability. Given the lack of online incremental class learning datasets on video streams, we introduce and adapt two additional video datasets, Toybox and iLab, for online stream learning. We also evaluate our method on the CORe50 and online CIFAR100 datasets. Our method performs significantly better than all state-of-the-art methods, while offering much more efficient memory usage. All source code and data are publicly available at this URL
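The core idea sketched in the abstract — storing a compact matrix of reusable "hypotheses" and replaying reconstructed features instead of raw pixels — can be illustrated with a minimal NumPy example. This is not the paper's actual architecture; it assumes, for illustration only, that features are approximated as linear combinations of shared hypothesis vectors, with the dimensions (`d`, `k`) and function names chosen arbitrarily here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: feature dimension d, number of stored hypotheses k.
d, k = 64, 16

# Augmented memory: rows are reusable "hypotheses" (shared feature
# components), far smaller than storing raw images for replay.
memory = rng.standard_normal((k, d))

def encode(feature, memory):
    """Project a feature onto the hypothesis memory, keeping only
    per-sample coefficients (k numbers instead of d)."""
    coeffs, *_ = np.linalg.lstsq(memory.T, feature, rcond=None)
    return coeffs

def reconstruct(coeffs, memory):
    """Rebuild an approximate feature for replay from stored coefficients."""
    return coeffs @ memory

# Consolidate a past sample as coefficients, then replay it later.
feature = rng.standard_normal(d)
coeffs = encode(feature, memory)
replayed = reconstruct(coeffs, memory)
# Replayed features can be mixed into training batches for new tasks
# to mitigate catastrophic forgetting without storing pixels.
```

Because the hypotheses are shared across samples, the per-sample storage cost drops from `d` floats to `k` coefficients, which is the kind of memory saving the abstract attributes to hypothesis-based consolidation.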

URL: https://arxiv.org/abs/2104.02206
DOI: 10.48550/arXiv.2104.02206
Download: 2104.02206.pdf

CBMM Relationship: 

  • CBMM Funded