Compositional Generative Networks & Adversarial Examiners: Beyond the Limitations of Current AI

Date Posted:  May 5, 2021
Date Recorded:  May 4, 2021
CBMM Speaker(s):  Alan L. Yuille
  • Brains, Minds and Machines Seminar Series
Description: 

Current AI visual algorithms are very limited compared to the robustness and flexibility of the human visual system. These limitations, however, are often obscured by the standard performance measures (SPMs) used to evaluate vision algorithms, which favor data-driven methods. SPMs are problematic because of the combinatorial complexity of natural images, and they lead to unrealistic expectations about the effectiveness of current algorithms. We argue that tougher performance measures, such as out-of-distribution testing and adversarial examiners, are required to evaluate vision algorithms realistically and hence to encourage AI vision systems that can achieve human-level performance. We illustrate this by studying object classification, where the algorithms are trained on standard datasets with limited occlusion but are tested on datasets where the objects are severely occluded (out-of-distribution testing) and/or where adversarial patches are placed in the images (adversarial examiners). We show that standard Deep Nets perform badly under these types of tests, but Compositional Generative Networks, which perform approximate analysis by synthesis, are much more robust.
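To make the kind of tougher evaluation described above concrete, below is a minimal sketch, in PyTorch, of an occlusion stress test: a classifier trained on clean images is re-evaluated after an occluder patch is pasted into each test image. This is not the speaker's code; the names occlude and occlusion_stress_test, the fixed centre placement, and the model, loader, and patch objects are illustrative assumptions, and a true adversarial examiner would search over patch placements for the worst case rather than using one fixed position.

```python
import torch


def occlude(image, patch, top, left):
    # Paste an occluder patch into a (C, H, W) image tensor at (top, left).
    out = image.clone()
    _, ph, pw = patch.shape
    out[:, top:top + ph, left:left + pw] = patch
    return out


@torch.no_grad()
def occlusion_stress_test(model, loader, patch, device="cpu"):
    # Compare accuracy on clean test images vs. the same images with a
    # fixed occluder patch pasted in (a crude out-of-distribution test).
    model.eval()
    clean_correct = occluded_correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        total += labels.size(0)
        clean_correct += (model(images).argmax(1) == labels).sum().item()

        # Fixed centre placement for simplicity; an adversarial examiner
        # would instead search placements for the one the model fails on.
        _, h, w = images.shape[1:]
        top, left = (h - patch.shape[1]) // 2, (w - patch.shape[2]) // 2
        occluded = torch.stack(
            [occlude(img, patch.to(device), top, left) for img in images]
        )
        occluded_correct += (model(occluded).argmax(1) == labels).sum().item()
    return clean_correct / total, occluded_correct / total
```

The gap between the two accuracies returned by such a test is exactly the kind of weakness the talk argues SPMs hide, and which Compositional Generative Networks, via approximate analysis by synthesis, are designed to reduce.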