Prof. Alan L. Yuille (JHU)
Abstract: Current AI visual algorithms are very limited compared to the robustness and flexibility of the human visual system. These limitations, however, are often obscured by the standard performance measures (SPMs) used to evaluate vision algorithms, which favor data-driven methods. SPMs are problematic due to the combinatorial complexity of natural images and lead to unrealistic expectations about the effectiveness of current algorithms. We argue that tougher performance measures, such as out-of-distribution testing and adversarial examiners, are required to evaluate vision algorithms realistically and hence to encourage AI vision systems that can achieve human-level performance. We illustrate this by studying object classification, where the algorithms are trained on standard datasets with limited occlusion but are tested on datasets where the objects are severely occluded (out-of-distribution testing) and/or where adversarial patches are placed in the images (adversarial examiners). We show that standard Deep Nets perform badly under these types of tests, but Generative Compositional Nets, which perform approximate analysis by synthesis, are much more robust.
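To make the occlusion-based out-of-distribution test concrete, the sketch below (not the speaker's actual evaluation code) measures how a standard classifier's accuracy drops when a square region of each test image is covered. The model choice (a pre-trained torchvision ResNet-50), the patch size, and the placeholder batch are all assumptions for illustration; a real evaluation would use a properly labeled occluded test set.

# Hedged sketch: out-of-distribution occlusion testing of a standard classifier.
# Assumptions (not from the talk): ResNet-50 as the model, a mid-gray square
# covering 40% of each side as the occluder, and a random placeholder batch.
import torch
from torchvision.models import resnet50, ResNet50_Weights


def occlude(batch: torch.Tensor, frac: float = 0.4) -> torch.Tensor:
    """Cover a randomly placed square region (frac of each side) with mid-gray."""
    occluded = batch.clone()
    _, _, h, w = batch.shape
    ph, pw = int(h * frac), int(w * frac)
    for img in occluded:
        top = torch.randint(0, h - ph + 1, (1,)).item()
        left = torch.randint(0, w - pw + 1, (1,)).item()
        img[:, top:top + ph, left:left + pw] = 0.5
    return occluded


@torch.no_grad()
def accuracy(model: torch.nn.Module, images: torch.Tensor, labels: torch.Tensor) -> float:
    """Top-1 accuracy of the model on a batch of preprocessed images."""
    preds = model(images).argmax(dim=1)
    return (preds == labels).float().mean().item()


if __name__ == "__main__":
    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights).eval()
    preprocess = weights.transforms()

    # Placeholder batch standing in for clean, correctly labeled test images;
    # loading a real occlusion benchmark is omitted from this sketch.
    images = torch.rand(8, 3, 224, 224)
    labels = torch.randint(0, 1000, (8,))

    clean_acc = accuracy(model, preprocess(images), labels)
    occluded_acc = accuracy(model, preprocess(occlude(images)), labels)
    print(f"clean accuracy: {clean_acc:.2f}, occluded accuracy: {occluded_acc:.2f}")

The gap between the clean and occluded scores is the kind of evidence the abstract refers to: a model that looks strong under SPMs can degrade sharply once the test distribution includes heavy occlusion.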
Zoom link: https://mit.zoom.us/j/95505708173?pwd=cjBLVlZWYXNXcDBIanRKMWZNNXZuZz09
Passcode: 522130