Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations

Title: Simulating a Primary Visual Cortex at the Front of CNNs Improves Robustness to Image Perturbations
Publication Type: Conference Proceedings
Year of Publication: 2020
Authors: Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D., DiCarlo, J.J.
Conference Name: Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020)
Date Published: 12/2020
Abstract

Current state-of-the-art object recognition models are largely based on convolutional neural network (CNN) architectures, which are loosely inspired by the primate visual system. However, these CNNs can be fooled by imperceptibly small, explicitly crafted perturbations, and struggle to recognize objects in corrupted images that are easily recognized by humans. Here, by making comparisons with primate neural data, we first observed that CNN models with a neural hidden layer that better matches primate primary visual cortex (V1) are also more robust to adversarial attacks. Inspired by this observation, we developed VOneNets, a new class of hybrid CNN vision models. Each VOneNet contains a fixed-weight neural network front-end that simulates primate V1, called the VOneBlock, followed by a neural network back-end adapted from current CNN vision models. The VOneBlock is based on a classical neuroscientific model of V1: the linear-nonlinear-Poisson model, consisting of a biologically constrained Gabor filter bank, simple and complex cell nonlinearities, and a V1 neuronal stochasticity generator. After training, VOneNets retain high ImageNet performance, but each is substantially more robust, outperforming the base CNNs and state-of-the-art methods by 18% and 3%, respectively, on a conglomerate benchmark of perturbations composed of white-box adversarial attacks and common image corruptions. Finally, we show that all components of the VOneBlock work in synergy to improve robustness. While current CNN architectures are arguably brain-inspired, the results presented here demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in ImageNet-level computer vision applications.
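To make the architecture concrete, the sketch below illustrates a VOneBlock-style front-end in PyTorch: a fixed-weight Gabor filter bank, simple-cell (rectification) and complex-cell (quadrature energy) nonlinearities, and a Gaussian approximation to Poisson stochasticity, feeding a small trainable back-end. This is not the authors' implementation; all filter counts, kernel sizes, and parameter values here are illustrative assumptions, whereas the released model uses biologically constrained parameter distributions and standard ImageNet back-ends such as ResNet50.

```python
import math
import torch
import torch.nn as nn


def gabor_kernel(size, theta, sigma, freq, phase):
    """Build one 2D Gabor filter (illustrative parameterization)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    y, x = torch.meshgrid(coords, coords, indexing="ij")
    xr = x * math.cos(theta) + y * math.sin(theta)
    envelope = torch.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = torch.cos(2 * math.pi * freq * xr + phase)
    return envelope * carrier


class VOneBlockSketch(nn.Module):
    """Fixed-weight V1-like front-end: Gabor conv -> simple/complex cells -> noise."""

    def __init__(self, n_units=32, ksize=25):
        super().__init__()
        kernels = []
        for i in range(n_units):
            theta = math.pi * i / n_units          # evenly spaced orientations
            for phase in (0.0, math.pi / 2):       # quadrature pair per unit
                kernels.append(gabor_kernel(ksize, theta, sigma=4.0,
                                            freq=0.1, phase=phase))
        weight = torch.stack(kernels).unsqueeze(1)  # (2*n_units, 1, k, k)
        self.conv = nn.Conv2d(1, 2 * n_units, ksize, stride=2,
                              padding=ksize // 2, bias=False)
        self.conv.weight.data = weight
        self.conv.weight.requires_grad = False      # the front-end stays fixed

    def forward(self, x):
        # Grayscale for simplicity; the real model operates on color images.
        x = x.mean(dim=1, keepdim=True)
        r = self.conv(x)
        even, odd = r[:, ::2], r[:, 1::2]               # quadrature pairs
        simple = torch.relu(even)                       # simple cells: rectification
        complex_ = torch.sqrt(even**2 + odd**2 + 1e-8)  # complex cells: local energy
        rates = torch.cat([simple, complex_], dim=1)
        if self.training:
            # Gaussian approximation to Poisson noise (variance ~ mean rate)
            rates = rates + torch.sqrt(torch.relu(rates)) * torch.randn_like(rates)
        return rates


class VOneNetSketch(nn.Module):
    """VOneBlock front-end plus a small trainable back-end (stand-in for e.g. ResNet)."""

    def __init__(self, n_classes=10):
        super().__init__()
        self.v1 = VOneBlockSketch()
        self.backend = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.backend(self.v1(x))


model = VOneNetSketch()
logits = model(torch.randn(2, 3, 64, 64))  # -> shape (2, 10)
```

Only the back-end receives gradient updates during training; freezing the Gabor front-end is what makes the V1 stage a fixed, biologically grounded preprocessor rather than a learned layer.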

GitHub: https://github.com/dicarlolab/vonenet
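Loading a pretrained VOneNet from the repository might look like the sketch below. This assumes the `vonenet` package exposes a `get_model` helper with these argument names, as its README suggests; the exact interface should be checked against the repository.

```python
import torch
import vonenet  # pip-installable from the repository above (assumption)

# 'resnet50' is one of several back-end architectures reported in the paper.
model = vonenet.get_model(model_arch='resnet50', pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
```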

URL: https://proceedings.neurips.cc/paper/2020/hash/98b17f068d5d9b7668e19fb8ae470841-Abstract.html

CBMM Relationship:
  • CBMM Funded