Title | Foveation-based Mechanisms Alleviate Adversarial Examples |
Publication Type | CBMM Memos |
Year of Publication | 2016 |
Authors | Luo, Y., Boix, X., Roig, G., Poggio, T., Zhao, Q. |
Number | 044 |
Date Published | 01/2016 |
Publication Language | English |
Abstract | We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations (applying the CNN to different image regions). To see this, we first report results on ImageNet that lead to a revision of the hypothesis that adversarial perturbations are a consequence of CNNs acting as a linear classifier: CNNs act locally linearly to changes in the image regions containing objects recognized by the CNN, while in other regions the CNN may act non-linearly. Then, we corroborate that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation. This is because, hypothetically, the CNNs for ImageNet are robust to the changes of scale and translation of the object produced by the foveation, but this property does not generalize to transformations of the perturbation. As a result, the accuracy after a foveation is almost the same as the accuracy of the CNN without the adversarial perturbation, even when the adversarial perturbation is computed taking the foveation into account. |
CBMM Relationship:
- CBMM Funded
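
The foveation mechanism described in the abstract amounts to applying the CNN to selected image regions (crops) that are rescaled to the network's input size. Below is a minimal sketch of such an inference pass, assuming a torchvision-pretrained ResNet-50 as the CNN, a fixed set of corner and center crops, and averaged softmax outputs; the network, crop selection, and aggregation are illustrative assumptions, not the memo's exact procedure.

```python
# Illustrative sketch of foveation-style inference (not the memo's implementation).
# Assumptions: torchvision ResNet-50, five fixed corner/center crops, averaged softmax.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).to(device).eval()

# Standard ImageNet normalization, applied after cropping and resizing.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

@torch.no_grad()
def foveated_predict(image: torch.Tensor, crop_frac: float = 0.7) -> torch.Tensor:
    """Classify an image by averaging CNN outputs over several foveations.

    image: float tensor of shape (3, H, W) in [0, 1], possibly carrying an
    adversarial perturbation. Each foveation crops a sub-region, rescales it
    to the network's input size, and classifies it.
    """
    _, h, w = image.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    # Five fixed foveation regions: four corners plus the center.
    offsets = [(0, 0), (0, w - cw), (h - ch, 0), (h - ch, w - cw),
               ((h - ch) // 2, (w - cw) // 2)]
    probs = []
    for top, left in offsets:
        crop = image[:, top:top + ch, left:left + cw]
        crop = F.interpolate(crop.unsqueeze(0), size=(224, 224),
                             mode="bilinear", align_corners=False)
        logits = model(normalize(crop.squeeze(0)).unsqueeze(0).to(device))
        probs.append(F.softmax(logits, dim=1))
    return torch.cat(probs).mean(dim=0)  # averaged class probabilities
```

In this sketch the rescaled crop changes the scale and translation of the object, which a CNN trained on ImageNet tends to tolerate, while the same geometric transformation is applied to the perturbation, which (per the abstract's hypothesis) tends not to survive it.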