PCA as a defense against some adversaries

Title: PCA as a defense against some adversaries
Publication Type: CBMM Memos
Year of Publication: 2022
Authors: Gupte, A, Banburski, A, Poggio, T
Abstract

Neural network classifiers are known to be highly vulnerable to adversarial perturbations of their inputs. Under the hypothesis that adversarial examples lie outside the sub-manifold of natural images, previous work has investigated the impact of the principal components of the data on adversarial robustness. In this paper we show that there exists a very simple defense mechanism in the case where adversarial images are separable in a previously defined $(k,p)$ metric. This defense is very successful against the popular Carlini-Wagner attack, but less so against some other common attacks such as FGSM. Interestingly, the defense remains successful even for relatively large perturbations.
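For readers unfamiliar with PCA-based defenses, the sketch below illustrates the general idea of projecting inputs onto the leading principal components of natural images before classification. This is only a minimal illustration under that assumption, not the memo's exact procedure; the function names, the choice of $k$, and the classifier call are hypothetical, and the memo's $(k,p)$ separability criterion is defined in the PDF below.

```python
# Minimal sketch of a PCA input-projection defense (illustrative only).
import numpy as np
from sklearn.decomposition import PCA


def fit_pca_defense(train_images: np.ndarray, k: int = 50) -> PCA:
    """Fit PCA on flattened natural training images of shape (n_samples, n_pixels)."""
    pca = PCA(n_components=k)
    pca.fit(train_images.reshape(len(train_images), -1))
    return pca


def project_input(pca: PCA, image: np.ndarray) -> np.ndarray:
    """Project an input onto the top-k principal components and reconstruct it,
    discarding the off-manifold directions where adversarial perturbations are
    hypothesized to concentrate."""
    flat = image.reshape(1, -1)
    return pca.inverse_transform(pca.transform(flat)).reshape(image.shape)


# Hypothetical usage: classify the reconstructed image instead of the raw input.
# logits = classifier(project_input(pca, possibly_adversarial_image))
```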

DSpace@MIT: https://hdl.handle.net/1721.1/141424

Download: CBMM-Memo-135.pdf
CBMM Memo No: 135

CBMM Relationship: CBMM Funded