Abstract
Recent advances in Deep Learning show the existence of image-agnostic quasi-imperceptible perturbations that, when applied to 'any' image, can fool a state-of-the-art network classifier into changing its prediction about the image label. These 'Universal Adversarial Perturbations' pose a serious threat to the success of Deep Learning in practice. We present the first dedicated framework to effectively defend networks against such perturbations. Our approach learns a Perturbation Rectifying Network (PRN) as 'pre-input' layers to a targeted model, so that the targeted model itself needs no modification. The PRN is learned from real and synthetic image-agnostic perturbations, and an efficient method to compute the latter is also proposed. A perturbation detector is separately trained on the Discrete Cosine Transform of the input-output difference of the PRN. A query image is first passed through the PRN and verified by the detector. If a perturbation is detected, the output of the PRN is used for label prediction instead of the actual image. A rigorous evaluation shows that our framework can defend network classifiers against unseen adversarial perturbations in real-world scenarios with up to 97.5% success rate. The PRN also generalizes well in the sense that training it for one targeted network defends another network with a comparable success rate.
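The abstract describes a test-time pipeline: rectify the query with the PRN, feed the DCT of the input-output difference to a detector, and classify the rectified image only if a perturbation is flagged. The following is a minimal sketch of that flow, assuming pre-trained `prn`, `detector`, and `classifier` callables; these names and signatures are hypothetical and not taken from the paper's code.

```python
# Sketch of the defense pipeline described in the abstract (assumed interfaces).
import numpy as np
from scipy.fft import dctn  # 2-D Discrete Cosine Transform


def defended_predict(image, prn, detector, classifier):
    """Classify `image`, rectifying it first if a universal perturbation is detected.

    image      : H x W x C float array, the query image.
    prn        : callable, Perturbation Rectifying Network (image -> rectified image).
    detector   : callable, binary detector over DCT features (features -> bool).
    classifier : callable, the unmodified targeted network (image -> label).
    """
    rectified = prn(image)                      # pass the query through the PRN 'pre-input' layers
    residual = rectified - image                # input-output difference of the PRN
    dct_features = dctn(residual, axes=(0, 1))  # per-channel 2-D DCT of the residual
    perturbed = detector(dct_features)          # separately trained perturbation detector
    # If a perturbation is detected, predict on the PRN output; otherwise on the original image.
    return classifier(rectified if perturbed else image)
```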
Original language | English |
---|---|
Title of host publication | IEEE Conference on Computer Vision and Pattern Recognition |
Place of Publication | United States |
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
Pages | 3389-3398 |
Number of pages | 10 |
ISBN (Electronic) | 9781538664209 |
ISBN (Print) | 9781538664216 |
DOIs | |
Publication status | Published - 18 Jun 2018 |
Event | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, United States. Duration: 18 Jun 2018 → 23 Jun 2018 |
Conference
Conference | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
---|---|
Country/Territory | United States |
City | Salt Lake City |
Period | 18/06/18 → 23/06/18 |
Projects
1 Finished

- Defense against adversarial attacks on deep learning in computer vision
  1/01/19 → 17/02/22
  Project: Research