Abstract
Deep neural network models are vulnerable to adversarial perturbations that are subtle but change the model predictions. Adversarial perturbations are generally computed for RGB images and are, hence, equally distributed among the RGB channels. We show, for the first time, that adversarial perturbations prevail in the Y-channel of the YCbCr color space and exploit this finding to propose a defense mechanism. Our defense, ResUpNet, which is end-to-end trainable, removes perturbations only from the Y-channel by exploiting ResNet features in a bottleneck-free up-sampling framework. The refined Y-channel is combined with the untouched CbCr-channels to restore the clean image. We compare ResUpNet to existing defenses in the input transformation category and show that it achieves the best balance between maintaining the original accuracies on clean images and defending against adversarial attacks. Finally, we show that for the same attack and fixed perturbation magnitude, learning perturbations only in the Y-channel results in higher fooling rates. For example, with a very small perturbation magnitude (epsilon = 0.002), the fooling rates of FGSM and PGD attacks on the ResNet50 model increase by 11.1% and 15.6%, respectively, when the perturbations are learned only for the Y-channel.
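The core idea of the defense described in the abstract is to split an image into luminance (Y) and chrominance (Cb, Cr), denoise only the Y-channel, and recombine. A minimal per-pixel sketch, assuming full-range ITU-R BT.601 conversion formulas; `refine_y` is a hypothetical placeholder for the paper's learned ResUpNet denoiser (here just the identity):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr for one pixel (values in 0..255)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse full-range BT.601 YCbCr -> RGB for one pixel."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b

def refine_y(y):
    # Stand-in for the learned denoiser (ResUpNet in the paper);
    # identity here, so the round trip is (numerically) exact.
    return y

def defend_pixel(r, g, b):
    """Remove perturbations from Y only; Cb/Cr pass through untouched."""
    y, cb, cr = rgb_to_ycbcr(r, g, b)
    return ycbcr_to_rgb(refine_y(y), cb, cr)
```

With the identity `refine_y`, `defend_pixel` reproduces its input up to floating-point error, which makes the channel-splitting scaffold easy to verify before plugging in an actual denoiser.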
Original language | English |
---|---|
Title of host publication | IJCNN 2021 - International Joint Conference on Neural Networks, Proceedings |
Publisher | IEEE, Institute of Electrical and Electronics Engineers |
ISBN (Electronic) | 9780738133669 |
DOIs | |
Publication status | Published - 18 Jul 2021 |
Event | 2021 International Joint Conference on Neural Networks, IJCNN 2021 - Virtual, Shenzhen, China Duration: 18 Jul 2021 → 22 Jul 2021 |
Publication series
Name | Proceedings of the International Joint Conference on Neural Networks |
---|---|
Volume | 2021-July |
Conference
Conference | 2021 International Joint Conference on Neural Networks, IJCNN 2021 |
---|---|
Country/Territory | China |
City | Virtual, Shenzhen |
Period | 18/07/21 → 22/07/21 |
Fingerprint
Dive into the research topics of 'Adversarial Attacks and Defense on Deep Learning Classification Models using YCbCr Color Images'. Together they form a unique fingerprint.
Projects (1 Finished)
- Defense against adversarial attacks on deep learning in computer vision
Mian, A. (Investigator 01)
ARC Australian Research Council
1/01/19 → 31/03/24
Project: Research