Efficient Detection of Pixel-Level Adversarial Attacks

Research output: Chapter in Book/Conference proceedings › Conference paper › peer-review

4 Citations (Scopus)


Deep learning has achieved unprecedented performance in object recognition and scene understanding. However, deep models have also been found vulnerable to adversarial attacks. Of particular relevance to robotic systems are pixel-level attacks that can completely fool a neural network by altering very few pixels (e.g. 1-5) in an image. We present the first technique to detect the presence of adversarial pixels in images for robotic systems, employing an Adversarial Detection Network (ADNet). The proposed network efficiently recognizes an input as adversarial or clean by discriminating the peculiar activation signals of adversarial samples from those of clean ones. It acts as a defense mechanism for the robotic vision system by detecting and rejecting adversarial samples. We thoroughly evaluate our technique on three benchmark datasets: CIFAR-10, CIFAR-100 and Fashion MNIST. Results demonstrate effective detection of adversarial samples by ADNet.
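The abstract describes a reject-or-classify workflow: summarize the activation signals an input produces, score that summary, and reject the input if it looks adversarial. The sketch below illustrates that workflow in plain Python with a hand-rolled linear scorer standing in for the learned ADNet; all function names, statistics, and weights here are illustrative assumptions, not details from the paper.

```python
import math
from statistics import mean, pstdev

def activation_signature(activations):
    """Summarize raw activation values into simple statistics.
    (A crude stand-in for the activation features a detector network learns.)"""
    return [mean(activations), pstdev(activations), max(activations)]

def is_adversarial(signature, weights, bias, threshold=0.5):
    """Hypothetical linear detector: sigmoid-score the signature and
    flag the input as adversarial when the score exceeds the threshold."""
    logit = sum(w * s for w, s in zip(weights, signature)) + bias
    score = 1.0 / (1.0 + math.exp(-logit))
    return score > threshold

def guarded_predict(activations, classify, weights, bias):
    """Defense wrapper: only inputs the detector deems clean
    are passed on to the downstream classifier."""
    if is_adversarial(activation_signature(activations), weights, bias):
        return None  # reject the suspected adversarial sample
    return classify(activations)
```

In the paper the scorer is itself a neural network trained on activations of clean versus adversarial samples; the wrapper structure (detect first, classify only if clean) is the part this sketch is meant to convey.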

Original language: English
Title of host publication: 2020 IEEE International Conference on Image Processing, ICIP 2020 - Proceedings
Place of publication: United Arab Emirates
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 5
ISBN (Electronic): 9781728163956
Publication status: Published - Oct 2020
Event: 2020 IEEE International Conference on Image Processing - Virtual, Abu Dhabi, United Arab Emirates
Duration: 25 Sept 2020 to 28 Sept 2020
Conference number: 27th

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880


Conference: 2020 IEEE International Conference on Image Processing
Abbreviated title: ICIP 2020
Country/Territory: United Arab Emirates
City: Virtual, Abu Dhabi


