Deep Neural Networks are brittle in that small changes in the input can drastically alter their prediction outcome and confidence. Consequently, research in this area has mainly focused on adversarial attacks and defenses. In this paper, we take an alternative stance and introduce the concept of Assistive Signals: perturbations optimized to improve a model's confidence score whether or not the model is under attack. We analyze some interesting properties of these assistive perturbations and extend the idea to optimize them in 3D space, simulating different lighting conditions and viewing angles. Experimental evaluations show that the assistive signals generated by our optimization method increase the accuracy and confidence of deep models more than those generated by conventional methods that operate in 2D space. 'Assistive Signals' also illustrate the bias of ML models towards certain patterns in real-life objects.
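The core idea of an assistive signal can be sketched as the mirror image of an adversarial attack: instead of perturbing the input to *lower* the correct-class confidence, gradient ascent is used to *raise* it under a small perturbation budget. The sketch below is not the authors' implementation; it uses a hypothetical toy linear-softmax classifier (`W`) purely to illustrate the optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # hypothetical 3-class linear classifier
x = rng.normal(size=8)        # input to assist
y = 1                         # correct class label

def softmax(z):
    z = z - z.max()           # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def confidence(x_in):
    """Model's confidence in the correct class y."""
    return softmax(W @ x_in)[y]

# Optimize an additive "assistive signal" delta by projected gradient
# ascent on log-confidence, clipped to a small per-element budget eps.
delta = np.zeros_like(x)
eps, lr = 0.5, 0.1
for _ in range(100):
    p = softmax(W @ (x + delta))
    grad = W[y] - p @ W       # d/dx of log p_y for a linear-softmax model
    delta = np.clip(delta + lr * grad, -eps, eps)
```

After optimization, `confidence(x + delta)` exceeds `confidence(x)`; in the paper's 3D setting, the analogous perturbation is applied to object texture or appearance rather than directly to image pixels.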
Title of host publication: Proceedings - 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021
Place of publication: USA
Publisher: IEEE, Institute of Electrical and Electronics Engineers
Number of pages: 5
Publication status: Published - Jun 2021
Event: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2021 - Virtual, Online, United States, 19 Jun 2021 → 25 Jun 2021
Series: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops