Abstract
Deep learning offers state-of-the-art solutions for many computer vision tasks. However, deep models are also vulnerable to subtle input perturbations that can significantly change their predictions (i.e., adversarial attacks). The ultimate goal of adversarial attacks is to expose weaknesses of deep learning approaches so that models can be secured against any potential attack. In this dissertation, I aim to further expand the horizons of adversarial machine learning and computer vision by exploring non-traditional approaches. In addition, I contribute to other areas of machine learning, such as metrics for fairer benchmarking, explainability, and novel 3D texture generation.
Original language | English |
---|---|
Qualification | Doctor of Philosophy |
Awarding Institution | |
Supervisors/Advisors | |
Thesis sponsors | |
Award date | 10 Jun 2022 |
DOIs | |
Publication status | Unpublished - 2022 |