This dissertation makes four major contributions to the understanding of deep visual models. First, it develops a model-centric technique that examines the internal representation of a learned classifier. Second, it introduces an adversarial attack algorithm with explicit control over the input and output domains. Third, it proposes a prior-free technique for estimating high-resolution, input-centric saliency maps. Finally, it presents an algorithm that increases model robustness to adversarial perturbations. Extensive experiments demonstrate state-of-the-art performance of the proposed methods and their utility in a range of practical applications.
| Field | Value |
| --- | --- |
| Qualification | Doctor of Philosophy |
| Award date | 26 Aug 2021 |
| Publication status | Unpublished - 2021 |