Generative Metric Learning for Adversarially Robust Open-world Person Re-Identification

Deyin Liu, Lin Yuanbo Wu, Richang Hong, Zongyuan Ge, Jialie Shen, Farid Boussaid, Mohammed Bennamoun

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

The vulnerability of re-identification (re-ID) models to adversarial attacks is of significant concern, as criminals may use adversarial perturbations to evade surveillance systems. Unlike the closed-world re-ID setting (i.e., a fixed number of training categories), a reliable open-world re-ID system requires training a robust yet discriminative classifier that remains robust when confronted with unknown examples of an identity. In this work, we improve the robustness of open-world re-ID models by proposing a generative metric learning approach that generates adversarial examples, which are regularized to produce a robust distance metric. The proposed approach leverages the expressive capability of generative adversarial networks to defend re-ID models against feature-disturbance attacks. By generating variants of the target people and sampling triplet units for metric learning, the learned distance metrics are regularized to produce accurate predictions in the feature metric space. Experimental results on three re-ID datasets, i.e., Market-1501, DukeMTMC-reID, and MSMT17, demonstrate the robustness of our method.
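The abstract describes sampling triplet units (anchor, positive, negative) for metric learning, where generated variants of an identity can serve as positives. As a rough illustration of the underlying triplet mechanism only (not the paper's actual method — the function name, embeddings, and margin value below are all hypothetical), a standard triplet margin loss can be sketched as:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet margin loss on feature embeddings: encourages the
    anchor-positive distance to be smaller than the anchor-negative
    distance by at least `margin` (margin=0.3 is an arbitrary choice)."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

# Toy 2-D embeddings; the "positive" stands in for a generated
# variant of the same identity as the anchor.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])   # generated variant, same identity
easy_neg = np.array([0.0, 1.0])   # far-away identity: loss is zero
hard_neg = np.array([0.8, 0.3])   # nearby identity: loss is positive

easy_loss = triplet_margin_loss(anchor, positive, easy_neg)
hard_loss = triplet_margin_loss(anchor, positive, hard_neg)
```

Hard negatives (identities close to the anchor in feature space) are the ones that produce a non-zero loss and thus drive the metric to separate them, which is why triplet sampling matters in re-ID training.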

Original language: English
Article number: 20
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
Volume: 19
Issue number: 1
DOIs
Publication status: Published - 5 Jan 2023

