TY - JOUR
T1 - Generative Metric Learning for Adversarially Robust Open-world Person Re-Identification
AU - Liu, Deyin
AU - Wu, Lin Yuanbo
AU - Hong, Richang
AU - Ge, Zongyuan
AU - Shen, Jialie
AU - Boussaid, Farid
AU - Bennamoun, Mohammed
N1 - Funding Information:
This work was funded by Australian Research Council (Grants DP150100294 and DP150104251). Lin (Yuanbo) Wu was partially supported by NSFC U19A2073, 62002096. This work was also partially supported by Co-operative Innovation Project of Colleges in Anhui (GXXT-2019-025).
Publisher Copyright:
© 2023 Association for Computing Machinery.
PY - 2023/1/5
Y1 - 2023/1/5
N2 - The vulnerability of re-identification (re-ID) models to adversarial attacks is of significant concern, as criminals may use adversarial perturbations to evade surveillance systems. Unlike a closed-world re-ID setting (i.e., a fixed number of training categories), a reliable re-ID system in the open world requires training a robust yet discriminative classifier that remains reliable when presented with unknown examples of an identity. In this work, we improve the robustness of open-world re-ID models by proposing a generative metric learning approach that generates adversarial examples which are regularized to produce a robust distance metric. The proposed approach leverages the expressive capability of generative adversarial networks to defend re-ID models against feature disturbance attacks. By generating variants of the target person and sampling triplet units for metric learning, the learned distance metrics are regularized to produce accurate predictions in the feature metric space. Experimental results on three re-ID datasets (Market-1501, DukeMTMC-reID, and MSMT17) demonstrate the robustness of our method.
KW - Adversarial attack
KW - generative metric learning
KW - open-world person re-identification
KW - robust models
UR - http://www.scopus.com/inward/record.url?scp=85148016451&partnerID=8YFLogxK
U2 - 10.1145/3522714
DO - 10.1145/3522714
M3 - Article
AN - SCOPUS:85148016451
SN - 1551-6857
VL - 19
JO - ACM Transactions on Multimedia Computing, Communications and Applications
JF - ACM Transactions on Multimedia Computing, Communications and Applications
IS - 1
M1 - 20
ER -