TY - JOUR
T1 - Semantic Camera Self-Aware Contrastive Learning for Unsupervised Vehicle Re-Identification
AU - Tao, Xuefeng
AU - Kong, Jun
AU - Jiang, Min
AU - Luo, Xi
N1 - Publisher Copyright:
© 1994-2012 IEEE.
PY - 2024/8/23
Y1 - 2024/8/23
N2 - Unsupervised vehicle re-identification (ReID) aims to retrieve vehicle images from different cameras without using identity labels. Patch features, which capture fine-grained semantic information of vehicles, are crucial for ReID. However, existing methods often fail to preserve the discriminative semantic structure of vehicles due to the non-uniformity of feature attributes across patches. Moreover, domain discrepancy among cameras also requires attention, as it can cause large intra-class variance and noisy clustering results. To tackle these problems, in this letter we propose a novel Semantic Camera Self-Aware Contrastive Learning (SCSCL) framework for unsupervised vehicle ReID. Firstly, we design the Semantic Self-Aware Contrastive (SSC) loss, which perceives the semantic attributes of vehicle images from spatial transformer parameters, thereby enhancing the semantic representation of patch features. Secondly, we design the Camera Self-Aware Contrastive (CSC) loss, which perceives cross-camera distance distributions to facilitate the exploration of instance constraints, thereby yielding cross-camera clustering-friendly representations. Finally, extensive experiments on the VeRi-776 and VehicleID datasets demonstrate that our method outperforms state-of-the-art approaches.
KW - camera self-aware contrastive loss
KW - semantic self-aware contrastive loss
KW - Unsupervised vehicle re-identification
UR - http://www.scopus.com/inward/record.url?scp=85201772703&partnerID=8YFLogxK
U2 - 10.1109/LSP.2024.3449233
DO - 10.1109/LSP.2024.3449233
M3 - Article
AN - SCOPUS:85201772703
SN - 1070-9908
VL - 31
SP - 2175
EP - 2179
JO - IEEE Signal Processing Letters
JF - IEEE Signal Processing Letters
ER -