Semantic Camera Self-Aware Contrastive Learning for Unsupervised Vehicle Re-Identification

Xuefeng Tao, Jun Kong, Min Jiang, Xi Luo

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

Unsupervised vehicle re-identification (ReID) aims to retrieve vehicle images from different cameras without using identity labels. Patch features, which capture fine-grained semantic information of vehicles, are crucial for ReID. However, existing methods often fail to preserve the discriminative semantic structure of vehicles due to the non-uniformity of feature attributes across patches. Moreover, domain discrepancy among cameras requires attention, as it can cause large intra-class variance and noisy clustering results. To tackle these problems, in this letter, we propose a novel Semantic Camera Self-Aware Contrastive Learning (SCSCL) framework for unsupervised vehicle ReID. Firstly, we design the Semantic Self-Aware Contrastive (SSC) loss to perceive the semantic attributes of vehicle images from spatial transformer parameters, thereby enhancing the semantic representation of patch features. Secondly, we design the Camera Self-Aware Contrastive (CSC) loss to perceive cross-camera distance distributions and facilitate the exploration of instance constraints, thereby enabling cross-camera clustering-friendly representations. Finally, extensive experimental results on the VeRi-776 and VehicleID datasets attest to the efficacy of our method, which surpasses state-of-the-art performance.
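The abstract does not give the exact formulation of the CSC loss, but the idea of a camera-aware contrastive objective can be illustrated with a minimal sketch. The snippet below is a hypothetical PyTorch example, not the paper's actual loss: it computes a standard cluster-level InfoNCE term against centroid prototypes and re-weights each instance by the fraction of its same-cluster neighbours that come from other cameras, so that hard cross-camera positives contribute more. All function and variable names (e.g. `camera_aware_contrastive_loss`, `cluster_centroids`) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def camera_aware_contrastive_loss(features, pseudo_labels, camera_ids,
                                  cluster_centroids, temperature=0.05):
    """Hypothetical camera-aware cluster contrastive loss (not the paper's CSC loss).

    features:          (N, D) L2-normalized instance embeddings
    pseudo_labels:     (N,)   cluster assignments, e.g. from DBSCAN
    camera_ids:        (N,)   camera index of each instance
    cluster_centroids: (C, D) L2-normalized cluster centroids (memory bank)
    """
    # Standard InfoNCE term against cluster prototypes.
    logits = features @ cluster_centroids.t() / temperature          # (N, C)
    base_loss = F.cross_entropy(logits, pseudo_labels, reduction='none')

    # Fraction of each instance's cluster mates observed by a different camera;
    # instances with many cross-camera positives are weighted more heavily.
    same_cluster = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    cross_camera = camera_ids.unsqueeze(0) != camera_ids.unsqueeze(1)
    cross_cam_frac = ((same_cluster & cross_camera).float().sum(dim=1)
                      / same_cluster.float().sum(dim=1).clamp(min=1.0))
    weight = 1.0 + cross_cam_frac                                     # in [1, 2]

    return (weight * base_loss).mean()


if __name__ == "__main__":
    # Toy usage with random data.
    N, D, C = 8, 128, 3
    feats = F.normalize(torch.randn(N, D), dim=1)
    labels = torch.randint(0, C, (N,))
    cams = torch.randint(0, 4, (N,))
    centroids = F.normalize(torch.randn(C, D), dim=1)
    print(camera_aware_contrastive_loss(feats, labels, cams, centroids))
```

This weighting is only one plausible way to make a contrastive objective sensitive to cross-camera distance distributions; the SSC loss based on spatial transformer parameters would require the patch localization module described in the paper itself.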

Original language: English
Pages (from-to): 2175-2179
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 31
DOIs
Publication status: Published - 23 Aug 2024
