Identity Adaptation for Person Re-identification

Qiuhong Ke, Mohammed Bennamoun, Hossein Rahmani, Senjian An, Ferdous Sohel, Farid Boussaid

Research output: Contribution to journal › Article

Abstract

Person re-identification (re-ID), which aims to identify the same individual from a gallery collected with different cameras, has attracted increasing attention in the multimedia retrieval community. Current deep learning methods for person re-ID focus on learning classification models on the training identities to obtain an ID-discriminative Embedding (IDE) extractor, which is then used to extract features from testing images for re-ID. However, the IDE features of the testing identities might not be discriminative, because the training identities differ from the testing identities. In this paper, we introduce a new ID-Adaptation Network (ID-AdaptNet), which aims to improve the discriminative power of the IDE features of the testing identities for better person re-ID. The main idea of the ID-AdaptNet is to transform the IDE features into a common discriminative latent space, in which the representations of the ‘seen’ training identities are enforced to adapt to those of the ‘unseen’ training identities. More specifically, the ID-AdaptNet is trained by simultaneously minimizing the classification cross-entropy and the discrepancy between the ‘seen’ and ‘unseen’ training identities in the hidden space. To compute this discrepancy, we represent the two probability distributions as moment sequences and measure their distance using their central moments. We further propose a Stacking ID-AdaptNet, which jointly trains multiple ID-AdaptNets with a regularization method for better re-ID. Experiments show that the ID-AdaptNet and the Stacking ID-AdaptNet effectively improve the discriminative power of IDE features.
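
The training objective described in the abstract combines classification cross-entropy on the ‘seen’ training identities with a central-moment discrepancy between the hidden representations of the ‘seen’ and ‘unseen’ training identities. The following is a minimal PyTorch sketch of such a joint loss. It is an illustration only, not the authors' implementation: the function names, the number of matched moments, and the trade-off weight lam are assumptions, and the paper's exact discrepancy formulation and network layout may differ.

import torch
import torch.nn.functional as F


def central_moment_discrepancy(h_seen, h_unseen, num_moments=3):
    """Moment-based distance between two batches of hidden representations:
    match the means first, then central moments of order 2..num_moments."""
    mean_s = h_seen.mean(dim=0)
    mean_u = h_unseen.mean(dim=0)
    d = torch.norm(mean_s - mean_u, p=2)  # first moment (mean) matching
    for k in range(2, num_moments + 1):   # higher-order central moments
        cm_s = ((h_seen - mean_s) ** k).mean(dim=0)
        cm_u = ((h_unseen - mean_u) ** k).mean(dim=0)
        d = d + torch.norm(cm_s - cm_u, p=2)
    return d


def id_adapt_loss(logits_seen, labels_seen, h_seen, h_unseen, lam=1.0):
    """Joint objective: cross-entropy on the 'seen' identities plus lam times the
    discrepancy between 'seen' and 'unseen' hidden representations.
    lam is a hypothetical trade-off weight, not taken from the paper."""
    ce = F.cross_entropy(logits_seen, labels_seen)
    return ce + lam * central_moment_discrepancy(h_seen, h_unseen)

In this sketch, h_seen and h_unseen would be the hidden-layer outputs of the adaptation network for a batch of IDE features from each of the two identity groups, and logits_seen the classifier outputs for the ‘seen’ batch.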

Original language: English
Article number: 8452946
Pages (from-to): 48147-48155
Journal: IEEE Access
Volume: 6
DOI: 10.1109/ACCESS.2018.2867898
Publication status: Published - 30 Aug 2018

Cite this

Ke, Qiuhong; Bennamoun, Mohammed; Rahmani, Hossein; An, Senjian; Sohel, Ferdous; Boussaid, Farid. Identity Adaptation for Person Re-identification. In: IEEE Access. 2018; Vol. 6, pp. 48147-48155.
@article{873915cc711247d7b9739f4011f275ae,
title = "Identity Adaptation for Person Re-identification",
abstract = "Person re-identification (re-ID), which aims to identify the same individual from a gallery collected with different cameras, has attracted increasing attention in the multimedia retrieval community. Current deep learning methods for person re-identification (re-ID) focus on learning classification models on training identities to obtain a ID-discriminative Embedding (IDE) extractor, which is used to extract features from testing images for re-ID. The IDE features of the testing identities might not be discriminative due to that the training identities are different from the testing identities. In this paper, we introduce a new ID-Adaptation Network (ID-AdaptNet), which aims to improve the discriminative power of the IDE features of the testing identities for better person re-ID. The main idea of the ID-AdaptNet is to transform the IDE features to a common discriminative latent space, where the representations of the ‘seen’ training identities are enforced to adapt to those of the ‘unseen’ training identities. More specifically, the ID-AdaptNet is trained by simultaneously minimizing the classification cross-entropy and the discrepancy between the ‘seen’ and the ‘unseen’ training identities in the hidden space. To calculate the discrepancy, we represent their probability distributions as moment sequences and calculate their distance using their central moments. We further propose a Stacking ID-AdaptNet that jointly trains multiple ID-AdaptNets with a regularization method for better re-ID. Experiments show that the ID-AdaptNet and Stacking ID-AdaptNet effectively improve the discriminative power of IDE features.",
keywords = "Australia, Feature extraction, ID Adaptation, Image color analysis, Moment Matching, Person re-identification, Stacking, Task analysis, Testing, Training",
author = "Qiuhong Ke and Mohammed Bennamoun and Hossein Rahmani and Senjian An and Ferdous Sohel and Farid Boussaid",
year = "2018",
month = "8",
day = "30",
doi = "10.1109/ACCESS.2018.2867898",
language = "English",
volume = "6",
pages = "48147--48155",
journal = "IEEE Access",
issn = "2169-3536",
publisher = "IEEE, Institute of Electrical and Electronics Engineers",

}
