Cross‐modality person re‐identification using hybrid mutual learning

Zhong Zhang, Qing Dong, Sen Wang, Shuang Liu*, Baihua Xiao, Tariq S. Durrani

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)
10 Downloads (Pure)

Abstract

Cross-modality person re-identification (Re-ID) aims to retrieve a query identity across red, green, blue (RGB) images and infrared (IR) images. Many approaches have been proposed to reduce the distribution gap between the RGB and IR modalities; however, they ignore the valuable collaborative relationship between the two modalities. Hybrid Mutual Learning (HML) for cross-modality person Re-ID is proposed, which builds this collaborative relationship by applying mutual learning to both local features and triplet relations. Specifically, HML contains local-mean mutual learning and triplet mutual learning, which transfer local representational knowledge and structural geometry knowledge, respectively, so as to reduce the gap between the RGB and IR modalities. Furthermore, Hierarchical Attention Aggregation is proposed to fuse local feature maps and local feature vectors, enriching the information fed to the classifier. Extensive experiments on two commonly used data sets, namely SYSU-MM01 and RegDB, verify the effectiveness of the proposed method.
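The mutual-learning idea described above can be illustrated with a minimal sketch. The following is not the paper's actual HML implementation; it assumes the common Deep Mutual Learning formulation, in which each modality branch produces class logits and the two branches are pulled toward agreement via a symmetric KL-divergence term (function names such as `mutual_learning_loss` are illustrative, not from the paper).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q) per sample, with a small epsilon for stability.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def mutual_learning_loss(logits_rgb, logits_ir):
    """Symmetric KL between the class distributions of the RGB and IR
    branches. Minimising this term pushes the two branches toward
    agreement, so knowledge flows in both directions (a hypothetical
    stand-in for the mutual-learning terms used in HML)."""
    p = softmax(logits_rgb)
    q = softmax(logits_ir)
    return float(np.mean(kl(p, q) + kl(q, p)))

# Identical predictions incur zero mutual loss; divergent ones are penalised.
z_rgb = np.array([[2.0, 0.5, -1.0]])
z_ir = np.array([[0.1, 1.5, -0.3]])
print(mutual_learning_loss(z_rgb, z_rgb))  # 0.0
print(mutual_learning_loss(z_rgb, z_ir) > 0.0)  # True
```

In practice this agreement term would be added to each branch's supervised losses (identity classification and triplet losses), so that each modality learns from both the labels and its peer branch.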

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: IET Computer Vision
Volume: 17
Issue number: 1
Early online date: 10 Jul 2022
DOIs
Publication status: Published - Feb 2023

Keywords

  • computer vision
  • person re-identification
  • hybrid mutual learning (HML)
  • red, green, blue (RGB)
  • infrared (IR)
  • cross-modality
  • person re-identification (Re-ID)
