Integration graph attention network and multi-centre constrained loss for cross-modality person re‐identification

Di He, Jingrui Zhang, Zhong Zhang, Shuang Liu, Tariq S. Durrani

Research output: Contribution to journal › Article › peer-review


Abstract

Cross-modality person re-identification is a challenging task due to the large difference in visual appearance between RGB and infrared images. Existing studies mainly focus on learning local features and ignore the correlations among them. In this paper, the Integration Graph Attention Network is proposed to learn the complete correlations among local features via a graph structure. To this end, the authors learn coarse-fine attention weights to aggregate the local features by considering both local detail and global information. Furthermore, the Multi-Centre Constrained Loss is proposed to optimise feature similarity by constraining the centres of modality and identity. It simultaneously applies three kinds of centre constraints, namely the intra-identity centre constraint, the modality centre constraint, and the inter-identity centre constraint, in order to explicitly reduce the influence of modality information. The proposed method is evaluated on two standard benchmark datasets, SYSU-MM01 and RegDB, and the results demonstrate that it outperforms state-of-the-art methods, for example surpassing NFS by 4.8% and 6.0% mAP under the single-shot setting in the All-search and Indoor-search modes, respectively.
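The three centre constraints can be pictured with a short sketch. The code below is not the authors' implementation; it is a minimal PyTorch illustration of a multi-centre style loss, assuming a PK-sampled batch in which every identity appears in both modalities, squared-Euclidean pulls for the intra-identity and modality terms, and a margin-based push (hypothetical `margin` parameter) for the inter-identity term.

```python
import torch
import torch.nn.functional as F

def multi_centre_constrained_loss(feats, labels, modalities, margin=0.3):
    """Sketch of a centre-based loss with three constraints (assumed form).

    feats:       (N, D) L2-normalised embeddings from one mini-batch
    labels:      (N,)   person identity labels
    modalities:  (N,)   0 = RGB, 1 = infrared
    Assumes every identity in the batch has samples from both modalities.
    """
    ids = torch.unique(labels)

    id_centres, rgb_centres, ir_centres = [], [], []
    for pid in ids:
        mask = labels == pid
        id_centres.append(feats[mask].mean(dim=0))                      # identity centre
        rgb_centres.append(feats[mask & (modalities == 0)].mean(dim=0))  # RGB centre
        ir_centres.append(feats[mask & (modalities == 1)].mean(dim=0))   # IR centre

    id_centres = torch.stack(id_centres)    # (P, D)
    rgb_centres = torch.stack(rgb_centres)  # (P, D)
    ir_centres = torch.stack(ir_centres)    # (P, D)

    # Intra-identity centre constraint: samples stay close to their identity centre.
    intra = torch.stack([
        (feats[labels == pid] - id_centres[i]).pow(2).sum(dim=1).mean()
        for i, pid in enumerate(ids)
    ]).mean()

    # Modality centre constraint: RGB and IR centres of the same identity coincide.
    modality = (rgb_centres - ir_centres).pow(2).sum(dim=1).mean()

    # Inter-identity centre constraint: centres of different identities stay apart.
    dists = torch.cdist(id_centres, id_centres)                       # (P, P)
    off_diag = ~torch.eye(len(ids), dtype=torch.bool, device=feats.device)
    inter = F.relu(margin - dists[off_diag]).mean()

    return intra + modality + inter
```

In this sketch the first two terms pull samples toward their identity centre and align the two modality centres of each identity, while the third pushes centres of different identities apart; the paper's exact weighting and distance definitions may differ.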
Original language: English
Number of pages: 12
Journal: IET Computer Vision
Early online date: 9 Aug 2022
DOIs
Publication status: E-pub ahead of print - 9 Aug 2022

Keywords

  • person re-identification
  • Multi-Centre Constrained Loss
  • Integration Graph Attention Network
