Hierarchical and multi-featured fusion for effective gait recognition under variable scenarios

Yanmei Chai, Jie Ren, Huimin Zhao, Yang Li, Jinchang Ren, Paul Murray

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)
85 Downloads (Pure)

Abstract

Human identification by gait analysis has attracted a great deal of interest in the computer vision and forensics communities as an unobtrusive technique capable of recognizing humans at range. In recent years, significant progress has been made, and a number of approaches to this task have been proposed and developed. Among them, approaches based on single-source features are the most popular. However, the recognition rate of these methods is often unsatisfactory due to the limited information contained in a single feature source. Consequently, in this paper, a hierarchical and multi-featured fusion approach is proposed for effective gait recognition. In practice, using more features for fusion does not necessarily yield a better recognition rate; features should in fact be carefully selected so that they are complementary to each other. Here, complementary features are extracted in three groups: Dynamic Region Area features; Extension and Space features; and 2D Stick Figure Model features. To balance the proportion of features used in fusion, a hierarchical feature-level fusion method is proposed. Comprehensive results of applying the proposed techniques to three well-known datasets demonstrate that our fusion-based approach improves the overall recognition rate when compared to a benchmark algorithm.
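The abstract describes balancing the proportion of features from several sources before fusing them at feature level. As an illustrative sketch only, the idea can be shown as per-group normalisation followed by weighted concatenation; the paper's actual feature definitions, weights, and fusion hierarchy are not given here, so the group names, example values, and weighting scheme below are assumptions.

```python
import numpy as np

def hierarchical_fusion(groups, weights):
    """Sketch of hierarchical feature-level fusion: each feature group is
    z-score normalised so no single source dominates, then weighted and
    concatenated into one fused gait descriptor."""
    fused = []
    for g, w in zip(groups, weights):
        g = np.asarray(g, dtype=float)
        std = g.std() or 1.0          # guard against a constant group
        fused.append(w * (g - g.mean()) / std)
    return np.concatenate(fused)

# Three hypothetical feature groups (values are made up for illustration):
dynamic_region = [0.8, 0.6, 0.7]      # Dynamic Region Area features
extension_space = [1.2, 0.9]          # Extension and Space features
stick_figure = [0.3, 0.4, 0.5, 0.2]   # 2D Stick Figure Model features

vec = hierarchical_fusion(
    [dynamic_region, extension_space, stick_figure],
    weights=[0.4, 0.3, 0.3],          # assumed group weights
)
print(vec.shape)  # (9,)
```

In a recognition pipeline, the fused descriptor would then be matched against gallery descriptors, e.g. by nearest-neighbour distance.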
Original language: English
Number of pages: 13
Journal: Pattern Analysis and Applications
Early online date: 26 Mar 2015
DOIs
Publication status: Published - 2015

Keywords

  • gait recognition
  • hierarchical and multi-featured fusion
  • extension and space features
  • 2D stick figure model
  • dynamic region area

