Multi-level cross-view consistent feature learning for person re-identification

Yixiu Liu, Yunzhou Zhang, Bir Bhanu, Sonya Coleman, Dermot Kerr

Research output: Contribution to journal › Article › peer-review


Person re-identification plays an important role in searching for a specific person in a camera network with non-overlapping cameras. The most critical problem for re-identification is feature representation. In this paper, a multi-level cross-view consistent feature learning framework is proposed for person re-identification. First, local deep features, LOMO features, and SIFT features are extracted to form multi-level features. Specifically, local features from the lower and higher layers of a convolutional neural network (CNN) are extracted; these complement each other by capturing appearance and semantic properties, respectively. Second, ID-based cross-view multi-level dictionary learning (IDB-CMDL) is carried out to obtain a sparse and discriminant feature representation. Third, cross-view consistent word learning is performed to obtain cross-view consistent bag-of-visual-words (BoVW) histograms from the sparse feature representation. Finally, multi-level metric learning fuses the multiple BoVW histograms and learns the sample distance in the subspace for ranking. Experiments on the public CUHK03, Market1501, and DukeMTMC-ReID datasets show results that are superior to many state-of-the-art methods for person re-identification.
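To make the histogram-building step concrete, the following is a minimal sketch of how local descriptors can be quantized against a learned dictionary to form a normalized BoVW histogram. This is an illustrative simplification only: it uses hard nearest-atom assignment in place of the paper's sparse coding, and the function name `bovw_histogram` and all data shapes are hypothetical, not taken from the paper.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize local descriptors against a codebook and build an
    L1-normalized bag-of-visual-words (BoVW) histogram.
    Hypothetical simplification: hard nearest-atom assignment stands
    in for the paper's sparse-coding (IDB-CMDL) step."""
    # Pairwise distances: (num_descriptors, num_atoms)
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)  # assign each descriptor to its nearest atom
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # normalize so the histogram sums to 1

# Toy example: 100 random 32-D local descriptors, 16-atom codebook
rng = np.random.default_rng(0)
desc = rng.standard_normal((100, 32))
book = rng.standard_normal((16, 32))
h = bovw_histogram(desc, book)
```

In the full framework, one such histogram would be produced per feature level (deep, LOMO, SIFT) and the resulting histograms fused by the multi-level metric learning stage.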
Original language: English
Pages (from-to): 1-14
Number of pages: 14
Early online date: 12 Jan 2021
Publication status: Published - 7 May 2021


  • person re-identification
  • local deep features
  • ID-based cross-view multi-level dictionary learning
  • cross-view consistent word learning
  • multi-level metric learning


