Separability-Oriented Subclass Discriminant Analysis

Huan Wan, Hui Wang, Gongde Guo, Xin Wei

Research output: Contribution to journal › Article


Abstract

Linear discriminant analysis (LDA) is a classical method for discriminative dimensionality reduction. The original LDA may degrade in performance on non-Gaussian data, and may be unable to extract sufficient features to satisfactorily explain the data when the number of classes is small. Two prominent extensions that address these problems are subclass discriminant analysis (SDA) and mixture subclass discriminant analysis (MSDA). They divide every class into subclasses and re-define the within-class and between-class scatter matrices on the basis of subclasses. In this paper, we study how to obtain subclasses more effectively in order to achieve higher class separation. We observe that there is significant overlap between models of the subclasses, which we hypothesise is undesirable. To reduce this overlap, we propose an extension of LDA, separability-oriented subclass discriminant analysis (SSDA), which employs hierarchical clustering to divide a class into subclasses using a separability-oriented criterion, before applying LDA optimisation with re-defined scatter matrices. Extensive experiments have shown that SSDA outperforms LDA, SDA and MSDA in most cases. Additional experiments have further shown that SSDA can project data into an LDA space with higher class separation than LDA, SDA and MSDA in most cases.
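The subclass-based scatter construction the abstract describes can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the paper's method: the crude split along each class's first principal direction below merely takes the place of SSDA's separability-oriented hierarchical clustering, and the between-subclass scatter uses a common SDA-style formulation (subclass means against the global mean).

```python
import numpy as np

def subclass_lda(X, y, n_subclasses=2, n_components=1):
    """Sketch of subclass-based discriminant analysis.

    Each class is split into subclasses (here a simple split along the
    class's first principal direction, standing in for the paper's
    separability-oriented hierarchical clustering), then the standard
    LDA eigenproblem is solved on subclass-level scatter matrices.
    """
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-subclass scatter
    Sb = np.zeros((d, d))  # between-subclass scatter
    for c in np.unique(y):
        Xc = X[y == c]
        # Stand-in clustering: order samples along the first principal
        # direction of the class and cut into equal-size chunks.
        centered = Xc - Xc.mean(axis=0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        order = np.argsort(centered @ Vt[0])
        for chunk in np.array_split(order, n_subclasses):
            Xs = Xc[chunk]
            ms = Xs.mean(axis=0)
            Sw += (Xs - ms).T @ (Xs - ms)
            diff = (ms - mean_all)[:, None]
            Sb += len(Xs) * (diff @ diff.T)
    # Generalized eigenproblem Sb w = lambda Sw w, lightly regularised
    # for numerical stability; keep the top eigenvectors.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    top = np.argsort(-evals.real)[:n_components]
    return evecs.real[:, top]
```

The returned matrix projects data into the discriminant space, e.g. `Z = X @ subclass_lda(X, y)`; SSDA's contribution lies in choosing the subclass partition so that the resulting subclasses overlap less than those found by the clustering steps of SDA or MSDA.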
Original language: English
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 39
Early online date: 22 Feb 2017
DOIs
Publication status: E-pub ahead of print - 22 Feb 2017

Keywords

  • Dimensionality reduction
  • feature extraction
  • linear discriminant analysis
  • subclass discriminant analysis
  • classification