Investigating the performance of MLP classifiers where limited training data are available for some classes

CR Parikh, MJ Pont, Yuhua Li, NB Jones

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The standard implementation of the back-propagation training algorithm for multi-layer Perceptron (MLP) neural networks assumes that an equal number of samples is available for training each of the required classes. Where limited training data are available for one (or more) classes, sub-optimal performance may be obtained. We demonstrated in a previous study [Parikh et al., 1999. Proceedings of Condition Monitoring 1999, Swansea, UK] that, where unequal training class sizes cannot be avoided, the performance of the classifier may be substantially improved by duplicating the available patterns in the smaller class. In this study, we investigate whether the addition of random noise to the 'duplicated' training patterns will further improve the classification performance. We conclude that the addition of noise does not give a consistent improvement in performance.
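For readers unfamiliar with the duplication scheme the abstract describes, the sketch below illustrates one plausible implementation in Python/NumPy: the minority class is oversampled by duplicating its patterns until the classes are balanced, with optional noise added to each duplicate. The function name, the uniform-duplication scheme, and the Gaussian noise model (the abstract says only "random noise") are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def balance_by_duplication(X, y, minority_label, noise_std=0.0, rng=None):
    """Oversample the minority class by duplicating its training patterns,
    optionally jittering each duplicate with zero-mean Gaussian noise.

    Illustrative sketch only; the noise distribution and duplication
    scheme are assumptions, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    minority = X[y == minority_label]
    # Number of extra minority patterns needed to match the majority count.
    deficit = int(np.sum(y != minority_label)) - len(minority)
    if deficit <= 0:
        return X, y  # classes already balanced
    # Choose (with replacement) which minority patterns to duplicate.
    idx = rng.integers(0, len(minority), size=deficit)
    dupes = minority[idx]
    if noise_std > 0.0:
        dupes = dupes + rng.normal(0.0, noise_std, size=dupes.shape)
    X_bal = np.vstack([X, dupes])
    y_bal = np.concatenate([y, np.full(deficit, minority_label)])
    return X_bal, y_bal
```

With `noise_std=0.0` this reproduces plain duplication; a positive `noise_std` corresponds to the noisy-duplication variant the paper evaluates and finds to give no consistent improvement.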
Original language: English
Title of host publication: Unknown Host Publication
Pages: 22-27
Number of pages: 6
Publication status: Published (in print/issue) - 2000
Event: SOFT COMPUTING TECHNIQUES AND APPLICATIONS
Duration: 1 Jan 2000 → …

Publication series

Name: ADVANCES IN SOFT COMPUTING

Conference

Conference: SOFT COMPUTING TECHNIQUES AND APPLICATIONS
Period: 1/01/00 → …

Bibliographical note

Workshop 99 on Recent Advances in Soft Computing, Leicester, England, 1-2 July 1999

Keywords

  • Multi-layer Perceptron
  • training algorithm
  • condition monitoring
  • fault diagnosis
