The standard implementation of the back-propagation training algorithm for multi-layer Perceptron (MLP) neural networks assumes that an equal number of samples is available for training each of the required classes. Where limited training data are available for one (or more) classes, sub-optimal performance may be obtained. We have demonstrated in a previous study [Parikh et al., 1999. Proceedings of Condition Monitoring 1999, Swansea, UK] that, where unequal training class sizes cannot be avoided, the performance of the classifier may be substantially improved by duplicating the available patterns in the smaller class. In this study, we investigate whether the addition of random noise to the 'duplicated' training patterns will further improve the classification performance. We conclude that the addition of noise does not give a consistent improvement in performance.
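As an illustration of the training-set manipulation described in the abstract, the sketch below duplicates minority-class patterns until the classes are balanced, optionally perturbing each duplicate with random noise. The function name, the zero-mean Gaussian noise model, and the `noise_std` parameter are illustrative assumptions; the paper does not specify its exact duplication or noise procedure here.

```python
import numpy as np

def balance_by_duplication(X, y, minority_label, noise_std=0.0, rng=None):
    """Oversample the minority class by duplicating its patterns.

    A minimal sketch of the idea in the abstract: duplicates are drawn
    (with replacement) from the smaller class and, if noise_std > 0,
    perturbed with zero-mean Gaussian noise. The Gaussian noise model
    is an assumption, not necessarily the authors' choice.
    """
    rng = np.random.default_rng() if rng is None else rng
    minority = X[y == minority_label]
    # Number of extra patterns needed to match the majority class size.
    deficit = int((y != minority_label).sum()) - minority.shape[0]
    if deficit <= 0:
        return X, y  # classes already balanced
    idx = rng.integers(0, minority.shape[0], size=deficit)
    duplicates = minority[idx]
    if noise_std > 0:
        duplicates = duplicates + rng.normal(0.0, noise_std, duplicates.shape)
    X_bal = np.vstack([X, duplicates])
    y_bal = np.concatenate([y, np.full(deficit, minority_label)])
    return X_bal, y_bal
```

Setting `noise_std=0.0` reproduces plain duplication (the earlier study's approach); a positive value corresponds to the noisy-duplication variant investigated here.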
Title of host publication: Unknown Host Publication
Number of pages: 6
Publication status: Published - 2000
Series: ADVANCES IN SOFT COMPUTING
Conference: SOFT COMPUTING TECHNIQUES AND APPLICATIONS
Period: 1 Jan 2000 → …
- Multi-layer Perceptron
- training algorithm
- condition monitoring
- fault diagnosis
Parikh, CR., Pont, MJ., Li, Y., & Jones, NB. (2000). Investigating the performance of MLP classifiers where limited training data are available for some classes. In Unknown Host Publication (pp. 22-27). (ADVANCES IN SOFT COMPUTING).