TY - GEN
T1 - Investigating the performance of MLP classifiers where limited training data are available for some classes
AU - Parikh, CR
AU - Pont, MJ
AU - Li, Yuhua
AU - Jones, NB
N1 - Workshop 99 on Recent Advances in Soft Computing, LEICESTER, ENGLAND, JUL 01-02, 1999
PY - 2000
Y1 - 2000
AB - The standard implementation of the back-propagation training algorithm for multi-layer Perceptron (MLP) neural networks assumes that an equal number of samples is available for training each of the required classes. Where limited training data are available for one (or more) classes, sub-optimal performance may be obtained. We have demonstrated in a previous study [Parikh et al., 1999. Proceedings of Condition Monitoring 1999, Swansea, UK] that, where unequal training classes cannot be avoided, the performance of the classifier may be substantially improved by duplicating the available patterns in the smaller class. In this study, we investigate whether the addition of random noise to the 'duplicated' training patterns will further improve the classification performance. From the study conducted here, we conclude that the addition of noise does not give a consistent improvement in performance.
KW - Multi-layer Perceptron
KW - training algorithm
KW - condition monitoring
KW - fault diagnosis
M3 - Conference contribution
T3 - ADVANCES IN SOFT COMPUTING
SP - 22
EP - 27
BT - Unknown Host Publication
T2 - SOFT COMPUTING TECHNIQUES AND APPLICATIONS
Y2 - 1 January 2000
ER -