Abstract
Ensemble classifiers have been investigated widely in the artificial intelligence and machine learning community. Majority voting and weighted majority voting are two commonly used combination schemes in ensemble learning, yet our understanding of them is incomplete at best, and some of their properties are even misunderstood. In this paper, we formally present a group of properties of these two schemes under a geometric framework. Two key factors, the performance of each component classifier and the dissimilarity between each pair of component classifiers, are evaluated by the same metric: the Euclidean distance. Consequently, ensembling becomes a deterministic problem, and the performance of an ensemble can be calculated directly by a formula. We prove several theorems of interest and explain their implications for ensembles. In particular, we compare and contrast the effect of the number of component classifiers on the two schemes, discuss several important properties of both, and present a method for calculating the optimal weights for weighted majority voting. An empirical investigation is conducted to verify the theoretical results. We believe the results of this paper are useful for understanding the fundamental properties of these two combination schemes and the principles of ensemble classifiers in general. They are also helpful for investigating related issues in ensemble classifiers, such as ensemble performance prediction, diversity, and ensemble pruning.
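The geometric view summarised above can be illustrated with a minimal sketch. It assumes, purely as an illustration and not as the paper's exact formulation, that each component classifier is represented by its vector of ±1 predictions on n samples, so that both a classifier's error and the dissimilarity between two classifiers reduce to Euclidean distances, and (weighted) majority voting becomes the sign of a (weighted) centroid. The noise rates, the accuracy-based weights, and all names below are hypothetical; in particular, the weights shown are a simple heuristic, not the paper's optimal-weight formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                          # number of samples
y = rng.choice([-1, 1], size=n)  # ground-truth label vector: the target point in R^n

def noisy_copy(y, flip_rate):
    """Hypothetical base classifier: the true labels with a fraction flipped."""
    flips = rng.random(y.shape) < flip_rate
    return np.where(flips, -y, y)

# Three component classifiers as points in R^n, with different error levels.
C = np.stack([noisy_copy(y, r) for r in (0.20, 0.30, 0.35)])  # shape (3, n)

# Both key factors are measured by the same metric, the Euclidean distance:
dist_to_truth = np.linalg.norm(C - y, axis=1)             # per-classifier performance
pairwise = np.linalg.norm(C[:, None] - C[None], axis=2)   # pairwise dissimilarity
print("distance to truth:", dist_to_truth)
print("pairwise distances:\n", pairwise)

# Majority voting: sign of the unweighted mean prediction vector,
# i.e. the centroid of the classifier points.
mv = np.sign(C.mean(axis=0))
print("majority-voting accuracy:", (mv == y).mean())

# Weighted majority voting: sign of a weighted centroid. The weights here
# are an accuracy-proportional heuristic, for illustration only.
acc = (C == y).mean(axis=1)
w = acc / acc.sum()
wmv = np.sign(np.tensordot(w, C, axes=1))
print("weighted majority-voting accuracy:", (wmv == y).mean())
```

With classifiers and labels encoded this way, a smaller Euclidean distance to the target vector corresponds directly to higher accuracy, which is what makes the ensemble's performance computable from the geometry rather than requiring simulation.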
| Original language | English |
|---|---|
| Pages (from-to) | 4929–4958 |
| Number of pages | 30 |
| Journal | Machine Learning |
| Volume | 112 |
| Issue number | 12 |
| Early online date | 27 Sept 2023 |
| DOIs | |
| Publication status | Published (in print/issue) - 31 Dec 2023 |
Bibliographical note
Publisher Copyright: © 2023, The Author(s).
Keywords
- Ensemble learning
- Geometric framework
- Classification
- Majority voting
- Weighted majority voting