Journal of Applied Mathematics and Decision Sciences
Volume 8 (2004), Issue 3, Pages 141-154
doi:10.1155/S1173912604000094
Variance reduction trends on ‘boosted’ classifiers
Virginia Wheway
School of Mathematics & Applied Statistics, University of Wollongong, Australia
Copyright © 2004 Virginia Wheway. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Ensemble classification techniques such as bagging (Breiman, 1996a), boosting (Freund & Schapire, 1997), and arcing algorithms (Breiman, 1997) have received much attention in recent literature. Such techniques have been shown to lead to reduced
classification error on unseen cases. Even when the ensemble is trained well
beyond the point of zero training set error, its classification error on unseen
cases continues to improve. Despite many studies and conjectures, the reasons for
this improved performance, and an understanding of the underlying probabilistic
structures, remain open and challenging problems. More recently, diagnostics such as edge and margin (Breiman, 1997; Freund & Schapire, 1997; Schapire et al., 1998) have been used to
explain the improvements made when ensemble classifiers are built. This paper presents
some interesting results from an empirical study performed on a set of representative
datasets using the decision tree learner C4.5 (Quinlan, 1993). An exponential-like decay
in the variance of the edge is observed as the number of boosting trials is increased;
that is, boosting appears to ‘homogenise’ the edge. Some initial theory is presented which
indicates that a lack of correlation between the errors of individual classifiers is a key
factor in this variance reduction.
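To make the empirical setup concrete, the following is a minimal sketch (not the paper's own code) of how the edge-variance trend can be reproduced. Here scikit-learn's AdaBoostClassifier over depth-limited trees stands in for boosted C4.5, and the synthetic dataset, tree depth, and trial counts are illustrative assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

ens = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # stand-in for C4.5
    n_estimators=200,
    random_state=0,
).fit(X, y)

T = len(ens.estimators_)                 # boosting may stop early
weights = ens.estimator_weights_[:T]
# One row per trial: True where that classifier misclassifies the example.
votes_wrong = np.array([est.predict(X) != y for est in ens.estimators_])

for t in (1, 10, 50, 100, T):
    w = weights[:t] / weights[:t].sum()
    # Edge of a training example after t trials: the weighted fraction of
    # the first t classifiers voting for a wrong class (Breiman, 1997).
    edge = w @ votes_wrong[:t]
    print(f"trials={t:4d}  var(edge)={edge.var():.5f}")

Printing the variance of the per-example edge at increasing trial counts should trace out the decay the abstract describes, with the variance shrinking as trials are added.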
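The closing argument about correlation can be written out as a one-line worked equation, under the simplifying assumption that the individual classifiers' error indicators $X_1, \dots, X_T$ are identically distributed with common variance $\sigma^2$ and common pairwise correlation $\rho$:
\[
\operatorname{Var}\!\left(\frac{1}{T}\sum_{t=1}^{T} X_t\right)
= \frac{\sigma^2}{T} + \frac{T-1}{T}\,\rho\,\sigma^2
\;\longrightarrow\; \rho\,\sigma^2 \quad (T \to \infty).
\]
When the errors are uncorrelated ($\rho = 0$) the variance of the average decays to zero like $1/T$, whereas any residual correlation leaves a floor of $\rho\,\sigma^2$; this is the sense in which a lack of correlation between the errors of individual classifiers is a key factor in the observed variance reduction.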