New Generation Computing, 25(2007)117-141
Ohmsha, Ltd. and Springer
Received 18 July 2006
Revised manuscript received 28 August 2006
In this article, several boosting methods, which are notable implementations of ensemble learning, are discussed. Starting from the first-proposed “boosting by filtering,” an embodiment of the proverb “Two heads are better than one,” the more advanced boosting methods AdaBoost and U-Boost are introduced. The geometrical structure and statistical properties of boosting algorithms, such as consistency and robustness, are then discussed, and simulation studies are presented to confirm the behaviors of the algorithms.
Keywords: Boosting, Classification Problem, Large-scale Learning Machine, Statistical Learning Theory.
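As background for the AdaBoost algorithm surveyed below, the following is a minimal sketch (not the paper's own implementation) of standard AdaBoost with threshold "decision stumps" as weak learners on a one-dimensional feature; all function names and parameters here are illustrative choices, not from the source.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost with threshold stumps on a 1-D feature.
    Labels y must be in {-1, +1}. Illustrative sketch only."""
    n = len(X)
    w = np.full(n, 1.0 / n)      # example weights, initially uniform
    stumps = []                  # list of (threshold, polarity, alpha)
    thresholds = np.unique(X)
    for _ in range(n_rounds):
        # pick the stump with the smallest weighted error
        best = None
        for t in thresholds:
            for pol in (+1, -1):
                pred = pol * np.where(X <= t, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, t, pol, pred)
        err, t, pol, pred = best
        err = max(err, 1e-12)                     # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)     # weak learner's vote weight
        stumps.append((t, pol, alpha))
        w *= np.exp(-alpha * y * pred)            # up-weight misclassified points
        w /= w.sum()                              # renormalize
    return stumps

def adaboost_predict(stumps, X):
    """Weighted majority vote of the trained stumps."""
    score = np.zeros(len(X))
    for t, pol, alpha in stumps:
        score += alpha * pol * np.where(X <= t, 1, -1)
    return np.sign(score)
```

The key steps are the exponential re-weighting of training examples and the weighted combination of weak learners, which the article later relates to the exponential loss and its U-loss generalizations.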