Comparing Methods for Generating Diverse Ensembles of Artificial Neural Networks
2010 (English) Conference paper (Refereed)
It is well-known that ensemble performance relies heavily on sufficient diversity among the base classifiers. With this in mind, the strategy used to balance diversity and base classifier accuracy must be considered a key component of any ensemble algorithm. This study evaluates the predictive performance of neural network ensembles, specifically comparing straightforward techniques to more sophisticated ones. In particular, the sophisticated methods GASEN and NegBagg are compared to more straightforward methods, where each ensemble member is trained independently of the others. In the experiments, using 31 publicly available data sets, the straightforward methods clearly outperformed the sophisticated methods, thus questioning the use of the more complex algorithms.
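The "straightforward" strategy described in the abstract, where each ensemble member is trained independently and diversity arises only from random initialisation and resampling, can be sketched as follows. This is a minimal illustration, not the paper's actual setup: it uses a single perceptron as a stand-in for each neural network, a hypothetical toy data set instead of the 31 public data sets, and simple majority voting.

```python
import random

def make_data(n, seed):
    """Hypothetical toy data: label is 1 iff x0 + x1 > 1 (linearly separable)."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    return [(x, 1 if x[0] + x[1] > 1.0 else 0) for x in pts]

def train_perceptron(samples, epochs=25, lr=0.1, seed=0):
    """Train one base classifier (a perceptron standing in for a neural
    network) independently of all other ensemble members."""
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)]
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # standard perceptron update rule
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(model, x):
    w, b = model
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def bagging_ensemble(train, n_members=9, seed=42):
    """Straightforward ensemble: each member gets its own bootstrap sample
    and its own random initialisation; members never coordinate."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        boot = [rng.choice(train) for _ in range(len(train))]
        members.append(train_perceptron(boot, seed=rng.randrange(10 ** 6)))
    return members

def vote(members, x):
    """Combine member predictions by simple majority vote."""
    ones = sum(predict(m, x) for m in members)
    return 1 if ones * 2 > len(members) else 0

train = make_data(200, seed=1)
test = make_data(100, seed=2)
ensemble = bagging_ensemble(train)
acc = sum(vote(ensemble, x) == y for x, y in test) / len(test)
```

In contrast, methods such as NegBagg train members jointly with a penalty that explicitly pushes their errors apart, and GASEN uses a genetic algorithm to select and weight members after training; the paper's finding is that the uncoordinated scheme above already performed better on the benchmark data sets.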
Place, publisher, year, edition, pages
IEEE, 2010.
Keywords
ensembles, diversity, Machine Learning
National Category
Computer Science, Computer and Information Science
Identifiers
URN: urn:nbn:se:hb:diva-6145
DOI: 10.1109/IJCNN.2010.5596763
Local ID: 2320/6869
ISBN: 978-1-4244-6916-1
OAI: oai:DiVA.org:hb-6145
DiVA: diva2:886829
Conference
WCCI 2010 IEEE World Congress on Computational Intelligence, IJCNN 2010