2013 (English) Conference paper (Refereed)
In this paper, we introduce and evaluate a novel
method, called random brains, for producing neural network
ensembles. The suggested method, which is heavily inspired by
the random forest technique, produces diversity implicitly by
using bootstrap training and randomized architectures. More
specifically, for each base classifier (a multilayer perceptron), a
number of randomly selected links between the input layer
and the hidden layer are removed prior to training, thus
resulting in potentially weaker but more diverse base classifiers.
The experimental results on 20 UCI data sets show that
random brains obtained significantly higher accuracy and AUC,
compared to standard bagging of similar neural networks not
utilizing randomized architectures. The analysis shows that the
main reason for the increased ensemble performance is the
ability to produce effective diversity, as indicated by the increase
in the difficulty diversity measure.
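The mechanism the abstract describes, training each member on a bootstrap sample after randomly removing links between the input and hidden layers, can be sketched as follows. This is a minimal illustrative implementation, not the paper's experimental setup: the class names, network size, removal fraction, and the plain gradient-descent training loop are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RandomBrain:
    """One base classifier: an MLP whose input-to-hidden links are
    randomly removed (masked to zero) before training. Hyperparameters
    here are illustrative, not taken from the paper."""
    def __init__(self, n_in, n_hidden, remove_frac=0.25, lr=0.5, epochs=200):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        # A zero in the mask marks a removed input-to-hidden link.
        self.mask = (rng.random((n_in, n_hidden)) >= remove_frac).astype(float)
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):
        y = y.reshape(-1, 1)
        for _ in range(self.epochs):
            H = sigmoid(X @ (self.W1 * self.mask) + self.b1)
            out = sigmoid(H @ self.W2 + self.b2)
            d_out = (out - y) * out * (1 - out)
            d_H = (d_out @ self.W2.T) * H * (1 - H)
            self.W2 -= self.lr * H.T @ d_out / len(X)
            self.b2 -= self.lr * d_out.mean(0)
            # The gradient through removed links is masked out as well,
            # so those links stay removed throughout training.
            self.W1 -= self.lr * (X.T @ d_H / len(X)) * self.mask
            self.b1 -= self.lr * d_H.mean(0)
        return self

    def predict_proba(self, X):
        H = sigmoid(X @ (self.W1 * self.mask) + self.b1)
        return sigmoid(H @ self.W2 + self.b2).ravel()

def random_brains(X, y, n_estimators=11, **kw):
    """Ensemble: each member trains on its own bootstrap sample with its
    own randomized architecture (independent link-removal mask)."""
    members = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), len(X))  # bootstrap resample
        members.append(RandomBrain(X.shape[1], 8, **kw).fit(X[idx], y[idx]))
    return members

def ensemble_predict(members, X):
    # Average member probabilities, then threshold at 0.5.
    return np.mean([m.predict_proba(X) for m in members], axis=0) >= 0.5
```

Masking the gradient as well as the weights is what keeps the architectures genuinely different across members; combined with bootstrap resampling, this is the implicit source of the diversity the abstract refers to.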
Place, publisher, year, edition, pages
IEEE, 2013.
Keywords: Data mining, Machine Learning
Subject: Computer Science; Computer and Information Science
Identifiers
URN: urn:nbn:se:hb:diva-7057
Local ID: 2320/12922
OAI: oai:DiVA.org:hb-7057
DiVA: diva2:887764
International Joint Conference on Neural Networks, Dallas, TX, USA, August 4-9, 2013.
This work was supported by the Swedish Foundation for Strategic Research through the project High-Performance Data Mining for Drug Effect Detection (IIS11-0053) and the Knowledge Foundation through the project Big Data Analytics by Online Ensemble Learning (20120192).