Overproduce-and-Select: The Grim Reality
Högskolan i Borås, Institutionen Handels- och IT-högskolan.
Högskolan i Borås, Institutionen Handels- och IT-högskolan.
Stockholm University, Sweden.
2013 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Overproduce-and-select (OPAS) is a frequently used paradigm for building ensembles. In static OPAS, a large number of base classifiers are trained, before a subset of the available models is selected to be combined into the final ensemble. In general, the selected classifiers are supposed to be accurate and diverse for the OPAS strategy to result in highly accurate ensembles, but exactly how this is enforced in the selection process is not obvious. Most often, either individual models or ensembles are evaluated, using some performance metric, on available and labeled data. Naturally, the underlying assumption is that an observed advantage for the models (or the resulting ensemble) will carry over to test data. In the experimental study, a typical static OPAS scenario, using a pool of artificial neural networks and a number of very natural and frequently used performance measures, is evaluated on 22 publicly available data sets. The discouraging result is that although a fairly large proportion of the ensembles obtained higher test set accuracies, compared to using the entire pool as the ensemble, none of the selection criteria could be used to identify these highly accurate ensembles. Despite only investigating a specific scenario, we argue that the settings used are typical for static OPAS, thus making the results general enough to question the entire paradigm.
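The static OPAS procedure evaluated in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names are hypothetical, and the toy threshold "models" stand in for the trained artificial neural networks used in the actual study.

```python
import random

def static_opas(pool, metric, k, val_data):
    """Static overproduce-and-select (sketch): rank an overproduced
    pool of trained base classifiers by a selection metric on
    held-out labeled data, then keep the top-k models as the ensemble."""
    ranked = sorted(pool, key=lambda m: metric(m, val_data), reverse=True)
    return ranked[:k]

def accuracy(model, data):
    # One of the "very natural" performance measures: validation accuracy.
    return sum(model(x) == y for x, y in data) / len(data)

def majority_vote(ensemble, x):
    # Combine the selected classifiers by simple majority voting.
    votes = [m(x) for m in ensemble]
    return max(set(votes), key=votes.count)

# Toy demo: each "model" is a random threshold classifier on a 1-D feature.
random.seed(0)
val = [(x, int(x > 0.5)) for x in [i / 10 for i in range(10)]]
pool = [lambda x, t=random.random(): int(x > t) for _ in range(20)]
ensemble = static_opas(pool, accuracy, k=5, val_data=val)
print(accuracy(lambda x: majority_vote(ensemble, x), val))
```

The paper's finding is precisely that this selection step is unreliable: although many selected subsets did beat the full pool on test accuracy, none of the evaluated selection criteria could identify those subsets in advance.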

Place, publisher, year, edition, pages
IEEE, 2013.
Keyword [en]
Ensembles, Neural networks, Overproduce-and-select, Data mining, Machine Learning
National Category
Computer Sciences; Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-221533
DOI: 10.1109/CIEL.2013.6613140
ISI: 000335317800008
Scopus ID: 2-s2.0-84886789587
Local ID: 2320/12920
OAI: oai:DiVA.org:kth-221533
DiVA, id: diva2:1175269
Conference
IEEE Symposium on Computational Intelligence and Ensemble Learning (CIEL), 16-19 April 2013, Singapore
Note

Sponsorship: Swedish Foundation for Strategic Research through the project High-Performance Data Mining for Drug Effect Detection (ref. no. IIS11-0053)

QC 20180202

Available from: 2018-01-17
Created: 2018-01-17
Last updated: 2018-02-02
Bibliographically approved

Open Access in DiVA

fulltext (199 kB)
File information
File name: FULLTEXT01.pdf
File size: 199 kB
Checksum: SHA-512
b41444d3c4201bf9469d3ff306c925ab4c3001c3d5a882a0549546dfac9caf25a9714464d1f22fc44796cf3ed49d35dd2053230da803724b8ec8e22a72ada932
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Boström, Henrik
