Forests of probability estimation trees
Boström, Henrik (Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences)
2012 (English). In: International Journal of Pattern Recognition and Artificial Intelligence, ISSN 0218-0014, Vol. 26, no. 2, article 1251001. Journal article (Refereed). Published.
Abstract [en]

Probability estimation trees (PETs) generalize classification trees in that they assign class probability distributions instead of class labels to examples that are to be classified. This property has been demonstrated to allow PETs to outperform classification trees with respect to ranking performance, as measured by the area under the ROC curve (AUC). It has further been shown that the use of probability correction improves the performance of PETs. This has led to the use of probability correction in forests of PETs as well. However, it was recently observed that probability correction may in fact deteriorate the performance of forests of PETs. A more detailed study of the phenomenon is presented and the reasons behind this observation are analyzed. An empirical investigation is presented, comparing forests of classification trees to forests of both corrected and uncorrected PETs on 34 data sets from the UCI repository. The experiment shows that a small forest (10 trees) of probability corrected PETs gives a higher AUC than a similar-sized forest of classification trees, hence providing evidence in favor of using forests of probability corrected PETs. However, the picture changes when increasing the forest size, as the AUC is no longer improved by probability correction. For accuracy and squared error of predicted class probabilities (Brier score), probability correction even has a negative effect. An analysis of the mean squared error of the trees in the forests and their variance shows that although probability correction results in trees that are more correct on average, the variance is reduced at the same time, leading to an overall loss of performance for larger forests. The main conclusions are that probability correction should only be employed in small forests of PETs, and that for larger forests, classification trees and PETs are equally good alternatives.
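To make the ideas in the abstract concrete, the sketch below shows, under stated assumptions, one way to build a small forest of PETs and compare corrected against uncorrected leaf probabilities by AUC and Brier score. It is not the paper's code: the probability correction is assumed to be the Laplace estimate (n_c + 1)/(n + k), scikit-learn's DecisionTreeClassifier stands in as the base tree, and the dataset, bootstrap-only randomization, and forest size are illustrative choices.

# A minimal sketch (not the paper's method) of a forest of probability
# estimation trees with optional Laplace correction at the leaves.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss


def leaf_distributions(tree, X, y, n_classes, laplace):
    """Class distribution per leaf, with or without Laplace correction."""
    leaves = tree.apply(X)                      # leaf id for each training example
    dists = {}
    for leaf in np.unique(leaves):
        counts = np.bincount(y[leaves == leaf], minlength=n_classes).astype(float)
        if laplace:                             # assumed correction: (n_c + 1) / (n + k)
            counts += 1.0
        dists[leaf] = counts / counts.sum()
    return dists


def pet_forest_proba(X_tr, y_tr, X_te, n_trees=10, laplace=True, seed=0):
    """Average the leaf class distributions of bootstrap-trained, fully grown trees."""
    rng = np.random.RandomState(seed)
    n_classes = len(np.unique(y_tr))
    proba = np.zeros((len(X_te), n_classes))
    for _ in range(n_trees):
        idx = rng.randint(0, len(X_tr), len(X_tr))          # bootstrap sample
        tree = DecisionTreeClassifier(random_state=rng).fit(X_tr[idx], y_tr[idx])
        dists = leaf_distributions(tree, X_tr[idx], y_tr[idx], n_classes, laplace)
        proba += np.array([dists[leaf] for leaf in tree.apply(X_te)])
    return proba / n_trees


X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for laplace in (False, True):
    p = pet_forest_proba(X_tr, y_tr, X_te, n_trees=10, laplace=laplace)[:, 1]
    print(f"laplace={laplace}: AUC={roc_auc_score(y_te, p):.3f}, "
          f"Brier={brier_score_loss(y_te, p):.3f}")

Averaging the leaf class distributions, rather than majority votes, is what makes this a forest of PETs; the comparison loop mirrors the abstract's corrected-versus-uncorrected setup, but on a single illustrative data set rather than the 34 UCI data sets used in the paper.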

Place, publisher, year, edition, pages
2012. Vol. 26, no. 2, article 1251001.
Keywords [en]
Random forests, probability estimation trees, accuracy, area under ROC curve, Brier score
National subject category
Remote Sensing
Identifiers
URN: urn:nbn:se:su:diva-81799
DOI: 10.1142/S0218001412510019
ISI: 000308104300002
OAI: oai:DiVA.org:su-81799
DiVA: diva2:567515
Note

AuthorCount: 1

Available from: 2012-11-13. Created: 2012-11-01. Last updated: 2012-11-13. Bibliographically reviewed.

Open Access in DiVA

Full text not available

Other links

Publisher's full text
