Towards Application-specific Evaluation Metrics
2008 (English) Conference paper (Refereed) Published
Abstract [en]

Classifier evaluation has historically been conducted by estimating predictive accuracy via cross-validation tests or similar methods. More recently, ROC analysis has been shown to be a good alternative. However, the characteristics of problem domains vary greatly, and some evaluation metrics have been shown to be more appropriate than others in certain cases. We argue that different problems have different requirements and should therefore make use of evaluation metrics that correspond to the relevant requirements. For this purpose, we motivate the need for generic multi-criteria evaluation methods, i.e., methods that dictate how to integrate metrics but not which metrics to integrate. We present such a generic evaluation method and discuss how to select metrics on the basis of the application at hand.
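
The following is a minimal illustrative sketch of the idea behind multi-criteria evaluation, not the evaluation method defined in the paper: several metrics are estimated via cross-validation and then integrated with application-specific weights. The metric choices, the weights, and the use of scikit-learn are assumptions made only for this example.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Toy binary classification data and a candidate classifier (illustrative only).
X, y = make_classification(n_samples=500, random_state=0)
clf = DecisionTreeClassifier(random_state=0)

# Estimate several candidate metrics via 10-fold cross-validation.
metrics = {
    "accuracy": cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean(),
    "roc_auc": cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean(),
    "f1": cross_val_score(clf, X, y, cv=10, scoring="f1").mean(),
}

# Application-specific weights (hypothetical values): a generic method of this
# kind dictates how to integrate metrics (here, a weighted average), not which
# metrics to integrate.
weights = {"accuracy": 0.2, "roc_auc": 0.5, "f1": 0.3}
score = sum(weights[m] * metrics[m] for m in metrics)
print(metrics, score)

In practice, the weights would be chosen to reflect the requirements of the application at hand, e.g., emphasizing ranking performance (AUC) in one domain and raw accuracy in another.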

Place, publisher, year, edition, pages
Helsinki, 2008.
National Category
Computer Science
URN: urn:nbn:se:bth-8553
Local ID: diva2:836279
Conference
The 3rd Workshop on Evaluation Methods for Machine Learning
Available from: 2012-09-18 Created: 2008-07-18 Last updated: 2015-06-30 Bibliographically approved

Open Access in DiVA

fulltext (FULLTEXT01.pdf, 50 kB, application/pdf)

By author/editor
Lavesson, Niklas