Search hits: 1 - 42 of 42
  • 1.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Conformal Prediction Using Decision Trees (2013). Conference paper (Refereed)
    Abstract [en]

    Conformal prediction is a relatively new framework in which the predictive models output sets of predictions with a bound on the error rate, i.e., in a classification context, the probability of excluding the correct class label is lower than a predefined significance level. An investigation of the use of decision trees within the conformal prediction framework is presented, with the overall purpose of determining the effect of different algorithmic choices, including the split criterion, the pruning scheme and the method used to calculate the probability estimates. Since the error rate is bounded by the framework, the most important property of conformal predictors is efficiency, which concerns minimizing the number of elements in the output prediction sets. Results from one of the largest empirical investigations to date within the conformal prediction framework are presented, showing that in order to optimize efficiency, the decision trees should be induced using no pruning and with smoothed probability estimates. The choice of split criterion used for the actual induction of the trees turned out not to have any major impact on the efficiency. Finally, the experimentation also showed that when using decision trees, standard inductive conformal prediction was as efficient as the recently suggested method cross-conformal prediction. This is an encouraging result, since cross-conformal prediction uses several decision trees, thus sacrificing the interpretability of a single decision tree.
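As a rough illustration of the framework described above, here is a minimal sketch (not the authors' implementation) of an inductive conformal classifier, together with the Laplace smoothing the study recommends for tree leaf estimates; the nonconformity function used (one minus the estimated probability of the true class) is a common choice and an assumption here:

```python
import numpy as np

def laplace_smooth(counts):
    """Laplace-corrected class probabilities from leaf class counts."""
    counts = np.asarray(counts, dtype=float)
    return (counts + 1) / (counts.sum() + len(counts))

def icp_prediction_set(cal_probs, cal_labels, test_probs, significance):
    """Inductive conformal classifier with nonconformity 1 - P_hat(true class).
    cal_probs: (n_cal, n_classes) smoothed estimates for the calibration set.
    Returns the prediction set (class indices) for one test object."""
    # calibration nonconformity scores
    alphas = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    pred_set = []
    for c, p in enumerate(test_probs):
        a_test = 1.0 - p
        # p-value: share of scores at least as large, counting the test object itself
        p_value = (np.sum(alphas >= a_test) + 1) / (len(alphas) + 1)
        if p_value > significance:
            pred_set.append(c)
    return pred_set
```

For example, a pure leaf with counts (8, 0) gets smoothed estimates (0.9, 0.1) instead of the degenerate raw frequencies (1.0, 0.0).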

  • 2.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Linusson, Henrik
    University of Borås, School of Business and IT.
    Regression conformal prediction with random forests (2014). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 97, no. 1-2, p. 155-176. Article in journal (Refereed)
    Abstract [en]

    Regression conformal prediction produces prediction intervals that are valid, i.e., the probability of excluding the correct target value is bounded by a predefined confidence level. The most important criterion when comparing conformal regressors is efficiency; the prediction intervals should be as tight (informative) as possible. In this study, the use of random forests as the underlying model for regression conformal prediction is investigated and compared to existing state-of-the-art techniques, which are based on neural networks and k-nearest neighbors. In addition to their robust predictive performance, random forests allow for determining the size of the prediction intervals by using out-of-bag estimates instead of requiring a separate calibration set. An extensive empirical investigation, using 33 publicly available data sets, was undertaken to compare the use of random forests to existing state-of-the-art conformal predictors. The results show that the suggested approach, on almost all confidence levels and using both standard and normalized nonconformity functions, produced significantly more efficient conformal predictors than the existing alternatives.
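The out-of-bag idea the abstract mentions can be sketched as follows; this is a hedged simplification using only the standard (non-normalized) nonconformity function, with `oob_preds` assumed to come from the forest itself:

```python
import numpy as np

def conformal_interval(point_pred, oob_preds, y_train, confidence=0.95):
    """Regression conformal prediction with the standard nonconformity
    function: absolute out-of-bag residuals calibrate the interval width,
    so no separate calibration set is needed."""
    residuals = np.sort(np.abs(np.asarray(y_train) - np.asarray(oob_preds)))
    n = len(residuals)
    # smallest residual quantile giving the requested finite-sample coverage
    k = min(int(np.ceil(confidence * (n + 1))) - 1, n - 1)
    half_width = residuals[k]
    return point_pred - half_width, point_pred + half_width
```

With scikit-learn, `oob_preds` could be taken from `RandomForestRegressor(oob_score=True).fit(X, y).oob_prediction_` (an assumption about the surrounding setup, not part of the paper's code).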

  • 3.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Linusson, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Rule Extraction with Guaranteed Fidelity (2014). Conference paper (Refereed)
    Abstract [en]

    This paper extends the conformal prediction framework to rule extraction, making it possible to extract interpretable models from opaque models in a setting where either the infidelity or the error rate is bounded by a predefined significance level. Experimental results on 27 publicly available data sets show that all three setups evaluated produced valid and rather efficient conformal predictors. The implication is that augmenting rule extraction with conformal prediction allows extraction of models where test set errors or test set infidelities are guaranteed to be lower than a chosen acceptable level. Clearly, this is beneficial for both typical rule extraction scenarios, i.e., either when the purpose is to explain an existing opaque model, or when it is to build a predictive model that must be interpretable.

  • 4.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    University of Borås, School of Business and IT.
    Increasing Rule Extraction Accuracy by Post-processing GP Trees (2008). In: Proceedings of the Congress on Evolutionary Computation, IEEE Press, 2008, p. 3010-3015. Conference paper (Refereed)
    Abstract [en]

    Genetic programming (GP) is a very general and efficient technique, often capable of outperforming more specialized techniques on a variety of tasks. In this paper, we suggest a straightforward novel algorithm for post-processing of GP classification trees. The algorithm iteratively, one node at a time, searches for possible modifications that would result in higher accuracy. More specifically, for each split, the algorithm evaluates every possible constant value and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In this study, we apply the suggested algorithm to GP trees extracted from neural network ensembles. Experimentation, using 22 UCI datasets, shows that the post-processing results in higher test set accuracies on a large majority of datasets. As a matter of fact, for two of the three evaluated setups, the increase in accuracy is statistically significant.

  • 5.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Using Imaginary Ensembles to Select GP Classifiers (2010). In: Genetic Programming: 13th European Conference, EuroGP 2010, Istanbul, Turkey, April 7-9, 2010, Proceedings / [ed] A.I. Esparcia-Alcazar et al., Springer-Verlag Berlin Heidelberg, 2010, p. 278-288. Conference paper (Refereed)
    Abstract [en]

    When predictive modeling requires comprehensible models, most data miners will use specialized techniques producing rule sets or decision trees. This study, however, shows that genetically evolved decision trees may very well outperform the more specialized techniques. The proposed approach evolves a number of decision trees and then uses one of several suggested selection strategies to pick one specific tree from that pool. The inherent inconsistency of evolution makes it possible to evolve each tree using all data, and still obtain somewhat different models. The main idea is to use these quite accurate and slightly diverse trees to form an imaginary ensemble, which is then used as a guide when selecting one specific tree. Simply put, the tree classifying the largest number of instances identically to the ensemble is chosen. In the experimentation, using 25 UCI data sets, two selection strategies obtained significantly higher accuracy than the standard rule inducer J48.
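The selection strategy described above (picking the tree that classifies the largest number of instances identically to the ensemble) can be sketched as a small voting routine; this is an illustrative reading, not the paper's code:

```python
import numpy as np

def select_by_imaginary_ensemble(tree_predictions):
    """Pick the evolved tree that agrees most with the majority vote
    ('imaginary ensemble') of all trees on unlabeled instances.
    tree_predictions: (n_trees, n_instances) array of class labels."""
    preds = np.asarray(tree_predictions)
    n_classes = int(preds.max()) + 1
    # majority vote per instance; ties go to the lower class label
    ensemble = np.array([np.bincount(col, minlength=n_classes).argmax()
                         for col in preds.T])
    agreement = (preds == ensemble).mean(axis=1)
    return int(agreement.argmax()), ensemble
```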

  • 6.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Post-processing Evolved Decision Trees (2009). In: Foundations of Computational Intelligence / [ed] Ajith Abraham, Springer Verlag, 2009, p. 149-164. Chapter in book (Other academic)
    Abstract [en]

    Although Genetic Programming (GP) is a very general technique, it is also quite powerful. As a matter of fact, GP has often been shown to outperform more specialized techniques on a variety of tasks. In data mining, GP has successfully been applied to most major tasks, e.g. classification, regression and clustering. In this chapter, we introduce, describe and evaluate a straightforward novel algorithm for post-processing genetically evolved decision trees. The algorithm works by iteratively, one node at a time, searching for possible modifications that result in higher accuracy. More specifically, the algorithm, for each interior test, evaluates every possible split for the current attribute and chooses the best. With this design, the post-processing algorithm can only increase training accuracy, never decrease it. In the experiments, the suggested algorithm is applied to GP decision trees, either induced directly from datasets, or extracted from neural network ensembles. The experimentation, using 22 UCI datasets, shows that the suggested post-processing technique results in higher test set accuracies on a large majority of the datasets. As a matter of fact, the increase in test accuracy is statistically significant for one of the four evaluated setups, and substantial on two out of the other three.
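The per-node search can be sketched as follows; `predict_with` is a hypothetical callback standing in for re-evaluating the evolved tree with one split constant replaced (an assumption for illustration):

```python
import numpy as np

def best_threshold(x, y, predict_with):
    """Sketch of the per-node search: evaluate every observed value of the
    node's attribute as the split constant and keep the most accurate one.
    predict_with(t) must return the tree's predictions with constant t
    plugged into this node, so the chosen t can never lower training accuracy."""
    candidates = np.unique(x)
    accs = [np.mean(predict_with(t) == y) for t in candidates]
    best = int(np.argmax(accs))
    return candidates[best], accs[best]
```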

  • 7.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Löfström, Tuwe
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Evolved Decision Trees as Conformal Predictors (2013). Conference paper (Refereed)
    Abstract [en]

    In conformal prediction, predictive models output sets of predictions with a bound on the error rate. In classification, this means that, in the long run, the probability of excluding the correct class is lower than a predefined significance level. Since the error rate is guaranteed, the most important criterion for conformal predictors is efficiency. Efficient conformal predictors minimize the number of elements in the output prediction sets, thus producing more informative predictions. This paper presents one of the first comprehensive studies where evolutionary algorithms are used to build conformal predictors. More specifically, decision trees evolved using genetic programming are evaluated as conformal predictors. In the experiments, the evolved trees are compared to decision trees induced using standard machine learning techniques on 33 publicly available benchmark data sets, with regard to predictive performance and efficiency. The results show that the evolved trees are generally more accurate, and the corresponding conformal predictors more efficient, than their induced counterparts. One important result is that the probability estimates of decision trees used as conformal predictors should be smoothed, here using the Laplace correction. Finally, using the more discriminating Brier score instead of accuracy as the optimization criterion produced the most efficient conformal predictions.

  • 8.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Producing Implicit Diversity in ANN Ensembles (2012). Conference paper (Refereed)
    Abstract [en]

    Combining several ANNs into ensembles normally results in very accurate and robust predictive models. Many ANN ensemble techniques are, however, quite complicated and often explicitly optimize some diversity metric. Unfortunately, the lack of solid validation of the explicit algorithms, at least for classification, makes the use of diversity measures as part of an optimization function questionable. The merits of implicit methods, most notably bagging, are on the other hand experimentally established and well-known. This paper evaluates a number of straightforward techniques for introducing implicit diversity in ANN ensembles, including a novel technique producing diversity by using ANNs with different and slightly randomized link structures. The experimental results, comparing altogether 54 setups and two different ensemble sizes on 30 UCI data sets, show that all methods succeeded in producing implicit diversity, but that the effect on ensemble accuracy varied. Still, most setups evaluated did result in more accurate ensembles, compared to the baseline setup, especially for the larger ensemble size. As a matter of fact, several setups even obtained significantly higher ensemble accuracy than bagging. The analysis also identified that diversity was, relatively speaking, more important for the larger ensembles. Looking specifically at the methods used to increase the implicit diversity, setups using the technique that utilizes the randomized link structures generally produced the most accurate ensembles.

  • 9.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Overproduce-and-Select: The Grim Reality (2013). Conference paper (Refereed)
    Abstract [en]

    Overproduce-and-select (OPAS) is a frequently used paradigm for building ensembles. In static OPAS, a large number of base classifiers are trained, before a subset of the available models is selected to be combined into the final ensemble. In general, the selected classifiers are supposed to be accurate and diverse for the OPAS strategy to result in highly accurate ensembles, but exactly how this is enforced in the selection process is not obvious. Most often, either individual models or ensembles are evaluated, using some performance metric, on available and labeled data. Naturally, the underlying assumption is that an observed advantage for the models (or the resulting ensemble) will carry over to test data. In the experimental study, a typical static OPAS scenario, using a pool of artificial neural networks and a number of very natural and frequently used performance measures, is evaluated on 22 publicly available data sets. The discouraging result is that although a fairly large proportion of the ensembles obtained higher test set accuracies, compared to using the entire pool as the ensemble, none of the selection criteria could be used to identify these highly accurate ensembles. Despite only investigating a specific scenario, we argue that the settings used are typical for static OPAS, thus making the results general enough to question the entire paradigm.

  • 10.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Random Brains (2013). Conference paper (Refereed)
    Abstract [en]

    In this paper, we introduce and evaluate a novel method, called random brains, for producing neural network ensembles. The suggested method, which is heavily inspired by the random forest technique, produces diversity implicitly by using bootstrap training and randomized architectures. More specifically, for each base classifier multilayer perceptron, a number of randomly selected links between the input layer and the hidden layer are removed prior to training, thus resulting in potentially weaker but more diverse base classifiers. The experimental results on 20 UCI data sets show that random brains obtained significantly higher accuracy and AUC, compared to standard bagging of similar neural networks not utilizing randomized architectures. The analysis shows that the main reason for the increased ensemble performance is the ability to produce effective diversity, as indicated by the increase in the difficulty diversity measure.
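A minimal sketch of the link-removal idea described above, assuming a one-hidden-layer perceptron and a user-chosen drop fraction (names and shapes are illustrative, not the paper's code; note the mask is fixed before training, unlike dropout):

```python
import numpy as np

def make_link_mask(n_in, n_hidden, drop_fraction, rng):
    """Random-brains-style mask: permanently remove a random subset of the
    input-to-hidden links before training."""
    mask = np.ones((n_in, n_hidden))
    n_drop = int(drop_fraction * n_in * n_hidden)
    dropped = rng.choice(n_in * n_hidden, size=n_drop, replace=False)
    mask.flat[dropped] = 0.0
    return mask

def masked_forward(x, W1, mask, b1, W2, b2):
    """Forward pass of a one-hidden-layer MLP with the link mask applied."""
    hidden = np.tanh(x @ (W1 * mask) + b1)
    return hidden @ W2 + b2
```

During training, the same mask would be applied after every weight update so the removed links stay removed.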

  • 11.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    University of Borås, School of Business and IT.
    Empirically Investigating the Importance of Diversity (2007). Conference paper (Refereed)
  • 12.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Evaluating Standard Techniques for Implicit Diversity (2008). In: Advances in Knowledge Discovery and Data Mining, Springer Verlag, 2008, p. 613-622. Conference paper (Refereed)
  • 13.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    The Importance of Diversity in Neural Network Ensembles: An Empirical Investigation (2007). Conference paper (Refereed)
    Abstract [en]

    When designing ensembles, it is almost an axiom that the base classifiers must be diverse in order for the ensemble to generalize well. Unfortunately, there is no clear definition of the key term diversity, leading to several diversity measures and many, more or less ad hoc, methods for diversity creation in ensembles. In addition, no specific diversity measure has been shown to have a high correlation with test set accuracy. The purpose of this paper is to empirically evaluate ten different diversity measures, using neural network ensembles and 11 publicly available data sets. The main result is that all diversity measures evaluated, in this study too, show low or very low correlation with test set accuracy. Having said that, two measures, double fault and difficulty, show slightly higher correlations compared to the other measures. The study furthermore shows that the correlation between accuracy measured on training or validation data and test set accuracy is also rather low. These results challenge ensemble design techniques where diversity is explicitly maximized or where ensemble accuracy on a hold-out set is used for optimization.
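Of the measures mentioned, double fault is the simplest to state: the fraction of instances that both classifiers in a pair get wrong (lower means more diverse). A sketch of the pairwise and ensemble-averaged versions, in a common formulation assumed here:

```python
import numpy as np

def double_fault(correct_i, correct_j):
    """Double-fault measure for a classifier pair: fraction of instances
    both classifiers misclassify. correct_*: boolean arrays, True where
    the classifier is correct."""
    ci, cj = np.asarray(correct_i), np.asarray(correct_j)
    return np.mean(~ci & ~cj)

def ensemble_double_fault(correct_matrix):
    """Average double-fault over all classifier pairs.
    correct_matrix: (n_classifiers, n_instances) boolean."""
    m = np.asarray(correct_matrix)
    pairs = [(i, j) for i in range(len(m)) for j in range(i + 1, len(m))]
    return float(np.mean([double_fault(m[i], m[j]) for i, j in pairs]))
```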

  • 14.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Norinder, Ulf
    Evaluating Ensembles on QSAR Classification (2009). Conference paper (Refereed)
    Abstract [en]

    Novel, often quite technical, algorithms for ensembling artificial neural networks are constantly suggested. Naturally, when presenting a novel algorithm, the authors, at least implicitly, claim that their algorithm, in some aspect, represents the state-of-the-art. Obviously, the most important criterion is predictive performance, normally measured using either accuracy or area under the ROC-curve (AUC). This paper presents a study where the predictive performance of two widely acknowledged ensemble techniques, GASEN and NegBagg, is compared to more straightforward alternatives like bagging. The somewhat surprising result of the experimentation, using, in total, 32 publicly available data sets from the medical domain, was that both GASEN and NegBagg were clearly outperformed by several of the straightforward techniques. One particularly striking result was that not applying the GASEN technique, i.e., ensembling all available networks instead of using the subset suggested by GASEN, turned out to produce more accurate ensembles.

  • 15.
    Johansson, Ulf
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Löfström, Tuve
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Sundell, Håkan
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Linnusson, Henrik
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Gidenstam, Anders
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Boström, Henrik
    School of Information and Communication Technology, Royal Institute of Technology, Sweden.
    Venn predictors for well-calibrated probability estimation trees (2018). In: 7th Symposium on Conformal and Probabilistic Prediction and Applications: COPA 2018, 11-13 June 2018, Maastricht, The Netherlands / [ed] Alex J. Gammerman and Vladimir Vovk and Zhiyuan Luo and Evgueni N. Smirnov and Ralf L. M. Peeter, 2018, p. 3-14. Conference paper (Refereed)
    Abstract [en]

    Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. The standard solution is to employ an additional step, transforming the outputs from a classifier into probability estimates. In this paper, Venn predictors are compared to Platt scaling and isotonic regression, for the purpose of producing well-calibrated probabilistic predictions from decision trees. The empirical investigation, using 22 publicly available datasets, showed that the probability estimates from the Venn predictor were extremely well-calibrated. In fact, in a direct comparison using the accepted reliability metric, the Venn predictor estimates were the most exact on every data set.
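A minimal Venn predictor can be sketched as follows, using the simplest possible taxonomy (grouping calibration examples by their predicted label); this is an illustrative simplification under assumed conventions, not the paper's probability estimation trees:

```python
import numpy as np

def venn_probability(cal_preds, cal_labels, test_pred, n_classes=2):
    """Minimal Venn predictor, taxonomy = predicted label. For each
    tentative label y of the test object, the object joins the category of
    calibration examples sharing its prediction, and the empirical label
    frequencies of that category are recomputed. The per-class spread over
    the tentative labels gives a lower/upper probability interval."""
    cal_preds = np.asarray(cal_preds)
    cal_labels = np.asarray(cal_labels)
    in_cat = cal_preds == test_pred
    rows = []
    for y in range(n_classes):
        labels = np.append(cal_labels[in_cat], y)  # tentatively label the test object y
        rows.append(np.bincount(labels, minlength=n_classes) / len(labels))
    probs = np.array(rows)
    return [(probs[:, c].min(), probs[:, c].max()) for c in range(n_classes)]
```

The multiprobability output (one interval per class) is what makes Venn predictors automatically well-calibrated: the true probability is guaranteed to lie within the interval, in the long run.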

  • 16.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Locally Induced Predictive Models (2011). Conference paper (Refereed)
    Abstract [en]

    Most predictive modeling techniques utilize all available data to build global models. This is despite the well-known fact that for many problems, the targeted relationship varies greatly over the input space, thus suggesting that localized models may improve predictive performance. In this paper, we suggest and evaluate a technique inducing one predictive model for each test instance, using only neighboring instances. In the experimentation, several different variations of the suggested algorithm producing localized decision trees and neural network models are evaluated on 30 UCI data sets. The main result is that the suggested approach generally yields better predictive performance than global models built using all available training data. As a matter of fact, all techniques producing J48 trees obtained significantly higher accuracy and AUC, compared to the global J48 model. For RBF network models, with their inherent ability to use localized information, the suggested approach was only successful with regard to accuracy, while global RBF models had a better ranking ability, as seen by their generally higher AUCs.

  • 17.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Boström, Henrik
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Chipper: A Novel Algorithm for Concept Description (2008). Conference paper (Refereed)
    Abstract [en]

    In this paper, several demands placed on concept description algorithms are identified and discussed. The most important criterion is the ability to produce compact rule sets that, in a natural and accurate way, describe the most important relationships in the underlying domain. An algorithm based on the identified criteria is presented and evaluated. The algorithm, named Chipper, produces decision lists, where each rule covers a maximum number of remaining instances while meeting requested accuracy requirements. In the experiments, Chipper is evaluated on nine UCI data sets. The main result is that Chipper produces compact and understandable rule sets, clearly fulfilling the overall goal of concept description. In the experiments, Chipper's accuracy is similar to standard decision tree and rule induction algorithms, while rule sets have superior comprehensibility.
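A greatly simplified, assumed reading of the covering strategy (single-feature threshold rules, greedily maximizing coverage of remaining instances subject to an accuracy requirement) might look like this; it is a sketch, not the Chipper implementation:

```python
import numpy as np

def chipper_like(X, y, min_accuracy=0.8, max_rules=10):
    """Greedy decision-list induction: each rule is a single-feature
    threshold condition covering as many remaining instances as possible
    while its majority class meets the accuracy requirement; what is left
    falls through to a default rule."""
    X, y = np.asarray(X, float), np.asarray(y)
    remaining = np.ones(len(y), dtype=bool)
    rules = []
    for _ in range(max_rules):
        best = None
        for f in range(X.shape[1]):
            for t in np.unique(X[remaining, f]):
                for op in ('<=', '>'):
                    cond = X[:, f] <= t if op == '<=' else X[:, f] > t
                    cov = cond & remaining
                    if not cov.any():
                        continue
                    cls = int(np.bincount(y[cov]).argmax())
                    acc = np.mean(y[cov] == cls)
                    if acc >= min_accuracy and (best is None or cov.sum() > best[0]):
                        best = (cov.sum(), f, t, op, cls, cov)
        if best is None:
            break
        _, f, t, op, cls, cov = best
        rules.append((f, t, op, cls))
        remaining &= ~cov
        if not remaining.any():
            break
    default = int(np.bincount(y[remaining] if remaining.any() else y).argmax())
    return rules, default
```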

  • 18.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    One Tree to Explain Them All (2011). Conference paper (Refereed)
    Abstract [en]

    Random forest is an often used ensemble technique, renowned for its high predictive performance. Random forest models are, however, inherently opaque due to their sheer complexity, making human interpretation and analysis impossible. This paper presents a method of approximating the random forest with just one decision tree. The approach uses oracle coaching, a recently suggested technique where a weaker but transparent model is generated using combinations of regular training data and test data initially labeled by a strong classifier, called the oracle. In this study, the random forest plays the part of the oracle, while the transparent models are decision trees generated by either the standard tree inducer J48, or by evolving genetic programs. Evaluation on 30 data sets from the UCI repository shows that oracle coaching significantly improves both accuracy and area under ROC curve, compared to using training data only. As a matter of fact, resulting single tree models are as accurate as the random forest, on the specific test instances. Most importantly, this is not achieved by inducing or evolving huge trees having perfect fidelity; a large majority of all trees are instead rather compact and clearly comprehensible. The experiments also show that the evolution outperformed J48, with regard to accuracy, but that this came at the expense of slightly larger trees.

  • 19.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Oracle Coached Decision Trees and Lists (2010). Conference paper (Refereed)
    Abstract [en]

    This paper introduces a novel method for obtaining increased predictive performance from transparent models in situations where production input vectors are available when building the model. First, labeled training data is used to build a powerful opaque model, called an oracle. Second, the oracle is applied to production instances, generating predicted target values, which are used as labels. Finally, these newly labeled instances are utilized, in different combinations with normal training data, when inducing a transparent model. Experimental results, on 26 UCI data sets, show that the use of oracle coaches significantly improves predictive performance, compared to standard model induction. Most importantly, both accuracy and AUC results are robust over all combinations of opaque and transparent models evaluated. This study thus implies that the straightforward procedure of using a coaching oracle, which can be used with arbitrary classifiers, yields significantly better predictive performance at a low computational cost.
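The three-step procedure can be sketched generically; `oracle` and `transparent` stand for any pair of models with scikit-learn-style `fit`/`predict` methods (an interface assumption for illustration, not the paper's Weka-based setup):

```python
import numpy as np

def oracle_coach(oracle, transparent, X_train, y_train, X_prod):
    """Oracle coaching sketch: a strong 'oracle' model labels the production
    instances; the transparent model is then trained on the training data
    plus the oracle-labeled production data, and applied to production."""
    oracle.fit(X_train, y_train)
    pseudo_labels = oracle.predict(X_prod)
    X_all = np.vstack([X_train, X_prod])
    y_all = np.concatenate([y_train, pseudo_labels])
    transparent.fit(X_all, y_all)
    return transparent.predict(X_prod)
```

In practice the oracle might be a random forest and the transparent model a decision tree or rule inducer, as in the evaluation above.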

  • 20.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    König, Rikard
    University of Borås, School of Business and IT.
    Using Genetic Programming to Obtain Implicit Diversity (2009). Conference paper (Refereed)
    Abstract [en]

    When performing predictive data mining, the use of ensembles is known to increase prediction accuracy, compared to single models. To obtain this higher accuracy, ensembles should be built from base classifiers that are both accurate and diverse. The question of how to balance these two properties in order to maximize ensemble accuracy is, however, far from solved and many different techniques for obtaining ensemble diversity exist. One such technique is bagging, where implicit diversity is introduced by training base classifiers on different subsets of available data instances, thus resulting in less accurate, but diverse base classifiers. In this paper, genetic programming is used as an alternative method to obtain implicit diversity in ensembles by evolving accurate, but different base classifiers in the form of decision trees, thus exploiting the inherent inconsistency of genetic programming. The experiments show that the GP approach outperforms standard bagging of decision trees, obtaining significantly higher ensemble accuracy over 25 UCI datasets. This superior performance stems from base classifiers having both higher average accuracy and more diversity. Implicitly introducing diversity using GP thus works very well, since evolved base classifiers tend to be highly accurate and diverse.

  • 21.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Löfström, Tuwe
    University of Borås, School of Business and IT.
    Boström, Henrik
    Obtaining accurate and comprehensible classifiers using oracle coaching (2012). In: Intelligent Data Analysis, ISSN 1088-467X, E-ISSN 1571-4128, Vol. 16, no. 2, p. 247-263. Article in journal (Refereed)
    Abstract [en]

    While ensemble classifiers often reach high levels of predictive performance, the resulting models are opaque and hence do not allow direct interpretation. When employing methods that do generate transparent models, predictive performance typically has to be sacrificed. This paper presents a method of improving predictive performance of transparent models in the very common situation where instances to be classified, i.e., the production data, are known at the time of model building. This approach, named oracle coaching, employs a strong classifier, called an oracle, to guide the generation of a weaker, but transparent model. This is accomplished by using the oracle to predict class labels for the production data, and then applying the weaker method on this data, possibly in conjunction with the original training set. Evaluation on 30 data sets from the UCI repository shows that oracle coaching significantly improves predictive performance, measured by both accuracy and area under ROC curve, compared to using training data only. This result is shown to be robust for a variety of methods for generating the oracles and transparent models. More specifically, random forests and bagged radial basis function networks are used as oracles, while J48 and JRip are used for generating transparent models. The evaluation further shows that significantly better results are obtained when using the oracle-classified production data together with the original training data, instead of using only oracle data. An analysis of the fidelity of the transparent models to the oracles shows that performance gains can be expected from increasing oracle performance rather than from increasing fidelity. Finally, it is shown that further performance gains can be achieved by adjusting the relative weights of training data and oracle data.

  • 22.
    Johansson, Ulf
    et al.
    University of Borås, School of Business and IT.
    Sönströd, Cecilia
    University of Borås, School of Business and IT.
    Norinder, Ulf
    Boström, Henrik
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Using Feature Selection with Bagging and Rule Extraction in Drug Discovery2010Conference paper (Refereed)
    Abstract [en]

    This paper investigates different ways of combining feature selection with bagging and rule extraction in predictive modeling. Experiments on a large number of data sets from the medicinal chemistry domain, using standard algorithms implemented in the Weka data mining workbench, show that feature selection can lead to significantly improved predictive performance. When combining feature selection with bagging, employing the feature selection on each bootstrap obtains the best result. When using decision trees for rule extraction, the effect of feature selection can actually be detrimental, unless the transductive approach oracle coaching is also used. However, employing oracle coaching will lead to significantly improved performance, and the best results are obtained when performing feature selection before training the opaque model. The overall conclusion is that it can make a substantial difference for the predictive performance exactly how feature selection is used in conjunction with other techniques.

  • 23.
    König, Rikard
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Niklasson, Lars
    Improving GP Classification Performance by Injection of Decision Trees2010Conference paper (Refereed)
    Abstract [en]

    This paper presents a novel hybrid method combining genetic programming and decision tree learning. The method starts by estimating a benchmark level of reasonable accuracy, based on decision tree performance on bootstrap samples of the training set. Next, a normal GP evolution is started with the aim of producing an accurate GP. At even intervals, the best GP in the population is evaluated against the accuracy benchmark. If the GP has higher accuracy than the benchmark, the evolution continues normally until the maximum number of generations is reached. If the accuracy is lower than the benchmark, two things happen. First, the fitness function is modified to allow larger GPs, able to represent more complex models. Secondly, a decision tree with increased size and trained on a bootstrap of the training data is injected into the population. The experiments show that the hybrid solution of injecting decision trees into a GP population gives synergetic effects producing results that are better than using either technique separately. The results, from 18 UCI data sets, show that the proposed method clearly outperforms normal GP, and is significantly better than the standard decision tree algorithm.

  • 24.
    Linusson, Henrik
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Efficiency Comparison of Unstable Transductive and Inductive Conformal Classifiers2014Conference paper (Refereed)
    Abstract [en]

    In the conformal prediction literature, it appears axiomatic that transductive conformal classifiers possess a higher predictive efficiency than inductive conformal classifiers; however, this depends on whether or not the nonconformity function tends to overfit misclassified test examples. With the conformal prediction framework’s increasing popularity, it thus becomes necessary to clarify the settings in which this claim holds true. In this paper, the efficiency of transductive conformal classifiers based on decision tree, random forest and support vector machine classification models is compared to the efficiency of corresponding inductive conformal classifiers. The results show that the efficiency of conformal classifiers based on standard decision trees or random forests is substantially improved when used in the inductive mode, while conformal classifiers based on support vector machines are more efficient in the transductive mode. In addition, an analysis is presented that discusses the effects of calibration set size on inductive conformal classifier efficiency.
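
    The inductive procedure compared in this abstract can be illustrated compactly: train an underlying model on a proper training set, score a held-out calibration set with a nonconformity function, and output every label whose conformal p-value exceeds the significance level. The nearest-centroid model and one-dimensional Gaussian data below are illustrative assumptions only, not the experimental setup of the paper.

    ```python
    import random

    random.seed(0)

    # Toy data: two 1-D Gaussian classes (an assumption for illustration).
    def sample(label, n):
        mu = 0.0 if label == 0 else 3.0
        return [(random.gauss(mu, 1.0), label) for _ in range(n)]

    train = sample(0, 100) + sample(1, 100)
    calib = sample(0, 50) + sample(1, 50)

    # Underlying model: class centroids; nonconformity = distance to the
    # centroid of the tentative label minus distance to the nearest other one.
    centroids = {}
    for lbl in (0, 1):
        xs = [x for x, y in train if y == lbl]
        centroids[lbl] = sum(xs) / len(xs)

    def nonconformity(x, lbl):
        own = abs(x - centroids[lbl])
        other = min(abs(x - centroids[l]) for l in centroids if l != lbl)
        return own - other

    # Calibration scores are computed once: this is the inductive step.
    cal_scores = sorted(nonconformity(x, y) for x, y in calib)

    def predict_set(x, significance):
        """Return every label whose conformal p-value exceeds `significance`."""
        region = []
        n = len(cal_scores)
        for lbl in centroids:
            a = nonconformity(x, lbl)
            p = (sum(1 for s in cal_scores if s >= a) + 1) / (n + 1)
            if p > significance:
                region.append(lbl)
        return region

    test = sample(0, 200) + sample(1, 200)
    errors = sum(1 for x, y in test if y not in predict_set(x, 0.1))
    print(errors / len(test))  # empirical error rate; bounded by the significance level in expectation
    ```

    A transductive conformal classifier would instead recompute all scores for each test example and tentative label, which is what makes it so much more expensive than this split.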

  • 25.
    Linusson, Henrik
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Johansson, Ulf
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Boström, Henrik
    Löfström, Tuwe
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Reliable Confidence Predictions Using Conformal Prediction2016In: Lecture Notes in Computer Science, 2016, Vol. 9651, p. 77-88Conference paper (Refereed)
    Abstract [en]

    Conformal classifiers output confidence prediction regions, i.e., multi-valued predictions that are guaranteed to contain the true output value of each test pattern with some predefined probability. In order to fully utilize the predictions provided by a conformal classifier, it is essential that those predictions are reliable, i.e., that a user is able to assess the quality of the predictions made. Although conformal classifiers are statistically valid by default, the error probability of the output prediction regions depends on their size in such a way that smaller, and thus potentially more interesting, predictions are more likely to be incorrect. This paper proposes, and evaluates, a method for producing refined error probability estimates of prediction regions that takes their size into account. The end result is a binary conformal confidence predictor that is able to provide accurate error probability estimates for those prediction regions containing only a single class label.

  • 26.
    Linusson, Henrik
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Signed-Error Conformal Regression2014In: Advances in Knowledge Discovery and Data Mining 18th Pacific-Asia Conference, PAKDD 2014 Tainan, Taiwan, May 13-16, 2014 Proceedings, Part I, Springer , 2014, p. 224-236Conference paper (Refereed)
    Abstract [en]

    This paper suggests a modification of the Conformal Prediction framework for regression that will strengthen the associated guarantee of validity. We motivate the need for this modification and argue that our conformal regressors are more closely tied to the actual error distribution of the underlying model, thus allowing for more natural interpretations of the prediction intervals. In the experimentation, we provide an empirical comparison of our conformal regressors to traditional conformal regressors and show that the proposed modification results in more robust two-tailed predictions, and more efficient one-tailed predictions.
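
    The exact construction is not spelled out in the abstract; as a rough sketch of the general idea, signed residuals y - yhat (rather than absolute ones) can be calibrated so that the two interval tails are handled separately, which is what makes one-tailed predictions natural. The synthetic skewed data and the trivial underlying model below are assumptions for illustration.

    ```python
    import random

    random.seed(1)

    # Synthetic regression data with asymmetric noise (illustrative assumption).
    def sample(n):
        data = []
        for _ in range(n):
            x = random.uniform(0, 10)
            y = 2.0 * x + random.expovariate(1.0)  # noise skewed upwards
            data.append((x, y))
        return data

    train, calib, test = sample(200), sample(200), sample(200)

    # Underlying model: least-squares slope through the origin (kept trivial).
    slope = sum(x * y for x, y in train) / sum(x * x for x, y in train)
    predict = lambda x: slope * x

    # Signed calibration errors y - y_hat, instead of the usual |y - y_hat|.
    signed = sorted(y - predict(x) for x, y in calib)

    def quantile(scores, q):
        # Approximate conformal quantile over n + 1 points.
        k = min(len(scores) - 1, int(q * (len(scores) + 1)))
        return scores[k]

    def interval(x, significance):
        """Two-tailed interval with each tail calibrated on signed errors."""
        lo = predict(x) + quantile(signed, significance / 2)
        hi = predict(x) + quantile(signed, 1 - significance / 2)
        return lo, hi

    covered = sum(1 for x, y in test
                  if interval(x, 0.1)[0] <= y <= interval(x, 0.1)[1])
    print(covered / len(test))  # empirical coverage, near 1 - significance
    ```

    Because the noise is skewed, the interval sits asymmetrically around the point prediction; a one-tailed bound would simply use the full significance budget on a single quantile.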

  • 27.
    Linusson, Henrik
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Norinder, Ulf
    Swetox, Karolinska Institutet.
    Boström, Henrik
    Dept. of Computer Science and Informatics, Stockholm University.
    Johansson, Ulf
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Löfström, Tuve
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    On the Calibration of Aggregated Conformal Predictors2017In: Proceedings of Machine Learning Research, 2017Conference paper (Refereed)
    Abstract [en]

    Conformal prediction is a learning framework that produces models that associate with each of their predictions a measure of statistically valid confidence. These models are typically constructed on top of traditional machine learning algorithms. An important result of conformal prediction theory is that the models produced are provably valid under relatively weak assumptions; in particular, their validity is independent of the specific underlying learning algorithm on which they are based. Since validity is automatic, much research on conformal predictors has been focused on improving their informational and computational efficiency. As part of the efforts in constructing efficient conformal predictors, aggregated conformal predictors were developed, drawing inspiration from the field of classification and regression ensembles. Unlike early definitions of conformal prediction procedures, the validity of aggregated conformal predictors is not fully understood: while it has been shown that they might attain empirical exact validity under certain circumstances, their theoretical validity is conditional on additional assumptions that require further clarification. In this paper, we show why validity is not automatic for aggregated conformal predictors, and provide a revised definition of aggregated conformal predictors that gains approximate validity conditional on properties of the underlying learning algorithm.

  • 28.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Utilizing Diversity and Performance Measures for Ensemble Creation2009Licentiate thesis, monograph (Other academic)
    Abstract [en]

    An ensemble is a composite model, aggregating multiple base models into one predictive model. An ensemble prediction, consequently, is a function of all included base models. Both theory and a wealth of empirical studies have established that ensembles are generally more accurate than single predictive models. The main motivation for using ensembles is the fact that combining several models will eliminate uncorrelated base classifier errors. This reasoning, however, requires the base classifiers to commit their errors on different instances – clearly there is no point in combining identical models. Informally, the key term diversity means that the base classifiers commit their errors independently of each other. The problem addressed in this thesis is how to maximize ensemble performance by analyzing how diversity can be utilized when creating ensembles. A series of studies, addressing different facets of the question, is presented. The results show that ensemble accuracy and the diversity measure difficulty are the two individually best measures to use as optimization criterion when selecting ensemble members. However, the results further suggest that combinations of several measures are most often better as optimization criteria than single measures. A novel method to find a useful combination of measures is proposed in the end. Furthermore, the results show that it is very difficult to estimate predictive performance on unseen data based on results achieved with available data. Finally, it is also shown that implicit diversity achieved by varied ANN architecture or by using resampling of features is beneficial for ensemble performance.

  • 29.
    Löfström, Tuve
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Boström, Henrik
    Stockholm University, Department of Computer and Systems Sciences.
    Linusson, Henrik
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Johansson, Ulf
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Bias Reduction through Conditional Conformal Prediction2015In: Intelligent Data Analysis, ISSN 1088-467X, E-ISSN 1571-4128, Vol. 19, no 6, p. 1355-1375Article in journal (Refereed)
  • 30.
    Löfström, Tuve
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    Ensemble Member Selection Using Multi-Objective Optimization2009Conference paper (Refereed)
    Abstract [en]

    Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. Unfortunately, the problem of how to maximize ensemble accuracy is, especially for classification, far from solved. In essence, the key problem is to find a suitable criterion, typically based on training or selection set performance, highly correlated with ensemble accuracy on novel data. Several studies have, however, shown that it is difficult to come up with a single measure, such as ensemble or base classifier selection set accuracy, or some measure based on diversity, that is a good general predictor for ensemble test accuracy. This paper presents a novel technique that for each learning task searches for the most effective combination of given atomic measures, by means of a genetic algorithm. Ensembles built from either neural networks or random forests were empirically evaluated on 30 UCI datasets. The experimental results show that when using the generated combined optimization criteria to rank candidate ensembles, a higher test set accuracy for the top ranked ensemble was achieved, compared to using ensemble accuracy on selection data alone. Furthermore, when creating ensembles from a pool of neural networks, the use of the generated combined criteria was shown to generally outperform the use of estimated ensemble accuracy as the single optimization criterion.

  • 31.
    Löfström, Tuve
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    Comparing Methods for Generating Diverse Ensembles of Artificial Neural Networks2010Conference paper (Refereed)
    Abstract [en]

    It is well-known that ensemble performance relies heavily on sufficient diversity among the base classifiers. With this in mind, the strategy used to balance diversity and base classifier accuracy must be considered a key component of any ensemble algorithm. This study evaluates the predictive performance of neural network ensembles, specifically comparing straightforward techniques to more sophisticated ones. In particular, the sophisticated methods GASEN and NegBagg are compared to more straightforward methods, where each ensemble member is trained independently of the others. In the experimentation, using 31 publicly available data sets, the straightforward methods clearly outperformed the sophisticated methods, thus questioning the use of the more complex algorithms.

  • 32.
    Löfström, Tuve
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    University of Borås, School of Business and IT.
    Effective Utilization of Data in Inductive Conformal Prediction2013Conference paper (Refereed)
    Abstract [en]

    Conformal prediction is a new framework producing region predictions with a guaranteed error rate. Inductive conformal prediction (ICP) was designed to significantly reduce the computational cost associated with the original transductive online approach. The drawback of inductive conformal prediction is that it is not possible to use all data for training, since it sets aside some data as a separate calibration set. Recently, cross-conformal prediction (CCP) and bootstrap conformal prediction (BCP) were proposed to overcome that drawback of inductive conformal prediction. Unfortunately, CCP and BCP both need to build several models for the calibration, making them less attractive. In this study, focusing on bagged neural network ensembles as conformal predictors, ICP, CCP and BCP are compared to the very straightforward and cost-effective method of using the out-of-bag estimates for the necessary calibration. Experiments on 34 publicly available data sets conclusively show that the use of out-of-bag estimates produced the most efficient conformal predictors, making it the obvious preferred choice for ensembles in the conformal prediction framework.
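
    The out-of-bag idea from this abstract can be sketched with a toy bagged regressor (the abstract's experiments use neural network ensembles; the trivial linear base learner and synthetic data here are stand-ins): each training instance is scored only by ensemble members whose bootstrap sample excluded it, and those out-of-bag scores replace a separate calibration set.

    ```python
    import random

    random.seed(2)

    def sample(n):
        data = []
        for _ in range(n):
            x = random.uniform(0, 10)
            data.append((x, 3.0 * x + random.gauss(0.0, 1.0)))
        return data

    train, test = sample(300), sample(300)

    # Bagged ensemble of trivial base models: a least-squares line through
    # the origin per bootstrap (the base learner choice is an assumption).
    B = 25
    models, in_bag = [], []
    for _ in range(B):
        idx = [random.randrange(len(train)) for _ in range(len(train))]
        boot = [train[i] for i in idx]
        slope = sum(x * y for x, y in boot) / sum(x * x for x, y in boot)
        models.append(slope)
        in_bag.append(set(idx))

    def ensemble(x, members=None):
        ms = [m for b, m in enumerate(models) if members is None or b in members]
        return sum(m * x for m in ms) / len(ms)

    # Out-of-bag calibration: each training point is scored only by the
    # members that never saw it, so no data is set aside for calibration.
    oob_scores = []
    for i, (x, y) in enumerate(train):
        oob = {b for b in range(B) if i not in in_bag[b]}
        if oob:
            oob_scores.append(abs(y - ensemble(x, oob)))
    oob_scores.sort()

    def interval(x, significance):
        k = min(len(oob_scores) - 1,
                int((1 - significance) * (len(oob_scores) + 1)))
        w = oob_scores[k]
        return ensemble(x) - w, ensemble(x) + w

    covered = sum(1 for x, y in test
                  if interval(x, 0.1)[0] <= y <= interval(x, 0.1)[1])
    print(covered / len(test))  # empirical coverage, near 1 - significance
    ```

    Compared to CCP and BCP, nothing extra is trained here: the same bagged members that form the predictor also supply the calibration scores.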

  • 33.
    Löfström, Tuve
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    On the Use of Accuracy and Diversity Measures for Evaluating and Selecting Ensembles of Classifiers2008Conference paper (Refereed)
    Abstract [en]

    The test set accuracy for ensembles of classifiers selected based on single measures of accuracy and diversity as well as combinations of such measures is investigated. It is found that by combining measures, a higher test set accuracy may be obtained than by using any single accuracy or diversity measure. It is further investigated whether a multi-criteria search for an ensemble that maximizes both accuracy and diversity leads to more accurate ensembles than by optimizing a single criterion. The results indicate that it might be more beneficial to search for ensembles that are both accurate and diverse. Furthermore, the results show that diversity measures could compete with accuracy measures as selection criterion.

  • 34.
    Löfström, Tuve
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    The Problem with Ranking Ensembles Based on Training or Validation Performance2008In: Proceedings of the International Joint Conference on Neural Networks, IEEE Press , 2008Conference paper (Refereed)
    Abstract [en]

    The main purpose of this study was to determine whether it is possible to somehow use results on training or validation data to estimate ensemble performance on novel data. With the specific setup evaluated; i.e. using ensembles built from a pool of independently trained neural networks and targeting diversity only implicitly, the answer is a resounding no. Experimentation, using 13 UCI datasets, shows that there is in general nothing to gain in performance on novel data by choosing an ensemble based on any of the training measures evaluated here. This is despite the fact that the measures evaluated include all the most frequently used; i.e. ensemble training and validation accuracy, base classifier training and validation accuracy, ensemble training and validation AUC and two diversity measures. The main reason is that all ensembles tend to have quite similar performance, unless we deliberately lower the accuracy of the base classifiers. The key consequence is, of course, that a data miner can do no better than picking an ensemble at random. In addition, the results indicate that it is futile to look for an algorithm aimed at optimizing ensemble performance by somehow selecting a subset of available base classifiers.

  • 35.
    Löfström, Tuve
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Boström, Henrik
    Using Optimized Optimization Criteria in Ensemble Member Selection2009Conference paper (Refereed)
    Abstract [en]

    Both theory and a wealth of empirical studies have established that ensembles are more accurate than single predictive models. Unfortunately, the problem of how to maximize ensemble accuracy is, especially for classification, far from solved. This paper presents a novel technique, where genetic algorithms are used for combining several measurements into a complex criterion that is optimized separately for each dataset. The experimental results show that when using the generated combined optimization criteria to rank candidate ensembles, a higher test set accuracy for the top ranked ensemble was achieved compared to using other measures alone, e.g., estimated ensemble accuracy or the diversity measure difficulty.

  • 36.
    Löfström, Tuve
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Linusson, Henrik
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Sönströd, Cecilia
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Johansson, Ulf
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    System Health Monitoring using Conformal Anomaly Detection2015Report (Other (popular science, discussion, etc.))
  • 37.
    Löfström, Tuve
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Zhao, Jing
    University of Stockholm.
    Linusson, Henrik
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Jansson, Karl
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Predicting Adverse Drug Events with Confidence2015In: Thirteenth Scandinavian Conference on Artificial Intelligence / [ed] Sławomir Nowaczyk, IOS Press, 2015Conference paper (Refereed)
    Abstract [en]

    This study introduces the conformal prediction framework to the task of predicting the presence of adverse drug events in electronic health records with an associated measure of statistically valid confidence. The imbalanced nature of the problem was addressed both by evaluating different machine learning algorithms, and by comparing different types of conformal predictors. A novel solution was also evaluated, where different underlying models, each model optimized towards one particular class, were combined into a single conformal predictor. This novel solution proved to be superior to previously existing approaches.
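
    The class-conditional device alluded to here, a Mondrian (label-conditional) conformal predictor, calibrates each class against its own nonconformity scores, so the error guarantee holds per class instead of being dominated by the majority. The imbalanced one-dimensional problem and centroid-distance score below are assumptions for illustration, not the study's models.

    ```python
    import random

    random.seed(3)

    # Imbalanced toy problem: roughly 90 % majority class (an assumption).
    def sample(n):
        data = []
        for _ in range(n):
            lbl = 0 if random.random() < 0.9 else 1
            mu = 0.0 if lbl == 0 else 2.0
            data.append((random.gauss(mu, 1.0), lbl))
        return data

    train, calib, test = sample(400), sample(400), sample(1000)

    centroids = {}
    for l in (0, 1):
        xs = [x for x, y in train if y == l]
        centroids[l] = sum(xs) / len(xs)

    def score(x, lbl):
        return abs(x - centroids[lbl])

    # Mondrian / class-conditional calibration: one score list per class,
    # so minority-class errors are controlled separately.
    per_class = {l: sorted(score(x, y) for x, y in calib if y == l)
                 for l in (0, 1)}

    def predict_set(x, significance):
        region = []
        for l, scores in per_class.items():
            p = (sum(1 for s in scores if s >= score(x, l)) + 1) / (len(scores) + 1)
            if p > significance:
                region.append(l)
        return region

    for cls in (0, 1):
        pts = [(x, y) for x, y in test if y == cls]
        err = sum(1 for x, y in pts if y not in predict_set(x, 0.1)) / len(pts)
        print(cls, err)  # each class-wise error stays near the chosen level
    ```

    A standard (unconditional) conformal predictor pools all calibration scores, which on imbalanced data tends to push nearly all of its errors onto the minority class.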

  • 38.
    Löfström, Tuwe
    University of Borås, Faculty of Librarianship, Information, Education and IT. Stockholm University, Department of Computer and Systems Sciences.
    On Effectively Creating Ensembles of Classifiers: Studies on Creation Strategies, Diversity and Predicting with Confidence2015Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    An ensemble is a composite model, combining the predictions from several other models. Ensembles are known to be more accurate than single models. Diversity has been identified as an important factor in explaining the success of ensembles. In the context of classification, diversity has not been well defined, and several heuristic diversity measures have been proposed. The focus of this thesis is on how to create effective ensembles in the context of classification. Even though several effective ensemble algorithms have been proposed, there are still several open questions regarding the role diversity plays when creating an effective ensemble. Open questions relating to creating effective ensembles that are addressed include: what to optimize when trying to find an ensemble using a subset of models used by the original ensemble that is more effective than the original ensemble; how effective is it to search for such a sub-ensemble; how should the neural networks used in an ensemble be trained for the ensemble to be effective? The contributions of the thesis include several studies evaluating different ways to optimize which sub-ensemble would be most effective, including a novel approach using combinations of performance and diversity measures. The contributions of the initial studies presented in the thesis eventually resulted in an investigation of the underlying assumption motivating the search for more effective sub-ensembles. The evaluation concluded that even if several more effective sub-ensembles exist, it may not be possible to identify which sub-ensembles would be the most effective using any of the evaluated optimization measures. An investigation of the most effective ways to train neural networks to be used in ensembles was also performed. 
The conclusions are that effective ensembles can be obtained by training neural networks in a number of different ways, and that either high average individual accuracy or a high level of diversity can yield effective ensembles. Several findings regarding diversity and effective ensembles presented in the literature in recent years are also discussed and related to the results of the included studies. When creating confidence based predictors using conformal prediction, there are several open questions regarding how data should be utilized effectively when using ensembles. Open questions related to predicting with confidence that are addressed include: how can data be utilized effectively to achieve more efficient confidence based predictions using ensembles; how do problems with class imbalance affect the confidence based predictions when using conformal prediction? Contributions include two studies, where the first shows that the use of out-of-bag estimates when using bagging ensembles results in more effective conformal predictors, and the second shows that a conformal predictor conditioned on the class labels, to avoid a strong bias towards the majority class, is more effective on problems with class imbalance. The research method used is mainly inspired by the design science paradigm, which is manifested by the development and evaluation of artifacts.

  • 39.
    Löfström, Tuwe
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Johansson, Ulf
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Balkow, Jenny
    Sundell, Håkan
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    A data-driven approach to online fitting services2018In: Data Science and Knowledge Engineering for Sensing Decision Support / [ed] Jun Liu (Ulster University, UK), Jie Lu (University of Technology Sydney, Australia), Yang Xu (Southwest Jiaotong University, China), Luis Martinez (University of Jaén, Spain) and Etienne E Kerre (University of Ghent, Belgium), 2018, p. 1559-1566Conference paper (Refereed)
    Abstract [en]

    Being able to accurately predict several attributes related to size is vital for services supporting online fitting. In this paper, we investigate a data-driven approach, while comparing two different supervised modeling techniques for predictive regression: standard multiple linear regression and neural networks. Using a fairly large, publicly available, data set of high quality, the main results are somewhat discouraging. Specifically, it is questionable whether key attributes like sleeve length, neck size, waist and chest can be modeled accurately enough using easily accessible input variables such as sex, weight and height. This is despite the fact that several services online offer exactly this functionality. For this specific task, the results show that standard linear regression was as accurate as the potentially more powerful neural networks. Most importantly, comparing the predictions to reasonable levels for acceptable errors, it was found that an overwhelming majority of all instances had at least one attribute with an unacceptably high prediction error. In fact, if requiring that all variables are predicted with an acceptable accuracy, less than 5% of all instances met that criterion. Specifically, for females, the success rate was as low as 1.8%.

  • 40.
    Radon, Anita
    et al.
    University of Borås, Faculty of Textiles, Engineering and Business.
    Johansson, Pia
    University of Borås, Faculty of Textiles, Engineering and Business.
    Sundström, Malin
    University of Borås, Faculty of Textiles, Engineering and Business.
    Alm, Håkan
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Behre, Martin
    University of Borås, Faculty of Textiles, Engineering and Business.
    Göbel, Hannes
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Hallqvist, Carina
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Hernandez, Niina
    University of Borås, Faculty of Textiles, Engineering and Business.
    Hjelm-Lidholm, Sara
    University of Borås, Faculty of Textiles, Engineering and Business.
    König, Rikard
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Lindberg, Ulla
    University of Borås, Faculty of Textiles, Engineering and Business.
    Löfström, Tuwe
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Sundell, Håkan
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Wallström, Stavroula
    University of Borås, Faculty of Textiles, Engineering and Business.
    What happens when retail meets research?: Special session2016Conference paper (Other academic)
    Abstract [en]

    We are witnessing the beginning of a seismic shift in retail due to digitalization. However, what is meant by digitalization is less clear. Sometimes it is understood as a means for automatization, and sometimes it is regarded as equal to e-commerce; sometimes digitalization is considered to be both automatization and e-commerce through new technology. In recent years there has been an increase in Internet and mobile device usage within the retail sector, and e-commerce is growing, encompassing both large and small retailers. Digital tools, such as new applications, are developing rapidly in order to search for information about products based on price, health, environmental and ethical considerations, and also to facilitate payments. The fixed store settings are also changing due to digitalization, and at an overall level, digitalization will lead to existing business models being reviewed, challenged and ultimately changed. More specifically, digitalization has consequences for all parts of the physical stores, including customer interface, knowledge creation, sustainability performance and logistics. As with all major shifts, digitalization comprises both opportunities and challenges for retail firms and employees, and these need to be empirically studied and systematically analysed. The Swedish Institute for Innovative Retailing at the University of Borås is a research centre with the aim of identifying and analysing emerging trends that digitalization brings for the retail industry.

  • 41.
    Sundell, Håkan
    et al.
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Löfström, Tuve
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Johansson, Ulf
    University of Borås, Faculty of Librarianship, Information, Education and IT.
    Explorative multi-objective optimization of marketing campaigns for the fashion retail industry2018In: Data Science and Knowledge Engineering for Sensing Decision Support / [ed] Jun Liu, Jie Lu, Yang Xu, Luis Martinez and Etienne E Kerre, 2018, p. 1551-1558Conference paper (Refereed)
    Abstract [en]

    We show how an exploratory tool for association rule mining can be used for efficient multi-objective optimization of marketing campaigns for companies within the fashion retail industry. We have earlier designed and implemented a novel digital tool for mining of association rules from given basket data. The tool supports efficient finding of frequent itemsets over multiple hierarchies and interactive visualization of corresponding association rules together with numerical attributes. Normally when optimizing a marketing campaign, factors that cause an increased level of activation among the recipients could in fact reduce the profit, i.e., these factors need to be balanced, rather than optimized individually. Using the tool we can identify important factors that influence the search for an optimal campaign in respect to both activation and profit. We show empirical results from a real-world case-study using campaign data from a well-established company within the fashion retail industry, demonstrating how activation and profit can be simultaneously targeted, using computer-generated algorithms as well as human-controlled visualization.
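
    The support and confidence computations underlying such a rule-mining tool can be sketched in a few lines (the basket data below is invented for illustration; the actual tool additionally handles product hierarchies and numerical attributes):

    ```python
    from itertools import combinations

    # Toy basket data (illustrative assumption, not the case-study data).
    baskets = [
        {"shirt", "jeans"},
        {"shirt", "jeans", "belt"},
        {"shirt", "belt"},
        {"jeans", "belt"},
        {"shirt", "jeans"},
    ]

    def support(itemset):
        # Fraction of baskets containing every item in the itemset.
        return sum(1 for b in baskets if itemset <= b) / len(baskets)

    # Frequent itemsets above a minimum support threshold.
    items = sorted({i for b in baskets for i in b})
    frequent = [
        frozenset(c)
        for size in (1, 2)
        for c in combinations(items, size)
        if support(set(c)) >= 0.4
    ]

    def confidence(lhs, rhs):
        """Confidence of the association rule lhs -> rhs."""
        return support(lhs | rhs) / support(lhs)

    print(confidence({"shirt"}, {"jeans"}))  # support 0.6 / support 0.8 = 0.75
    ```

    For a campaign-oriented analysis, support and confidence would be only two of several objectives; numerical attributes such as activation and profit would be aggregated per rule in the same pass and balanced against each other rather than optimized individually.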

  • 42.
    Sönströd, Cecilia
    et al.
    University of Borås, School of Business and IT.
    Johansson, Ulf
    University of Borås, School of Business and IT.
    Löfström, Tuve
    University of Borås, School of Business and IT.
    Evaluating Algorithms for Concept Description2009Conference paper (Refereed)
    Abstract [en]

    When performing concept description, models need to be evaluated both on accuracy and comprehensibility. A comprehensible concept description model should present the most important relationships in the data in an accurate and understandable way. Two natural representations for this are decision trees and decision lists. In this study, the two decision list algorithms RIPPER and Chipper, and the decision tree algorithm C4.5, are evaluated for concept description, using publicly available datasets. The experiments show that C4.5 performs very well regarding accuracy and brevity, i.e. the ability to classify instances with few tests, but also produces large models that are hard to survey and contain many extremely specific rules, thus not being good concept descriptions. The decision list algorithms perform reasonably well on accuracy, and are mostly able to produce small models with relatively good predictive performance. Regarding brevity, Chipper is better than RIPPER, using on average fewer conditions to classify an instance. RIPPER, on the other hand, excels in relevance, i.e. the ability to capture a large number of instances with every rule.
