A Quality Criteria Based Evaluation of Topic Models
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
2016 (English). Independent thesis, advanced level (degree of Master, Two Years), 20 credits / 30 HE credits. Student thesis.
Abstract [en]

Context. Software testing is the process of executing a software product or system in order to find bugs or issues that might otherwise degrade its performance. Testing is usually based on pre-defined test cases. A test case is a set of conditions used by software testers to determine whether a system under test operates as intended. In many situations, however, there are so many test cases that executing every one of them is practically impossible, due to constraints on time and resources. Testers must therefore prioritize the functions to be tested. This is where topic models can be exploited. Topic models are unsupervised machine learning algorithms that explore large corpora of data and classify them by identifying the hidden thematic structure in those corpora. Using topic models for test case prioritization can save considerable time and resources.
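As a toy sketch only: real topic models learn their topics unsupervised from the corpus, whereas the hand-written word sets below are purely an assumption for illustration, showing how learned topics could be used to group test case descriptions for prioritization.

```python
# Hypothetical "topics" (hand-written here; a topic model would learn
# comparable word distributions automatically from the corpus).
topics = {
    "login": {"user", "password", "login", "session"},
    "payment": {"invoice", "payment", "card", "refund"},
}

def assign_topic(description):
    """Match a test case description to the topic sharing the most words."""
    words = set(description.lower().split())
    return max(topics, key=lambda name: len(words & topics[name]))

print(assign_topic("verify user login with wrong password"))  # login
```

Grouping test cases by topic in this way lets a tester execute one representative case per theme first, rather than running the full suite.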

Objectives. In our study, we provide an overview of the research that has been done on topic models. We aim to uncover the quality criteria, evaluation methods, and metrics that can be used to evaluate topic models. Furthermore, we compare the performance of two topic models, optimized for different quality criteria, on a particular interpretability task, and thereby determine which topic model produces the best results for that task.

Methods. A systematic mapping study was performed to gain an overview of the previous research on the evaluation of topic models. The mapping study focused on identifying the quality criteria, evaluation methods, and metrics that have been used to evaluate topic models. The results of the mapping study were then used to identify the most used quality criteria, and the evaluation methods related to those criteria were used to generate two optimized topic models. An experiment was conducted in which the topics generated by the two models were presented to a group of 20 subjects, with a task designed to evaluate the interpretability of the generated topics. The performance of the two topic models was then compared using Precision, Recall, and the F-measure.
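As a generic illustration of the comparison metrics named above (a sketch, not the thesis's evaluation code), Precision, Recall, and the F-measure can be computed from the set of items a model retrieves and the set of items actually relevant:

```python
def precision_recall_f1(retrieved, relevant):
    """Return (precision, recall, F-measure) for two sets of items."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)                      # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Example: 4 items retrieved, 5 relevant, 3 in common.
p, r, f = precision_recall_f1({"a", "b", "c", "d"}, {"a", "b", "c", "e", "f"})
# p = 0.75, r = 0.6, f ≈ 0.667
```

Because the F-measure is the harmonic mean of Precision and Recall, it rewards a model only when both are reasonably high, which is why it is used here as the summary comparison metric.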

Results. Based on the results of the mapping study, Latent Dirichlet Allocation (LDA) was found to be the most widely used topic model. Two LDA topic models were created: one optimized for the quality criterion Generalizability (TG) and one for Interpretability (TI), using the Perplexity and Point-wise Mutual Information (PMI) measures, respectively. For the selected metrics, TI showed better performance than TG in Precision and F-measure, while the performance of the two models was comparable in Recall. The total run time of TI was also significantly higher than that of TG: 46 hours and 35 minutes for TI, compared to 3 hours and 30 minutes for TG.
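The PMI measure used to optimize TI is commonly estimated from document co-occurrence counts; the sketch below shows that common formulation (assumed here for illustration, not taken from the thesis), where a topic's coherence is the average PMI over all pairs of its top words:

```python
import math
from itertools import combinations

def pmi(w1, w2, docs, eps=1e-12):
    """Point-wise mutual information of two words, estimated over docs."""
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum(w1 in d and w2 in d for d in docs) / n
    # eps guards against log(0) when a pair never co-occurs.
    return math.log((p12 + eps) / (p1 * p2 + eps))

def topic_pmi(top_words, docs):
    """Average PMI over all pairs of a topic's top words."""
    pairs = list(combinations(top_words, 2))
    return sum(pmi(a, b, docs) for a, b in pairs) / len(pairs)

# Tiny corpus of word sets: "test" and "case" always co-occur,
# so their PMI is positive (log 1.5 here).
docs = [{"test", "case", "bug"}, {"test", "case"}, {"topic", "model"}]
score = topic_pmi(["test", "case"], docs)
```

A topic whose top words frequently co-occur in the corpus scores higher, which is why PMI serves as a proxy for human interpretability.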

Conclusions. Looking at the F-measure, it can be concluded that the interpretability topic model (TI) performs better than the generalizability topic model (TG). However, while TI performed better in Precision, Recall was comparable. Furthermore, the computational cost of creating TI is significantly higher than that of TG. Hence, we conclude that the choice of topic model optimization should be based on the aim of the task the model is used for. If the task requires high interpretability and Precision is important, such as for prioritizing test cases based on content, then TI is the right choice, provided time is not a limiting factor. However, if the task only aims at generating topics that provide a basic understanding of the concepts (i.e., interpretability is not a high priority), then TG is the more suitable choice, particularly for time-critical tasks.

Place, publisher, year, edition, pages
2016, 67 p.
Keyword [en]
Topic models, Topic interpretability, Test cases, Latent Dirichlet Allocation, Topic model optimization
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-13274
OAI: oai:DiVA.org:bth-13274
DiVA: diva2:1040762
Subject / course
PA2534 Master's Thesis (120 credits) in Software Engineering
Educational program
PAAXA Master of Science Programme in Software Engineering
Presentation
2016-10-26, J1650, Blekinge Tekniska Högskola, Karlskrona, 15:00 (English)
Supervisors
Examiners
Available from: 2016-10-31. Created: 2016-10-28. Last updated: 2016-10-31. Bibliographically approved.

Open Access in DiVA

fulltext (1610 kB), 40 downloads
File information
File name: FULLTEXT02.pdf
File size: 1610 kB
Checksum (SHA-512): bfd06441ebaaccec52819cda3af0b1f00a04686a7f8109345f0aeac3d673c85506fb465b04c4a041132b93f0fa8bbca4490d2bb4562e30ca386ac30a956a174c
Type: fulltext
Mimetype: application/pdf

By author/editor: Sathi, Veer Reddy; Ramanujapura, Jai Simha

