Digitala Vetenskapliga Arkivet

Towards Reliable Eager Test Detection: Practitioner Validation and a Tool Prototype
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0003-0066-1792
2025 (English) Conference paper, Published paper (Refereed)
Abstract [en]

Context: Existing tools for detecting eager tests produce many false positives, rendering them unreliable for practitioners. To address this, our previous work introduced a novel definition of the Eager Test smell and a heuristic for more effective identification. Comparing the heuristic’s results with existing detection rules revealed eight test patterns where the rules misclassified the presence or absence of eager tests.
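To make the smell concrete, here is a minimal, hypothetical JUnit 5 example (the class under test and the scenario are illustrative, not drawn from the paper or its dataset, and whether such a case counts as an eager test under the paper's refined definition may differ): a single test method that exercises and verifies several distinct methods of the object under test, the classic Eager Test pattern.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertFalse;

    import java.util.ArrayDeque;

    import org.junit.jupiter.api.Test;

    class ArrayDequeEagerTest {

        // Eager test: one test method checks peek(), size(), pop(), and isEmpty()
        // of the object under test instead of focusing on a single behaviour.
        @Test
        void testDeque() {
            ArrayDeque<String> deque = new ArrayDeque<>();
            deque.push("a");
            deque.push("b");

            assertEquals("b", deque.peek()); // verifies peek()
            assertEquals(2, deque.size());   // verifies size()
            assertEquals("b", deque.pop());  // verifies pop()
            assertFalse(deque.isEmpty());    // verifies isEmpty()
        }
    }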

Objective: We aim to gather practitioners’ feedback on our heuristic’s assessment of these eight test patterns and operationalize the heuristic in a tool we named EagerID.

Method: We conducted a survey to collect practitioners’ feedback on the eight identified test patterns and developed EagerID to detect eager tests in Java unit test cases using JUnit. We also conducted a preliminary evaluation of EagerID on 300 test cases that had been manually analyzed in our previous study.

Results: Our survey received 23 responses from practitioners with a wide range of experience. We found that most practitioners agreed with our heuristic’s assessment. Furthermore, the preliminary evaluation of EagerID yielded high precision (100%), recall (91.76%), and F-score (95.70%).
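For reference, the reported F-score is consistent with the standard harmonic mean of precision (P) and recall (R); this is a routine arithmetic check, not additional data from the study:

    F = 2 * P * R / (P + R) = (2 * 1.000 * 0.9176) / (1.000 + 0.9176) ≈ 0.9570 = 95.70%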

Conclusion: Our survey findings highlight the practical relevance of the heuristic. The preliminary evaluation of the EagerID tool confirmed the heuristic’s potential for automation. These findings suggest that the heuristic provides a solid foundation for both manual and automated detection.

Place, publisher, year, edition, pages
2025.
Keywords [en]
Software testing, Test case quality, Test suite quality, Quality assurance, Test smells, Unit testing, Eager Test, Detection tool, Java, JUnit
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
URN: urn:nbn:se:bth-27674
OAI: oai:DiVA.org:bth-27674
DiVA, id: diva2:1948504
Conference
8th Workshop on Validation, Analysis and Evolution of Software Tests, Montréal, Canada, March 04, 2025
Part of project
GIST – Gaining actionable Insights from Software Testing, Knowledge Foundation
SERT – Software Engineering ReThought, Knowledge Foundation
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Knowledge Foundation, 20220235
Knowledge Foundation, 20180010
Available from: 2025-03-31 Created: 2025-03-31 Last updated: 2025-04-04 Bibliographically approved
In thesis
1. Characterizing and Assessing Test Case and Test Suite Quality
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Context: Test cases and test suites (TCS) are central to software testing. High-quality TCS are essential for boosting practitioners’ confidence in testing. However, the quality of a test suite (a collection of test cases) is not merely the sum of the quality of individual test cases, as suite-level factors must also be considered. Achieving high-quality TCS requires defining relevant quality attributes, establishing appropriate measures for their assessment, and determining their importance within different testing contexts.

Objective: This thesis aims to (1) provide a consolidated view of TCS quality in terms of quality attributes, quality measures, and context information, (2) determine the relative importance of the quality attributes in practice, and (3) develop a reliable approach for assessing a highly prioritized quality attribute identified by practitioners.

Method: We conducted an exploratory study and a tertiary literature review for the first objective, a personal opinion survey for the second, and a comparative experiment with a small-scale evaluation study for the third.

Results: We developed a comprehensive TCS quality model grounded in practitioner insights and existing literature. Based on the survey, maintainability emerged as a critical quality attribute where practitioners need further support. A well-known indicator of poor test design that can negatively impact test-case maintainability is the Eager Test smell, which is defined as “when a test method checks several methods of the object to be tested” or “when a test verifies too much functionality.” The results of existing detection tools for eager tests were found to be inconsistent and unreliable. To better support practitioners in assessing test case maintainability, we proposed a novel, unambiguous definition of the Eager Test smell, developed a heuristic to operationalize it, and implemented a detection tool to automate its identification in practice. Our systematic approach in the tertiary review also yielded valuable insights into constructing and validating automated search results using a quasi-gold standard. We generalized these insights into recommendations for enhancing the current search validation approach.
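As a generic illustration of why the smell matters for maintainability (again hypothetical, not an example taken from the thesis), a single eager test that verifies several deque operations at once can be split into focused test methods, each checking one behaviour, so that a failure points at a single piece of functionality. Whether a particular split satisfies the thesis's refined definition depends on that definition.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayDeque;

    import org.junit.jupiter.api.Test;

    class ArrayDequeFocusedTest {

        // Each test method verifies one behaviour of the object under test,
        // so a failing test localizes the problem to a single operation.
        @Test
        void peekReturnsMostRecentlyPushedElement() {
            ArrayDeque<String> deque = new ArrayDeque<>();
            deque.push("a");
            deque.push("b");
            assertEquals("b", deque.peek());
        }

        @Test
        void popRemovesAndReturnsMostRecentlyPushedElement() {
            ArrayDeque<String> deque = new ArrayDeque<>();
            deque.push("a");
            deque.push("b");
            assertEquals("b", deque.pop());
        }
    }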

Conclusions: The thesis makes three main contributions: (1) at the abstract level, a comprehensive quality model to help practitioners and researchers develop guidelines, templates, or tools for designing new test cases and test suites and assessing existing ones; (2) at the strategic level, identification of contextually important quality attributes; and (3) at the operational level, a refined definition of the Eager Test smell, a detection heuristic, and a tool prototype implementing the heuristic, advancing maintainability assessment in software testing.


Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2025. p. 245
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 2025:05
Keywords
Software testing, Test case quality, Test suite quality, Test smell, Eager Test
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-27676 (URN)
978-91-7295-501-1 (ISBN)
Public defence
2025-05-27, C413A, Karlskrona, 13:15 (English)
Opponent
Supervisors
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2025-04-04 Created: 2025-04-03 Last updated: 2025-04-30 Bibliographically approved

Open Access in DiVA

fulltext (237 kB)
File information
File name: FULLTEXT01.pdf
File size: 237 kB
Checksum SHA-512: e7f2f02c761a3b3bc654d386a1a44ef87087c5d373aeddb71c79593f94951e9de1eddf3e360f9654978e9ac3ebcccb37e75399b227dd386fd790fe9373aa01c8
Type: fulltext
Mimetype: application/pdf

By author/editor
Tran, Huynh Khanh Vi
By organisation
Department of Software Engineering
