Digitala Vetenskapliga Arkivet

Evaluating the Quality of GenAI Applications in Software Engineering: A Multi-case Study
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0001-5949-1375
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0001-7526-3727
Örebro University.
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0002-3646-235X
2026 (English). In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616, Vol. 31, no 2, article id 29. Article in journal (Refereed). Published.
Abstract [en]

Context: Generative AI (GenAI) is increasingly adopted in software development for tasks such as document generation, data analysis, and code generation. However, evaluating the quality of GenAI applications is challenging, as traditional quality measurements may not be fully applicable.

Objective: In this study, we explore how practitioners evaluate the quality of GenAI applications and investigate quality evaluation techniques.

Method: We conducted a multi-case study in three industrial projects from software development companies. We examined four GenAI application domains: document generation, data analysis and insight generation, customer service, and code generation. Data were collected through three workshops and 23 semi-structured interviews with industrial practitioners.

Results: We identified fourteen GenAI use cases and 28 metrics currently used to evaluate the quality of GenAI applications' outputs. We synthesized the identified metrics' usage patterns and challenges based on the collected data.

Conclusions: This study presents practical insights into using metrics to measure GenAI-based system qualities in real industrial settings. Our findings indicate that practitioners use custom-built and context-specific metrics; combining these with academic metrics can strengthen GenAI system quality evaluation.
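
As a rough illustration of that conclusion, the sketch below is ours rather than the paper's: it combines a generic reference-based metric (token-overlap F1) with a custom, context-specific check (presence of required domain terms) into one weighted quality score for a single GenAI output. The rule, weight, and example strings are hypothetical.

    # Minimal sketch (not from the paper): blending an academic reference-based
    # metric with a custom, context-specific check for one GenAI output.
    # The required terms, weight, and example strings are hypothetical.

    def token_f1(candidate: str, reference: str) -> float:
        """Token-overlap F1 between a generated text and a reference text."""
        cand, ref = candidate.lower().split(), reference.lower().split()
        if not cand or not ref:
            return 0.0
        overlap = sum(min(cand.count(t), ref.count(t)) for t in set(cand))
        if overlap == 0:
            return 0.0
        precision, recall = overlap / len(cand), overlap / len(ref)
        return 2 * precision * recall / (precision + recall)

    def custom_rule_score(candidate: str, required_terms: list[str]) -> float:
        """Context-specific check: share of mandated domain terms that appear."""
        if not required_terms:
            return 1.0
        hits = sum(1 for term in required_terms if term.lower() in candidate.lower())
        return hits / len(required_terms)

    def combined_quality(candidate: str, reference: str,
                         required_terms: list[str], weight: float = 0.5) -> float:
        """Weighted blend of the academic metric and the custom metric."""
        return (weight * token_f1(candidate, reference)
                + (1 - weight) * custom_rule_score(candidate, required_terms))

    if __name__ == "__main__":
        generated = "The invoice total is 120 EUR, due within 30 days."
        reference = "Invoice total: 120 EUR, payment due in 30 days."
        terms = ["EUR", "30 days"]  # hypothetical domain requirements
        print(f"combined quality: {combined_quality(generated, reference, terms):.2f}")

In practice the custom check would encode whatever the project's domain experts actually require; the point is only that context-specific and academic metrics can be computed and weighted side by side.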

Place, publisher, year, edition, pages
Springer, 2026. Vol. 31, no 2, article id 29
Keywords [en]
GenAI, Generative artificial intelligence, Large language model, LLM, Metric, Quality evaluation
National Category
Software Engineering; Artificial Intelligence
Identifiers
URN: urn:nbn:se:bth-28954
DOI: 10.1007/s10664-025-10759-2
ISI: 001632325800004
Scopus ID: 2-s2.0-105024070431
OAI: oai:DiVA.org:bth-28954
DiVA, id: diva2:2018499
Part of project
SERT - Software Engineering ReThought, Knowledge Foundation
Funder
Knowledge Foundation, 20180010
Available from: 2025-12-03 Created: 2025-12-03 Last updated: 2026-01-05 Bibliographically approved
In thesis
1. Quality Evaluation of Generative AI Systems: Processes, Metrics, Methods, and Frameworks for Industrial Software Engineering
2026 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Generative Artificial Intelligence (GenAI) is being rapidly adopted in software engineering, introducing a paradigm shift toward human-AI co-creation. However, the non-deterministic, probabilistic, and often black-box nature of GenAI models presents challenges for traditional software quality assurance. Conventional verification and validation techniques are insufficient to handle outputs that are neither predictably correct nor incorrect, but rather stochastically plausible. This discrepancy creates an urgent need for practical processes, metrics, and new governance frameworks to evaluate and manage the quality of GenAI systems in industrial environments.

This thesis examines how industrial organizations adopt GenAI, identify metrics, and evaluate system qualities in alignment with ISO quality standards. Case studies were employed to explore real-world adoption processes, identify context-specific industrial metrics, and uncover practical insights within organizations. A snowballing literature review was conducted to systematically identify, categorize, and synthesize academic metrics for evaluating the output of GenAI systems. Finally, a controlled experiment was designed to quantitatively test the efficiency (e.g., E2E generation time) and effectiveness (e.g., accuracy) of GenAI agent choices.

The main contributions of this thesis are a synthesized actionable model and framework grounded in both industrial practice and quality standards. The first contribution is a four-stage adoption model, denoted the IMRM model (Innovate → considerations, Measure → metrics, Realize → values, Manage → improvements), that integrates early-stage risk assessment (e.g., legal, security, and licensing) and quality evaluation throughout GenAI adoption and usage. The second contribution presents a detailed framework that connects risks and metrics to concrete decision support, justifying the business value (e.g., quality gates) and technical trade-offs of GenAI solutions. The third contribution provides a structured mapping of GenAI quality to ISO/IEC 25010, 25023, and 25059 characteristics, attempting to ground practical evaluation needs within a standardized vocabulary. This thesis concludes that a structured quality evaluation process, which prioritizes risks and context, is a valuable approach intended to support building the business confidence required to leverage GenAI for efficient and effective software engineering in industry.
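
As a rough picture of how efficiency (E2E generation time) and effectiveness (accuracy) could be measured together in such an experiment, the sketch below is our illustration rather than the thesis's experimental code; the generate function is a hypothetical stand-in for whichever GenAI agent configuration is under test.

    # Minimal sketch (not the thesis's code): measuring efficiency (end-to-end
    # generation time) and effectiveness (accuracy) for a GenAI agent choice.
    # `generate` is a hypothetical placeholder for the agent under evaluation.
    import time

    def generate(prompt: str) -> str:
        """Placeholder for a call to the GenAI agent being evaluated."""
        return "42"  # hypothetical canned answer, for illustration only

    def evaluate(tasks: list[tuple[str, str]]) -> tuple[float, float]:
        """Return (mean E2E generation time in seconds, accuracy) over tasks."""
        times, correct = [], 0
        for prompt, expected in tasks:
            start = time.perf_counter()
            output = generate(prompt)
            times.append(time.perf_counter() - start)
            correct += int(output.strip() == expected.strip())
        return sum(times) / len(times), correct / len(tasks)

    if __name__ == "__main__":
        sample_tasks = [("What is 6 * 7?", "42"), ("What is 40 + 2?", "42")]
        mean_time, accuracy = evaluate(sample_tasks)
        print(f"mean E2E time: {mean_time:.4f} s, accuracy: {accuracy:.2f}")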

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2026. p. 204
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 2026:01
Keywords
Quality Evaluation, Metrics, Artificial Intelligence, AI, Generative AI, Empirical Software Engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-28958 (URN)
978-91-7295-518-9 (ISBN)
Public defence
2026-01-29, J1630, Karlskrona, 13:00 (English)
Available from: 2025-12-08 Created: 2025-12-03 Last updated: 2026-01-29 Bibliographically approved

Open Access in DiVA

fulltext (2196 kB), 51 downloads
File information
File name: FULLTEXT02.pdf
File size: 2196 kB
Checksum (SHA-512): a3d91f9a430f194bd8892d334031c8311c38b7349c62d7ef298461001d61c2010864a1bcf12cc178ea4a31007fdf938def7d408e8de0006af963ad5ae0989af7
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Search in DiVA

By author/editor
Yu, Liang; Alégroth, Emil; Gorschek, Tony
By organisation
Department of Software Engineering
In the same journal
Empirical Software Engineering
Software Engineering; Artificial Intelligence

