Performance, Scalability, and Reliability (PSR) challenges, metrics and tools for web testing: A Case Study
Independent thesis, Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
Context. Testing of web applications is an important task, as it ensures their functionality and quality. The quality of a web application is assessed through non-functional testing, which covers many quality attributes such as performance, scalability, reliability, usability, accessibility, and security. Among these, performance, scalability, and reliability (PSR) are the most important and most commonly considered attributes in practice. However, very few empirical studies have been conducted on these three attributes.
Objectives. The purpose of this study is to identify the metrics and tools available for testing these three attributes, and to identify the challenges faced while testing them, both in the literature and in practice.
Methods. In this research, a systematic mapping study was conducted to collect information regarding the metrics, tools, challenges, and mitigations related to the PSR attributes. The required information was gathered by searching five scientific databases. We also conducted a case study to identify the metrics, tools, and challenges of the PSR attributes in practice. The case study was conducted at Ericsson, India, where eight subjects were interviewed. Four subjects working at other companies in India were also interviewed to validate the results obtained from the case company. In addition, a few documents from previous projects at the case company were collected for data triangulation.
Results. A total of 69 metrics, 54 tools, and 18 challenges were identified from the systematic mapping study, and 30 metrics, 18 tools, and 13 challenges were identified from the interviews. Data was also collected through documents, from which a total of 16 metrics, 4 tools, and 3 challenges were identified. Based on the analysis of this data, we compiled a consolidated list of metrics, tools, and challenges.
Conclusions. We found that the metrics available in the literature overlap with the metrics used in practice. The tools found in the literature, however, overlap with practice only to some extent. The main reason for this deviation is the limitations identified in the existing tools, which led the case company to develop its own in-house tool. We also found that the challenges partially overlap between the state of the art and practice. We were unable to find mitigations for all of these challenges in the literature, so further research is needed. Among the PSR attributes, most of the literature addresses the performance attribute, and most interviewees were comfortable answering questions related to performance. We therefore conclude that there is a lack of empirical research on the scalability and reliability attributes. Our research deals with the PSR attributes in particular, and there is scope for further work in this area: the study could be extended to other quality attributes and conducted at a larger scale, considering a greater number of companies.
Place, publisher, year, edition, pages
2016, 142 p.
Web applications, Web testing, Performance, Scalability, Reliability, Quality
Identifiers: URN: urn:nbn:se:bth-12801; OAI: oai:DiVA.org:bth-12801; DiVA: diva2:945974
Ericsson Research and Development Department, Gurgaon
Subject / course
PA2534 Master's Thesis (120 credits) in Software Engineering
PAAXA Master of Science Programme in Software Engineering
2016-05-31, J1360, Blekinge Tekniska Högskola, 371 79, Karlskrona, Sweden, 11:00 (English)
Unterkalmsteiner, Michael, Post-doctor
Börstler, Jürgen, Professor