Heuristic evaluation: Comparing ways of finding and reporting usability problems
University of Iceland.
Eidgenössische Technische Hochschule Zurich.
Reykjavik University.
2007 (English). In: Interacting with Computers, ISSN 0953-5438, Vol. 19, no. 2, pp. 225-240. Article in journal (Refereed). Published.
Abstract [en]

Research on heuristic evaluation in recent years has focused on improving its effectiveness and efficiency with respect to user testing. The aim of this paper is to refine a research agenda for comparing and contrasting evaluation methods. To reach this goal, a framework is presented to evaluate the effectiveness of different types of support for structured usability problem reporting. This paper reports on an empirical study of this framework that compares two sets of heuristics, Nielsen's heuristics and the cognitive principles of Gerhardt-Powals, and two media for reporting a usability problem, i.e. either a web tool or paper. The study found no significant differences between any of the four groups in effectiveness, efficiency or inter-evaluator reliability. A more significant contribution of this research is that the framework used for the experiments proved successful and, because of its thorough structure, should be reusable by other researchers.

Place, publisher, year, edition, pages
2007. Vol. 19, no. 2, pp. 225-240.
Keywords [en]
user interface, heuristic evaluation, reporting, web tool, effectiveness, efficiency, comparison framework
National Category
Human Computer Interaction
URN: urn:nbn:se:kth:diva-95312
DOI: 10.1016/j.intcom.2006.10.001
ISI: 000244575900009
OAI: diva2:527675

Available from: 2012-05-22. Created: 2012-05-22. Last updated: 2013-01-22. Bibliographically approved.
In thesis
1. User Centred Evaluation in Experimental and Practical Settings
2012 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

The objective of this thesis is to obtain knowledge about how effective user centred evaluation methods are and how user centred evaluations are conducted by IT professionals. This is achieved by exploring user centred evaluation in both experimental and practical settings. The knowledge gained in these studies should inspire suggestions for further research and for improving the user centred evaluation activity.

Two experimental studies were conducted: one compares the results of using three user centred evaluation methods, and the other examines two factors in conducting heuristic evaluation. The results show that, of the three methods, think-aloud evaluation was the most effective at finding realistic usability problems. The number of critical problems found during think-aloud evaluation increases if heuristic evaluation is conducted beforehand.

Further, two studies of user centred evaluation in practical settings were performed. The IT professionals participating in these studies used the Scrum software development process to plan their work. The results show that user centred evaluation is conducted infrequently in Scrum projects compared to testing activities such as acceptance testing. The evaluation that is done is mainly qualitative; few participants measure user performance or use surveys to gather quantitative results on usability and user experience. IT professionals gather feedback informally, both from users and from peers, and many combine several methods for gathering feedback on their work.

The outcome of this thesis shows that IT professionals should be encouraged to include users whenever possible when evaluating software, for example by using the think-aloud method. Conducting heuristic evaluation prior to think-aloud evaluations is also recommended. In addition, IT professionals are encouraged to evaluate their software informally and frequently, rather than waiting for the right time to conduct a thorough quantitative evaluation.

To advance the field further, researchers who want to improve the evaluation activity for IT professionals should study how user centred evaluation methods could be combined efficiently and how the use of qualitative evaluation methods could be made more effective.

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2012. xii, 80 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2012:05
Keywords [en]
User centred evaluation, Scrum, evaluation methods, agile software development
National Category
Human Computer Interaction
URN: urn:nbn:se:kth:diva-95302
ISBN: 978-91-7501-357-2
Public defence
2012-06-08, F3, Lindstedtsvägen 26, KTH, Stockholm, 13:15 (English)
Available from: 2012-05-22. Created: 2012-05-21. Last updated: 2012-05-22. Bibliographically approved.

Open Access in DiVA

Full text (398 kB), 1443 downloads
File information
File name: FULLTEXT01.pdf
File size: 398 kB
Checksum: SHA-512
Type: fulltext
MIME type: application/pdf

Other links

Publisher's full text (ScienceDirect)

By author/editor
Larusdottir, Marta Kristin

Total: 1443 downloads. The number of downloads is the sum of all downloads of full texts; it may include, e.g., previous versions that are no longer available.

Total: 257 hits