A framework for evaluating automatic indexing or classification in the context of retrieval
2016 (English). In: Journal of the Association for Information Science and Technology, ISSN 2330-1635, E-ISSN 2330-1643, Vol. 67, no. 1, pp. 3-16. Article in journal (Refereed). Published.
Tools for automatic subject assignment help deal with scale and sustainability in creating and enriching metadata, establishing more connections across and between resources, and enhancing consistency. While some software vendors and experimental researchers claim that these tools can replace manual subject indexing, hard scientific evidence of their performance in operating information environments is scarce. A major reason for this is that research is usually conducted under laboratory conditions, excluding the complexities of real-life systems and situations. The paper reviews and discusses issues with existing evaluation approaches, such as problems of aboutness and relevance assessments, implying the need to use more than a single "gold standard" method when evaluating indexing and retrieval, and proposes a comprehensive evaluation framework. The framework is informed by a systematic review of the literature on indexing and classification evaluation and comprises three approaches: evaluating indexing quality directly, either through assessment by an evaluator or through comparison with a gold standard; evaluating the quality of computer-assisted indexing directly in the context of an indexing workflow; and evaluating indexing quality indirectly by analyzing retrieval performance.
Place, publisher, year, edition, pages
2016. Vol. 67, no. 1, pp. 3-16.
Research subject: Humanities, Library and Information Science
Identifiers
URN: urn:nbn:se:lnu:diva-45521
DOI: 10.1002/asi.23600
ISI: 000368340100001
OAI: oai:DiVA.org:lnu-45521
DiVA: diva2:842453