This paper treats document–document similarity approaches in the context of science mapping. Five approaches, involving nine methods, are compared experimentally: text-based approaches, the citation-based bibliographic coupling approach, and approaches that combine text and bibliographic coupling. Forty-three articles, published in the journal Information Retrieval, are used as test documents. We investigate how well the approaches agree with a ground truth subject classification of the test documents when the complete linkage clustering method is used, under two types of similarities: first-order and second-order. The results show that a very good approximation of the classification can be achieved by automatic grouping of articles. One text-only method and one combination method, in both cases under second-order similarities, give rise to cluster solutions that agree with the classification to a large extent.
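The distinction between first-order and second-order similarities can be illustrated with a minimal sketch: given a first-order similarity matrix (e.g., from text or bibliographic coupling), the second-order similarity between two documents is the cosine similarity of their rows in that matrix, so two documents are similar if they relate similarly to all other documents. The function names here are illustrative, not taken from the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def second_order(S):
    """Given an n x n first-order similarity matrix S, return the
    second-order similarity matrix: entry (i, j) is the cosine
    similarity between row i and row j of S."""
    n = S.shape[0]
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            T[i, j] = cosine(S[i], S[j])
    return T
```

The resulting matrix can then be fed to a hierarchical clustering routine (such as complete linkage) in the same way as the first-order matrix.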
We compared three different bibliometric evaluation approaches: two citation-based approaches and one based on manual classification of publishing channels into quality levels. Publication data for two universities were used, and we worked with two levels of analysis: article and department. At the article level, we investigated the predictive power of field normalized citation rates and field normalized journal impact with respect to journal quality level. The results for the article level show that evaluation of journals based on citation impact correlates rather well with manual classification of journals into quality levels. However, the prediction from field normalized citation rates to journal quality level was only marginally better than random guessing. At the department level, we studied three different indicators in the context of research fund allocation within universities and the extent to which the three indicators produce different distributions of research funds. It turned out that the three distributions of relative indicator values were very similar, which implies that the corresponding distributions of hypothetical research funds would also be very similar.
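The idea behind item-level field normalization can be sketched as follows, assuming (as is common, though the abstract does not spell out the exact variant) that the baseline is the mean citation count of comparable publications from the same field, year, and document type; values above 1.0 then indicate above-average impact.

```python
from statistics import mean

def field_normalized_rate(citations, reference_citations):
    """Divide an article's citation count by the mean citation count
    of a reference set of comparable articles (same field, publication
    year, and document type). Illustrative sketch, not the paper's
    exact indicator definition."""
    baseline = mean(reference_citations)
    return citations / baseline
```

For example, an article with 10 citations whose field baseline is 5 citations gets a normalized rate of 2.0, i.e. twice the expected impact.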
Academic libraries collaborate in several ways. For instance, collaboration can concern standards for indexing and statistics, technical solutions, or collection development. A question that a given academic library might ask is with which other academic libraries it should principally collaborate. In this study, we show how bibliometric methods can be used to generate information that can support decision making with regard to this question. We evaluate the amount of research collaboration between Stockholm University and other Swedish academic institutions across five publishing years, and for the whole considered time period, where research collaboration is operationalized as co-publishing. A dataset of publications obtained from Web of Science, where each publication has at least one Stockholm University address, is used in the study. Co-publishing rates, non-fractionalized and fractionalized, across the publishing years and for the whole period, for Stockholm University and other Swedish academic institutions, are reported. Further, parts of the outcome of the study are visualized in terms of co-publishing networks.
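The difference between non-fractionalized and fractionalized co-publishing counts can be sketched as follows; the counting scheme shown (dividing by the number of distinct institutions on a paper) is one common operationalization, assumed here for illustration rather than taken from the study.

```python
def copub_counts(publications, focal, partner):
    """Count co-publications between a focal institution and a partner.
    Each publication is given as a list of institution names.
    Non-fractionalized: each co-publication counts as 1.
    Fractionalized: each counts as 1 / (number of distinct
    institutions), so multi-institution papers contribute less."""
    whole = 0.0
    frac = 0.0
    for insts in publications:
        distinct = set(insts)
        if focal in distinct and partner in distinct:
            whole += 1.0
            frac += 1.0 / len(distinct)
    return whole, frac
```

Fractionalized counts damp the influence of large multi-institution collaborations, which is why studies of this kind often report both figures side by side.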
The Discounted Cumulated Impact (DCI) index has recently been proposed for research evaluation. In the present work an earlier dataset by Cronin and Meho (2007) is reanalyzed, with the aim of exemplifying the salient features of the DCI index. We apply the index to, and compare our results with, the outcomes of the Cronin-Meho (2007) study. Both authors and their top publications are used as units of analysis. The results suggest that, by adjusting the parameters of evaluation according to the needs of research evaluation, the DCI index delivers data on an author's (or publication's) "lifetime" impact or current impact at the time of evaluation, on an author's (or publication's) capability of inviting citations from highly cited later publications as an indication of impact, and on the relative impact across a set of authors (or publications) over their "lifetime" or currently.
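The DCI index adapts the discounting idea of discounted cumulated gain from information retrieval evaluation to citation counts. The sketch below assumes a DCG-style discount in which citations received t years after publication are devalued by log_b(t); this is an illustrative assumed form with an adjustable base b, not the exact DCI definition, whose parameters the abstract notes can be tuned to the evaluation at hand.

```python
import math

def discounted_cumulated_impact(yearly_citations, b=2):
    """DCG-style discounting sketch (assumed form, not the exact DCI
    definition): citations received in year t after publication are
    divided by log_b(t) for t >= b, then accumulated. Smaller b
    discounts late citations more aggressively."""
    total = 0.0
    for t, c in enumerate(yearly_citations, start=1):
        discount = math.log(t, b) if t >= b else 1.0
        total += c / discount
    return total
```

Under such a scheme, early citations count at full weight while later ones are progressively devalued, which is what lets the index distinguish "lifetime" impact from impact that is still current at the time of evaluation.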