A Principal Component Analysis of 39 Scientific Impact Measures

Paper by Johan Bollen, Herbert Van de Sompel, Aric Hagberg, and Ryan Chute

Summary:

Traditionally, the impact of scientific publications has been expressed in terms of citation counts (e.g., the Journal Impact Factor, JIF). Today, new impact measures have been proposed based on social network analysis (e.g., eigenvector centrality) and usage log data (e.g., the Usage Impact Factor) to capture scientific impact in the digital era. But among this plethora of new measures, which are most suitable for measuring scientific impact?

The authors performed a principal component analysis on the rankings produced by 39 different measures of scientific impact. They find that scientific impact is a multi-dimensional construct that cannot be adequately measured by any single indicator, although some are more suitable than others.
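
To make the method concrete, here is a minimal sketch of the kind of analysis described: compute the rank correlations among the journal rankings produced by several measures, then run PCA on that correlation matrix. The measure names and random scores below are purely illustrative stand-ins, not the paper's actual 39 measures or data.

```python
# Illustrative sketch only: PCA over rank correlations among impact measures.
# Measure names and scores are made up for demonstration purposes.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_journals = 100
measures = ["JIF", "eigenvector", "usage_impact", "closeness", "betweenness"]

# Fake per-journal scores; in the paper these come from citation and usage data.
scores = rng.random((n_journals, len(measures)))

# Spearman correlation between the journal rankings each measure produces.
rho, _ = spearmanr(scores)  # shape: (n_measures, n_measures)

# PCA via eigendecomposition of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(rho)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("variance explained by PC1, PC2:", explained[:2])

# The loadings show where each measure falls on the principal components,
# e.g., along a popularity-vs-prestige axis as discussed in the paper.
for name, loading in zip(measures, eigvecs[:, 0]):
    print(f"{name:>14s}  PC1 loading: {loading:+.2f}")
```

With real data, measures that produce similar rankings cluster together in the loading space, which is how the authors identify distinct dimensions of scientific impact.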

From the results, they draw four conclusions that have significant implications for the development of scientific assessment.

  1. The usage measures are more strongly correlated with one another than the citation measures are, indicating that measures computed from the same usage log data agree more reliably than measures computed from the same citation data.
  2. Usage-based measures are stronger indicators of scientific Prestige than many presently available citation measures, whereas the Journal Impact Factor and journal rank turn out to be strong indicators of scientific Popularity.
  3. Usage impact measures turn out to be closer to a “consensus ranking” of journals than some common citation measures.
  4. Contrary to the common belief that the JIF is the “gold standard”, usage-based measures such as Usage Closeness centrality may be better “consensus” measures than the JIF.
