Measuring value

There is a correlation between journal impact factors and perceived quality: well-respected journals usually have higher impact factors than others in their field. However, a high number of citations does not necessarily mean high value, as articles can be cited for reasons other than a positive evaluation of their content. As many as 50% of papers are never cited at all!

Journals covering more general and all-embracing topics such as Science and Nature generally score higher in tools such as JCR than those that are more specialised. Yet the latter may reach the desired audience more effectively.

The impact of a specific article cannot be measured by the impact factor of the journal in which it is published. Conversely, the impact factor of a journal title will not show how heavily any specific article in that journal has been cited.

Many critics of impact factors highlight the potential for manipulation. Some of the main issues of concern are:

  • Authors self-citing. This is less of a problem now that tools offer options to exclude self-citations from calculations; both WoS and Scopus provide this feature.
  • In addition to the above point, groups of researchers may cite each other's work.
  • An increase in multiple authorship of papers.
  • Splitting outputs into many articles.
  • The possibility of strategic behaviour on the part of journal editors and publishers. For example, publishing issues early in the year which may be cited within that year; encouraging authors to cite other papers in their journals; publishing more review articles which tend to receive a higher number of citations compared to other types of articles.
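To see why these strategies work, it helps to look at how the metric itself is calculated. The two-year impact factor reported in tools such as JCR divides the citations a journal receives in a given year to items it published in the two preceding years by the number of citable items it published in those two years. The sketch below uses made-up figures purely for illustration; the variable names and counts are assumptions, not data from any real journal:

```python
# Hypothetical citation counts for an imaginary journal, used to sketch
# a JCR-style two-year impact factor. All numbers are invented.
citations_2023_to_2022 = 120  # citations received in 2023 to items published in 2022
citations_2023_to_2021 = 180  # citations received in 2023 to items published in 2021
citable_items_2022 = 60       # articles and reviews published in 2022
citable_items_2021 = 90       # articles and reviews published in 2021

impact_factor_2023 = (citations_2023_to_2022 + citations_2023_to_2021) / (
    citable_items_2022 + citable_items_2021
)
print(impact_factor_2023)  # 300 citations / 150 items = 2.0
```

Because only citations falling inside this two-year window count, publishing issues early in the year (so they can accrue citations within the window) or adding heavily cited review articles raises the numerator without any change in the quality of individual papers.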

Creative Commons License

My RI by University College Dublin, Dublin City University, Dublin Institute of Technology, The National University of Ireland, Maynooth and the NDLR adapted by Marion Kelt, Glasgow Caledonian University is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. Based on a work at