How Impact Factor engineering can damage a journal’s reputation

The dawn of bibliometrics

We’ve all noticed that science is accelerating, resulting in what has been called ‘information overload’ and, more recently, ‘filter failure’. There are now more researchers and more papers than ever before, which has heightened the importance of bibliometric measures. Bibliometrics is a fairly young discipline, but it has grown impressively in recent years thanks to advances in computation and data storage, which have improved the accessibility and ease of use of bibliometric measures (for instance through interfaces such as SciVerse Scopus or SciVal). Bibliometrics are increasingly being used to systematically compare diverse entities (authors, research groups, institutions, cities, countries, disciplines, articles, journals, etc.) in a variety of contexts: an author deciding where to publish, a librarian reviewing their library’s holdings, a policy maker planning funding budgets, a research manager putting together a research group, a publisher or Editor benchmarking their journal against competitors, and so on.

Enter the Impact Factor

From this perspective, journal metrics can play an important role for Editors, and we know it is a topic of interest because of the high attendance at our recent webinar on the subject. There are many different metrics available, and we always recommend looking at a variety of indicators to build as thorough a bibliometric picture as possible, providing insights into the diverse strengths and weaknesses of any given journal [1]. However, we are well aware that one metric in particular is considered especially important by most Editors: the Impact Factor. Opinions on the Impact Factor are divided, but it has long been used as a prime measure in journal evaluation, and many Editors see it as part of their editorial duty to try to raise the Impact Factor of their journal [2].
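
For readers less familiar with how the metric is calculated, a given year’s Impact Factor is based on a two-year citation window. The 2010 Impact Factor, for example, works out as:

    2010 Impact Factor = (citations received in 2010 by items published in 2008 and 2009) / (number of citable items published in 2008 and 2009)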

An Editor’s dilemma

There are various techniques through which this can be attempted, some more ethical than others, and it is an Editor’s responsibility to stay within the bounds of ethical behavior in this area. It might be tempting to try to improve a journal’s Impact Factor ranking at all costs, but Impact Factors are only as meaningful as the data that feed into them [3]: if an Impact Factor is excessively inflated as a result of a high proportion of gratuitous journal self-citations, it will not take long for the community to identify this (especially in an online age of easily accessible citation data). Such a discovery can damage the reputation of a journal and its Editors, and may lead to a loss of quality manuscript submissions, which in turn is likely to affect the journal’s future impact. The results of a recent survey [4] draw attention to the frequency of one particularly unethical editorial practice in business journals: coercive citation requests (Editors demanding that authors cite their journal as a condition of manuscript acceptance).
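
To see why, consider some purely illustrative numbers (not drawn from any real journal): a journal publishing 200 citable items over 2008–2009 that receives 200 citations to them from other journals in 2010 earns a 2010 Impact Factor of 200 / 200 = 1.0. If editorial pressure adds another 200 journal self-citations within the same window, the Impact Factor doubles to 400 / 200 = 2.0, but half of the citations underpinning it are now self-citations, a proportion that stands out immediately to anyone inspecting the citation data.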

Elsevier’s philosophy on the Impact Factor
“Elsevier uses the Impact Factor (IF) as one of a number of performance indicators for journals. It acknowledges the many caveats associated with its use and strives to share best practice with its authors, editors, readers and other stakeholders in scholarly communication. Elsevier seeks clarity and openness in all communications relating to the IF and does not condone the practice of manipulation of the IF for its own sake.”

This issue has already received some attention from the editorial community in the form of an editorial in the Journal of the American Society for Information Science and Technology [5]. Although some Elsevier journals were highlighted in that study, our analysis of 2010 citations to 2008-2009 scholarly papers (replicating the 2010 Impact Factor window using Scopus data) showed that half of all Elsevier journals have less than 10% journal self-citations, and 80% of them have less than 20%. This can be attributed to the strong ethical standards of the Editors who work with us, and it is reflected in our philosophy on the Impact Factor (see text box above) and our policy on journal self-citations (see text box below): Elsevier has a firm position against any ‘Impact Factor engineering’ practices.
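
For Editors who want to run a similar check on their own journal, the underlying calculation is simple. The sketch below is a minimal illustration in Python, not the actual Scopus analysis; the citation records and field names (citing_journal, cited_journal, citing_year, cited_year) are hypothetical placeholders for whatever citation export is available.

    # Minimal sketch: journal self-citation rate over an Impact Factor window.
    # Each record represents one citation; the records and field names below are
    # illustrative placeholders, not a real Scopus export.

    def self_citation_rate(citations, journal, citing_year, window):
        """Share of citations to `journal` in `citing_year` (to items published
        within `window`) that come from `journal` itself."""
        in_window = [
            c for c in citations
            if c["cited_journal"] == journal
            and c["citing_year"] == citing_year
            and c["cited_year"] in window
        ]
        if not in_window:
            return 0.0
        self_cites = sum(1 for c in in_window if c["citing_journal"] == journal)
        return self_cites / len(in_window)

    # Made-up example replicating the 2010 Impact Factor window (2008-2009 items):
    citations = [
        {"citing_journal": "Journal A", "citing_year": 2010, "cited_journal": "Journal A", "cited_year": 2009},
        {"citing_journal": "Journal B", "citing_year": 2010, "cited_journal": "Journal A", "cited_year": 2008},
        {"citing_journal": "Journal C", "citing_year": 2010, "cited_journal": "Journal A", "cited_year": 2009},
        {"citing_journal": "Journal D", "citing_year": 2010, "cited_journal": "Journal A", "cited_year": 2008},
    ]
    print(self_citation_rate(citations, "Journal A", 2010, {2008, 2009}))  # 0.25, i.e. 25%

Scaled up to a full citation export, this is the kind of calculation behind the distribution quoted above, with half of Elsevier journals below 10% journal self-citations and 80% below 20%.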

So, what is the ethically acceptable level of journal self-citations?

There are probably as many answers to this question as there are journals. Journal self-citation rates vary between scientific fields, and a highly specialised journal is likely to have a larger proportion of journal self-citations than a journal of broader scope. A new journal is also prone to a higher journal self-citation rate, as it needs time to build awareness amongst the relevant scholarly communities.

Elsevier’s policy on journal self-citations
“An editor should never conduct any practice that obliges authors to cite his or her journal either as an implied or explicit condition of acceptance for publication. Any recommendation regarding articles to be cited in a paper should be made on the basis of direct relevance to the author’s article, with the objective of improving the final published research. Editors should direct authors to relevant literature as part of the peer review process; however, this should never extend to blanket instructions to cite individual journals. […] Part of your role as Editor is to try to increase the quality and usefulness of the journal. Attracting high quality articles from areas that are topical is likely the best approach. Review articles tend to be more highly cited than original research, and letters to the Editor and editorials can be beneficial. However, practices that ‘engineer’ citation performance for its own sake, such as forced self-citation are neither acceptable nor supported by Elsevier.”

As mentioned in a Thomson Reuters report on the subject: “A relatively high self-citation rate can be due to several factors. It may arise from a journal’s having a novel or highly specific topic for which it provides a unique publication venue. A high self-citation rate may also result from the journal having few incoming citations from other sources. Journal self-citation might also be affected by sociological factors in the practice of citation. Researchers will cite journals of which they are most aware; this is roughly the same population of journals to which they will consider sending their own papers for review and publication. It is also possible that self-citation derives from an editorial practice of the journal, resulting in a distorted view of the journal’s participation in the literature.” [6]

Take care of the journal and the Impact Factor will take care of itself

There are various ethical ways in which an Editor can try to improve the Impact Factor of their journal. Through your publishing contact, Elsevier can provide insights into the relative bibliometric performance of keywords, journal issues, article types, authors, institutes, countries, etc., all of which can be used to inform editorial strategy. Journals may have the option to publish official society communications, guidelines, taxonomies, methodologies, special issues on topical subjects, invited content from leading figures in the field, or debates on currently relevant themes, all of which can help to increase the Impact Factor and other citation metrics. A high quality journal targeted at the right audience should enjoy a respectable Impact Factor in its field, which should be a sign of its value rather than an end in itself. Editors often ask me how they can raise their journal’s Impact Factor, but the truth is that by working to improve the quality and relevance of their journal, they are likely to reap rewards in many areas, including a rising Impact Factor. And this is the way it should be: a higher Impact Factor should reflect a genuine improvement in a journal, not a meaningless game that reduces the usefulness of available bibliometric measures.

References

[1] Amin, M. & Mabe, M. (2000), “Impact Factors: use and abuse”, Perspectives in Publishing, No. 1.

[2] Krell, F.K. (2010), “Should editors influence journal impact factors?”, Learned Publishing, Vol. 23, Issue 1, pp. 59-62. DOI: 10.1087/20100110

[3] Reedijk, J. & Moed, H.F. (2008), “Is the impact of journal impact factors decreasing?”, Journal of Documentation, Vol. 64, Issue 2, pp. 183-192. DOI: 10.1108/00220410810858001

[4] Wilhite, A.W. & Fong, E.A. (2012), “Coercive Citation in Academic Publishing”, Science, Vol. 335, Issue 6068, pp. 542-543. DOI: 10.1126/science.1212540

[5] Cronin, B. (2012), “Do me a favor”, Journal of the American Society for Information Science and Technology, early view. DOI: 10.1002/asi.22716

[6] McVeigh, M. (2002), “Journal Self-Citation in the Journal Citation Reports – Science Edition”, Thomson Reuters.

Author Biography

Sarah Huggett
PUBLISHING INFORMATION MANAGER, RESEARCH & ACADEMIC RELATIONS
As part of the Scientometrics & Market Analysis team, Sarah provides strategic and tactical insights to colleagues and publishing partners, and strives to inform the bibliometrics debate through various internal and external discussions. Her specific interests are in communication and the use of alternative metrics such as SNIP and usage for journal evaluation. After completing an M.Phil. in English Literature at the University of Grenoble (France), including one year at the University of Reading (UK) through the Erasmus programme, Sarah moved to the UK to teach French at Oxford University before joining Elsevier in 2006.