For several decades now, a principal measure of an article's impact [1] on the scholarly world has been the number of citations it has received.

An increasing focus on using these citation counts as a proxy for scientific quality provided the catalyst for the development of journal metrics, including Garfield's invention of the Impact Factor in the 1950s [2]. Journal-level metrics have continued to evolve and be refined; for example, the relative newcomers SNIP and SJR [3] are now used on Elsevier's Scopus.

In recent years, however, interest has grown in applications at author, institute and country level. These developments can be summarized as follows (see Figure 1):

Figure 1. Types of bibliometric indicators. First, second and third generations reproduced with the permission of Moed & Plume (2011) [4]; fourth and fifth added by the authors.

The Journal Impact Factor (JIF) was born at a time when there was one delivery route for scholarly articles – paper publications – and computational power was expensive. The migration from paper to electronic delivery (particularly online) has enabled better understanding and analysis of citation count-based impact measurements, and created a new supply of user-activity measurements: page views and downloads.

Over the past few years, the growing importance of social networking, combined with a rising number of platforms making their activity data publicly available, has resulted in new ways of measuring scholarly communication activities, encapsulated by the term altmetrics [5]. Although we have added these new metrics to Figure 1, there is no suggestion that later generations necessarily replace the earlier ones. In fact, the Relative Impact Measure is still widely used, even though network analysis is available. The choice of metric is often driven by the context and the question at hand, and first-, second- or third-generation metrics may still prove the more suitable options.

Although the word altmetrics is still relatively new (not yet three years old), several maturing applications already rely on these data to give a sense of the wider impact of scholarly research. Plum Analytics is a recent, commercial newcomer, whereas Digital Science's Altmetric.com is a better-established, partially commercial solution. A third mature product is ImpactStory (formerly total-impact.org), an always-free, always-open application.

Altmetrics applications acquire the broadest possible set of data about content consumption. This includes HTML page views and PDF downloads, social usage (e.g. tweets and Facebook comments), and more specialized researcher activities, such as bookmarking and reference sharing via tools like Mendeley, Zotero and CiteULike. A list of the data sources used by ImpactStory appears below. As well as counting activity around the full article, these applications also track figure and data re-use. Altmetric.com also takes into account mass media links to scholarly articles.

To get a feel for how altmetrics work, you can visit www.impactstory.it or www.altmetric.com and enter a publication record. Alternatively, if you have access to Elsevier's Scopus, you will find that many articles already carry an Altmetric.com donut in the right-hand bar (the donut may not be visible in older versions of Microsoft Internet Explorer). If no data is yet available, the Altmetric.com box will not appear on the page. Elsevier also supplies data to ImpactStory, sending cited-by counts to the platform.
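For readers who want to see the same kind of record programmatically, the short Python sketch below queries Altmetric.com's public v1 API for a single DOI. It is not taken from the article: the endpoint, the field names ('score', 'cited_by_posts_count', 'readers') and the example DOI are our assumptions about the public API at the time of writing, and the service's terms of use apply.

```python
# Minimal sketch (not from the original article): look up the Altmetric.com
# summary record for one article by DOI via the public v1 API. The endpoint
# and field names are assumptions based on the public documentation and may
# change; treat the output as illustrative only.
import json
import urllib.error
import urllib.parse
import urllib.request

def fetch_altmetrics(doi):
    """Return Altmetric.com's summary record for a DOI, or None if untracked."""
    url = "https://api.altmetric.com/v1/doi/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read().decode("utf-8"))
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no altmetric activity recorded for this DOI
            return None
        raise

# Example DOI (assumed) for the Molecular Cell paper shown in Figure 2.
record = fetch_altmetrics("10.1016/j.molcel.2009.09.013")
if record:
    print("Altmetric score: ", record.get("score"))
    print("Mentions (posts):", record.get("cited_by_posts_count"))
    print("Mendeley readers:", record.get("readers", {}).get("mendeley"))
```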


Figure 2: An example of the Altmetric.com donut which can be found on many Scopus articles. This one, from the paper 'How to Choose a Good Scientific Problem' in Molecular Cell, shows that (at time of writing) the article has been mentioned 89 times on a variety of platforms and saved as a bookmark by more than 4,000 people.

What do all these numbers mean?

Although there is some evidence linking social network activity, such as tweets, with ultimate citation count (Priem, Piwowar et al., 2012 [6]; Eysenbach, 2011 [7]), this field is still in its early stages, and many areas still require research. Further investigation aims to uncover patterns and relationships between usage data and ultimate citation, allowing users to discover papers of interest and influence they might previously have failed to notice. Planned areas of research include:

  • Scholarly consumption versus lay consumption. With so much benefit to be gained from encouraging public engagement in science, we need new ways of tracking this engagement. After all, while members of the public are unlikely to cite articles in a formal setting, we may well see increased social sharing. Analysis of usage data might reveal striking differences between scholarly and lay usage patterns. For example, references to research among the general public may be driven primarily by mass media coverage (just as the mass media might be influenced by academic work going viral on Twitter and Facebook), whereas one might hypothesize that activity measured in specialized scholarly tools, such as Mendeley, would be less subject to this influence. This information could be critical in allowing publishers and platform owners to tweak their systems to best support use, and to report on wider usage to funding agencies.
  • When does social networking become marketing, and when does it become gaming or cheating? There has been criticism [8] that the JIF can be increased by excluding or including reference counts from certain types of articles, and by journals' self-citation policies. Social data is just as prone to influence. For example, authors' tweets about their papers are perfectly legitimate social marketing of the kind previously done through email groups, and it is reasonable to assume that some mentions of this type will go 'viral', be propelled towards mass media coverage and possibly drive citations; however, there will inevitably be concerted efforts to build momentum that go beyond natural self- or network marketing. A sophisticated analysis of social networking mapped against author networks might be able to detect and downplay this type of activity.
  • What other factors influence ultimate impact? As we expand our ability to understand what drives scholarly impact and how usage patterns should be interpreted, the scope should increase to include other, non-social facets. For example, do cross-discipline papers get a wider readership than just the disciplines targeted? Do papers with a lay abstract attract a wider lay audience? To what extent does the inclusion of a high-ranking contributor boost citations beyond what might otherwise be predicted?
  • Do any particular consumption activities predict others? Is there a computable conversion rate for moving from one activity to another? How do these rates vary over time and by discipline? What activities lead to citation? Are there papers that are less well cited, or not cited at all, that nevertheless appear to have impact in other ways? (A toy sketch of the kind of conversion-rate calculation this implies follows this list.)
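As a purely illustrative example of the conversion-rate question raised in the last point, the sketch below computes aggregate ratios between successive activities across a small set of article records. Both the numbers and the field names are invented for illustration; they are not drawn from any of the services discussed above.

```python
# Hypothetical sketch: estimate how often one consumption activity appears to
# 'convert' into the next (views -> Mendeley saves -> citations) across a set
# of articles. The records and field names are invented for illustration only.
articles = [
    {"views": 1500, "mendeley_saves": 120, "citations": 9},
    {"views": 800,  "mendeley_saves": 35,  "citations": 2},
    {"views": 4200, "mendeley_saves": 310, "citations": 27},
]

def conversion_rate(records, from_key, to_key):
    """Aggregate ratio of one activity to another across a set of articles."""
    total_from = sum(r[from_key] for r in records)
    total_to = sum(r[to_key] for r in records)
    return total_to / total_from if total_from else 0.0

print("saves per view:     %.3f" % conversion_rate(articles, "views", "mendeley_saves"))
print("citations per save: %.3f" % conversion_rate(articles, "mendeley_saves", "citations"))
```

In practice, such rates would need to be computed per discipline and per time window before they could say anything about how activities relate to eventual citation.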

Altmetrics is still in its infancy, both as a field of study and as a commercial activity. Currently, only a handful of smaller organizations are involved, and there is no engagement from major web players such as Google or Microsoft. On the publisher front, while all are active with altmetrics in some form, only Macmillan has chosen to get involved via Digital Science's Altmetric.com. That means there is a great deal to play for. We expect to see more emergent platforms and research, and it is not impossible to envisage the development of professional advisers who work with institutions to increase their altmetrics counts, especially now that impact is increasingly tied to funding decisions (e.g. government funding in the UK via the Research Excellence Framework).

Elsevier is fully engaged with the altmetrics movement. For example, in 2013 the Elsevier Labs team aims to co-publish large-scale research that will begin to explore the relationship between the different facets and to establish a framework for understanding the meaning of this activity. It aims to build on current work to found an empirically based discipline that analyzes the relationship between social activity, other factors, and both scholarly and lay consumption and usage. By combining knowledge across Elsevier, we intend to show that no single measurement can provide the whole picture, and that a panel of metrics informed by empirical research and expert opinion is typically the best way to analyze the performance of a journal, an author or an article.

References and useful links

  1. http://en.wikipedia.org/wiki/Impact_factor
  2. http://www.garfield.library.upenn.edu/papers/jifchicago2005.pdf
  3. For more information on SNIP and SJR, see the Elsevier website www.journalmetrics.com and Henk Moed's interview on the SNIP methodology on YouTube.
  4. Moed, H.F. & Plume, A. (2011). The multi-dimensional research assessment matrix. Research Trends, Issue 23, May 2011.
  5. More on the altmetrics movement, conferences and workshops may be found at www.altmetrics.org
  6. "Altmetrics in the wild: Using social media to explore scholarly impact” http://arxiv.org/abs/1203.4745
  7. "Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact” http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3278109/
  8. Rossner, M., Van Epps, H. & Hill, E. (2007). "Show me the data." J Cell Biol 179:1091-1092. Published December 17, 2007.

Author Biographies

Mike Taylor
TECHNOLOGY RESEARCH SPECIALIST
Mike has worked at Elsevier for 16 years. He has been a research specialist in the Labs group for the last four years and has been involved with ORCID (and previous projects) throughout that time. Mike's other research interests include altmetrics, contributorship and author networks. Details of his research work can be found at http://labs.elsevier.com.


Judith Kamalski
MANAGER STRATEGIC RESEARCH INSIGHTS & ANALYTICS
Judith focuses on demonstrating Elsevier's bibliometric expertise and capabilities by connecting with the research community. She is heavily involved in analyzing, reporting and presenting commercial research performance evaluation projects for academic institutes as well as governments. Judith has worked in several areas at Elsevier, including bibliographic databases, journal publishing, strategy, sales and, most recently, Research & Academic Relations. She has a PhD from the Utrecht Institute of Linguistics and holds Master's degrees in Corporate Communications and French Linguistics & Literature.