
Elsevier expands metrics perspectives with launch of new altmetrics pilots

With the increasingly interconnected internet and developments in ‘big data’ analysis, there are now many ways to measure research impact.

Traditional bibliometrics may be supplemented by usage data (pageviews and downloads), while the success of online communities and tools has led to more widespread visibility for learned commentary, readership statistics, and other measures of online attention. Altmetrics encompass social activity in the form of mentions on social media sites, scholarly activity in online libraries and reference managers, and scholarly commentary, for instance through scientific blogs and mass media references.

Elsevier has long been an advocate for robust informetrics, and we are particularly interested in understanding how these new measures can be used in conjunction with usage and citation data to provide new, meaningful indicators to the research community. Mendeley statistics, for instance, appear to provide more insight into the academic use of a paper than Twitter does.

Altmetrics (the measurement of scholarly activity on social networks and tools) is a major buzzword at the moment, and although it is a very new discipline, interest in it is growing fast, as demonstrated by the relative search volumes in the graph below.

The results of a search on Google.com for “altmetrics” (red line) against “bibliometrics” (blue line).


It’s not only general interest that is growing; scholarly interest appears to be rising quickly too, as demonstrated by a simple keyword search on Scopus for altmetric* or alt-metric* in the title, abstract, or keywords of any paper, the results of which are shown in the graph below.
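
For readers who want to reproduce this, the search corresponds roughly to the following Scopus advanced-search query (the exact field code used for the original count is our assumption):

    TITLE-ABS-KEY(altmetric* OR alt-metric*)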

The first two papers appeared in the Proceedings of the ASIST Annual Meeting in 2011; in 2012 the number of papers published jumped to eight, and in 2013 to 28 (data for 2013 may still be incomplete due to publication and indexation delays). Even though in absolute terms these are still low numbers, the growth is remarkable.

Altmetrics offer an alternative insight into the use and readership of scholarly articles, and this information has driven authors, researchers, editors, and publishers to try to understand the data. To this end, Elsevier employees are engaged in the NISO Altmetrics project, have spoken at conferences around the world (e.g. altmetrics12, ALM workshop 2013), have written academic papers, and have conducted webinars in this multi-faceted field.

The last year has seen us launch several initiatives, three of which are explored in further detail below: pilots on Elsevier.com journal homepages and on ScienceDirect, and an existing Scopus project. Joining these is an article usage alert program that informs authors in participating journals how their article is being viewed. Mendeley continues to provide an invaluable and free source of data on the discipline, location, and status of researchers.

Elsevier has also formed partnerships with altmetrics start-ups:

  • We use the Altmetric.com explorer product to understand and analyse trends as well as inform some of our marketing campaigns.
  • We have joined a pilot to investigate how interested Elsevier authors are in showcasing their research via Kudos (further details of this pilot will be available in a ‘promoting your journal’ special issue of Editors’ Update due to be published in May).

This year will see even more metrics activity. With a special altmetrics issue of Research Trends planned, more products in the pipeline, and our partnerships beginning to produce solid results, we are continuing to actively support research in this fascinating field.

Altmetric.com pilot on Elsevier.com journal homepages

At the end of last year, we began displaying the Altmetric.com colorful donuts for a journal's top three rated articles on the Elsevier.com homepages of 33 Elsevier titles.

This rating is based on a social media traffic score given by Altmetric.com and an article must have received at least one social media mention within the last six months to qualify. By clicking on the "view all" option beneath the top three list, visitors can review the donuts for the top 10 articles. In both lists, the article name links to the full-text article on ScienceDirect, while the donut links to a breakdown of the news and social media mentions.

An example of an altmetrics pod on the Elsevier.com homepage of the Journal of Experimental Social Psychology.

The pilot is led by Hans Zijlstra, Project Manager for Elsevier's STM Journals Project Management department. He worked closely with Elsevier's e-marketing team in cooperation with Altmetric.com — a company founded by Euan Adie (@Stew), who won Elsevier's Apps for Science Challenge in 2011.

Although it is still early days, Zijlstra will be closely monitoring how much traffic the donuts receive over the coming months. Based on those results and the feedback he receives, the aim is to make this available to all Elsevier journals.

He said: "These additional article metrics are intended to provide authors with extra insight into the various flavors of impact their research may achieve. We believe altmetrics will help them select a journal for article submission by giving a clearer indication of where a journal's strengths and weaknesses lie."

The donuts have also provided useful insights to publishers and editors. He explained: “They help publishers and marketeers determine which media they should engage with more often and publishers and editors can identify hot topics that might merit a special issue.”

Zijlstra and his colleagues are still working on adding to the journal homepage the names of the authors for the top ranked articles. In addition, they plan to include the donuts for participating health and medical titles on their homepages on the Health Advance platform. 

Altmetric.com’s colorful donut explained


The Altmetric.com algorithm computes an overall score taking into account the volume, source and author of the mentions a paper receives. This includes mentions of academic papers on social media sites (e.g. Twitter, Facebook, Pinterest, Google+), science blogs, many mainstream media outlets (including The New York Times, The Guardian, non-English language publications like Die Zeit and Le Monde, and special interest publications like Scientific American and New Scientist), the peer-review site Publons, and reference managers.

News items are weighted more than blogs, and blogs are weighted more than tweets. The algorithm also factors in the authoritativeness of the authors, so a mention by an expert in the field is worth more than a mention by a lay person. The visual representation – the Altmetric.com donut – shows the proportional distribution of mentions by source type. Each source type displays a different color – blue for Twitter, yellow for blogs, and red for mainstream media sources. Links to the source data are also available. Altmetric.com tracks around a hundred thousand mentions a week, with some 3,000 new articles seen each day.
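
Altmetric.com does not publish its exact formula, but the weighting idea described above can be illustrated with a small sketch. The weights, the authority factor and the function below are illustrative assumptions, not Altmetric.com's algorithm.

    # Illustrative sketch only, not Altmetric.com's algorithm.
    # Mentions from "heavier" sources (news > blogs > tweets) and from more
    # authoritative authors contribute more to the overall attention score.

    SOURCE_WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 1.0}   # assumed weights

    def attention_score(mentions):
        """mentions: list of dicts like {"source": "tweet", "authority": 1.2}"""
        score = 0.0
        for m in mentions:
            base = SOURCE_WEIGHTS.get(m["source"], 0.5)   # unknown sources count a little
            score += base * m.get("authority", 1.0)       # expert mentions count for more
        return round(score, 1)

    example = [
        {"source": "news", "authority": 1.0},
        {"source": "blog", "authority": 1.5},    # e.g. a respected science blogger
        {"source": "tweet", "authority": 0.8},
        {"source": "tweet", "authority": 1.0},
    ]
    print(attention_score(example))              # 17.3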

Altmetric.com pilot on ScienceDirect

Elsevier’s ScienceDirect platform, home to one-quarter of the world’s STM journal and book content, launched a six-month altmetrics pilot in December 2013.


Until June this year, 26 journals – including The Lancet – will display alternating altmetrics images at article level. Visitors landing on the relevant pages have a 50 percent chance of seeing either the traditional Altmetric.com donut or the same information presented as a bar chart. This reflects ScienceDirect’s A/B testing approach: the results will be monitored to discover which design is the most engaging and clear for users. The pilot also includes sharing buttons to promote social media mentions of the covered articles and will provide access to the individual article detail pages, which enable users to explore the actual mentions of the paper.
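
As an aside, a 50/50 split of this kind is typically implemented by bucketing visitors deterministically, so that each visitor keeps seeing the same variant on repeat visits. The snippet below is a generic sketch of that idea, not ScienceDirect's actual implementation.

    # Generic A/B bucketing sketch (assumed approach, not ScienceDirect's code):
    # hash a stable visitor identifier and use its parity to pick a variant, so
    # roughly half of visitors see each design and assignment is repeatable.
    import hashlib

    def variant_for(visitor_id: str) -> str:
        digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
        return "donut" if int(digest, 16) % 2 == 0 else "bar_chart"

    print(variant_for("session-12345"))   # same visitor id always gets the same variant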

During the pilot we will be assessing the popularity of the altmetrics score with users. We will also be trying to determine how far the scores promote use of the article sharing buttons. 

Todd Vaccaro, ScienceDirect Product Manager, commented: “The journals chosen for the pilot represent a good mix. We’ve included titles with a range of Impact Factors, types of attention on the social web, average altmetrics scores and subjects. We’ve also included some society journals and a recent OA title.” 


Example of the pilot Altmetric.com donut on a ScienceDirect article from Journal of Adolescent Health.

 


Example of the pilot bar chart altmetrics image on the same ScienceDirect article from Journal of Adolescent Health.

Altmetric.com donuts in Scopus


Since June 2012, Elsevier’s Scopus – the largest abstract and citation database of peer-reviewed literature – has offered the Altmetric.com donut in the sidebar of document and abstract pages. It can be found on the right hand side of the screen when data is available for the article being viewed. Visitors can click through to scan the content mentioning the article and click on any entry to navigate to the original site. A “demographics” tab will also show a breakdown of where in the world the attention paid to the article is coming from.

Michael Habib, Senior Product Manager for Scopus, said: “Customers and users alike have found this a useful supplement to traditional citations. The primary point of interest hasn’t necessarily been the metrics themselves, but the underlying content. Discovering that a respected science blogger has given a positive review of your article is much more important than knowing how many people have blogged about it. This pilot has proven a powerful tool for uncovering these previously hidden citations from non-scholarly articles.”


Example of an Altmetric.com donut on a Scopus article from the journal Computers in Human Behavior.

 

Useful Altmetric.com tools

  • For further details on the social media reports, and to see the score for any article containing a DOI, download the Altmetric.com Bookmarklet (a scripted alternative is sketched after this list).
  • The newly released Altmetric.com widget generator, built for bloggers, will pull in relevant articles and their related altmetrics score based on your chosen criteria.
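
For those who prefer a script to the bookmarklet, the sketch below queries what we understand to be Altmetric.com's public DOI lookup endpoint; the endpoint path, the response fields and the placeholder DOI are assumptions, so check Altmetric.com's own documentation before relying on them.

    # Minimal sketch, assuming a public DOI lookup endpoint of the form
    # https://api.altmetric.com/v1/doi/<doi> that returns a JSON summary.
    # Field names ("score", "cited_by_tweeters_count") are assumptions.
    import json
    import urllib.request

    def altmetric_summary(doi):
        url = "https://api.altmetric.com/v1/doi/" + doi
        with urllib.request.urlopen(url) as response:
            return json.loads(response.read().decode("utf-8"))

    data = altmetric_summary("10.1016/j.xxxx.2013.01.001")   # placeholder DOI
    print(data.get("score"), data.get("cited_by_tweeters_count"))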

Author biographies

Sarah Huggett
SENIOR PUBLISHING INFORMATION MANAGER
As part of the Scientometrics & Market Analysis team in Research & Academic Relations at Elsevier, Sarah Huggett provides strategic and tactical insights to colleagues and publishing partners, and strives to inform the research evaluation debate through various internal and external discussions. Her specific interests are in communication and the use of alternative metrics for evaluating impact. After completing an M.Phil in English Literature at the University of Grenoble (France), including one year at the University of Reading (UK) through the Erasmus programme, she moved to the UK to teach French at the University of Oxford. She joined Elsevier in 2006 and the Research Trends editorial board in 2009.

Mike Taylor
TECHNOLOGY RESEARCH SPECIALIST
Based in Oxford, Mike Taylor has worked at Elsevier for 18 years, the past four as a technology research specialist for the Elsevier Labs group. In that role, he has been involved with the ORCID Registry. His other research interests include altmetrics, contributorship and author networks. Details of his research work can be found on the Elsevier Labs website. He is currently producing a series of three plays about scientists for Oxford-based theater company www.111theatre.co.uk.

Taylor is also one of the organizers of altmetrics14, a workshop at the ACM Web Science conference in Indiana on June 23, 2014. Submissions are welcome.


Altmetric pilots help Elsevier authors understand the impact of their articles

The colorful altmetric donut used to indicate an article’s impact in news and social media is now featured on various journal homepages and ScienceDirect



Linda Willems | Senior Researcher Communications Manager, Elsevier

The academic community has traditionally looked to citation analysis to measure the impact of scientific and medical research. But with journal articles increasingly disseminated via online news and social media channels, new measures are coming to the fore.

Alternative metrics – or altmetrics – represent one of the innovative ways the reach of articles is now being assessed, and Elsevier has just launched two pilots featuring the highly-recognizable altmetric "donut."

An example of the pilot altmetric pod on the Elsevier.com homepage of the Journal of Experimental Social Psychology.

The first pilot will feature donuts for a journal's top three rated articles displayed on the Elsevier.com homepages of 33 Elsevier titles.

This rating is based on a social media traffic score given by Altmetric.com; an article must have received at least one social media mention within the last six months to qualify. By clicking on the "view all" option beneath this list, visitors can review altmetric donuts for the top 10 articles.

In both lists, the article name links to the full-text article on ScienceDirect, while the donut links to a breakdown of the news and social media mentions.


The pilot is led by Hans Zijlstra, Project Manager for Elsevier's STM Journals Project Management department. He said his team will be closely monitoring how much traffic the donuts receive over the coming six months, and depending on up-take, their aim is to make this available to all Elsevier journals.

They are still working on adding to the journal homepage the names of the authors for the top ranked articles. In addition, they plan to include the donuts for participating health and medical titles on their homepages on the Health Advance platform. 

A parallel altmetric pilot for 25 journals will run on ScienceDirect, Elsevier's scientific database of journal articles and book chapters. The ScienceDirect pilot will have a greater focus on medical journals but there will be some overlap in titles between the two trials.

For some time now, Scopus, Elsevier's abstract and citation database of peer-reviewed literature, has been offering donuts on articles for which the relevant metrics are available.

"These additional article metrics are intended to provide authors with extra insight into the various flavors of impact their research may achieve," Zijlstra said. "We believe altmetrics will help them select a journal for article submission by giving a clearer indication of where a journal's strengths and weaknesses lie."

This article first appeared in Elsevier Connect, an online magazine and resource center for the science and health communities with a broad and active social media community. It features daily articles written by experts in the field as well as Elsevier colleagues.

What are altmetrics?


An example of the altmetric donut

The altmetric algorithm computes an overall score taking into account the number of mentions the article receives and the importance of the sources. For example, news is weighted more than blogs, and blogs are weighted more than tweets. It also factors in the authoritativeness of the authors, so a mention by an expert in the field is worth more than a mention by a lay person. The visual representation — the altmetric donut — shows the proportional distribution of mentions by source type. Each source type displays a different color – blue for Twitter, yellow for blogs, and red for mainstream media sources. Links to the source data are also available.


The most famous traditional metric, the Impact Factor, divides the number of citations a journal's recent articles receive by the number of scholarly articles published in that journal. However, citations can take years to accrue.

One of the advantages of altmetrics is that the impact begins to be assessed from the moment the article is first posted online.

The pilot Altmetric pod for journal homepages has been developed by Elsevier's e-marketing team in cooperation with Altmetric.com — a company founded by Euan Adie (@Stew), who won Elsevier's Apps for Science Challenge in 2011.

Download the Altmetric Bookmarklet

For further details on the social media reports, and to see the score for any article containing a DOI, download the Altmetric Bookmarklet from Altmetric.com.

Journals in the altmetric pilot

American Journal of Medicine
Appetite
Applied Catalysis B: Environmental
Biological Conservation
Biomaterials
Bioorganic & Medicinal Chemistry Letters
Journal of Archaeological Science
Computers In Human Behavior
Energy Policy
Evolution and Human Behavior
Geochimica Et Cosmochimica Acta
Earth and Planetary Science Letters (EPSL)
ICARUS
International Journal of Radiation Oncology Biology Physics
Journal Of Econometrics
Journal Of Experimental Social Psychology
Marine Policy
Public Relations Review
Resuscitation
Science of the Total Environment
Social Science & Medicine
Vaccine
Virology
Acta Astronautica
European Journal of Cancer
World Neurosurgery
Computers & Education
Journal of Hazardous Materials
Journal of Catalysis
Food Quality and Preference
NeuroImage: Clinical
International Journal of Surgery Case Reports
American Heart Journal 

Journal visualization project benefits from new changes

Earlier this year we unveiled the Journal Insights pilot. The initiative has now been rolled out to more than 800 journals and we highlight some recent improvements.



Hans Zijlstra | Marketing Project Manager, STM Journals Project Management department, Elsevier

Back in the March edition of Editors’ Update, we discussed the newly-launched Journal Insights project in the article Increased Transparency benefits both authors and journals.

The Elsevier.com and Health Advance homepages of all journals participating in the project (and the number has now reached 850) feature a new section, ‘Journal Insights’. Authors clicking on this link arrive at a landing page where they can select data visualizations of three key groups of metrics, developed to aid their decision making. When the project pilot was launched earlier this year, those three groups were:

  • Quality - Graphs displaying the Impact Factor, five-year Impact Factor, Article Influence and Eigenfactor, SNIP and SJR.
  • Speed – Graphical representations of both the average review speed for the journal over a five-year period and the online article publication time (also known as production speed).
  • Authors - A detailed world map allows viewers to swiftly identify the geographical distribution of (corresponding) authors who have published in the journal within the past five years.

We have continued to work on improving the information displayed. Below I have outlined some recent changes which take on board lessons learned during the pilot phase and your early feedback.

  • The metric ‘Quality’ has undergone a name change – it is now referred to as Impact.
  • The Journal Insights homepage now offers an overview containing links to the individual metrics available in each category.
  • There is a new, simplified visualization of SNIP (Source-Normalized Impact per Paper): Relative Database Potential (RDP) has been replaced by Raw Impact per Paper (RIP). In the five-year historical overview, the highest value is shown as a full circle and the values for the other four years are scaled relative to it. RIP provides an interesting alternative to the Thomson Impact Factor because it is based on Scopus data. 2012 data has also been added to SNIP.
  • Visualizations of the Article Influence & Eigenfactor and SCImago Journal Rank (SJR) have also been updated. Additionally, SJR now includes 2012 data.
  • Changes have been made to the design to aid search engine optimization (SEO).
  • A new drop down menu has been created in the top bar to make navigation easier.

 

An example of the Journal Insights pod, which can be found on the Elsevier.com journal homepage of participating journals.


 

Wishlist

For 2014, we would like to add new datasets and visualizations, and we will further improve the interface on the basis of author feedback. But we have to be realistic too: there is no such thing as a perfect dataset, which is why not all journals can display all metrics. The fact that Journal Insights was built in HTML5 is also still an issue for some older browsers: HTML5 is optimized for mobile devices but does not display very well in Internet Explorer 8 or lower. Fortunately, access via this browser is diminishing and currently accounts for less than 10% of our traffic.

Feedback appreciated!

We want to continue improving – if there are changes you would like to suggest, please contact me at h.zijlstra@elsevier.com. And if you would like to see the Journal Insights information introduced on your journal homepage, please contact your publisher or marketing manager.

Example of an Elsevier.com journal homepage displaying the Journal Insights pod

Example of a journal homepage on our Health Advance platform displaying the Journal Insights pod


Elsevier Announces 2012 Journal Impact Factor Highlights

The 2012 Journal Citation Reports® (JCR) published by Thomson Reuters is now available.



Linda Willems | Senior Researcher Communications Manager, Elsevier

The 2012 Journal Citation Reports® (JCR) has been published by Thomson Reuters. Elsevier journals top the rankings in 59 subject categories, up from 57 in 2011. Of Elsevier's 1,600 journals in the JCR, 44% are in the top 25% of their subject category. In 2012, 15 Elsevier journals have risen to the top of their subject category, including Cell, reclaiming the top position in "Biochemistry & Molecular Biology".

Other highlights include:

  • Elsevier saw 53% of its journal Impact Factors increase from 2011 to 2012
  • 17% of Elsevier journals are in the top 10% of their subject category

"Another year of positive results reinforces my view that we're on the right track as a Publisher, but most of the credit should go to the world-class authors, editors and reviewers," said Philippe Terheggen, Managing Director Journals at Elsevier. "When we look at these scores, it's important to keep in mind that while the Impact Factor is an important measure for overall journal influence, it is not to be used for evaluation of individual researchers or articles. We're increasingly looking at additional metrics, including so called Altmetrics, as a measure of influence of journals and authors. Meanwhile, we will continue to invest in enhancing the quality of our content, for example by increased support of peer review, by enriching the online article, and through linking articles to research data sets in external repositories. That's the journey we're on."

Flagship titles and society journals

All of The Lancet journals' Impact Factors increased. The Lancet rose from 38.278 to 39.060, while The Lancet Oncology increased from 22.589 to 25.117, moving up its subject category ranking from 4th to 3rd position. The Lancet Neurology increased from 23.462 to 23.917, and The Lancet Infectious Diseases went up from 17.391 to 19.966, with both journals maintaining the top rankings in their respective categories.

The journals of Cell Press, an imprint of Elsevier, mostly saw stable Impact Factor trends, with highlights including strong growth from Molecular Cell (an 8% increase to 15.280) and Trends in Cognitive Sciences (a 27% increase compared with 2011, to 16.008). Its flagship journal, Cell, continues to lead in its field with an Impact Factor of 31.957, and is the number one research journal in the Cell Biology and Biochemistry & Molecular Biology categories.

Of the 420 titles in the JCR that Elsevier publishes on behalf of societies, 261, or 62%, showed a rise in their Impact Factors. Nine of these rank number one in their subject categories, including Evolution and Human Behavior, which rose from 4th position to the top of the category "Social Sciences, Biomedical". Two society journals reached a number one position for the first time: European Urology in "Urology & Nephrology", and Forensic Science International: Genetics in "Medicine, Legal".

The Impact Factor helps evaluate a journal's impact compared to others in the same field by measuring the frequency with which recent articles in a journal have been cited in a particular year: the 2012 Impact Factor takes into account citations in 2012 to papers published in 2010 and 2011.
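
In other words, with purely hypothetical numbers (not real journal data):

    # Hypothetical worked example of the 2012 Impact Factor calculation:
    # citations received in 2012 to items published in 2010 and 2011,
    # divided by the number of citable items published in 2010 and 2011.
    citations_2012_to_2010_2011 = 1500        # hypothetical citation count
    citable_items_2010_2011 = 320 + 280       # hypothetical article counts for 2010 and 2011
    impact_factor_2012 = citations_2012_to_2010_2011 / citable_items_2010_2011
    print(round(impact_factor_2012, 3))       # 2.5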

Traditional journal bibliometrics meets newcomer altmetrics


Of interest to: Journal editors (key), additionally authors and reviewers


Changes to SNIP & SJR metrics

Recent improvements to SNIP and SJR aim to make the metrics more intuitive and easy to understand.



Sarah Huggett | Publishing Information Manager, Elsevier

In recent years, computational advances have contributed to an acceleration in the field of bibliometrics. While for a long time the journal evaluation landscape was characterised by a scarcity of measures, there are now many journal metrics available, providing a varied and more complete picture of journal impact [1]. Editors may find these useful for comparing their journal to competitors in various systematic ways.

Scopus features two such citation indicators to measure a journal's impact: SNIP (Source Normalised Impact per Paper) and SJR (SCImago Journal Rank). These indicators use the citation data captured in the Scopus database to reveal two different aspects of a journal's impact:

  • SNIP takes into account the field in which a journal operates, smoothing differences between field-specific properties such as the number of citations per paper, the amount of indexed literature, and the speed of the publication process.
  • SJR takes into account the prestige of the citing journal; citations are weighted to reflect whether they come from a journal with a high or low SJR.

These two indicators use a three-year window, are freely available on the web [2] and are calculated for all journals indexed in the Scopus database. The metrics have article-type consistency, i.e. only citations to and from scholarly papers are considered.

Some editors may have noticed changes to both SNIP and SJR values for their and other journals around mid-October 2012. These changes were introduced to make the metrics more intuitive and easy to understand. Following these improvements, the values are now computed and released once a year in the summer.

Further information on both metrics is available on the Journal Metrics website.

SNIP: How does it work?

SNIP was developed by Henk Moed, Senior Scientific Advisor at Elsevier, who was then part of the CWTS bibliometrics group at the University of Leiden, The Netherlands. It is a ratio, with a numerator and a denominator. SNIP's numerator gives a journal's raw impact per paper (RIP). This is simply the average number of citations received in a particular year (e.g. 2013) by papers published in the journal during the three preceding years (e.g. 2010, 2011 and 2012).

SNIP's denominator is the Database Citation Potential (DCP). There are large differences between scientific subfields in the frequency with which authors cite papers, so for each journal an indicator is calculated of the citation potential in the subject field it covers. This citation potential forms SNIP's denominator.

SNIP is RIP divided by DCP.
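
Put concretely, with hypothetical numbers:

    # Hypothetical SNIP example for the year 2013.
    # RIP: average citations received in 2013 by the journal's 2010-2012 papers.
    # DCP: citation potential of the journal's subject field (hypothetical value).
    citations_2013_to_2010_2012 = 900         # hypothetical
    papers_2010_2012 = 300                    # hypothetical
    rip = citations_2013_to_2010_2012 / papers_2010_2012    # 3.0
    dcp = 2.4                                 # hypothetical field citation potential
    snip = rip / dcp
    print(round(snip, 2))                     # 1.25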

In October 2012, the following changes were applied:

  • A different averaging procedure is now used in the calculation of the denominator to reduce the impact of outliers.
  • A correction factor has been introduced to the weighting of citations from journals with low numbers of references.
  • The new calculation results in an average SNIP score across all journals in Scopus of approximately one.

Further details are available on ScienceDirect [3].

Dr Ludo Waltman, Researcher at the Centre for Science and Technology Studies (CWTS) of Leiden University, commented: “SNIP allows the impact of journals to be compared across fields in a fair way, and has been updated following the most recent insights in the fields of bibliometrics and scientometrics. The recent changes ensure the most balanced treatment of journals from different fields, with minimal implications for users.”

SJR: How does it work?

SJR was developed by the SCImago research group from the University of Granada, dedicated to information analysis, representation and retrieval by means of visualization techniques.

SJR looks at the prestige of a journal, as indicated by considering the sources of citations to it, rather than its popularity as measured simply by counting all citations equally. Each citation received by a journal is assigned a weight based on the SJR of the citing journal. A citation from a journal with a high SJR value is worth more than a citation from a journal with a low SJR value.
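
The prestige-weighting idea can be illustrated with a stripped-down iterative sketch. This is only an illustration of the principle that citations from high-prestige journals count for more; the published SJR algorithm adds a citation window, damping factors and size normalisation, and the citation counts below are invented.

    # Stripped-down illustration of prestige weighting (not the published SJR formula).
    # cites[a][b] = number of citations from journal a to journal b (invented data).
    cites = {
        "J1": {"J2": 30, "J3": 10},
        "J2": {"J1": 20, "J3": 5},
        "J3": {"J1": 5, "J2": 5},
    }
    journals = list(cites)
    score = {j: 1.0 for j in journals}                 # start with equal prestige

    for _ in range(50):                                # iterate until scores stabilise
        new = {j: 0.0 for j in journals}
        for citing, targets in cites.items():
            out_total = sum(targets.values())
            for cited, n in targets.items():
                # each citation passes on prestige in proportion to the citing
                # journal's current score and its share of that journal's citations
                new[cited] += score[citing] * n / out_total
        total = sum(new.values())
        score = {j: v * len(journals) / total for j, v in new.items()}   # rescale so the mean is 1

    print({j: round(v, 2) for j, v in score.items()})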

In October 2012, the following changes were applied:

  • A heavier weighting of the more prestigious citations that come from within the same field, or those closely related.
  • A compensating factor to overcome the decrease of prestige scores over time as the number of journals increases.
  • A more readily understandable scoring scale with an average of one.

Further details are available on ScienceDirect [4].

[1] Bollen J, Van de Sompel H, Hagberg A, Chute R (2009) A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE 4(6): e6022. doi:10.1371/journal.pone.0006022

[2] http://www.journalmetrics.com/

[3] Waltman L, van Eck N J, van Leeuwen T N, Visser M S; Some modifications to the SNIP journal impact indicator, Journal of Informetrics, Volume 7, Issue 2, April 2013, http://dx.doi.org/10.1016/j.joi.2012.11.011

[4] Guerrero-Bote V P, Moya-Anegón F; A further step forward in measuring journals' scientific prestige: The SJR2 indicator, Journal of Informetrics, Volume 6, Issue 4, October 2012, http://dx.doi.org/10.1016/j.joi.2012.07.001


View The Individual and Scholarly Networks seminar

The recent Research Trends and Elsevier Labs virtual seminar, The Individual and Scholarly Networks, is now available to view in archive.



Sarah Huggett | Publishing Information Manager, Elsevier

Research Trends and Elsevier Labs recently co-hosted their first virtual seminar: The Individual and Scholarly Networks. The event, held on 22nd January, attracted more than 500 attendees from all over the world, and featured six compelling external speakers. We used a novel format aimed at maximising engagement: in addition to audio and slides, we showed videos of the speakers and a Twitter feed.

Materials from the event, including recordings of each session and discussion, presentations, and a Q&A transcript for those questions that we were unable to address live, are now all freely available on the Research Trends website, although unfortunately we were not able to resolve some of the technical issues affecting audio in the second part of the event. A summary of the event and highlights of the discussion are also available.

There were two components to the event. The first part focussed on building networks, the ways in which relationships are formed and maintained, and how they are changing the nature of scholarly relationships. In this session, Professor Jeremy Frey discussed how varying degrees of openness aid scientific collaboration, while Gregg Gordon presented an overview of the Social Science Research Network. Then, Dr William Gunn talked about building networks through information linking, using Mendeley as an example. The second part was about evaluating network relationships, exploring the related areas of alternative metrics, contributorship and the culture of reference. In this session, Dr Gudmundur Thorisson discussed digital scholarship and the recently launched ORCID initiative, while Kelli Barr questioned the purpose and objectivity of evaluations. Finally, Dr Heather Piwowar explored various impact flavours, in particular ImpactStory. Each session was followed by lively discussion amongst the presenters, spurred by questions and comments from our remote audience.


The Changing Face of Journal Metrics



For several decades now, a principal measure of an article's impact1 on the scholarly world has been the number of citations it has received.

An increasing focus on using these citation counts as a proxy for scientific quality provided the catalyst for the development of journal metrics, including Garfield’s invention of the Impact Factor in the 1950s2. Journal-level metrics have continued to evolve and be refined; for example, relative newcomers SNIP and SJR3 are now used on Elsevier’s Scopus.

In recent years, however, interest has grown in applications at author, institute and country level. These developments can be summarized as follows (see Figure 1):

Figure 1. Types of bibliometric indicators. First, second and third generation reproduced with the permission of Moed & Plume (2011)4, fourth and fifth added by the authors.

The Journal Impact Factor (JIF) was born at a time when there was one delivery route for scholarly articles – paper publications – and computational power was expensive. The migration from paper to electronic delivery (particularly online) has enabled better understanding and analysis of citation count-based impact measurements, and created a new supply of user-activity measurements: page views and downloads.

Over the past few years, the growing importance of social networking, combined with a rising number of platforms making their activity data publicly available, has resulted in new ways of measuring scholarly communication activities, encapsulated by the term altmetrics5. Although we have added these new metrics to Figure 1, there is no suggestion that later generations necessarily replace the earlier ones. In fact, the Relative Impact Measure is still used substantially, even though network analysis exists. The choice of which metric to use is often influenced by the context and question, and first-, second- or third-generation metrics may still prove the more suitable options.

Although the word altmetrics is still relatively new (not yet three years old), several maturing applications already rely on such data to give a sense of the wider impact of scholarly research. Plum Analytics is a recent, commercial newcomer, whereas Digital Science's Altmetric.com is a better established, partially commercial solution. A third mature product is ImpactStory (formerly total-impact.org), an always-free, always-open application.

Altmetrics applications acquire the broadest possible set of data about content consumption. This includes HTML page views and PDF downloads, social usage (e.g. tweets and Facebook comments), as well as more specialized researcher activities, such as bookmarking and reference sharing via tools like Mendeley, Zotero and CiteULike. A list of the data sources used by ImpactStory appears below. As well as counting activities surrounding the full article, there are also figure and data re-use totals. Altmetric.com also takes into account mass media links to scholarly articles.

To get a feel for how altmetrics work, you can visit www.impactstory.it or www.altmetric.com and enter a publication record. Alternatively, if you have access to Elsevier’s Scopus, you will find many articles already carry an Altmetric.com donut in the right hand bar (the donut may not be visible in older versions of Microsoft Internet Explorer). If there is no data yet available, an Altmetric.com box will not appear on the page. Elsevier also supplies data to ImpactStory, sending cited-by counts to the web-platform.


Figure 2: An example of the Altmetric.com donut which can be found on many Scopus articles. This one, from the paper 'How to Choose a Good Scientific Problem' in Molecular Cell, shows that (at time of writing) the article has been mentioned 89 times on a variety of platforms and saved as a bookmark by more than 4,000 people.

What do all these numbers mean?

Although there is some evidence to link social network activity, such as tweets, with ultimate citation count (Priem, Piwowar et al, 20126; Eysenbach, 20117), this field is still in its early stages, and a considerable number of areas still require research. Further investigation aims to uncover patterns and relationships between usage data and ultimate citation, allowing users to discover papers of interest and influence they might previously have failed to notice. Planned areas of research include the following (a toy illustration follows the list):

  • Scholarly consumption versus lay consumption. With so much benefit to be gained from encouraging public engagement in science, we need new ways of tracking this. After all, while members of the public are unlikely to cite articles in a formal setting, we may well see increased social sharing. Analysis of usage data might reveal striking differences between scholarly and lay usage patterns. For example, references to research amongst the general public may be primarily driven by mass media references – just as the mass media might be influenced by academic work going viral on Twitter and Facebook – whereas one might hypothesize that activity measured in specialized scholarly tools, such as Mendeley, would be less subject to this influence. This information could be critical in allowing publishers and platform owners to tweak their systems so as to best support use, and to report on wider usage to funding agencies.
  • When does social networking become marketing and when does it become gaming or cheating? There has been criticism8 that the JIF can be increased by excluding or including reference counts from certain types of articles, and by journals' self-citation policies. Social data is just as prone to influence. For example, while authors' tweets about their papers are perfectly legitimate social marketing of the type previously done through email groups, and while it's reasonable to assume that some mentions of this type will go 'viral' and thus be propelled towards mass media mentions and possibly drive citations, there will inevitably be concerted efforts to build momentum that goes beyond natural self/network marketing. A sophisticated analysis of social networking mapped against author networks might be able to detect and downplay this type of activity.
  • What other factors influence ultimate impact? As we expand our ability to understand what drives scholarly impact and how usage patterns should be interpreted, the scope should increase to include other non-social facets. For example, do cross-discipline papers get a wider readership than simply the disciplines targeted? Do papers with a lay abstract attract a wider lay audience? To what extent does the inclusion of a high-ranking contributor boost citation above what might be predicted?
  • Do any particular consumption activities predicate others? Is there a computable conversion rate for moving from one activity to another? How do these vary over time and by discipline? What activities lead to citation? Are there papers that are less well cited - or not cited at all - that nevertheless appear to have impact in other ways?
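
As a toy illustration of the kind of analysis behind these questions, one can simply correlate early social attention with later citations across a set of articles; the data below are invented and the method deliberately simple.

    # Toy illustration with invented data: correlate tweets in an article's first
    # week with citations accumulated after two years, across a set of articles.
    tweets_week1 = [0, 2, 5, 1, 12, 3, 0, 8]        # invented per-article counts
    citations_2yrs = [1, 3, 9, 2, 15, 4, 0, 10]     # invented per-article counts

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    print(round(pearson(tweets_week1, citations_2yrs), 2))   # close to 1 for this toy data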

Altmetrics is still in its infancy, both as a field of study and a commercial activity. Currently only a handful of smaller organizations are involved and there is no engagement from major web players such as Google or Microsoft. On the publisher front, while all are active with altmetrics in some form, only Macmillan has chosen to get involved via Digital Science's Altmetric.com. That means there is a great deal to play for. We expect to see more emergent platforms and research, and it's not impossible to envisage the development of professional advisers who work with institutions to increase their altmetrics counts – especially now that impact is increasingly tied to funding decisions (e.g. Government funding in the UK via the Research Excellence Framework).

Elsevier is fully engaged with the altmetrics movement. For example, in 2013 the Elsevier Labs team aims to co-publish large scale research which will begin to explore the relationship between the different facets and to establish a framework for understanding the meaning of this activity. It aims to build on the current work to found an empirically-based discipline that analyses the relationship between social activity, other factors and both scholarly and lay consumption and usage. By working together to combine knowledge at Elsevier, we intend to show that no single measurement can provide the whole picture and that a panel of metrics informed by empirical research and expert opinion is typically the best way to analyze the performance of a journal, an author or an article.

References and useful links

  1. http://en.wikipedia.org/wiki/Impact_factor
  2. http://www.garfield.library.upenn.edu/papers/jifchicago2005.pdf
  3. For more information on SNIP and SJR, see Elsevier website www.journalmetrics.com and Henk Moed’s interview on the SNIP methodology on YouTube
  4. Moed H & Plume A (2011). The multi-dimensional research assessment matrix. Research Assessment, May 2011 (23). Retrieved from Research Trends
  5. More on the altmetrics movement, conferences and workshops may be found at www.altmetrics.org
  6. "Altmetrics in the wild: Using social media to explore scholarly impact” http://arxiv.org/abs/1203.4745
  7. "Can Tweets Predict Citations? Metrics of Social Impact Based on Twitter and Correlation with Traditional Metrics of Scientific Impact” http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3278109/
  8. Show me the data, Rossner, Van Epps, Hill, J Cell Biol 2007 179:1091-1092. Published December 17, 2007

Author Biographies

Mike Taylor
TECHNOLOGY RESEARCH SPECIALIST
Mike has worked at Elsevier for 16 years. He has been a research specialist in the Labs group for the last four years, and has been involved with ORCID (and previous projects) throughout that time. Mike's other research interests include altmetrics, contributorship and author networks. Details of his research work can be found on http://labs.elsevier.com.


Judith Kamalski
MANAGER STRATEGIC RESEARCH INSIGHTS & ANALYTICS
Judith focuses on demonstrating Elsevier’s bibliometric expertise and capabilities by connecting with the research community. She is heavily involved in analyzing, reporting and presenting commercial research performance evaluation projects for academic institutes, as well as governments. Judith has worked within several areas at Elsevier including bibliographic databases, journal publishing, strategy, sales and, most recently, within Research & Academic Relations. Judith has a PhD from Utrecht Institute of Linguistics and holds Masters Degrees in Corporate Communications and French Linguistics & Literature.


Elsevier’s 2011 Impact Factors Highlights

The 2011 Journal Citation Reports® (JCR), have just been published by Thomson Reuters. Learn how Elsevier’s journals fared.



Sarah Huggett | Publishing Information Manager, Elsevier

We are pleased to announce the highlights of our recent journal Impact Factor performance. According to the 2011 Journal Citation Reports® (JCR), published by Thomson Reuters, 58% of our journal Impact Factors increased from 2010 to 2011, compared to 54% for non-Elsevier journals.

Our journals top the rankings in 57 subject categories, and nearly 40% of our titles indexed in the JCR are in the top 10 of their subject category, with 188 journals ranked in the top 3. Five of our journals have even seen a decade of continuous Impact Factor increases: Journal of Ethnopharmacology; Journal of Materials Processing Technology; Carbon; Electrochemistry Communications; Renewable Energy.

In a press release announcing the good news, Martin Tanke, Managing Director Journals at Elsevier, said: “We value these results as they are an acknowledgement of the excellent performance by the authors, reviewers and editors we work with. In addition, we believe these outcomes are the result of our continuous focus on quality. We will continue to invest time and resources into quality enhancing initiatives such as increased support and enhancement of the peer review process to speed up review times, and further innovations on the publishing process to deliver faster publication for our authors.”

The Impact Factor helps to evaluate a journal’s impact compared to others in the same field by measuring the frequency with which recent articles in a journal have been cited in a particular year: the 2011 Impact Factor takes into account citations in 2011 to papers published in 2009 and 2010. It is important to note that the Impact Factor is just one perspective on a journal’s quality and influence, and that other metrics (such as SJR or SNIP) provide different perspectives and address some of the shortcomings of the Impact Factor.

The emphasis on the Impact Factor as a measure of journal evaluation can lead to potentially unethical behaviours aiming to inflate a journal’s Impact Factor, a topic discussed in our Impact Factor Ethics for Editors article in the June 2012 issue of Editors’ Update.


Impact Factor Ethics for Editors



How Impact Factor engineering can damage a journal’s reputation

The dawn of bibliometrics

We’ve all noticed that science has been accelerating at a very fast rate, resulting in what has been called ‘information overload’ and more recently ‘filter failure’. There are now more researchers and more papers than ever, which has led to the heightened importance of bibliometric measures. Bibliometrics as a field is fairly new, but it has seen impressive growth in recent years due to advances in computation and data storage, which have improved the accessibility and ease of use of bibliometric measures (for instance through interfaces such as SciVerse Scopus or SciVal). Bibliometrics are increasingly used as a way to systematically compare diverse entities (authors, research groups, institutions, cities, countries, disciplines, articles, journals, etc.) in a variety of contexts. These include an author deciding where to publish, a librarian working on changes in their library’s holdings, a policy maker planning funding budgets, a research manager putting together a research group, a publisher or Editor benchmarking their journal against competitors, etc.

Enter the Impact Factor

From this perspective, journal metrics can play an important role for Editors, and we know it’s a topic of interest because of the high attendance at our recent webinar on the subject. There are many different metrics available, and we always recommend looking at a variety of indicators to yield a bibliometric picture that is as thorough as possible, providing insights into the diverse strengths and weaknesses of any given journal1. However, we are well aware that one metric in particular seems to be considered especially important by most Editors: the Impact Factor. Opinions on the Impact Factor are divided, but it has long been used as a prime measure in journal evaluation, and many Editors see it as part of their editorial duty to try to raise the Impact Factor of their journal2.

An Editor’s dilemma

There are various techniques through which this can be attempted, some more ethical than others, and it is an Editor’s responsibility to stay within the bounds of ethical behavior in this area. It might be tempting to try to improve one’s journal’s Impact Factor ranking at all costs, but Impact Factors are only as meaningful as the data that feed into them3: if an Impact Factor is exceedingly inflated as a result of a high proportion of gratuitous self-citations, it will not take long for the community to identify this (especially in an online age of easily accessible citation data). This realisation can be damaging to the reputation of a journal and its Editors, and might lead to a loss of quality manuscript submissions to the journal, which in turn is likely to affect the journal’s future impact. The results of a recent survey4 draw attention to the frequency of one particularly unethical editorial activity in business journals: coercive citation requests (Editors demanding authors cite their journal as a condition of manuscript acceptance).

Elsevier’s philosophy on the Impact Factor
“Elsevier uses the Impact Factor (IF) as one of a number of performance indicators for journals. It acknowledges the many caveats associated with its use and strives to share best practice with its authors, editors, readers and other stakeholders in scholarly communication. Elsevier seeks clarity and openness in all communications relating to the IF and does not condone the practice of manipulation of the IF for its own sake.”

This issue has already received some attention from the editorial community in the form of an editorial in the Journal of the American Society for Information Science and Technology5. Although some Elsevier journals were highlighted in the study, our analysis of 2010 citations to 2008-2009 scholarly papers (replicating the 2010 Impact Factor window using Scopus data) showed that half of all Elsevier journals have less than 10% journal self-citations, and 80% of them have less than 20% journal self-citations. This can be attributed to the strong work ethic of the Editors who work with us, and it is demonstrated through our philosophy on the Impact Factor (see text box on the right) and policy on journal self-citations (see text box below): Elsevier has a firm position against any ‘Impact Factor engineering’ practices.
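
For reference, the self-citation rate being described can be computed as follows; the counts here are hypothetical, not taken from the analysis above.

    # Hypothetical journal self-citation check over an Impact Factor-like window:
    # the share of 2010 citations to the journal's 2008-2009 papers that come
    # from the journal's own 2010 articles.
    citations_2010_total = 1200          # all 2010 citations to the journal's 2008-2009 papers
    citations_2010_from_journal = 90     # of which from the journal itself
    self_citation_rate = citations_2010_from_journal / citations_2010_total
    print("{:.1%}".format(self_citation_rate))    # 7.5%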

So, what is the ethically acceptable level of journal self-citations?

There are probably as many answers to this question as there are journals. Journal self-citation rates vary between scientific fields, and a highly specialised journal is likely to have a larger proportion of journal self-citations than a journal of broader scope. A new journal is also prone to a higher journal self-citation rate as it needs time to grow in awareness amongst the relevant scholarly communities.

Elsevier’s policy on journal self-citations
“An editor should never conduct any practice that obliges authors to cite his or her journal either as an implied or explicit condition of acceptance for publication. Any recommendation regarding articles to be cited in a paper should be made on the basis of direct relevance to the author’s article, with the objective of improving the final published research. Editors should direct authors to relevant literature as part of the peer review process; however, this should never extend to blanket instructions to cite individual journals. […] Part of your role as Editor is to try to increase the quality and usefulness of the journal. Attracting high quality articles from areas that are topical is likely the best approach. Review articles tend to be more highly cited than original research, and letters to the Editor and editorials can be beneficial. However, practices that ‘engineer’ citation performance for its own sake, such as forced self-citation are neither acceptable nor supported by Elsevier.”

As mentioned in a Thomson Reuters report on the subject: “A relatively high self-citation rate can be due to several factors. It may arise from a journal’s having a novel or highly specific topic for which it provides a unique publication venue. A high self-citation rate may also result from the journal having few incoming citations from other sources. Journal self-citation might also be affected by sociological factors in the practice of citation. Researchers will cite journals of which they are most aware; this is roughly the same population of journals to which they will consider sending their own papers for review and publication. It is also possible that self-citation derives from an editorial practice of the journal, resulting in a distorted view of the journal’s participation in the literature.”6

Take care of the journal and the Impact Factor will take care of itself

There are various ethical ways an Editor can try to improve the Impact Factor of their journal. Through your publishing contact, Elsevier can provide insights as to the relative bibliometric performance of keywords, journal issues, article types, authors, institutes, countries, etc., all of which can be used to inform editorial strategy. Journals may have the option to publish official society communications, guidelines, taxonomies, methodologies, special issues on topical subjects, invited content from leading figures in the field, interesting debates on currently relevant themes, etc., which can all help to increase the Impact Factor and other citation metrics. A high-quality journal targeted at the right audience should enjoy a respectable Impact Factor in its field, which should be a sign of its value rather than being an end in itself. Editors often ask me how they can raise their journal’s Impact Factor, but the truth is that as they already work towards improving the quality and relevance of their journal, they are likely to reap rewards in many areas, including an increasing Impact Factor. And this is the way it should be: a higher Impact Factor should reflect a genuine improvement in a journal, not a meaningless game that reduces the usefulness of available bibliometric measures.

References

1 Amin, M & Mabe, M (2000), “Impact Factors: use and abuse”, Perspectives in Publishing, number 1

2 Krell, FK (2010), “Should editors influence journal impact factors?”, Learned Publishing, Volume 23, issue 1, pages 59-62, DOI:10.1087/20100110

3 Reedijk, J & Moed, HF (2008), “Is the impact of journal impact factors decreasing?”, Journal of Documentation, Volume 64, issue 2, pages 183-192, DOI: 10.1108/00220410810858001

4 Wilhite, AW & Fong, EA, (2012) “Coercive Citation in Academic Publishing”, Science, Volume 335, issue 6068, pages 542–543, DOI: 10.1126/science.1212540

5 Cronin, B (2012), “Do me a favor”, Journal of the American Society for Information Science and Technology, early view, DOI: 10.1002/asi.22716

6 McVeigh, M (2002), "Journal Self-Citation in the Journal Citation Reports – Science Edition"

Author Biography

Sarah Huggett
PUBLISHING INFORMATION MANAGER, RESEARCH & ACADEMIC RELATIONS
As part of the Scientometrics & Market Analysis team, Sarah provides strategic and tactical insights to colleagues and publishing partners, and strives to inform the bibliometrics debate through various internal and external discussions. Her specific interests are in communication and the use of alternative metrics such as SNIP and usage for journal evaluation. After completing an M. Phil in English Literature at the University of Grenoble (France), including one year at the University of Reading (UK) through the Erasmus programme, Sarah moved to the UK to teach French at Oxford University before joining Elsevier in 2006.


Henk Moed

The Effect of Open Access upon Citation Impact

Does open access publishing increase citation rates?
Studies conducted in this area have not yet adequately controlled for various kinds of sampling bias.

Read more >


Henk F Moed | Senior Scientific Advisor, Elsevier

Does open access publishing increase citation rates?

Studies conducted in this area have not yet adequately controlled for various kinds of sampling bias.  Read on...

The debate about the effects of open access upon the visibility or impact of scientific publications started with the publication by Steve Lawrence (2001) in the journal Nature, entitled ‘Free online availability substantially increases a paper's impact’, which analyzed conference proceedings in the field of computer science. Open access is used here not to indicate the publisher business model based on the ‘authors pay’ principle, but, more generally, in the sense of being freely available via the Web. From a methodological point of view, the debate focuses on biases, control groups, sampling, and the degree to which conclusions from case studies can be generalized. This note does not give a complete overview of the studies published during the past decade but highlights key events.

Stevan Harnad and Tim Brody (2004) claimed that physics articles submitted as preprints to ArXiv (a preprint server covering mainly physics, hosted by Cornell University), and later published in peer-reviewed journals, generated a citation impact up to 400% higher than papers in the same journals that had not been posted in ArXiv. Michael Kurtz and his colleagues (Kurtz et al., 2005) found, in a study on astronomy, evidence of a selection bias (authors post their best articles freely on the Web) and an early view effect (articles deposited as preprints are published earlier and are therefore cited more often). Henk Moed (2007) found for articles in solid state physics that these two effects may explain a large part, if not all, of the differences in citation impact between journal articles posted as preprints in ArXiv and papers that were not.
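As a brief aside on the arithmetic behind such figures, a percentage citation advantage is typically expressed as the relative difference between the mean citation rates of the two groups being compared. The short Python sketch below illustrates this with hypothetical numbers; they are not data from any of the studies cited.

# Hypothetical mean citations per article for two groups of papers from the same journals
mean_citations_deposited = 10.0      # articles also posted to a preprint server
mean_citations_not_deposited = 2.5   # articles not posted

# Relative advantage, expressed as a percentage increase over the non-deposited group
advantage_pct = (mean_citations_deposited - mean_citations_not_deposited) / mean_citations_not_deposited * 100
print(f"Citation advantage: {advantage_pct:.0f}% higher")  # prints: Citation advantage: 300% higher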

In a randomized controlled trial of open versus subscription-based access to articles in physiology journals from a single publisher, Phil Davis and his colleagues (Davis et al., 2008) did not find a significant effect of open access on citations. In order to correct for selection bias, a new study by Harnad and his team (Gargouri et al., 2010) compared self-selective self-archiving with mandatory self-archiving at four research institutions. They argued that, although the first type may be subject to a quality bias, the second can be assumed to occur regardless of the quality of the papers. They found that the OA advantage proved just as high for both, and concluded that it is real, independent and causal: it is greater for more citable articles than it is for less significant ones, resulting from users self-selecting what to use and cite. But they also found that, for the four institutions, the percentage of publication output actually self-archived was at most 60%, and that for some it did not increase when their OA regime was transformed from non-mandatory into mandatory. Therefore, what the authors labeled ‘mandated OA’ is in reality, to a large extent, subject to the same type of self-selection bias as non-mandated OA.

On the other hand, it should be noted that all of the citation-based studies mentioned above seem to share the following bias: they were based on citation analysis carried out in a citation index with selective coverage of the good, international journals in each field. Analyzing citation impact in such a database is, in a sense, a bit like measuring the extent to which people are willing to leave their car unused during the weekend by interviewing mainly people at the car park of a large out-of-town warehouse on a Saturday. Those who publish in the selected set of good, international journals – a necessary condition for citations to be recorded in the OA advantage studies mentioned above – will tend to have access to these journals anyway. In other words: there may be a positive effect of OA upon citation impact, but it is not visible in the database used. The use of a citation index with more comprehensive coverage would enable one to examine how the citation impact of the covered journals affects the OA citation advantage; for instance, is such an advantage more visible in lower-impact or more nationally oriented journals than in international top journals?

Analyzing article downloads (usage) is a complementary and, in principle, valuable method for studying the effects of OA. In fact, the study by Phil Davis and colleagues mentioned above did apply this method and reported that OA articles were downloaded more often than papers with subscription-based access. However, significant limitations of this method are that not all publication archives provide reliable download statistics, and that those which do may record and/or count downloads in different ways, so that results are not directly comparable across archives. The implication seems to be that usage studies of the OA advantage, comparing OA with non-OA articles, can be applied only in ‘hybrid’ environments, in which publishers offer authors who submit a manuscript both an ‘authors pay’ and a ‘readers pay’ option. But this type of OA may not be representative of OA in general, as it disregards self-archiving in the OA repositories being created in research institutions all over the world.

An extended version of this paper will be published soon in the Elsevier publication Research Trends.


References

Davis, P.M., Lewenstein, B.V., Simon, D.H., Booth, J.G., Connolly, M.J.L. (2008). Open access publishing, article downloads, and citations: Randomised controlled trial. BMJ, 337 (7665), 343-345.

Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Carr, L., Brody, T., Harnad, S. (2010). Self-selected or mandated, open access increases citation impact for higher quality research. PLoS ONE, 5 (10), art. no. e13636.

Harnad, S., Brody, T. (2004). Comparing the impact of open access (OA) vs. non-OA articles in the same journals. D-Lib Magazine, 10(6).

Kurtz, M.J., Eichhorn, G., Accomazzi, A., Grant, C., Demleitner, M., Henneken, E., Murray, S.S. (2005). The effect of use and access on citations. Information Processing & Management, 41, 1395–1402.

Lawrence, S. (2001). Free online availability substantially increases a paper's impact. Nature, 411 (6837), p. 521.

Moed, H.F. (2007). The effect of “Open Access” upon citation impact: An analysis of ArXiv’s Condensed Matter Section. Journal of the American Society for Information Science and Technology, 58, 2047-2054.


The Impact Factor and other Bibliometric Indicators

Love it or loathe it, the journal Impact Factor remains a benchmark widely used by authors when deciding which journal to submit to. Whilst we recognize the Impact Factor at Elsevier, we also nurture the idea of using other indicators of journal performance.

Read more >




A Discussion on How to Improve Journal Performance

Of interest to: Journal editors (key), additionally authors and reviewers
Archive views to date: 310+
Average feedback: 4.4 out of 5

Read more >



If we begin to publish articles as they become available, how would you like us to communicate with you?


Short Communications

  • Postdoc Free Access Programme: Back by popular demand

    The return of the Postdoc Free Access Programme will help young scholars stay current in their field - even in an uncertain job market. Learn more

  • Maintaining the quality of Scopus content – a Board member’s view

    Learn more about the Scopus Content Selection Advisory Board and opportunities for you to become involved. Learn more

  • Research Highlights app makes it easy to keep tabs on new research

    New app allows researchers to create a personalized feed of articles based on keywords, journals, and authors. Learn more

  • University College London and Elsevier launch UCL Big Data Institute

    Collaboration will enable researchers to explore ideas for applying new technologies and analytics to scholarly content and data. Learn more

  • Altmetric pilots help Elsevier authors understand the impact of their articles

    The colorful altmetric donut used to indicate an article's impact in news and social media is now featured on various journal homepages and ScienceDirect. Learn more

  • Omics Comics

    An ode to omics penned by Dr Gad Gilad, Editorial Board member of International Journal of Developmental Neuroscience. Learn more

  • Journal visualization project benefits from new changes

    Earlier this year we unveiled the Journal Insights pilot. The initiative has now been rolled out to more than 800 journals and we highlight some recent improvements. Learn more

  • Abolish authors’ Conflict of Interest Statement, says author

    Author and Editorial Board Member, Dr Gad Gilad, PhD, argues that it is time to get rid of Author Conflict of Interest Statements. Learn more

  • Reflections on the Life of a Journal Editor

    After 24 years at the helm, the Founding Editor of Expert Systems With Applications shares his thoughts on editing as he prepares to retire. Learn more

Other articles of interest

Webinars & webcasts

Discover our webinar archive. This digital library features both Elsevier and external experts discussing, and answering questions on, a broad spectrum of topics.

Learn more about our growing library of useful bite-sized webcasts covering a range of subjects relevant to your work as an editor, including ethics, peer review and bibliometrics.

Editors’ Conferences

Amsterdam, The Netherlands
16-18 May, 2014

Boston, USA (program TBC)
21-23 November, 2014

Learn more about these forums for dialogue with, and between, our senior editors.