Editors' Update is your one-stop online resource to discover more about the latest developments in journal publishing, policies and initiatives that affect you as an editor, as well as other services and support available. Discover and participate in upcoming events and webinars and join in topical discussions with your peers.
Journals offering Registered Reports agree to review study protocols before experiments are conducted. If the protocols are judged to have merit the journal commits, in advance, to publishing the outcomes. In this article, Professor Chris Chambers, Registered Reports Editor of the Elsevier journal Cortex and one of the founders of the Registered Reports concept, discusses the initiative’s origins and its first scientific output.
In 2013, the journal Cortex took a step forward in reforming the culture of scientific publishing. With the support of Chief Editor Sergio Della Sala, we became one of the first journals to offer Registered Reports – an empirical article format designed to eliminate publication bias and incentivize best scientific practice. In contrast to conventional publishing, we provisionally accept for publication study protocols that are considered methodologically sound and address an important scientific question. Armed with this provisional acceptance of their work, authors can perform the research safe in the knowledge that the results themselves will not determine the article's publication. At the same time, readers of the final paper can feel more confident that the work is reproducible because the initial study predictions and analysis plans were independently reviewed.
The current issue of Cortex sees the first fruits of this labour: a Registered Report by Jona Sassenhagen and Ina Bornkessel-Schlesewsky from the University of Marburg and the University of South Australia. Sassenhagen and Bornkessel-Schlesewsky pre-registered an innovative experiment for testing whether the P600, an electrophysiological waveform associated with language processing, is in fact an instance of the P3, a waveform associated with attention. Their results are consistent with this hypothesis – these waveforms, considered distinct by some previous studies, may, in fact, reflect the same underlying neural process.
Being the first cab off the rank was always going to be challenging for both the authors and the Editorial Board, but by working together I think we can all be proud of the outcome: our inaugural Registered Report and the first to report an EEG (electroencephalography) study in human cognitive neuroscience. There are also many more Registered Reports approaching final publication.
Cortex timelines for Registered Reports submissions
- Our editorial sub-team initially triages manuscripts within an average of 5 days.
- Excluding the time taken for author revisions, manuscripts typically spend 8-10 weeks undergoing 1-3 rounds of in-depth review of the study protocol (Stage 1).
- Completed manuscripts, including Results and Discussion, are then re-reviewed following study completion (Stage 2).
The Registered Reports format isn’t just changing the face of cognitive neuroscience. Since our open letter in The Guardian calling for it to be made widely available across the life sciences, we have seen another 13 journals adopt the format, in fields ranging from political science to psychology to cancer biology. Clearly the format is attracting interest from a number of quarters.
As noted in my recent Reviewers’ Update article, this appeal can be explained by the fact that Registered Reports prevent publication bias (the selective publication of positive results)* while also neutralizing questionable research practices such as p-hacking and HARKing (hypothesizing after results are known), and countering low statistical power and the lack of data sharing. At the same time, by accepting manuscripts for publication before data exist, Registered Reports provide a positive incentive for scientists to put aside such practices in the first place. Our hope is that articles appearing under this banner will become the poster children for transparent, reproducible advances in science.
The figure below illustrates the editorial pipeline for Registered Reports at Cortex, which has also been adopted by several other journals. Details on review criteria and answers to frequently asked questions are available online.
Some scientists worry that Registered Reports could restrict creativity by requiring authors to adhere to a fixed research methodology. In fact – and this is important to emphasize – the Registered Reports initiative places no restrictions on creativity, flexibility or the reporting of serendipitous findings. While it is true that the pre-specified methods in a Registered Report must be followed, there are no bounds on the reporting of additional unregistered analyses. The only requirement is that such additional material be labelled transparently so that readers know which analyses were pre-registered and which were exploratory.
On this special occasion for Cortex, I want to extend my appreciation to the many people who have made Registered Reports possible, including Chief Editor Sergio Della Sala, the Cortex editorial sub-team consisting of Zoltan Dienes (University of Sussex), Rob McIntosh (University of Edinburgh), Pia Rotshtein (University of Birmingham), and Klaus Willmes-von Hinckeldey (RWTH Aachen University), Cortex editorial assistant Cheryl Phillips, and Elsevier Executive Publisher Toby Charkin.
And, most importantly, we extend our collective thanks to the authors who are breathing life into Registered Reports and making scientific reform a reality.
Chris Chambers (@chrisdc77) is a Professor of Cognitive Neuroscience at Cardiff University, Section Editor for Registered Reports at Cortex and AIMS Neuroscience, and Chair of the Registered Reports Committee at the Center for Open Science. His main research interests include the psychology and neuroscience of human impulse control, the interaction between science and the media, and evidence-based public policy. Professor Chambers will be speaking on Registered Reports at a University College London event on March 17th, 2014.
* See also “Why science needs to publish negative results” in this issue.
For participating journals, reviews of accepted articles will appear in an article format on ScienceDirect with a separate DOI.
Publishing Innovation Manager Dr. Bahar Mehmani | Director Publishing Innovation Dr. Joris van Rossum
While papers published in journals occasionally contain thanks to anonymous or named referees, for many reviewers the contribution they make to a publication is never publicly acknowledged.
Over the past 25 years, several publishers, societies and institutions have been instrumental in pushing for this to change by advocating a more open peer-review process. In response, from the 1990s onwards, a number of scientific journals began to trial new approaches. BioMed Central (owned by Springer Science+Business Media) now asks reviewers to sign their reviews and publishes them alongside the author’s response. It also publishes the original manuscript next to the published article. Recently, F1000 (Faculty of 1000) launched F1000Research, which publishes review reports with a separate DOI next to submitted articles. Meanwhile, eLife publishes sections of the decision letter after review and the associated author responses (subject to author agreement).
Elsevier also believes that the publication of peer review reports can contribute to greater transparency and allow reviewers to receive credit for the important work they do. However, we also understand this greater transparency might not be appropriate for all research areas or audiences.
In January 2012, the Elsevier journal Agricultural and Forest Meteorology began publishing PDFs of editor-selected peer review reports next to its published articles on ScienceDirect. The success of this initiative triggered the Publishing Peer Review pilot, which has now been launched for 5 journals.
Peer review – pros and cons of some common approaches
Single-blind reviewing: This is probably the most widely used approach, with reviewers’ names hidden from the authors. Reviewers can be impartial in their opinions, independent of authors’ reputations, and need not fear possible future repercussions for their own careers. However, some authors have expressed concerns that reviewers could delay their comments to allow their own work to be published first.
Double-blind reviewing: Both the authors’ and reviewers’ identities are concealed. While it does avoid potential bias against authors and ensure prestigious and influential authors are judged on the paper, rather than their reputation, it can be time-consuming to mask the identity of authors, and it is debatable whether a paper can ever be truly blind, especially in ‘niche areas’.
Open reviewing: With this approach, reviewers and authors are known to one another. It is currently more common in the Health Sciences and it is the approach adopted for this Publishing Peer Review pilot. While some editors feel open reviewing prevents malicious comments, stops plagiarism, prevents reviewers drawing upon their own ‘agenda’, and encourages honest, open responses, others argue that the opposite effect is achieved as junior researchers may sometimes be less honest for fear of affecting their own career or funding opportunities.
Editors of participating journals can choose to have review reports typeset and published, in an article format, alongside the relevant research paper on ScienceDirect but reviewers will be given the option to remain anonymous. The review reports will be freely accessible to all and the first are now available to view. Each review report will also be assigned a separate DOI (Digital Object Identifier – a unique character string used to identify electronic documents, such as research papers). Editors’ comments and reviewers’ comments-to-editor will not be included. Review reports will be grouped together on an annual basis and appear in a separate issue of the participating journal.
The pilot currently includes five journals.
It will run until the end of August 2015. We will then examine the feedback we have received from reviewers, editors, authors and readers of the pilot journals before developing the technology necessary to expand the trial.
You have your say
Over the past few months, you've been casting your votes in the Editors' Update poll "Would you review for a journal that made the names of its reviewers public?". At the time of going to press, 244 of you had expressed an opinion - 172 (70 percent) said yes, while 72 (30 percent) voted against.
Van Rooyen, Susan et al. (1999). “Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial”. British Medical Journal, 318, 23–27.
10 Nov 2014
Peer review is the cornerstone of scholarly publishing, but it can be a source of frustration for researchers. Criticisms levelled against it include claims that it slows down publication speed, consumes too much time and is inherently biased.
In this article, we draw upon the results of Elsevier studies that reflect the views of more than 3,000 active researchers. We not only examine their thoughts on the current peer-review landscape, but explore their expectations for the future.
Most researchers – 70 percent – are happy with the current peer-review process; a satisfaction rate higher than those recorded in similar 2007 and 2009 surveys. When asked if peer review helps scientific communication, 83 percent of those we surveyed agreed, with comments such as, "I have had reviews that were very insightful. When researchers get their nose caught in the lab book, we cannot see the forest through the trees. Having a peer look at your science helps expand the overall view". (Researcher in Environmental Science, Switzerland, aged 36-45.)
However, there is room for improvement; a third of researchers believe that peer review could be enhanced. The improvements they suggested included reviewers being given more guidance, and incentives to encourage reviewers to volunteer their time. Moreover, 27 percent of respondents felt that peer review is holding back scientific research. A Computer Science researcher from the UK explained: "Peer review for journals is often slow and biased. Radical ideas will be rejected as peer review is inherently conservative."
Is pressure increasing on reviewers? Perhaps surprisingly, only 29 percent of researchers surveyed believed that to be the case. However, that is 10 percentage points more than in the 2009 survey. Reasons cited include time pressure, lack of incentives, lack of specialist knowledge and too many (poor-quality) papers being sent for review.
If we take a look at which countries are bearing the brunt of increased pressure, it seems that US researchers are reviewing more papers than they are writing. On the flip side, Chinese researchers are writing substantially more papers than they are reviewing. The reason for this gap is that Chinese researchers are not being asked to review as frequently, even though they have one of the highest review acceptance rates for any country.
Anonymity appears to be important to researchers when it comes to the review process. We asked them about peer-review options ranging from double blind and single blind to various forms of open peer review and discovered that, as transparency increased, the likelihood of authors submitting papers decreased; 46 percent said they were unlikely to submit to a journal that published reviewers' names alongside their reports. It would appear some authors do not like the idea of their mistakes being made public. A slightly greater proportion – 49 percent – indicated they would be less likely to review for a journal that offered such open peer review.
Peer review has many purposes, from improving the quality of a paper to detecting fraud. Clearly, high expectations are placed upon the process, and for the most part researchers believe peer review delivers. However, there are two areas where respondents indicate it is failing: detecting plagiarism and detecting fraud.
Listen to our free peer-review webinar
The contributors to this article, Adrian Mulligan and Dr. Joris van Rossum, recently hosted a webinar The peer review landscape – what do researchers think? The archive version is now available to view.
There are a number of projects underway at Elsevier to support reviewers and the peer-review process. These include:
The new Reviewer Recognition Platform
This initiative provides participating reviewers with a personalized profile page where their reviewing history is documented. They are also awarded reviewer statuses based on the number of reviews they have completed for a specific journal – see Elsevier's Reviewer Recognition Platform prepares for next phase in this issue to discover how the project is evolving.
Integration of CrossCheck into EES
The plagiarism tool CrossCheck has now been integrated into EES (Elsevier's Editorial System) for a large number of journals. Once an author has submitted a manuscript, EES will automatically upload the editor PDF to CrossCheck's iThenticate website, where it will be checked against a huge database of publications. Editors can then view an automatically generated similarity report within EES. Over the past few months, several feature enhancements have been introduced and we will continue to roll out the tool with the aim of making it available to all journals by the end of 2014.
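The flow just described can be sketched as a small pipeline. The function names and report fields below are hypothetical stand-ins for illustration only; they are not the actual EES or iThenticate interfaces:

```python
# Illustrative sketch of the submission-time similarity check described
# above. `upload` and `generate_report` are injected stand-ins for the
# submission system's calls to the plagiarism-checking service; the real
# integration happens inside EES and is not exposed as an API like this.

def run_similarity_check(manuscript_pdf: bytes, upload, generate_report) -> dict:
    """Upload the editor PDF, then return the similarity report for the editor."""
    document_id = upload(manuscript_pdf)      # step 1: automatic upload on submission
    report = generate_report(document_id)     # step 2: compare against the publication database
    # Step 3: the report is surfaced inside the editorial system;
    # interpreting it and making a decision remain the editor's job.
    return {"document_id": document_id, "similarity_report": report}
```

The key point the sketch captures is that the tool only automates uploading and report generation; no accept/reject decision is made by the software.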
A new direction for the Article Transfer Service (ATS)
As outlined in the article How we can better support and recognize reviewers in Editors' Update Issue 42, for some time now, editors of 600 Elsevier journals have been able to suggest alternative journals to an author whose paper they have just declined. If the author agrees to submit to the new journal, any completed reviews can be transferred along with the paper.
In the article, we mentioned that a new experiment with six Elsevier soil science journals aims to improve the service by offering editors who decide not to accept a paper two important new options in EES: decline and reject. Decline simply says the paper is not suitable for the journal and allows the author to submit the paper – and reviews – to another soil science journal in the pilot. Reject means the author will not be invited to submit elsewhere.
Early results are proving encouraging with the number of manuscripts offered for transfer doubling under the new system. Dr. Joris van Rossum, Director Publishing Innovation at Elsevier, commented: "Not only do we see significantly more manuscripts offered a transfer, we also find many more authors accepting the offer to submit the article to another journal. We are hopeful that, in this way, authors will find the final destination for their manuscript faster and more efficiently."
Adrian Mulligan is a Research Director in Elsevier's Research & Academic Relations department. He has more than 15 years' experience in STM publishing and much of that time has been spent in research. He oversees Elsevier's Customer Insights Programs, ongoing tracking studies used to drive action in the business and help shape Elsevier strategy. Alongside these, Research & Academic Relations works in partnership with external groups to deepen understanding of the scholarly landscape across the industry. He has presented on a range of research-related topics at various conferences, including STM, ESOF, AAP, SSP, APE and ALPSP. Mulligan's background is in archaeology; he has a BA Honours degree and a Master of Science from the University of Leicester. He also has a diploma in Market Research from the Market Research Society.
For the past 12 years, Dr. Joris van Rossum has been involved in the launch and further development of many products and initiatives within Elsevier. From its inception, he worked as a Product Manager on Scopus, the largest abstract and citation database of peer-reviewed literature, and he worked on Elsevier's search engine for scientific information as Head of Scirus. Later, he developed the Elsevier WebShop, which offers support and services for authors at many stages of the publication workflow. In his current role as Director Publishing Innovation, Dr. van Rossum focuses on testing and introducing important innovations with a focus on peer review. He holds a Master of Science in biology from the University of Amsterdam, and a PhD in philosophy from VU University Amsterdam.
At Elsevier, we are focused on working with our journal editors and reviewers to improve and streamline the peer-review process. So when we acquired the global research management and collaboration platform Mendeley in 2013, we were keen to explore how peer review might benefit from its advanced collaborative features.
Researchers worldwide use Mendeley's desktop and cloud-based tools to manage and annotate documents, create citations and bibliographies, collaborate on research projects and network with fellow academics.
As reported earlier this year in Reviewers’ Update, we launched a series of small pilots which saw research papers brought within the Mendeley environment with the consent of the relevant reviewers and editors. The journals involved in the trials were Molecular Cell, Neuron, and Cell, which are part of the Cell Press family. While there were variations in each pilot, the common factor in all cases was that the reviewers remained anonymous to one another. In this article, we highlight the results of each trial and our planned next steps.
The Molecular Cell pilot
Reviewers were invited to discuss the article only after they had completed their reviews in the conventional way. Ten manuscripts were included in the pilot, and all six editors on the team participated.
Responses were positive. In a post-pilot survey, 17 of 24 reviewers responded, and around 94 percent said they liked the discussion and would be willing to participate in a similar discussion in the future. Five of the 10 authors we approached responded to the survey; they unanimously agreed that the interactive review process made it clearer to them how to revise their paper and that the editor’s summary of the discussion was helpful. Like reviewers and authors, editors found the forum useful, particularly for reaching consensus on the experiments needed for reconsideration, and when reviewers disagreed. The process also prompted interesting meta-discussion about peer review and the role of the reviewer. While there were clear positives, the process took longer than conventional peer review (since the discussion was tacked onto the regular process). Some reviewers expressed concern about the extra work, and editors felt they needed to spend additional time preparing to lead the discussions.
The Neuron pilot
This experiment was more complex than the Molecular Cell trial. Here, reviewers were encouraged to perform the review process end-to-end in Mendeley. Ten manuscripts were included in the pilot and seven editors contributed. Reviewers were encouraged to annotate the PDF directly, but were also asked to provide summary comments that the editors used to initiate discussion.
Authors and reviewers were responsive to our requests to include them in the pilot, with only one author choosing not to participate. We found the reviewers tended not to use Mendeley’s PDF annotation function, with the majority preferring to provide only summary comments. The quality and extent of the interactive discussions varied: in cases where the reviewers were largely in agreement that the paper did not have potential for Neuron, the discussion was more limited, whereas papers that were viewed as being more promising generally led to a more extensive discussion. The editors felt that the process was most valuable in these cases, and this sentiment was reinforced by reviewer feedback.
The Cell pilot
Reviewers were asked whether they would like to interact directly, but only after the traditional review process was complete and they had been offered the opportunity to see each other’s responses.
The responses were mixed and depended on the circumstances: the greater the divergence of reviewers’ views on the paper, the greater their interest in interacting. It may therefore be wise for editors to make the option for interactive peer review available but not insist on it for every paper.
Enthusiasm for collaborative review appears to be generally high among researchers and we now plan to expand our ability to implement it. Improving efficiency, reducing logistical hurdles and maintaining fast turnaround times for authors will remain key priorities. One major challenge with using Mendeley for interactive peer review comes with a clear silver lining: so many of our customers already have Mendeley accounts that, to log in anonymously as peer reviewers, they had to provide us with an alternate email address. With that in mind, we are now exploring how reviewers can use their EM/EES/Evise account information to seamlessly engage in collaborative review.
Thank you to Matthew Green of Mendeley, Elsevier Strategy Analyst Sweitse van Leeuwen, and Cell Press Editorial Coordinator Patrick Hannon for their support in implementing these Mendeley peer-review trials.
Dr. Karen Carniol is Deputy Editor of Cell and has been an editor at Cell Press since 2006. She received her PhD training at Harvard University before joining Elsevier.
Dr. John Pham received his PhD in Biochemistry and Molecular Biology from Northwestern University, and he did postdoctoral work at Harvard Medical School. He joined Cell Press in 2008 and is Editor of Molecular Cell.
At the time of writing, Meredith LeMasurier was Deputy Editor of Neuron.
Earlier this year, we reported on the new Elsevier Reviewer Recognition Platform in the Editors' Update article How we can better support and recognize reviewers.
Since the platform's launch, we have been monitoring the feedback and advice you have shared with us and we are now ready to begin work on phase 2. Recently, Professor Dr. Lutz Prechelt, Professor of Informatics and Head of the Software Engineering Research Group (AG SE) at the Freie Universität in Berlin, joined the program as an advisor.
In this article, Elsevier's Dr. Joris van Rossum and Dr. Bahar Mehmani take a closer look at reactions to the platform's launch and Professor Prechelt talks about our ambitious plans for the initiative's future.
We know from reviewer feedback that, while reviewers consider their work important, they can feel it is nearly invisible to the outside world and hardly rewarded. The Reviewer Recognition Platform aims to change that. The platform provides participating reviewers with a personalized profile page where their reviewing history is documented. Moreover, reviewer statuses are awarded based on the number of reviews they have completed for a specific journal; a reviewer who completes at least one review within a two-year time period becomes a 'Recognized Reviewer', while those in the top 10th percentile become 'Outstanding Reviewers'. The platform also offers various discounts for Elsevier services, such as the Elsevier WebShop, and reviewers can download a variety of certificates. Future extensions will also describe the quality of reviewing (by means of percentiles among the reviewers of the particular journal) measured with journal-specific criteria.
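As a rough illustration, the count-based status rule described above might be computed as follows. The exact cut-off handling (ties, rounding of the top 10 percent) is an assumption made for this sketch; it is not Elsevier's actual implementation:

```python
# Hypothetical sketch: reviewers with at least one completed review in the
# window become "Recognized Reviewer"; the top 10 percent by review count
# become "Outstanding Reviewer". Tie and rounding behaviour are assumptions.

def reviewer_statuses(review_counts: dict[str, int]) -> dict[str, str]:
    # Only reviewers with at least one completed review qualify at all.
    active = {name: n for name, n in review_counts.items() if n >= 1}
    if not active:
        return {}
    ranked = sorted(active.values(), reverse=True)
    top_n = max(1, round(len(ranked) * 0.10))   # size of the top-10-percent group
    cutoff = ranked[top_n - 1]                  # review count at the percentile boundary
    return {
        name: "Outstanding Reviewer" if n >= cutoff else "Recognized Reviewer"
        for name, n in active.items()
    }
```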
When the Reviewer Recognition Platform went live in March this year, there were 40 participating journals. It now features more than 260 Elsevier journals from a variety of disciplines and that number is growing - each month, we add the five-year review history of reviewers for 50 new journals. When a review has been completed, reviewers receive an email providing them with a direct link to their Elsevier review profile on the platform. So far we have sent emails to 7,700 reviewers.
Our survey results show that reviewers find the Reviewer Recognition Platform a valuable initiative. They have given the platform a score of 8.20 out of 10 (n=488). More than 68 percent of respondents found the information contained in their profile useful and more than 41 percent mentioned they plan to share their profile and status with others (their department head, colleagues, followers and friends on their social media channels, etc.). We have also received a number of novel ideas and suggestions from reviewers about how we can target rewards and further recognize their invaluable efforts.
Professor Dr. Lutz Prechelt's story
I joined the Reviewer Recognition Platform project as an advisor, because I have been thinking about initiating a similar program – the Review Quality Collector – since 2012. I would like to ensure scientists are publicly recognized for good reviewing performance, journals receive reviews of the best possible quality, and research institutions include reviewing quality information in their evaluation of scientists' overall performance. Elsevier's Reviewer Recognition Platform had already started and could be extended to include the Review Quality Collector's goals, so we decided to join forces.
During the next stages of the project, we want to onboard more journals to the platform. Also, we want reviewers to be able to download their review history so that the reviewing record can easily be shared.
The crucial next step, however, will be the introduction of quality elements in the reviewer status. So far, the status is based on the number of completed reviews only. We want editors and authors to provide feedback about the helpfulness of the review and also record timeliness so that we can take these important aspects into account when recognizing reviewers.
We have already started the process by asking editors and authors of selected journals a few simple questions right after the reviewers have delivered their reports. This includes asking them to rate the submitted manuscript reviews in a generic fashion. Over the next couple of months, we expect to tweak the questions, analyze the results, and hold discussions with editors. Our aim is to arrive at a criteria framework that can be customized for each journal in such a way that it will provide a sound method to gauge the qualitative contribution of a reviewer.
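One way such a criteria framework could combine helpfulness ratings with timeliness is sketched below. The weights, the 1-5 rating scale, and the three-week deadline are illustrative assumptions, not the criteria under discussion:

```python
# Illustrative sketch only: a per-review quality score mixing editor/author
# helpfulness ratings (assumed 1-5 scale) with timeliness against an
# assumed deadline. The weighting is a design parameter a journal could tune.

def review_quality_score(helpfulness_ratings: list[float],
                         days_taken: int,
                         days_allowed: int = 21,
                         rating_weight: float = 0.7) -> float:
    """Return a score in [0, 1]; higher means a more helpful, more timely review."""
    if not helpfulness_ratings:
        raise ValueError("need at least one rating")
    mean_rating = sum(helpfulness_ratings) / len(helpfulness_ratings)
    rating_component = (mean_rating - 1) / 4  # map the 1-5 scale onto 0-1
    # Full timeliness credit for on-time reviews, decaying linearly with lateness.
    lateness = max(0, days_taken - days_allowed)
    timeliness = max(0.0, min(1.0, 1 - lateness / days_allowed))
    return rating_weight * rating_component + (1 - rating_weight) * timeliness
```

Scores like this could then be converted into per-journal percentiles, which is the form of quality feedback the article envisages.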
In the long term, we hope the statuses and procedures we develop will evolve into industry standards that will be adopted by other publishers. It is time reviewers are appropriately recognized for their important contribution to the progress of research.
Professor Dr. Lutz Prechelt became full Professor of Informatics at Freie Universität Berlin in 2003 after working as a manager in the software industry for several years. His PhD in Informatics is from Universität Karlsruhe (now KIT), Germany. His research is primarily concerned with better understanding the human aspects of software development. He has always been deeply interested in issues of research methodology and research quality. He considers himself a thorough reviewer.
Dr. Bahar Mehmani is Publishing Innovation Manager in the Innovation and Publishing Development department (IPD) at Elsevier's Amsterdam office. She is working on a number of reviewer-related projects, all of which are designed to recognize reviewers' contribution to the progress of science. She received her PhD in Theoretical Physics from the University of Amsterdam (UvA) in 2010. Before joining Elsevier, she was a postdoc researcher at Max Planck Institute for the Science of Light (MPL).
For the past 12 years, Dr. Joris van Rossum has been involved with the launch and further development of many products and initiatives within Elsevier. From its inception he was a Product Manager for Scopus, the largest abstract and citation database of peer-reviewed literature, and he worked on Elsevier's search engine for scientific information as Head of Scirus. Later, he developed the Elsevier WebShop, which offers support and services for authors at many stages of the publication workflow. As Director Publishing Innovation, Dr. van Rossum is focused on testing and introducing important innovations with a focus on peer review. He holds a Master of Science in biology from the University of Amsterdam, and a PhD in philosophy from VU University Amsterdam.
Ask any journal editor to name their top three pain points and you will frequently hear "finding reviewers".
With researchers so busy and the number of journal submissions on the rise, it is a common challenge for the editor community.
Edward H Shortliffe, MD, PhD, is Professor of Biomedical Informatics at Arizona State University in Phoenix, USA. In his role as Editor-in-Chief of the Journal of Biomedical Informatics, he has come up with a simple yet novel approach to solving the problem: researchers interested in reviewing for the journal are invited to fill in a form on the journal homepage.
Here he outlines the origins of the initiative and his observations since the form’s introduction.
When I became Editor-in-Chief of Journal of Biomedical Informatics (JBI) around 14 years ago, I began receiving email messages from people interested in reviewing for the journal.
As is true for every journal, sometimes we struggle not only to source enough reviewers, but to find people with the correct areas of expertise to review the research we publish.
Some of the people who volunteered were clearly extremely knowledgeable in their field, but that field was outside the scope of our journal. In other cases I would receive emails with hardly any information – which meant I had to contact the sender to request more details – while other emails ran to several pages.
I realized that by creating a form with a standard set of questions, we could both formalize and simplify the volunteer process.
How does it work?
People visiting the Elsevier.com homepage of Journal of Biomedical Informatics see a link inviting them to volunteer as a reviewer.
Those clicking on the link are taken through to the form below. It contains a range of questions, the answers to which help Professor Shortliffe to decide whether the researcher is a suitable reviewer candidate.
As well as the usual information – name and contact details – I ask them for a short bio and their publishing and reviewing history. The question I find especially useful is “What is your motivation to review for JBI?”. It helps me to assess whether they have a clear understanding of the journal’s aims and scope.
The way someone fills in the form also tells me a lot about them. If it has been completed in a slipshod manner I can see they wouldn’t be the kind of reviewer JBI is looking for.
Interestingly, I have found that around 90 percent of the people asking to review for the journal are Asian postdocs in the US.
The papers we publish tend to be concerned with information technology rather than medical devices, and to focus on underlying methods rather than applied system descriptions or summative evaluations. However, a number of those who apply to review are concerned with the biological end of the spectrum – I would love to get more people on the clinical end applying – and many of the volunteers are not really familiar with informatics at all.
Having said that, I approve around 50 percent of the people who fill in the form. I simply send their details to the Journal Manager who adds them to our reviewer database and informs the individual. Whether they ever get asked by the editor to review is another matter and that is quite difficult to track.
The form is not the only avenue we use to find new reviewers; all first authors of accepted papers are automatically added to our reviewer database, for example.
Professor Edward H Shortliffe has been Editor-in-Chief of JBI since 2001, when the former Computers and Biomedical Research was reconstituted as the Journal of Biomedical Informatics. A physician with a PhD in computer science, he has research expertise in computer-based clinical decision support, knowledge representation, clinical systems, and the role of the internet in health care. He is Professor of Biomedical Informatics and Senior Advisor to the Executive Vice Provost for the College of Health Solutions at Arizona State University. Based for much of the year in New York City, he is a Scholar in Residence at the New York Academy of Medicine and holds academic positions as Adjunct Professor of Biomedical Informatics at Columbia University's College of Physicians and Surgeons and as Adjunct Professor of Health Policy and Research (Health Informatics) at Weill Cornell Medical College.
In his role as Editor-in-Chief of the Journal of Economic Behavior & Organization, Professor William S Neilson relies heavily on the information contained in Elsevier’s Scopus – the largest abstract and citation database of peer-reviewed literature. In this article he explains why he finds it so useful.
Editors have many responsibilities, but the greatest one may well be separating those manuscripts having enough interest to warrant publication from those that do not. This becomes especially hard when one edits a journal with broad scope, too broad for any single person or small team to master. Nevertheless, publishers give editors this responsibility, and we must find a way to perform this sorting process. How do we judge an article’s interest when we are unfamiliar with the literature?
I edit the Journal of Economic Behavior & Organization (JEBO), which fits into the above categorization of a broad journal. JEBO is an economics journal that receives papers from across the spectrum of economic specializations as well as from finance and management. The manuscripts use all the different tools at an economist’s disposal from mathematical or computational to empirical or experimental. Individual researchers typically specialize in a topic-technique pair, so no single person could have expertise in everything our authors do. Furthermore, JEBO is a large journal; each year we receive more than 1,000 new submissions and publish more than 150 articles.
Every new submission begins with a step in which I determine whether the paper is too narrow to pursue or potentially interesting enough to send on for review. Those in the former category receive desk rejections. While I have accumulated expertise over many of the topics covered by the manuscripts, I cannot claim it in all of the narrow literatures in which these papers hope to contribute. Scopus allows me to compensate for this, and I use it almost daily.
Elsevier’s Scopus database does many things, but for my editorial duties the citation tracking feature helps the most. For any article in the 21,000 titles indexed by Scopus, one can readily find every other article that cites it, and from there find the references for – and citations to – those articles, the references for – and citations to – those articles, and so on. Many of us are familiar with the ISI Web of Science, which shares these features. For me, the primary differences are how quickly new information appears in the database (Scopus is updated daily) and how easy the interface is to use. Scopus has the advantage in both, and both matter for how I use it.
When I receive a manuscript addressing a literature I do not know well, I look for a reference to use as the key to my search. I look for one that was published 3-5 years ago in a prominent journal and which every subsequent paper on that topic should have cited. Let’s refer to this article as “Eve.” I search for Eve in Scopus, either using the article title or an author’s name. When I find it, the number of citations to the article appears prominently on the right of the screen. If that number is too small, I know that the literature addressed by my manuscript is too narrow to warrant further review, and I desk-reject it. If the number of citations is large, I do more digging.
Example: Let’s pretend the submitted article has referenced “The evolutionary basis of risky adolescent behavior: Implications for science, policy, and practice” (1). A search for this article reveals that it has 47 citations.
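The triage step can be expressed as a simple decision rule. The sketch below is only an illustration: the numeric threshold is a hypothetical placeholder, since no fixed cut-off is named, and the real judgment is of course made by the editor in Scopus rather than by a script.

```python
# A minimal sketch of the desk-reject heuristic described above.
# The threshold of 25 is an illustrative assumption, not a stated policy.

def triage(citations_to_key_reference: int, threshold: int = 25) -> str:
    """Desk-reject if the literature around the key reference ('Eve')
    looks too narrow; otherwise dig further into the citing papers."""
    if citations_to_key_reference < threshold:
        return "desk-reject"
    return "dig further"

# The example article has 47 citations, so it passes the bar.
decision = triage(47)
```

In practice the "dig further" branch corresponds to the manual steps described next: scanning the citing authors and the journals in which the citing papers appeared.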
Clicking on the number leads to a screen listing all of the published articles citing Eve. This screen has proven extraordinarily valuable, and I use the information in a number of ways. First, a quick scan reveals the number of distinct authors citing Eve, which provides an idea of the number of researchers who might find the submitted manuscript worth reading. The same quick scan tells me how well these citing papers have published – this is measured by the titles of the journals in which they have appeared.
Second, I can trace the trajectory of the literature growing from Eve to get an idea of whether the submitted manuscript is at the frontier or just fills gaps in the literature. Scopus makes it simple to retrieve abstracts or get to the publisher’s page for the article, so I can find any similarities between already-published papers and the submitted manuscript. As an added bonus, this step would uncover any plagiarism of an already-published article.
If these first two steps identify the submission as potentially contributing to the frontier of an active literature, I then assign the paper to a co-editor or an associate editor. Even though the editorial board has broad coverage, sometimes submissions fall into literatures outside of the expertise of everyone on the board. The role of Editor-in-Chief makes me the residual claimant for these submissions, so I have to find reviewers. Because Scopus leads me to every paper citing Eve that has been published in the past few years, and all of these authors are familiar with the literature to which the submission hopes to contribute, this constitutes my reviewer pool. It means I can successfully find reviewers when I know neither the literature nor any of its participants.
My primary day-to-day task for Scopus is to learn more about submission-specific literatures, but the database proves useful for ex post assessment of the editorial team. Scopus makes it possible to search by journal title, and it then reports the number of articles published by year. These articles can be sorted by the number of citations they have generated, making it easy to find out which articles are becoming important and which are not. This in turn tells us where we can focus our future efforts. I can also perform the same exercise for aspirational journals to see what topics are becoming hot in the literature. This analysis can inform a number of decisions, from when and whom to add to the editorial board to what special issues to pursue.
In short, Scopus informs every phase of the editorial process. I would not want to do this job without it, and I intend to continue using it throughout my career.
In March this year, Scopus launched the Cited References Expansion project to expand the database's citation data back to 1970. The project is scheduled to run until 2016, when an estimated 8 million articles will have been re-indexed to include cited references. The first of the new content will be searchable and viewable by Scopus users as early as the end of this year. Correct citation data for the archival content will make it possible to measure its impact, perform historical trend analysis and conduct more accurate evaluations of authors who have published prior to 1996. There will also be higher h-index rankings for those senior researchers – many of whom have subsequently become key influencers and decision makers – who published most prolifically before the mid-1990s.
(1) Ellis, Bruce J., Marco Del Giudice, Thomas J. Dishion, Aurelio José Figueredo, Peter Gray, Vladas Griskevicius, Patricia H. Hawley, W. Jake Jacobs, Jenee James, Anthony A. Volk, and David Sloan Wilson. "The Evolutionary Basis of Risky Adolescent Behavior: Implications for Science, Policy, and Practice." Developmental Psychology 48.3 (2012): 598-623. Scopus. Web. 14 Aug. 2014.
William Neilson is a Professor of Economics and holder of the J. Fred Holly Chair at the University of Tennessee, Knoxville, USA, where he has taught for eight years and serves as department head. Prior to that, he taught for 18 years at Texas A&M University. He has been Editor-in-Chief of the Journal of Economic Behavior & Organization since January, 2011, and served as Editor-in-Chief of Economic Inquiry from 1997-2001. He is an economic theorist specializing in decision theory, game theory, and behavioral economics.
10 Jul 2014
Improvements to the Find Reviewers tool in EES have simplified the process of searching for potential referees. Find out more…
Egbert van Wezenbeek | Director Publication Process Development, Elsevier
The Find Reviewers application in the Elsevier Editorial System (EES) was built to help you locate appropriate reviewers.
Since its launch in 2010, editor feedback has been positive but we know that you have been keen to see better integration of the tool with EES.
We are pleased to inform you that following a recent update to EES this is now the case, resulting in a new workflow, details of which are outlined below.
In addition to the improved workflow outlined above, visibility of the Find Reviewers tool on the 'Search for Reviewers' page has been improved by adding a logo and hyperlinking the entire phrase that follows.
More detailed information on how to use the Find Reviewers tool can be found on our support pages and your publisher can also help with any queries you might have.
Researcher Mounir Adjrad dwells on why constructive reviews are so important to the peer-review process.
Mounir Adjrad is currently a researcher within the Engineering and Design department at London South Bank University (LSBU). His current research interests are Ultra Wideband (UWB) technology exploitation for biomedical engineering and communication applications. He has a multidisciplinary research experience in industry and academic institutions working on topics such as Global Navigation Satellite Systems (GNSS), satellite engineering, radar and transport engineering applications.
Reviewers need to remember that they are on a mission: evaluating others’ work. Simply stated, to evaluate is to examine the worth of the authors’ efforts. Besides recommending whether or not the work is acceptable for publication, the review should state whether the author is encouraged to continue the effort, list areas for improvement and explain how those improvements could be implemented. The review report needs to be positive, in a broad sense, and the end message to the author should be as far from savage as possible.
On that latter point, a couple of years ago while working in industry, I submitted an article to a specialized technology magazine. The article presented the validation results of a developed product with quantified efficiency figures. The reviews referred to the presented figures as being the result of “black magic” and the product was called “snake oil”. It was obvious that the reviewers did not believe the reported figures, but the review fell short of explaining the rational reasons behind this sceptical attitude. The only explanation the reviewers could offer was that they had dealt with similar claims from other companies in the past, and when those results were scrutinised it turned out that the figures had been falsified.
Thankfully, the review process was an open one, which allowed me to identify who the reviewers were, and perhaps this was the only useful information I could take back from those reviews. It turned out that these reviewers were involved in consultancy work for companies that were already marketing a similar product (function-wise). The article was published following a reply to the editor highlighting the issue with the choice of the reviewers and a request for a second review of the article with an impartial reviewing panel.
The issue of hostility in reviews was addressed in the column On Civility in Reviewing by Robert J. Sternberg in Observer, 2002. He rightly pointed out that hostile and savage reviews violate the fundamental ethical principle of the Golden Rule: to act toward others as we would have them act toward us. This brings me to conclude that any hostility expressed in any review, regardless of the field of research, is totally unacceptable.
But how do we prevent the occurrence of negative reviews? I believe this is a joint effort between editors and reviewers: the editor needs to address the question of whether the reviewers have subject matter expertise and experience qualifying them to thoughtfully evaluate the work; whereas the reviewers need to honestly assess whether they can provide a fair and unbiased review of the work based solely on its merits. Specifically, for the reviewers, I believe a key element in avoiding the hostility trap is to assess, at an early stage of the review process, whether they will be able to evaluate the work with an open mind, and they should decline the review if they feel negatively predisposed to the submitted work.
In conclusion, across academic generations and disciplines, there is a cycle of negative reviews that needs to be broken to make peer review a healthy process.
P.S. The “snake oil” product and the company behind it are both enjoying great success.
22 Apr 2014
Discover the lessons learnt during a reviewer experiment on the Journal of Public Economics.
Together with two colleagues - Emmanuel Saez, E. Morris Cox Professor of Economics and Director of the Center for Equitable Growth at the University of California Berkeley, and László Sándor, a PhD candidate in Economics at Harvard University - Professor Raj Chetty ran an experiment evaluating the effects of cash incentives, social incentives, and nudges on the behavior of referees at the Journal of Public Economics. The interesting, and sometimes surprising, results of their study will be published this summer in the Journal of Economic Perspectives. Here Chetty talks about the rationale behind the trial and the lessons learnt.
Since I took on the role of Editor-in-Chief, I have been interested in how we can better serve authors and improve the review times on our journal. And, as an author myself, I would love to see my papers reviewed more quickly.
To design the trial, we began by reading the literature and thinking about what might prove effective in motivating reviewers to submit high-quality reports more quickly. We formulated three interventions.
First, naturally, as economists, we thought that paying people might prove an incentive. But psychologists suggest that payment can crowd out “intrinsic motivation” and actually lead to worse performance. So these contradictory hypotheses seemed very natural to test.
Second, the psychology literature suggests that simple nudges and reminders can affect people’s behavior, so we decided to try changing the deadline by which reports were due.
Third, sociologists have suggested that social incentives – namely, how people are perceived by their peers – may be a key determinant of behavior.
To test these hypotheses, we randomly assigned referees to four groups: a control group with the journal’s standard review deadline; a group given a deadline two weeks shorter; a group offered a $100 cash incentive for submitting a report on time; and a group subject to a social incentive, who were told their turnaround times would be made visible.
In total, the experiment included 1,500 referees who submitted nearly 2,500 reports from February 2010 to October 2011.
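As a rough illustration of the design (not the authors' actual procedure), random assignment of referees to four experimental arms might be sketched as follows. The group labels, referee IDs, and the simple uniform draw are simplifying assumptions; a real trial would typically stratify the randomization.

```python
# Illustrative sketch: assign each referee to one of four experimental arms.
# Labels and IDs are hypothetical; this is not the study's actual code.
import random

GROUPS = ["control", "short-deadline", "cash-incentive", "social-incentive"]

def assign(referees, seed=2010):
    rng = random.Random(seed)  # a fixed seed makes the assignment reproducible
    return {referee: rng.choice(GROUPS) for referee in referees}

# Roughly the scale of the experiment: 1,500 referees.
assignment = assign([f"referee-{i}" for i in range(1500)])
```

Keeping the assignment deterministic via a seed is a common convenience for auditing an experiment, though the essential property is simply that each referee's arm is chosen independently of their characteristics.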
I should mention that all the interventions we tested have been used by other journals, but until now there has never really been a clear examination of which factors work best.
First, a change in timeframe is very effective; if you shorten the deadline by two weeks you receive reviews two weeks earlier on average. In fact, we noticed that whatever timeframe you give, most people submit their review just prior to the deadline. Editors might worry that if you ask reviewers to review more quickly, they submit lower-quality reviews. However, we found no significant changes in the quality of referee reports, as judged, for instance, by the editor’s propensity to follow the advice in the report.
Second, if a journal has the money available, cash incentives also work very well. The $100 payment reduced review times by about 10 days on average. Hence, it is clear that the “crowd-out of intrinsic motivation” that psychologists have been concerned about is actually not a serious concern in this context.
Third, the social incentive was less effective but still surprisingly successful in reducing review times, particularly with tenured professors, who were less sensitive to cash and deadlines. This confirms that people care about how they are perceived and suggests that gentle, personalized reminders from editors could be very effective in improving referee performance.
Overall, my biggest takeaway was that, as editors, we shouldn’t believe that the performance of our journals is something we can’t change. We can greatly improve the quality of our journals’ review process through simple policy changes and active editorial management.
Personally, I was surprised by how effective the shorter deadline was. There was no consequence for reviewers who didn’t meet it, yet they were still very receptive. The advantage for journals is that this approach is cost-free. I would probably be less responsive to the cash incentive, so I was also quite surprised by how successful that proved to be. However, if I have to do something anyway and by doing it today I get $100 then perhaps it’s not so surprising it has some effect.
Going forward, we would love to see other journals adopting some of these policies. And for reviewers, I would suggest that often what is useful to editors, especially if you are going to recommend rejection, is a short, clear - and on time - report, rather than something that is more detailed which takes longer to draft. By focusing on the big picture, you not only save yourself time but better serve editors and the author community too.
Chetty's research combines empirical evidence and economic theory to help design more effective government policies. His work on tax policy, unemployment insurance, and education has been widely cited in media outlets and Congressional testimony.
Chetty was recently awarded a MacArthur "Genius" Fellowship and the John Bates Clark medal, given by the American Economic Association to the best American economist under age 40. He received his PhD from Harvard in 2003 at the age of 23 and is one of the youngest tenured professors in the university's history.
New reviewer and revision deadlines have been established for a number of Elsevier journals. Find out why.
Arnout Jacobs | Business Development Director, Elsevier
Sometimes, even simple things can make a big difference. An Elsevier project designed to reassess reviewer and revision times is large-scale, involving 1,100 journals. Yet we are focused on one small aspect: optimizing the deadlines we give authors and reviewers. The idea was born at Cell Press, where we decided to look at what would happen if a journal set its review deadline a few days earlier. Results were encouraging: reviews did indeed come in earlier, and there was no difference in reviewer response rates.
Recently, via a controlled experiment on the Journal of Public Economics, we also received confirmation from the scientific community that shorter review deadlines can work. You can find out more in the Short Communication How small changes can influence reviewer behavior.
Building on the lessons learnt at Cell Press, we made an inventory of deadlines across all of our titles. Some journals did not mention any. We also found titles where contributions were routinely received well before the stated deadline. And then there were journals that were still using timeframes from the days when manuscripts were physically sent around the world and back, with deadlines extending up to a year! For journals publishing on arctic geology in the 1980s, this may well have been understandable, but with today’s instant communication, a new policy was due.
Speed has always been important, but it matters even more in today’s publishing environment. In the past, even if an article was ready, it may still have had to wait for backlogs to clear and issues to be completed. Today, we aim to publish articles online as quickly as possible after they are accepted. So a day saved in peer review means a day earlier online!
So, how did we set our new deadlines? Our first principle was not to disrupt existing practices. Some fields are slower than others, and usually there is a good reason for this. Other journals are already very fast, and there is little gain in asking contributors to submit within 4 days instead of the existing 5. So we looked at actual reviewing and author revision times, and at stated deadlines, and we used these as a starting point. The biggest gains were to be found in author revision times, where articles can sometimes linger for months. We then came up with proposed new deadlines, and consulted with you as editors. As a result, new reviewer and revision deadlines were implemented at the beginning of the year for around 600 titles.
It’s still early days, so we do not yet know what the results will be. We are keeping a close eye on measurable items, such as response rates, compliance, and submission-to-acceptance times. But qualitative feedback is equally important. Ultimately, we hope that this initiative will speed up the publication process, while keeping all participants satisfied.
3 Mar 2014
We know that finding, retaining and rewarding reviewers are long-term pain points for editors. Scientists are increasingly busy and often find it difficult to free up time to do reviews. At the same time, new approaches to peer review are being developed, for example, working in a more open and collaborative manner or making use of the latest technology. That makes these times challenging, as well as exciting, and this is reflected in the enthusiasm and energy with which new experiments are being launched within our organization.
My team is behind a number of these peer-review pilots and our decision to carry them out in an experimental setting, i.e. test the concepts with a limited number of journals, is deliberate. It means we can learn quickly and be flexible. If a pilot proves unsuccessful, we can swiftly shift our attention to other areas. However, if the results are encouraging, we can upscale and roll it out to more journal titles. Below I outline a few of the pilots currently taking place.
This experiment addresses reviewers’ need to be better recognized for their work. Reviewers tell us they like reviewing manuscripts; they feel it is an important service to their communities and it keeps them abreast of the latest developments. At the same time, we know they often feel they are not fully recognized for that work.
With this in mind, Elsevier set up a Peer Review Challenge in 2012. We asked entrants to submit an original idea that would significantly improve or add to the current peer-review process. The winner was Simon Gosling, a Lecturer in Climate Change and Hydrology at The University of Nottingham. He proposed the creation of a ‘reviewer badges and rewards scheme’ as an incentive for reviewers. Elsevier has since been working with him to implement his vision and, in early February, we began piloting a digital badge system with a selection of journals in our Energy portfolio. Via Mozilla OpenBadges, reviewers are issued with badges that they can display on their Twitter, Facebook and Google+ pages.
A second phase of the pilot is due to be launched this month - a ‘reviewer recognition’ platform for approximately 40 journals. Upon completion of a review for one of these titles, reviewers are provided with a link to a personal page on the platform that displays their reviewer activity. Based on their contributions to the journal, they are appointed statuses – for example, ‘recognized reviewer’ for those completing one review within two years, and ‘outstanding reviewer’ for those that have completed the most reviews. They are also able to download certificates based on their achievements and discount vouchers. We hope the platform will make the important work of reviewers more visible and encourage them to engage with Elsevier journals. Following the pilot, our aim is to make the platform available to all Elsevier titles.
We are continuously looking at how we can increase the visibility of the contribution made by reviewers; in another pilot, the journal Agricultural and Forest Meteorology has been making its review reports accessible on ScienceDirect. We now want to extend the experiment to more journals and see if we can provide the reports with DOIs (Digital Object Identifiers). In this way, the reports will be better acknowledged as an essential part of the scientific literature.
As an editor, you may frequently be confronted with manuscripts that are out of scope or are simply not suitable for the journal; however, they still contain sound research. For some time now, we have been offering the complementary Article Transfer Service (ATS), which is currently active for more than 300 of our journals. ATS allows editors to recommend that authors transfer their submitted papers – and any accompanying reviews – to another Elsevier journal in the field, without the need to reformat them.
A new experiment with six Elsevier soil science journals aims to improve on this service. If participating editors decide not to accept a paper, they can now choose from two important options in Elsevier’s Editorial System (EES):
Gilles Jonker, Executive Publisher for soil science, explained: “The editors of these journals were confronted with a strong growth in submitted articles and found it increasingly difficult to find reviewers. To help address these issues, an agreement was reached to harmonize the editorial policies of the six journals, honor another editor’s decision to reject a paper, as well as give authors more autonomy in finding an alternative journal.”
Early pilot results show a good uptake by editors of the ‘decline’ decision option. Authors are also embracing the concept and are accepting transfers to journals within the cluster that better fit the scope of their articles. “Later this year we should be able to see whether this pilot study has indeed addressed reviewer fatigue and improved the quality of submitted articles,” said Jonker.
Last, but not least, Elsevier is exploring ways in which Mendeley can be used to improve the peer-review process. Mendeley, a London-based company that operates a global research management and collaboration platform, was acquired by Elsevier in April 2013. Researchers worldwide use Mendeley’s desktop and cloud-based tools to manage and annotate documents, create citations and bibliographies, collaborate on research projects and network with fellow academics. These advanced collaborative features could benefit the peer-review process. Manuscripts can be annotated online, and these annotations can be shared in private groups. Moreover, editors and reviewers can discuss manuscripts in discussion forums. We are curious to see whether peer review within this environment will streamline the peer-review process, increase its efficiency and, in the end, lead to a better manuscript review. As part of this experiment, papers will be brought within the Mendeley environment - naturally only with the consent of the reviewers and editors. This pilot began with a few titles earlier this year. If it proves successful we will look to make it more widely available.
If you have any comments or suggestions for new peer-review pilots, I would really like to hear from you. You can contact me at email@example.com
Dr. Joris van Rossum
DIRECTOR PUBLISHING INNOVATION
For the past 12 years, van Rossum has been involved in the launch and development of products and initiatives within Elsevier. From its inception he worked as a Product Manager on Scopus, the largest abstract and citation database of peer-reviewed literature, and he worked on Elsevier’s search engine for scientific information as Head of Scirus. Later, he developed the Elsevier WebShop, which offers support and services for authors at many stages of the publication workflow. In his current role, van Rossum is focused on testing and introducing important innovations in peer review. He holds a Master of Science in biology from the University of Amsterdam and a PhD in philosophy from VU University Amsterdam.
New peer-review system piloted by the journal Virology avoids the need to ‘start over’ with new reviews if paper is rejected from a high-impact journal.
Dr Michael Emerman | Editor-in-Chief of the Elsevier journal Virology
Established in 1954, Virology is one of the oldest journals in its field and publishes the results of basic research in all branches of virology. Dr Michael Emerman, who researches HIV replication at the Fred Hutchinson Cancer Research Center in Seattle, is a long-serving editor of Virology and took on the role of Editor-in-Chief in January. Since then, he has instigated some big changes, dramatically increasing submissions. Recent changes include the launch of a blog, Virology Highlights; bringing on board new editors; and the introduction of Streamline Reviews. Here he explains why he has high hopes that the latter will contribute to the journal’s success.
In January, Virology introduced a new program — Streamline Reviews — with the aim of capturing and publishing manuscripts that have been rejected by journals with high Impact Factors. The idea came from one of our editors who described the frustration of resubmitting a rejected manuscript from one high-impact journal to another because of the need to respond to a completely new set of reviewers.
The way Streamline Reviews works is simple. If an author has a manuscript that has been reviewed and rejected by a journal with an Impact Factor higher than 8 that publishes papers on the basic science of viruses (such as Cell Host & Microbe, Nature, PLOS Pathogens, Proceedings of the National Academy of Sciences and Science), they can send us the original reviews, their rebuttal and a revised manuscript. They should include these extra items as part of their cover letter.
We will then consider the manuscript based on the reviews and usually send the manuscript, reviews and response to one additional expert for an opinion. In theory, this should speed up the review process for these manuscripts — authors do not need to start over at the beginning, and it is easier for someone to give an opinion on the paper with reviews already to hand. This option works best for those manuscripts rejected for perceived reasons of impact, novelty or significance.
The program is still in its infancy. We have received a handful of Streamline Review submissions, but we believe more papers will be submitted this way once the initiative becomes better known. What has been interesting is the very positive feedback we have received from editorial board members and community members, many of whom have experienced the long process of resubmitting a very good manuscript that has just missed the mark at a high-impact journal. In fact, they wonder why Streamline Reviews is not already standard practice amongst journals.
As I mentioned, we have set our bar at an Impact Factor of higher than 8. We decided on that figure after identifying which of our competitor journals featured the kinds of papers we are interested in.
In practice, it has worked well so far. In one case, the additional expert reviewer had also reviewed the paper for the high-impact journal and recommended it be accepted right away since the authors had addressed all previous concerns. In other cases, we have asked for additional changes, but these mostly related to the way the paper had been written and didn’t require the author to carry out additional experiments.
While there was initially some concern that we would not know the identities of the reviewers for the high-impact journal, this has not proved a problem when it comes to evaluating the manuscripts.
Despite complaints, I think the peer-review system serves a wonderful purpose. The role of the editor is to weed out the poor reviews and to use the peer-review system to turn out better papers, and I have seen many papers over the years become vastly improved by reviews. I think that the Streamline Review process is a means to help good papers get published in a faster and more efficient manner without sacrificing any of the benefits of stringent peer review.
This article first appeared in Elsevier Connect.
26 Jun 2013
We know that finding and retaining good reviewers is one of the greatest challenges our editors face.
In March this year, we collaborated with editors on 32 journals to find a simple way to recognize the contributions of ‘top’ reviewers - those who have really gone that extra mile for a journal. The result was the Certificate of Excellence in Reviewing, featured in figure 1 below.
Editors from each of the participating journals nominated their 25 best reviewers. Those reviewers then received a personalized HTML email containing a link to a high-resolution PDF file of their certificate, suitable for printing. Each certificate was created using a unique PDF generation tool developed by Elsevier WebShop for the Top25 Hottest Articles and Certificate of Publication.
The response from reviewers was immediate and encouraging. One of the participating editors, Dr Sandra Shumway, co-Editor-in-Chief of the Journal of Experimental Marine Biology and Ecology (JEMBE) admitted: “I didn’t expect it to do anything. But I had a couple of people write back and say thank you.”
Comments from the reviewers who contacted Shumway included, “thank you… a really nice surprise!” and “just returned from a trip and was cleaning up email when I came across the Excellence in Reviewing Certificate. Nice to be appreciated. Many thanks!” Shumway added: “People need to publish and people need to review. My list included those who I knew had really come through for me; those who responded when I needed them for reviews. I believe the newer and younger reviewers will be attracted by this recognition.”
The journal Midwifery also participated in the project and Editor-in-Chief, Professor Debra Bick, is keen to continue using the certificate. She said: “This initiative worked very well with our reviewers, many of whom contacted me or Sarah (Sarah Davies, her Elsevier Publisher) to say how pleased they were to be identified in this way. We would certainly do it again, as the journal feedback is also an excellent way for our reviewers to reflect in their CVs how they are contributing to research and scholarship.”
Across the board, the response to this initiative was positive, in both quantitative and qualitative feedback. The data show that more than 65% of the email recipients went on to download their certificate.
Based on these early results, we plan to turn the study into an annual initiative available to every journal, beginning in 2014. The input required by editors will be minimal. As the time approaches for the certificates to be distributed, we will approach you to ask for your list of ‘exceptional’ reviewers – those who have really excelled that year. The rest of the process will be administered by your journal’s Marketing Manager. While we recommend that you choose 25 reviewers, that number will remain flexible. Shumway said: “I found it hard to create a shortlist of 25 reviewers, but this number seems suitable. When it’s ready to go again, I’ll be ready with my list.”
Philippe Terheggen, Executive Vice President of Science, Technology and Medicine Journals, has been a strong supporter of the project within Elsevier. He explained: “I’m delighted that this new initiative has landed so well with both our reviewers and the editors that participated. It shows how small steps to provide formal recognition can be a valuable tool for retaining these sought-after people. Perhaps there are some framed Certificates of Excellence in Reviewing hanging on a few walls around the world right now.”
Elsevier understands that even one review is a great contribution. To this end, there is also an annual ‘Thank you Reviewers!’ initiative. Run at the beginning of each year for all participating journals, a special announcement is placed on the journal homepages, together with full page print adverts in the journals, thanking the reviewers for their valued contributions. In addition, the initiative links to the reviewer benefits page on Elsevier.com which reminds them we provide free Scopus and ScienceDirect access and outlines the other benefits Elsevier offers.
These programs are just two of the initiatives we are exploring to help editors find or retain reviewers. Find out more about some of these in Exploring Improvements to the Peer-Review System featured in issue 36 of Editors’ Update.
Hot off the press!
As of this week, Elsevier reviewers can feature the journal for which they review in their e-mail signature or on their personal webpage using a new badge we have created. The journal-specific badge can be claimed within seconds via an online tool at http://www.elsevier.com/reviewerbadge
What do you think about these initiatives? Do you have suggestions for your colleagues on how to attract and retain reviewers? Please take a few moments to post a comment below.
Ursula van Dijk
HEAD OF MARKETING COMMUNICATIONS
Ursula has more than 20 years of experience in Science, Technology and Medicine journal marketing. She is based in Amsterdam and leads a team of marketers within the Physical, Formal and Applied Sciences area with a focus on supporting publishing initiatives by communicating and interacting with our editors, authors and reviewers.
8 May 2013
Editors of the journal Cortex are experimenting with an innovative new approach which will see the peer-review process split into two stages. Find out more…
Dr Chris Chambers and Professor Sergio Della Sala | Associate Editor and Editor-in-Chief of the Elsevier journal Cortex
On May 1st, Cortex launched a new innovation in scientific publishing called a Registered Report. Unlike conventional publishing models, Registered Reports split the review process into two stages. Initially, experimental methods and proposed analyses are pre-registered and reviewed before data are collected. Then, if peer reviews are favourable, we offer authors “in-principle acceptance” of their paper. This guarantees publication of their future results providing that they adhere precisely to their registered protocol. Once their experiment is complete, authors then resubmit their full manuscript for final consideration.
Cortex is an international journal devoted to the study of cognition and of the relationship between the nervous system and mental processes.
Why should we want to review papers before data collection? The reason is simple: because the editorial process is too easily biased by the appearance of data. Rather than valuing innovative hypotheses or careful procedures, too often we find ourselves applauding impressive results or being bored by non-significant effects. For most journals, issues such as statistical power and technical rigor are outshone by novelty and originality of findings.
By venerating findings that are eye-catching, we incentivize the outcome of science over the process itself, forcing aside other vital issues. One of these sacrificial lambs is statistical power – the likelihood of detecting a genuine effect in a sample of data. Many studies in neuroscience are statistically underpowered, so – driven by the need to publish – scientists inevitably mine their underpowered datasets for statistically significant results. Many will p-hack, cherry pick, and even reinvent study hypotheses to ‘predict’ unexpected findings.
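To make the notion of statistical power concrete, here is a minimal, illustrative simulation (not from the article, and using a simple ±1.96 criterion rather than exact t critical values): it estimates the power of a two-sample comparison by counting how often a simulated experiment with a genuine effect reaches significance.

```python
import random
import statistics


def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se


def estimated_power(n, effect_size, trials=2000, threshold=1.96):
    """Fraction of simulated experiments whose |t| clears the threshold.

    Samples are drawn from normal distributions one effect_size apart
    (in standard-deviation units); threshold 1.96 approximates the
    two-sided 5% criterion for reasonably large n.
    """
    hits = 0
    for _ in range(trials):
        a = [random.gauss(effect_size, 1.0) for _ in range(n)]
        b = [random.gauss(0.0, 1.0) for _ in range(n)]
        if abs(two_sample_t(a, b)) > threshold:
            hits += 1
    return hits / trials


random.seed(1)
# A "medium" effect (d = 0.5) with 20 subjects per group: power well below 80%
print(round(estimated_power(20, 0.5), 2))
# The same effect with 64 per group: close to the conventional 80% target
print(round(estimated_power(64, 0.5), 2))
```

An underpowered study will miss a real effect most of the time, which is exactly why mining such datasets for whatever does reach significance is so misleading.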
Such practices cause predictable phenomena in the literature, such as poor repeatability of results, a prevalence of studies that support stated hypotheses, and a preponderance of articles in which obtained p values fall just below the significance threshold. Furthermore, an anonymous survey recently showed that these behaviours are not the actions of a naughty minority – in psychology and neuroscience they are the norm. We ourselves are guilty.
Registered Reports will help minimise these practices by making the outcome of experiments almost irrelevant in reaching editorial decisions. Cortex is the first journal to adopt this approach, but our underlying philosophy is as old as the scientific method itself: If our aim is to advance knowledge then editorial decisions must be based on the strength of the experimental design and the likelihood of a study revealing definitive results – and never on how the results themselves appeared.
We know that other journals are watching Cortex to gauge the success of Registered Reports. Will the format be popular with authors? Will peer reviewers be engaged and motivated? Will the published articles be influential? We have good reasons to be optimistic. In the lead-up to Registered Reports, many scientists have told us that they look forward to letting go of the toxic incentives that drive questionable research practices. And our strict peer review will ensure that our published findings are among the most definitive in cognitive neuroscience.
Since the launch of the EES user consolidation project in December last year, thousands of researchers have responded. Find out more…
Edward O'Breen | Marketing and Brand Manager, Elsevier
In a recent post on the Short Communications Board, our Vice President of Corporate Relations, Tom Reller, discussed the hacking of EES, our online platform for managing the submission and peer-review process.
He explained that in late October last year, one of the editors of Optics & Laser Technology (JOLT) alerted our EES team that reviewers for two of his assigned submissions had been invited but not by him. Our team immediately launched an investigation and discovered that someone had been able to retrieve the EES username and password information for this editor.
Tom went on to outline the various steps we are taking to reduce these risks, noting that one of these innovations – user profile consolidation – had become available to all EES users on December 3, 2012.
Consolidation of user profiles was a project the EES team was working on prior to the hacking. A regular audit of EES had identified the many advantages that enabling researchers to use a single username and password across all EES journal sites would provide. Not only would it streamline their workflow, it would increase security levels too.
Since December 3, about 350,000 users have consolidated more than 950,000 individual EES accounts into about 350,000 consolidated user profiles.
Alongside the user profile consolidation, we have also introduced enhancements in security and user data protection. EES users can now reset their passwords via a self-chosen security question. They will receive a confirmation by email and only the user will have access to the password and security question. This makes the end user responsible for his/her own data and helps to avoid abuse of EES accounts.
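As a generic illustration of how such a reset flow can work (this is a toy sketch, not EES’s actual implementation), the account stores only salted hashes of the password and the security answer, and a new password can be set only by whoever supplies the matching answer:

```python
import hashlib
import hmac
import os


def _digest(secret, salt):
    # Salted key derivation so neither the password nor the security
    # answer is ever stored in the clear
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)


class Account:
    """Toy account with a password reset gated by a security question."""

    def __init__(self, password, question, answer):
        self.question = question
        self._salt = os.urandom(16)
        self._pw = _digest(password, self._salt)
        # Normalize the answer so capitalization doesn't lock users out
        self._ans = _digest(answer.strip().lower(), self._salt)

    def check_password(self, password):
        return hmac.compare_digest(self._pw, _digest(password, self._salt))

    def reset_password(self, answer, new_password):
        # Only the holder of the security answer can set a new password
        if not hmac.compare_digest(self._ans, _digest(answer.strip().lower(), self._salt)):
            return False
        self._pw = _digest(new_password, self._salt)
        return True
```

A real system would add rate limiting and the email confirmation step described above; the point here is simply that control of the reset rests with the user, not with anyone who can guess an email address.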
On December 19 last year, we surveyed those EES users who had consolidated their accounts since the December 3 launch. More than 400 researchers provided their feedback.
For a few days following the December 3 launch, EES servers were slow to respond due to the large number of users consolidating their profiles. We appreciated this was very frustrating for users and worked on improving the situation. Fortunately, very few users still experience this problem, and we have seen calls to our Elsevier Customer Services team fall from 1.6% to 0.2%.
Someone found a way to infiltrate the Elsevier Editorial System; Tom Reller, Vice President of Global Corporate Relations at Elsevier, explains what happened and what we’ve done.
Tom Reller | Vice President of Global Corporate Relations, Elsevier
This article originally appeared in Elsevier Connect
Yesterday, Ivan Oransky of Retraction Watch reported that Elsevier Editorial System (EES), our online platform for managing the submission and peer-review process, had been hacked in November. His article, “Elsevier editorial system hacked, reviews faked, 11 retractions follow,” is an accurate account of what happened and a good example of the positive role Retraction Watch can play in monitoring the scientific literature. The Retraction Notices posted by the Elsevier journals themselves provided details about the falsified reports:
A referee’s report on which the editorial decision was made was found to be falsified. The referee’s report was submitted under the name of an established scientist who was not aware of the paper or the report, via a fictitious EES account. Because of the submission of a fake, but well-written and positive referee’s report, the Editor was misled into accepting the paper based upon the positive advice of what he assumed was a well-known expert in the field. This represents a clear violation of the fundamentals of the peer-review process, our publishing policies, and publishing ethics standards. The authors of this paper have been offered the option to re-submit their paper for legitimate peer review.
What happened here is that in late October, one of the editors of Optics & Laser Technology (JOLT) alerted our EES team that reviewers for two of his assigned submissions had been invited but not by him. Our team immediately launched an investigation and discovered that someone had been able to retrieve the EES username and password information for this editor.
Fake reviews are becoming an increasingly challenging issue for publishers, but one we’re prepared to confront. We participated in a story in The Chronicle of Higher Education back in September, also stemming from someone creating fake reviewer accounts. In that case, the editors noticed the reviews were coming in from generic email addresses (e.g., Yahoo or Gmail) rather than institutional emails. Here, it was clear the author himself had created the fake reviewer accounts.
We regularly conduct an audit of EES tools and processes to determine where improvements can be made. The major recommendations from the most recent audit prompted a security change that was introduced: User Profile Consolidation. Consolidated profiles in EES are protected from the malicious use that occurred in this scenario because the registered user has total control over the personal information in the user profile. More information about the benefits of User Profile Consolidation can be found on this Profile Consolidation FAQ.
In July, we ran a pilot to make user profile consolidation in EES available to almost 1,000 “very active” users. The first pilot was successful, with 90 percent of these pilot users consolidating approximately 4,000 entitlements. Pilot users were surveyed for feedback on the process, including the level of effort and the provision of help and support. This pilot ran for 10 weeks, and the process itself, the supporting documentation and the communication were improved prior to introducing a second pilot on October 10. This second pilot introduced user profile consolidation for 16,500 additional users and has also proven to be very successful.
After the successful pilots, user profile consolidation became available to all users on December 3. Elsevier encourages all EES users to complete this process as soon as possible; we’ve already seen more than 100,000 unique users consolidate their accounts. In the coming weeks, we will proactively support larger numbers of frequent users through this process as necessary.
In addition to User Profile Consolidation, we have implemented other changes that were recommended by Elsevier’s internal Security and Data Protection team, not all of which would be wise for us to discuss publicly. It has also been suggested that the new ORCID program has the potential to reduce this type of fraud.
The challenge for us is not so different from that of other companies: finding the right balance between security control and customer ease of use. One result of this is that editors may have to do more to keep their accounts safe — much like people have to do more to access their online bank accounts — though clearly, there are differences here. Another important aspect of fraud detection in academic publishing is that no matter how strong we make protocols and controls, there is always going to be a human element – a role for editors and publishers to flag when something looks out of line.
Scientific fraud and misconduct are a growing concern in the scientific community, and Elsevier devotes significant resources to confronting them. That includes an information security team that is acutely aware of the risks and vulnerabilities of any online system. The reality today is that hacking and spoofing can and will occur, though here we believe we acted quickly, the impact is minimal and we have taken the necessary steps to eliminate the threat posed, at least through this method.
We’ll be paying close attention to the discussion surrounding this incident and will try to address any questions that arise.
"This Challenge has shown that the scientific community is keen to publicly and systematically acknowledge reviewers’ work..." Philippe Terheggen, Senior Vice President, Physical Sciences II
Since the 1660s, peer review has proved an essential dividing line for judging the difference between science and speculation.
Over the years, however, pressure on the system has continued to grow as the global academic community expands and manuscript submissions rise.
We know that it can be tough for you as Editors to not only find appropriate reviewers willing to review for your journal, but to motivate them to keep reviewing, time and time again. Together with the research community, we are keen to explore ways to ease that pressure and are currently working on a number of peer-review pilots and innovations. One avenue we recently explored was a global competition seeking researchers’ thoughts on the future of peer review. This was inspired by the success of similar competitions such as the Elsevier Grand Challenge and the Executable Paper Grand Challenge.
In an issue of Reviewers’ Update earlier this year, we threw down the gauntlet to readers, asking them to submit ideas that could significantly add to the current peer-review system. We also invited entries that explored how Publishers and Editors can help early career researchers become reviewers, or how reviewers can be recognized by either their institutes or Publishers.
More than 800 readers took up the challenge and entries embraced all aspects of peer review, from improvements to the actual ‘mechanics’ of the process to how to reward reviewers. It was interesting to note that a great many entries detailed suggestions for rewarding reviewers in ways that can be noted on CVs. In addition, there was an intriguing dichotomy between entries firmly in favour of extending double-blind peer-review systems and those that advocated completely open peer review. The judges chose a list of finalists, whose ideas were published on the Peer Review Challenge website with an invitation to the academic community to post their comments. More than 300 of you responded and those comments were taken into account by the judges when making their final decision. Many teleconference calls and emails later, the three winners were chosen and their winning entries can be found below.
Philippe Terheggen, Senior Vice President Physical Sciences II, sat on the judging panel, and was impressed by the quality of the entries. “Thank you to everyone who took the time to enter, or provide us with their feedback. We were really pleased by the number of entries, as well as the thought and initiative invested in them. The next stage is to work with our winners to develop their ideas. We need to determine whether they can be integrated into existing systems, like the peer review annotation idea, or develop the framework necessary to manage and deliver the reviewer points system in a logical way.”
He added: “This Challenge has shown that the scientific community is keen to publicly and systematically acknowledge reviewers’ work, as well as to utilize platforms that make reviewing easier and more transparent for all players. It is our task as Publishers to work with the community, and your task as Editors to make sure that these needs are met.”
If you would like to get involved in any of the pilots mentioned below, please do let me know at firstname.lastname@example.org
Elsevier Reviewer Badges and Rewards
Simon Gosling, Lecturer in Physical Geography, Nottingham University, UK
Simon suggested introducing a standardized way to recognize the cumulative effort of a particular reviewer. He explains: “Peer reviews are an important service to the academic community and fundamental to academic publishing. I entered the Challenge because I feel strongly that reviewers should be recognized and rewarded in some way for the time and effort they invest – often on top of a very busy schedule – in preparing article reviews for journal Editors. I see the future of peer review as a positive feedback cycle whereby reviewers (especially early career researchers) – encouraged by an opportunity to enhance their reputation as reliable and good reviewers and to receive journal and/or book discounts – prepare high-quality, helpful reviews that in turn make the job of journal Editors more straightforward. This way, authors, Editors and reviewers all benefit from the peer-review process. To date, reviewers have received little benefit, let alone anything tangible. I envisage the acknowledgment of reviewers’ time and efforts through a widely recognized, tangible accrediting system that would be well known and citeable on a CV, which I have termed ‘Elsevier Reviewer Badges’.”
Visit the Peer Review Challenge website for further details of Simon’s winning entry.
Top Reviewer Incentives
Michael Muthukrishna, Graduate student, Department of Psychology, University of British Columbia, Canada
Michael’s idea also concentrates on reviewer rewards and develops the idea of a public points system for reviewers, inspired by websites such as Reddit and Slashdot. Michael elaborates: “As a researcher and a technologist, I was thrilled to hear about Elsevier’s Peer Review Challenge. Technology affords new and often better ways of approaching old problems. The peer-review process consists of established and largely successful practices that support good science and I am cognizant that caution is required when tampering with it. Nevertheless, I see the peer-review process gradually taking advantage of the advancements tested in public, peer-reviewed forums, such as Reddit, Slashdot.org, StackOverflow, Amazon, and many open source software projects. The successful processes in these domains have converged on solutions supported by psychological research, including the power of reputation, which I focused on in my entry.”
Visit the Peer Review Challenge website for further details of Michael’s entry.
Peer Review Annotation Application
Koen Hufkens, Postdoctoral research associate, Faculty of Bioscience Engineering, Ghent University, Belgium
In contrast to the previous ideas, Koen’s entry suggests that the online platform facilitating peer review could be improved. He suggests creating an integrated review application which would, among other things, include an integrated PDF reader with annotation capabilities and generate a review template listing the changes to be made based upon the annotations on the PDF. This would save reviewers’ time. Of his idea, Koen says: “As a scientist, reviewing for journals is considered an integral part of your work. However, given the voluntary nature of this task, often little time is budgeted towards it. Talking to colleagues, it became apparent to me that reviewing for journals is something done when one has a few minutes to spare, e.g. on a flight, train or bus to the next meeting or commuting home. I entered the competition with the idea that a lot of time could be saved by integrating some of today’s technology into a reviewing platform. This platform would integrate reading and annotating on the original document, summarizing these ‘in line’ notes into a concise summary for the authors to read. Ideally this platform would be online and cloud-based so you could easily pick up and continue where you left off the day before. It’s easy to see how trends such as tablet applications could extend this basic idea to provide even greater flexibility and ease of use for the reviewer on the go, or on the couch. As a good work-life balance is limited by available free time, I feel that providing the tools to free up extra time should be considered a major incentive for most scientists to continue reviewing for Elsevier.”
Note from Ed:
Koen’s ideas chime perfectly with current plans for Evise, Elsevier’s next-generation editorial system. The roll-out of Evise will begin in the second half of 2013 and will include online viewing and annotation of submissions. Reviewers and Editors will be able to view the manuscript and add comments to it online. These comments will then be saved with the submission and can be made available to the author. Koen’s winning entry will be shared with the Evise team.
Visit the Peer Review Challenge website for further details of Koen’s entry.
PUBLISHER, ENERGY AND PLANETARY SCIENCES
Clare graduated from University College Cork, Ireland, with a PhD in Marine Ecology in 2004 and since then she has worked in various aspects of publishing. She has been working with Elsevier since 2006 and is a Publisher on the Energy and Planetary Sciences portfolio where she has responsibility for 11 journals, across the nuclear energy, solar materials, bioenergy and greenhouse gas areas, along with three planetary sciences journals.
Dr Michael L Callaham, Editor-in-Chief of Annals of Emergency Medicine, writes about his research looking into methods of educating peer reviewers.
Dr Michael L Callaham, MD | Editor-in-Chief, Annals of Emergency Medicine
Dr Michael L Callaham, MD, is Chair of the Department of Emergency Medicine and Professor of Emergency Medicine at the University of California, San Francisco (UCSF) School of Medicine. He is also Editor-in-Chief of Annals of Emergency Medicine, the official journal of the American College of Emergency Physicians. He received his MD from UCSF in 1970 and carried out his residency in Emergency Medicine at the USC Medical Center, Los Angeles, CA. He is a member of the Institute of Medicine, National Academy of Sciences.
As a result of his editing and publishing experience, his research interests have turned to trying to better understand the scientific peer review publication process through research into methods of educating peer reviewers, as well as research into bias and its impact on scientific publication.
Good peer reviewers play a major role in the quality of the science a journal publishes, and many journals have trouble finding a sufficient supply of reliable ones. It would therefore be valuable for journals to know in advance what characteristics identify a good reviewer, and/or how to improve reviewers' skills once they are reviewing. In the past decade our understanding of this topic has deepened, but the results are not encouraging.
It would be very desirable for editors to be able to identify high-quality reviewers to target for recruitment or, at the time of recruitment, to weed out those who will not perform well. Several studies, one including 308 reviewers and 32 editors, showed that factors such as special training and experience (including taking courses on peer review, academic rank, experience with grant review, etc.) were not reflected in the quality of reviews subsequently performed. There was a trend towards better performance among those who had a degree in epidemiology or statistics, as well as those who had already served on an editorial board. Several papers found that more experienced reviewers (more than 10 years out of residency) performed more poorly, but for all these variables the relationship was weak and the odds ratios were less than 2.
Therefore, if we cannot identify good reviewers in advance, perhaps we can train them to perform good reviews once on board. A number of studies have examined the impact of formal reviewer training, most of them focusing on the traditional half-day voluntary interactive workshop format. In all these studies, attendees were enthusiastic about the workshop training, felt it would improve the quality of their subsequent reviews, and performed better on a post-test of their understanding of peer review. Unfortunately, even when compared to controls with similar previous volume and quality ratings, none of these predictions came true: the objective quality scores of attendees did not change at all. The journal in these studies consequently abandoned these methods, although review quality subsequently rose steadily as a result of other interventions.
These failures led to study of more substantial interventions that would still be reasonable logistically for a journal to implement. One involved increased feedback to reviewers, who were not only given explicit information about what was expected in the review, but also received copies of other reviews of the same manuscript with the editor’s rating of each of those reviews, a copy of a truly superb review of a different manuscript, as well as being told the rating they received on their actual review. These interventions (carried out on about 4 reviews for each subject) had no significant impact on subsequent quality performance. Finally, a recent study identified volunteer mentors among reviewers who had the highest performance ranking for review quality, matching them up with randomly selected reviewers new to the journal and encouraging them to discuss each review by phone or email. Like previous studies, for reasons of practicality this typically involved only 3 or 4 reviews per subject, and like other interventions it had no effect compared to the control group who received no special effort.
We can conclude that so far none of the fairly easy approaches to reviewer training have been shown to have any effect, probably because the amount of feedback and interaction needed to teach the complex skills of critical appraisal is much greater than the time allotted to this task by editors and senior reviewers.
What then is a poor editor to do? We cannot identify good reviewers in advance, and we cannot train them in any relatively easy, low-resource fashion. This makes it all the more crucial to adopt a validated, standardized editor rating of review quality and apply it to all reviews. Identifying reviewers by their quality performance, periodically stratifying them on that basis, and steering more reviews to the good ones has been shown to have a significant effect on the quality and timeliness of reviews as a whole. All this, of course, assumes that one has enough reviewer raw material to make choices, which unfortunately is a luxury many smaller journals do not possess.
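The rate-stratify-steer cycle described above can be sketched in a few lines; this is purely an illustration, with invented reviewer names and an assumed 1-5 editor rating scale, not a description of any journal's actual system:

```python
# Illustrative sketch: stratify reviewers by mean editor-assigned quality score.
# The reviewers and the 1-5 rating scale are hypothetical.
from statistics import mean

ratings = {  # reviewer -> editor ratings of that reviewer's past reviews
    "reviewer_a": [4, 5, 4],
    "reviewer_b": [2, 3, 2],
    "reviewer_c": [5, 4, 5],
}

# Rank reviewers from best to worst by average rating.
ranked = sorted(ratings, key=lambda r: mean(ratings[r]), reverse=True)

# Steer more invitations to the top stratum (here, roughly the top half).
top_stratum = ranked[: len(ranked) // 2 + 1]
print(top_stratum)
```

In practice the stratification would be recomputed periodically as new ratings accumulate, which is what keeps the pool responsive to changes in individual performance.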
This article first appeared in Reviewers' Update, Issue 12, October 2012
Editor Dr David L Schriger muses on what can be done to improve both the quality of science and its reporting.
Dr David L Schriger| Deputy Editor, Annals of Emergency Medicine
As well as his role as Deputy Editor of the Annals of Emergency Medicine, Dr Schriger is also a member of the CONSORT and EQUATOR initiatives. His research focuses on improving the credibility of medical literature through the detailed presentation of results via figures and tables.
The last 20 years have seen much written about the poor quality of medical literature1. Recent endeavors such as EQUATOR (and its component reporting guidelines) and the Peer Review Congress (which has fostered interest in journal quality) have sparked considerable improvement. However, there is more to be done to improve both the quality of the science and the quality of the reporting of the science. Journals can play an important role in both areas. The first step for journals interested in doing so is to step beyond three common misconceptions.
First, there is a misguided obsession with statistics, a misconception that distracts authors, reviewers, and readers from more fundamental issues2. Classical statistics is concerned with differentiating observations expected by chance alone from those unlikely to be due to chance, thereby suggesting a potentially important association. While random error is a legitimate concern, particularly for small studies with positive results, in clinical research concerns about random error are dwarfed, or should be dwarfed, by concerns about non-random error, also known as confounding or bias3.
When problems occur in clinical studies they are typically related to the methodology of the study, not the statistics. In reviewing more than 2,500 papers for Annals of Emergency Medicine and other journals over the past 25 years, I have seldom found a paper for which the main deficiency was the use of the wrong statistic or the miscalculation of a statistic. In contrast, I routinely read studies that are poorly designed or fail to account for the presence of confounding in their analyses or conclusions. I also commonly find studies that devote multiple paragraphs in the Methods and Results sections to statistical concerns but fail to include even a single sentence about non-random error. A skeptic might think that the obsession with statistics is a diversionary smokescreen designed to distract readers from fundamental problems with confounding and bias.
Second, journals often have ill-defined goals for their review process. Review processes can ask several questions including:
a) Is the topic paper appropriate for our audience?
b) Is the reporting of the science complete? Does the paper provide all of the information that a knowledgeable, critical reader needs to reach a conclusion about the work?
c) Is the science correct?
A common misconception is that c) is a legitimate goal of peer review. While it is certainly appropriate that the peer-review process filters out abject garbage (papers whose claims are unsubstantiated or ludicrous), caution should be taken to ensure that reviewers are critiquing the research design, analytic methods, and the quality of the reporting of the results, not the conclusion. Otherwise, journals will reject articles that conclude that ulcers are caused by bacteria just because the conclusion is unexpected. Instead, peer review should focus on ensuring that readers have all the information they need to reach their own decisions about the paper's conclusion. From this perspective, peer review's purpose is to bring to readers complete presentations that meet methodological standards and standards for comprehensive reporting. Don't worry whether the authors have found truth, worry about whether they have told a complete story. The scientific process will take care of the rest4.
The third misconception is that article quality is the responsibility of the authors, not the journal. While it is certainly true that better journals tend to get better papers, there is ample evidence that the papers of the highest impact journals have problems with incomplete or suboptimal reporting5-6. Research suggests that these problems are only corrected if the journal identifies them and insists that they be fixed7-8. A journal must take an active role in setting expectations and enforcing them if the reporting of science is to be improved.
At Annals of Emergency Medicine, we recognized these issues and have taken a series of steps to improve our journal. I share with you a number of them so you may consider whether they would be appropriate for your journal.
In 1997, the editors recognized that bias was the greatest threat to the veracity of the work being published and decided that all research papers would be reviewed by one of a small cadre of ‘methodology/statistics’ reviewers in addition to the typical content reviewers. Experience had shown us that the ideal person to perform this function is not a full-time statistician but a clinician-researcher who thoroughly understands methodology and knows enough statistics to know when formal statistical review is needed. This program has proved successful - the quality of reviews has improved as has the quality of the published papers9-11. Starting six years ago, this program was supplemented by a check for the appropriateness and quality of tables and figures in papers about to be offered acceptance or revision12-13.
These two programs have improved the journal and have slowly trained the author community about the journal's standards (which are stated in detailed Instructions for Authors initially composed in 200314-15). Over time, the methodology/statistical reviewers have had an easier time because papers come in with many of our requirements already met. In summary, our experience leads me to offer the following guidance to journals trying to improve their quality:
1) The main problem is study methodology, not statistics. Put your efforts into carefully critiquing each paper's methodology. Do not assume that regular reviewers will do this well. Identify reviewers who are capable of doing this job and use them. With more and more physicians getting clinical epidemiology training in public health and other graduate programs, finding such reviewers is getting easier. If you want them to do lots of reviews, compensate them.
2) The second problem is the quality of reporting. Get familiar with EQUATOR-network.org and the reporting guidelines for different types of research (CONSORT, STAR-D, PRISMA, STROBE...). Recognize, however, that these guidelines may be insufficiently detailed regarding specific nuances of your field and are not as strong on the presentation of results as they are on the presentation of methods. Augment them as needed.
3) Discourage papers that hide behind a torrent of statistics and models instead of showing readers the actual data. Editors and reviewers should ask, "Are methods and results presented in sufficient detail that learned readers can decide whether they agree or disagree with the conclusion?" Focus on whether the paper is fully reported rather than on whether the science is correct.
By refocusing peer review on the paper's methodology - as opposed to its statistics - and on the quality of the reporting of the science, editors can improve the quality of research articles in their journals.
1 Altman DG. The scandal of poor medical research. BMJ. 1994;308:283-284.
2 Schriger DL. Problems with current methods of data analysis and reporting, and suggestions for moving beyond incorrect ritual. Eur J Emerg Med. 2002;9:203-207.
3 Goodman SN. Toward evidence-based medical statistics. 1: The P-value fallacy. Ann Intern Med. 1999;130:995-1004.
4 Ziman JM. Reliable Knowledge: An Exploration of the Grounds for Belief in Science. Cambridge: Cambridge University Press; 1978.
5 Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ. 2008;336:1472-1474.
6 Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ. 2010;340:c723.
7 Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, Gaboury I. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Med J Aust. 2006;185:263-267.
8 Goodman SN, Berlin J, Fletcher SW, Fletcher RH. Manuscript quality before and after peer review and editing at Annals of Internal Medicine. Ann Intern Med. 1994;121:11-21.
9 Schriger DL, Cooper RJ, Wears RL, Waeckerle JF. The effect of dedicated methodology and statistical review on published manuscript quality. Ann Emerg Med. 2002;40:334-337.
11 Day FC, Schriger DL, Todd C, Wears RL. The use of dedicated methodology and statistical reviewers for peer review: a content analysis of comments to authors made by methodology and regular reviewers. Ann Emerg Med. 2002;40:329-333.
12 Cooper RJ, Schriger DL, Tashman D. An evaluation of the graphical literacy of the Annals of Emergency Medicine. Ann Emerg Med. 2001;37:13-19.
13 Cooper RJ, Schriger DL, Close RJ. Graphical literacy: the quality of graphs in a large-circulation journal. Ann Emerg Med. 2002;40:317-322.
14 Cooper RJ, Wears RL, Schriger DL. Reporting research results: recommendations for improving communication. Ann Emerg Med. 2003;41:561-564.
15 Schriger DL. Suggestions for improving the reporting of clinical research: the role of narrative. Ann Emerg Med. 2005;45:437-443.
The Editors of the Journal of Public Economics conducted an innovative experiment into how we can motivate pro-social behavior. Here they share their findings…
Raj Chetty, Emmanuel Saez and László Sándor | Editors, Journal of Public Economics
We chose to collaborate with Elsevier on a study into how we can motivate pro-social behavior. Timely and useful refereeing work is usually understood to be a favor to colleagues, an unpaid but important task for the profession. As such, it is a good candidate for comparing financial incentives with other more inexpensive alternatives, in situations where the work benefits the public more than it rewards the worker.
The study highlighted several key findings.
In 2010, we randomly assigned approximately 1,000 referees to one of four groups: a control group; a group given a shorter, four-week deadline; a group offered a cash reward for submitting a timely report; and a group given a social incentive.
Referees were always assigned to the same treatment, and the majority received multiple invitations over the course of the study, which ended in November 2011. Using data retrieved from the Elsevier Editorial System (EES), the team studied refereeing times both before and after the experiment, as well as comparing times with those at other Elsevier journals. The differing approaches had small but significant impacts on whether referees agreed to review a paper, but do not appear to have generated differential selection. The shorter, four-week deadline reduced turnaround times by an average of 10 days. The cash offer doubled this effect, reducing referee times by a further 10 days, and the social-incentive treatment reduced turnaround times by approximately five days. Perhaps less surprising in hindsight, the study showed that tenured professors are most responsive to social pressure, while untenured referees are most responsive to deadlines and cash offers.
The study also showed that a common misgiving about financial incentives -- that they may crowd out intrinsic (or altruistic) motivation -- does not appear to apply here. Referees are no slower at concurrent or later unpaid jobs. Finally, all the approaches show modest impacts on the quality (length) of reports, the recommendations of the referees and/or the final decision of the editors.
This study was presented at the National Tax Association Annual Conference in November 2011 and will be part of a session on journals and academic publishing at the Summer Institute of the National Bureau of Economic Research in July 2012. The session will be attended by editors of leading journals, who may decide to re-evaluate their own journal's policies based on the evidence from this innovative experiment conducted by the research team, along with Elsevier.
This article first appeared in Reviewers' Update, Issue 11, July 2012.
A desire to understand the inner workings of the peer-review system has led a group of early career researchers to publish a new guide on the topic.
Julia Wilson | Development Manager, Sense About Science
Sense About Science is a UK charity that equips people to make sense of science and evidence
Members of Sense About Science’s Voice of Young Science (VoYS) network, an active group of early career researchers who stand up for science in public debates and inspire their peers to do the same, were behind the guide. They were keen to discover how to get involved in peer review and what is being done to address some of the criticisms of the system, such as bias from reviewers. So, armed with a collection of concerns raised by their peers, they set off to interview scientists, journal editors, grant body representatives, patient group workers and journalists worldwide. The end result is the new guide, Peer review: the nuts and bolts, which is aimed at early career researchers. It received its official launch at the EuroScience Open Forum (ESOF) in Dublin this July.
In 2009, Sense About Science partnered with Elsevier to conduct one of the largest surveys of international authors and reviewers, which highlighted how dedicated the scientific community is to peer review. 90% of respondents review articles because they like playing their part as a member of the academic community; 85% enjoy seeing papers and being able to improve them; and 91% believe their own last paper was improved through the peer-review process.
Just as a washing machine has a quality kitemark, peer review is a kind of quality mark for science. It tells you that the research has been conducted and presented to a standard that other scientists accept. At the same time, peer review is not saying that the research is perfect (nor that a washing machine will never break down). I'm surprised that such an integral and valuable contribution from scientists is given so little recognition in academia, and that early career researchers receive so little training in how to do it.
In writing the guide, the authors of Peer review: the nuts and bolts have not avoided criticisms of the peer-review process. They have asked journal editors and reviewers some challenging questions about scientific fraud and plagiarism going undetected; issues of trust and bias; ground-breaking research taking years to publish and the system benefiting a closed group of scientists.
What became clear was that early career researchers are frustrated by the lack of formal recognition for reviewing. With so many pressures to secure grant funding and publish research, there is a risk reviewing will become marginalised and inevitably inconsistent and shoddy.
Reviewing is currently not included in the Research Excellence Framework (REF) in the UK (the new system for the allocation of funding to higher education institutes). Members of the VoYS network decided to do something about this and wrote an open letter to Sir Alan Langlands, the Chief Executive of the Higher Education Funding Council of England, calling for formal recognition of reviewing in the REF. In the letter, the early career researchers told Sir Alan: “Recognising reviewing as part of the REF would ensure that it is prioritised and safeguarded by university departments, [...] and approached professionally and seriously, enabling senior researchers to spend time mentoring early career researchers like ourselves in these activities.” A copy of their letter can be found on the Sense About Science website.
Their call was supported by high profile editors and experts in the field including Dr Irene Hames, Editorial Consultant and author of Peer Review and Manuscript Management in Scientific Journals who spoke at our discussion on peer review at ESOF 2012 to mark the launch of our peer-review guide. Dr Hames said in support of the early career researchers’ open letter: “Peer reviewing involves a lot of time and effort by researchers [...] There is, however, currently no formal recognition of peer reviewing as a professional activity. Better recognition would be especially important for early career researchers, to demonstrate not only their contribution to this important activity, but their recognition as experts in their research areas.”
Peer review: the nuts and bolts is available to download from the Sense About Science website. For hard copies, please send requests to email@example.com.
As many of you know, Elsevier is currently building Evise, our next generation online submission and peer-review system. The rollout of Evise is planned to begin in the second half of 2013 and to prepare for a smooth transition, 2012 will see the introduction of new features to our current system, EES.
These include something we know you have been keen to see – a single username and password across all EES journal sites.
Researchers have multiple roles in publishing: many authors are also reviewers; many Editors are also authors and reviewers. And researchers can perform these roles for multiple journals. We know that EES does not recognize this sufficiently, so later this year we will begin the task of consolidating all user accounts.
Once the change has been rolled out, when you log into EES you will receive a prompt to consolidate your accounts. EES looks for matching associated email addresses when deciding which accounts to group together. If you have used different email addresses per EES site, you can indicate this during consolidation. Once you have selected the accounts to consolidate, you will receive a confirmation email. This is sent to ensure that only the account owner can give approval.
During consolidation, you will also be asked to choose a security question and answer. You will need these to reset your password if you forget it.
You will have 30 days to consolidate your accounts. After this period, you will only be able to use EES if you have consolidated your accounts.
After you have followed the consolidation procedure, you will be able to use the same username and password to access each EES journal site you use. Your primary email address in EES will be your username. You will continue to log into each EES journal site separately.
If you have multiple roles for a single journal, you will need to log off and log in again if you want to switch your user role.
The new user consolidation functionality will be piloted in July and August 2012, with roll out activity ramping up from September 2012 onwards. We will keep you informed of our progress by email.
We are also working on consolidating the online support available for EES. This is currently spread across the Elsevier website but going forward generic information on EES will be available on Elsevier.com, while EES support information will be presented in EES. That means that if you click on Help in EES, a pop-up window will open up in which you will be able to quickly access the right support content. The content will be presented per role and per phase in the editorial process to make it easier for you. The search function will also be available in the window.
Elsevier has a number of user feedback programs and the results of these, along with the questions end users ask Elsevier customer support, are just some of the sources we call on when determining which improvements we should introduce. You can also provide feedback via firstname.lastname@example.org.
MARKETING AND BRAND MANAGER, EES AND EVISE
Edward has worked on the development and launch of new products and services since 1997. Prior to joining Elsevier in 2011, he worked for telecom operators, utilities and publishers. He has a MSc degree in Business Administration from the Rotterdam School of Management, Erasmus University Rotterdam.
Peer review has a long history; it has been a part of scientific communication since the appearance of the first journals in the 1660s. The Philosophical Transactions of the Royal Society is credited with being the first journal to introduce peer review.
Each year more than 1.3 million learned articles are published in peer-reviewed journals. Such is its importance that according to Ziman (1968)1 it is ‘the lynchpin about which the whole business of science is pivoted.’
However, the expansion of the global research community and the year on year increase in the number of papers published means the pressure on the peer-review system has grown. Moreover, as the pressure has increased, so too has the volume of those questioning peer review’s effectiveness. Some are worried by bias and are concerned it is not objective, others are anxious about the length of time it takes for an article to go through the peer-review process and some worry about its efficiency. Richard Smith2, former Editor of the BMJ, said the following about peer review in 2006:
“.. it is slow, expensive, profligate of academic time, highly subjective, something of a lottery, prone to bias, and easily abused.”
In response to the perceived challenges, peer review has evolved and continues to do so. Working with you, the Editor, we hope to be able to improve and streamline the peer-review process, ultimately easing the burden on both reviewers and Editors. In this article, we take a closer look at initiatives in Elsevier that tackle some of the challenges in peer review and evaluate the progress of some of these pilots.
Running from March to May 2012, this web-based Challenge invited submissions on any aspect that could significantly add to the current peer-review system. Entries could range from designing a completely new system, to working within an existing peer-review method (like the single blind system).
The Challenge also welcomed entries that explored how publishers and Editors can help early career researchers become reviewers, or how reviewers can be recognized by either their institutes or publishers.
The entry phase of the Challenge closed on 7th May and the judges are now going through the submissions to pick out up to 10 finalists, whose ideas will be posted on the Challenge website. We will be inviting comments from the community on these ideas before the judges make their final decisions, taking into account any relevant community comments. Please do check the Peer Review Challenge website from 12th June onwards for details of the finalists!
For more information on this initiative, please contact Clare Lehane, Executive Publisher, STM Publishing, email@example.com
As an Editor, you may frequently be confronted with manuscripts that are out of scope or are simply not suitable for the journal; however, they still contain sound research. With this in mind, we have developed the complementary Article Transfer Service (ATS) which allows the paper to be moved to a more appropriate journal. Currently, Editors within the fields of Pharma Sciences, Physics and Immunology, are able to offer authors this option and, if the author agrees, we can promptly transfer the manuscript on their behalf.
Key advantages of the Article Transfer Service:
Results so far:
- Editors have offered to transfer up to 35% of rejected manuscripts and up to 35% of offered transfers have been taken up by authors.
- Up to 20% of those transfers have been accepted by the receiver journals.
We also surveyed a number of participants in the ATS scheme and discovered the following:
- 67% of Editors think that the ATS benefits the authors, while 75% agree that having reviewer reports is beneficial.
- 55% of authors are active promoters of the scheme.
- 86% of the reviewers are willing to recommend an alternative journal to the Editor.
For more information on this pilot, please contact John Lardee, Senior Project Manager, Publishing Services, firstname.lastname@example.org.
From feedback we know that reviewers, especially those new to the task, would value more guidance on how to peer review. This program, which is still in the developmental stages, has been created to answer that need and will consist of both theory and hands-on practice.
Theory: By attending a Reviewer Workshop, participants will be introduced to the concept and background of peer review as well as peer-review fundamentals, publication ethics and the role of a reviewer. They will also examine a specific case study. Reviewer Workshops have been taking place for a while now and participants have told us that they feel more confident after attending one. Since it is not always possible to physically attend a workshop, we are now looking into the possibility of offering a distance learning (online) alternative.
“The Reviewer Guidance Program is not only an experience that helps early career researchers become better reviewers, but also to be more critical in analyzing their own papers before submitting. In addition, this is a great opportunity for junior scientists to network with their more senior peers.” Irene Kanter-Schlifke
Hands-on practice under mentorship: This part of the Reviewer Guidance Program aims to provide participants with the experience of independently reviewing at least two manuscripts inside a specially-created EES (Elsevier Editorial System). Each trainee is supported by a mentor who discusses the reviews with them and gives feedback and guidance. The mentor finally decides when a trainee has gained enough experience to start reviewing live manuscripts. After the program, each trainee receives a certificate of participation from Elsevier. We began piloting this module at the end of last year and the first feedback is promising. One trainee commented: “I’m now more familiar with rating papers and I’m more critical when I read papers.” The mentors involved in this module, often journal Editors, also see the benefits of this initiative; one remarked: “This module is a nice opportunity to learn how to efficiently review manuscripts. Often junior scientists have no idea how it works. As well, they can better understand how their manuscripts will be reviewed.”
During the Reviewer Guidance Program we will guide participants in how to write review reports in such a way that they answer the needs of both the Editor and the author. Another expected benefit is that the program should contribute to increasing the number of trusted – and usually enthusiastic – reviewers available for Editors to call on. Irene Kanter-Schlifke is a Publisher for Pharmacology and Pharmaceutical Sciences and is closely involved in the pilot. She adds: “The Reviewer Guidance Program is not only an experience that helps early career researchers become better reviewers, but also to be more critical in analyzing their own papers before submitting. In addition, this is a great opportunity for junior scientists to network with their more senior peers.”
If you are interested in organizing a Reviewer Workshop at your institute, please contact your publisher.
Results so far: We are currently evaluating feedback and expect to do a further pilot in due course.
For more information on this program, please contact Irene Kanter-Schlifke, Publisher Pharmacology & Pharmaceutical Sciences, STM Publishing, email@example.com, or Angelique Janssen, Project Manager, Publishing Services, firstname.lastname@example.org.
Reviewers play a vital role in the peer-review process, yet their contribution often remains hidden. Open reviewer reports also increase peer-review transparency and help good articles gain authority. With that in mind, we thought: why not publish reviewer reports alongside the final article on SciVerse ScienceDirect?
At the beginning of this year, we began doing just that on the journal Agricultural and Forest Meteorology.
We know from the feedback we have received that Editors welcome such public acknowledgement of reviewers’ contributions, and we hope this step will enhance the quality of review reports and help attract good reviewers to the journal.
How does it work?
Both authors and reviewers for the journal are informed about the new process and reviewers can indicate whether they want their name disclosed on ScienceDirect. Editors then decide if the reviewer reports are appropriate to publish alongside the article as supplementary material.
Results so far: The pilot launch attracted positive international media attention. It was also suggested that open reviewer reports could play a useful role in training early career researchers as reviewers. So far, reviewer reports have been published alongside around 13 manuscripts.
For more information on this pilot, please contact Gilles Jonker, Executive Publisher, Physical Sciences, email@example.com.
In this pilot, we have asked experienced researchers to submit a one-page comment on a (review) article for the journal Physics of Life Reviews. These comments are published in the same issue as the article. On average, five comments are published with each article, and the author can write a rebuttal article.
Results so far: Since the pilot was launched in January 2010, the journal has seen an increase in papers (85 in 2011 and 74 in 2010; previously the journal received around 12 papers per year). There has also been a sharp increase in usage – roughly 3,000 downloads per month compared to 2,000 per month in 2009.
For more information please contact Charon Duermeijer, Publishing Director Physics, firstname.lastname@example.org.
Traditionally in peer review, Editors have approached reviewers they consider suitably qualified to comment on a manuscript, or who would find the subject matter interesting.
But what if the reviewer could select the manuscript themselves? For a year now, we have been experimenting with this additional peer-review system on the journal Chemical Physics Letters. Each week, a selected pool of reviewers receives an overview of the new submissions. If they like a paper because it matches their expertise and interest, they can decide to review it. Because they make the decision themselves, we ask them to review the manuscript within a week.
Martin Tanke, Managing Director of Elsevier’s STM Journals, explains: “The 2009 Peer Review Survey, which we conducted with our partner Sense About Science, showed that a significant number of reviewers were sometimes hesitant to review an article because of a lack of expertise in that particular field. In addition, researchers made clear they want to improve peer review by improving article relevancy and speeding up turnaround time. PeerChoice can contribute to solving both issues.”
Results so far: The time taken to review the manuscript has been slightly reduced, while the time taken to accept an invitation has been halved.
For more information on this pilot, please contact Egbert van Wezenbeek, Director Publication Process Development, Publishing Services, email@example.com.
All these pilots have been launched with one aim in mind: to support and improve the peer-review process for the benefit of Editors, authors and reviewers.
We would love to hear your thoughts on these new approaches and your suggestions for improvements. If you have a story you would like to share, you can post it on our new Short Communications bulletin board.
SENIOR PROJECT MANAGER, PUBLISHING SERVICES
For the last 15 years, John has been involved in managing projects to improve author, Editor and reviewer experiences with Elsevier’s products and services. Recent projects include the Article Transfer Service and the Find Reviewer Tool. John’s approach to project management is an agile one: “To develop services and products iteratively together with our Editors, authors and reviewers”. John has a Master of Science in Informatics from Delft University of Technology.
DEPUTY DIRECTOR, RESEARCH & ACADEMIC RELATIONS
Adrian has 14 years of experience in STM publishing. The last 10 of those, spent in research, have given him a unique opportunity to study the scholarly community. Recently, in partnership with Sense About Science, Adrian worked on a large-scale study that examined researchers’ attitudes towards peer review. He has presented on peer review at various conferences, including STM, ESOF, AAP and APE. Adrian’s background is in archaeology, with a BA Honours degree and a Master of Science from Leicester University. He also holds a diploma in Market Research from the Market Research Society.
Clare Lehane | Executive Publisher, Energy & Planetary Sciences, Elsevier
On Wednesday 28th March, Elsevier launched the How do you see the future of peer review? challenge. The aim of the challenge is to invite our reviewing community to submit ideas on any of the following three aspects of the peer review system (for journals):
The challenge website will remain open to entries until midnight on Monday 7th May, 2012 (CET).
We will work with the overall winners of the challenge to determine if their idea could be piloted with a suitable Elsevier journal, and in cooperation with the editors of that pilot journal. The winning ideas will be announced around 15th August via the challenge website.
We hope that this challenge will help inform the ongoing discussions on peer review and help us, as your publishing partners, to work more closely with the reviewing community.
You are welcome to forward this challenge announcement to your colleagues and editorial network to encourage submissions.
Discover new ways to identify and retain the best reviewers in your field; how to motivate them to do a good job and encourage them to repeat review for you.
Editors today are confronted with a number of challenges to the peer-review process, for example finding reviewers. That means new and different approaches are required, Frank H Arthur writes.
Frank H Arthur | USDA, Agricultural Research Service, Center for Grain and Animal Health Research | Regional Editor-in-Chief, Journal of Stored Products Research
I have been a Regional Editor of the Journal of Stored Products Research since November 2006, and continue to serve as a reviewer for other scientific journals. Editors today are confronted with a number of challenges to the peer-review process, including obtaining the peer reviews necessary to evaluate scientific studies for journal publication. New and different approaches are necessary to cultivate and maintain a solid base of reviewers.
First, editors must become more active in pre-screening manuscripts before they are sent out for review. As a reviewer, I regularly receive manuscripts that are severely deficient in English grammar and construction, along with the stated or implicit assumption that it is also my responsibility to rewrite these manuscripts in addition to evaluating the scientific content. This expectation places an unfair burden on reviewers and editors, who are usually serving on a volunteer basis. Related issues include being sent manuscripts that are obviously lacking in scientific quality for that journal, out of scope, or in a completely different format from what is specified. Receiving these types of manuscripts increases frustration on the part of reviewers, and editors can, and should, simply return those manuscripts to the authors and let them address the deficiencies. The authors are ultimately responsible for the quality of the manuscript.
Second, editors should focus on obtaining reviews from scientists who are actively publishing in their journal. Every month I receive several automatic “invitation to review” emails from journals where I have not published in the past, nor am I likely to do so in the future, including various new online journals. Many scientists will decline those invitations unless there is overwhelming interest in the topic of the paper. I also receive numerous requests for reviews from journals where I have published only sporadically as a submitting or lead author, and often not at all for the past several years. Regular contributors have a more vested interest in the journal but, at the same time, editors must not continually ask the same people to review because “they cannot find anyone else”. Efforts must be made to broaden the review base and increase participation in the review process.
Third, assuming reviews are being solicited from regular contributors to a journal, editors should first make personal contact with reviewers instead of just generating an “invitation to review” email. However, if the reviewer declines a review because of their current workload, the editor should go to someone else, rather than asking the reviewer for a suggested alternative. In my experience, many scientists will not do a review if they know a colleague has declined because he or she was “too busy”, because they are busy as well. I do not suggest colleagues when I decline a review unless that person is more appropriate because of their expertise, and I generally let them know that I have, or will, recommend them as a reviewer.
Within many biological disciplines, the number of professional scientists is declining, pressure to obtain outside funding is increasing, and research scientists are being required to perform administrative functions as well. The steps discussed above are just a few ways editors can facilitate the peer review process to ease the burden on journal reviewers.
18 Sep 2011
“In China, the research community is gaining year on year in resources and ability. That is very exciting to be around.” — Tracey Brown, Managing Director, Sense About Science
Tracey Brown believes peer review is vital to good science and the society that uses it.
And it’s a conviction the Managing Director of Sense About Science shares with members of the Chinese Academy of Sciences, as she discovered during a trip to the research-rich country in March this year.
Brown embarked on the fact-finding mission with two key aims in mind: she was keen to test views being advanced about the integration of Chinese authors and reviewers into international STM publishing, and to explore future collaborations to help researchers, policy makers and journalists identify the best science.
During the two-week visit, which was supported by Elsevier, she met not only the Chinese Academy of Sciences (CAS), but also Science.net, journalists, postdocs and publishers.
Brown says: “It was clear that CAS is keen to discuss the best ways to evaluate research and to explore their concerns about what peer-reviewed publishing can - and can't - deliver. In an effort to avoid cronyism and subjective assessment in China, there has been a shift towards using flatter measurements; for example, the Impact Factor. There is a feeling, however, that these do not reveal enough about individual papers or the research output of an institution. Most people, including CAS, are coming to the conclusion that what we really need is a mix of the two.”
Asked to highlight some of her key learnings during the trip, Brown says:
“People raised many interesting points and some quite contradictory ones. The early career researchers I spoke with viewed international journals as motivated by quality and fairness, and in some cases compared them favourably with Chinese journals, which can be seen as wedded to the relationships and prestige of individuals and institutions.
“On the other hand, some of the more editorially-experienced people had stories of less than positive attitudes among international editors to Chinese papers. They were concerned about a head-in-the-sand approach to such a major research base and that valuable new insights could be missed.”
There were also a few eye-opening moments for Brown.
She explains: “I had not expected people’s personal experiences to differ so widely. For example, I was speaking to two post doc students at Shanghai Jiao Tong University. Both had published very successfully early in their careers in some of the top journals - the elite of the elite. One was receiving almost weekly requests to review while the other had received only one request in a year. That may reflect the different nature of their papers but I heard their stories repeated elsewhere. It is perhaps to be expected that peer-review requests from international journals are still a bit of a hit and miss process in China.”
She adds: “Each time I was about to draw a conclusion about anything I would meet someone who took me in a different direction – a symptom, I imagine, of things being a work in progress there.
“Another surprising thing for me was the high level of confidence in the research community in contrast to the UK, and perhaps the US, where universities face straitened circumstances. In China, the research community is gaining year on year in resources and ability. That is very exciting to be around.”
Commenting on the quality – and quantity – of papers submitted by Chinese researchers, Brown says: “There is some concern, internationally, about filtering the sheer weight of papers produced by China. A big sea of papers makes it difficult to pick out the best.
“The thing is, there is a large pressure to publish in China and doing so in international journals brings career breaks and prestige. While lead institutions no longer pay incentives for this, some second-tier universities still appear to, which may contribute to journals being overwhelmed by unsuitable papers.
“We discovered that inappropriate submissions also stem from a lack of local knowledge about international journals, with younger researchers copying where their supervisors have published. Library services can play a very important role in widening the pool of journals considered.”
She adds: “Since returning I have been in touch with members of the Publishing Research Consortium to discuss the prospect of looking at how these new regions, such as China and India, are being integrated. Do editors now need something different from publishers with regard to support and advice? These are questions I know publishers are asking too. There is clearly some opportunity for international publishers to improve the availability of information about how to publish and where to publish, probably via librarians in those institutions where library services are developing and pro-active.”
And what does Brown think the next five years will hold for the Chinese research community?
“Because of the volume of research and population size, even minority behaviors in China are likely to have a significant effect. If just a proportion of the new generation of researchers are trained and engaged with reviewing, it could have a big impact on sharing the reviewing burden. I know that there are already programs underway, such as Elsevier’s Reviewer Workshops and Reviewer Mentorship Program. The value of their contribution to the research output cannot be overstated – just like so many other things in China at the moment!”
What is Sense About Science?
Sense About Science is a UK charitable trust that equips people to make sense of science and evidence on issues that matter to society. With a network of more than 4,000 scientists, the organization works with scientific bodies, research publishers, policy makers, the public and the media, to lead public discussions about science and evidence. Through award-winning public campaigns, it shares the tools of scientific thinking and the peer-review process. Sense About Science’s growing Voice of Young Science network engages hundreds of early career researchers in public debates about science. Sense About Science will be publishing a Chinese edition of its public guide to peer review I Don’t Know What to Believe early in 2012 in collaboration with learned societies, patient groups and journalists.
MANAGING DIRECTOR OF SENSE ABOUT SCIENCE
Tracey has been the Director of Sense About Science since shortly after it was established in 2002. Tracey is a trustee of Centre of the Cell and MATTER. In 2009 she became a commissioner for the UK Drugs Policy Commission. She sits on the Outreach Committee of the Royal College of Pathologists and in 2009 was made a Friend of the College.
5 Jun 2011
“A real-life, hands-on approach like this equips future reviewers like never before.” — Irene Kanter-Schlifke, Publisher
In many areas of research, the growth of paper submissions is outpacing the growth of qualified reviewers and resulting in pressure on the peer review system. As an editor, you will be only too aware of the challenge of finding good reviewers. Together with our editorial community, journal publishers at Elsevier have been working on a number of programs to develop and nurture your future pool of reviewers.
Following a request from reviewers for increased support and guidance, and tests by current journal editors, the Reviewer Guidelines are now available on all Elsevier journal homepages and on our Reviewers’ homepage.
A step-by-step guide through the various stages of the peer-review process, the guidelines begin with the ‘purpose of peer review’ (addressing why reviewers should review); move on to conducting the review itself (what criteria should the reviewer be taking into account); and finish with submitting the report to the editor. They include key topics relevant to peer review, such as conducting the review, originality of research, the structure of a paper and ethical issues, together with a sample peer review report.
Reviewer Workshops allow participants to put the Reviewer Guidelines into context. “They aim to promote and explain the fundamentals and techniques that reviewers should adhere to when reviewing manuscripts for academic journals,” explains Andrea Hoogenkamp-O’Brien, Customer Communications Manager. Such workshops have been taking place across China with input from Elsevier journal editors, giving young Chinese scientists the opportunity to review scientific papers for international journals and to receive hands-on training.
During a workshop, reviewers receive practical information on Elsevier publishing policies and procedures, together with advice from other reviewers and editors, all with the aim of expediting the process of reviewing papers. Throughout the sessions, there is thorough discussion of the philosophy of peer review, the various steps of the review process and examples from recent journals.
“The result is that reviewers get a real opportunity to better understand the principles and methods involved in reviewing for an international journal,” notes Hoogenkamp-O’Brien. “This is invaluable experience for the next step in our program.”
This program aims to extend the help given to reviewers during workshops by also providing coaching and direct feedback on the reports that the trainees have submitted. Elsevier Publisher Irene Kanter-Schlifke has been piloting this program at two institutions: Lille University in France and Saarland University in Saarbrücken, Germany. Each program involved 10-12 trainees.
“It is important that the trainees review a manuscript that is both controversial and in their area of expertise. During the workshop, an introduction on reviewing is given, followed by a discussion of the review and disclosure of the original ‘fate’ of the paper (reviews and the final article, if accepted for publication). A real-life, hands-on approach like this equips future reviewers like never before,” explains Kanter-Schlifke.
After the workshop, trainees are invited through the system to review at least two manuscripts within a given timeframe. Each trainee is supported by a mentor who discusses the reviews with the trainee and gives feedback and guidance. The mentor finally decides when a trainee has gained enough experience to review live manuscripts. After the program, each trainee receives a certificate of participation from Elsevier.
“There are a few thoughts on what defines a good reviewer,” adds Hoogenkamp-O’Brien. “The definition I particularly like is: A good reviewer should know the journal and should have the knowledge to be able to fairly and objectively give a good report of the manuscript they are reviewing. They should concentrate on offering useful advice to authors rather than giving summary reports to editors.”
If you are interested in running either a Reviewer Workshop or Reviewer Mentorship Program at your institute or would like some further information, please email Editors' Update.
We want to hear your views on these and other issues surrounding the challenges faced by editors and peer review. Please share your thoughts by posting a comment at the bottom of this page.
In 2008, Irene began work as a Publisher for Elsevier’s Pharmacology and Pharmaceutical Sciences portfolio of journals. In her current role as publisher, she has been working on a number of exciting initiatives with her editors and colleagues, one of which is helping to organize and run a mentorship program for new reviewers. She holds a PhD in Neurology from Wallenberg Neuroscience Centre, in Lund, Sweden. Before coming to Elsevier, she worked at Centocor (now Janssen Biologics), part of Johnson & Johnson pharmaceuticals in The Netherlands.
CUSTOMER COMMUNICATIONS MANAGER
Andrea has recently started working in the Strategy and Journal Services department of Elsevier in Amsterdam, where she is part of a team responsible for developing new initiatives to improve services for authors, editors and reviewers. She joins Elsevier from FEMS in Delft where she had worked as the Editorial Coordinator, responsible for managing the publications unit, which publishes five FEMS Microbiology journals. Prior to that, Andrea held the position of Postdoctoral Research Fellow at the University of Amsterdam.
Of interest to: Journal editors (key), additionally authors and reviewers
Archive views to date: 845+
Average feedback: 4.4 out of 5