
Warning regarding fraudulent call for papers

Information from our Legal department about a fraudulent call for papers sent out in Elsevier’s name.



Paul Doda | Deputy General Counsel, Elsevier

Some of you may have received an email that appears to have been sent by Elsevier, inviting you to submit scholarly articles via email for publication in our various journals.

The subject line of the message is "Manuscripts Submission" and it is sent from a Gmail email address.

Please be assured that Elsevier is in no way associated with this fraudulent email campaign and we are currently investigating to identify the people responsible. Elsevier does not use free, third-party email providers such as Gmail and Hotmail to solicit submissions from authors. Additionally, almost all our journals only accept submissions via an online submission system, for example Elsevier Editorial System (EES).

If you receive any emails that appear to be a part of this fraudulent solicitation, please do not respond to the message and do not open any attachments it may contain. If you have any concerns, please contact our Support Team at support@elsevier.com.


Ethics and Responsibilities

Of interest to: Journal editors (key), additionally authors and reviewers



Abolish authors’ Conflict of Interest Statement, says author

Author and Editorial Board Member, Dr Gad Gilad, PhD, argues that it is time to get rid of Author Conflict of Interest Statements.



The collaboration of Dr Gad Gilad, PhD, with Varda Gilad while working at several academic centers, including the National Institutes of Health, the Weizmann Institute and Harvard University, has resulted in more than 100 peer-reviewed scientific papers and several patents. He is the co-founder and CEO of Gilad&Gilad LLC, a California-based company that manufactures and markets nutraceutical supplements for nerve health. He is also an Editorial Board Member of the International Journal of Developmental Neuroscience. Here he outlines why he feels it is time to abolish the author Conflict of Interest Statement.

Does the appearance of financial or other "material" interests by authors of scientific papers present a more serious bias than other competing interests, so as to require a signed Conflict of Interest (COI) statement? Really! How about down-to-earth, banal "conflicts of interest" such as: promoting one's favorite hypothesis; self-promotion to achieve tenure, a desired position or a higher status; or self-promotion when competing for funding and prizes [1,2]?

One may well argue that the whole scientific endeavor is biased by these or other conflicting interests and that, in fact, these interests are the very driving force of the whole scientific endeavor and are behind the greatest of scientific discoveries. This notion is substantiated by numerous examples throughout the history of scientific research [3]. No one is innocent of bias; after all, we are all human. So why should we regard these rather motivating interests as conflicts?

It is not at all clear that scientific misconduct arising from conflicting financial or other interests was on the rise in the years preceding enforcement of the COI rules by editors of scientific journals [1]. Human nature, however, suggests that the ever-increasing fierceness of competition is bound to result in continued evasion of ethical rules, with disregard for the consequences in spite of any deterrents [4]. Enough recent examples, which have occurred since the COI statement was enforced by mainstream biomedical journals [1,2], demonstrate that this is indeed the case. Apparently, aside from vain appearance, the authors' COI statement simply does not make any difference.

Furthermore, in this day and age, the availability of alternative, effective and citable journals, some admittedly considered of lesser "prestige" [5,6] but readily accessible for rapid publication on the blessed internet, makes the whole exercise of demanding COI statements from authors futile indeed.

Surely, journal editors are aware of these facts and do not regard authors' COI statements as some kind of guarantee against, or even deterrent to, fraud. If one chooses to cheat and publish a paper with outright false information, would a COI statement stop her or him? Of course not! Rather, if such a paper is of sufficient interest, sooner or later the scientific community is apt to expose the fraud anyway (e.g., ref. 6). Otherwise, fraudulent publications would remain harmlessly buried "in a wasteland of silence, attracting no attention whatsoever" [7].

For these reasons, I think that the authors' COI statement in scientific papers is an oxymoron and hereby propose that it should be abolished.

Finally, further diminishing the significance of the authors' COI statement are the competing interests of all the other parties involved in the publication process, including the editors* [1,8], reviewers (who conveniently remain hidden behind the veil of anonymity) and publishers [9]. Are we to require published statements from all those parties as well…? Clearly, the authors' COI statement is an exception. Some go so far as to say that this singles out authors of scientific papers as potential criminals and is therefore also unfair; the authors' COI statement, they say, is nothing but a pretense, at most a fig-leaf cover for journal editors and publishers. Today, when even the most naive readers of scientific papers are well aware of these truths, this requirement is obviously uncalled for. So, "Let's Call the Whole Thing Off"!

*Disclosure†: Two journal editors have recently refused a priori to accept a submission of a scientific paper by my group, explicitly declaring a conflict of interest on account of apparent financial interests. Things have obviously gone wrong and out of proportion when worried editors prescreen submissions for the sake of their journal's façade. The declared purpose of COI statements is to let readers judge and make up their own minds about the quality of scientific papers. Any way you look at it, it is time to abolish the authors' COI Statement altogether!

†As an afterthought, I would like to suggest the option of instating an optional Disclosure statement, allowing honest souls to voluntarily disclose their biases. Like the original intent of the Acknowledgement statement, this should be a sufficient and honorable choice, properly left to the author's discretion to include or exclude any information.

References

[1] International Committee of Medical Journal Editors, "Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Ethical Considerations in the Conduct and Reporting of Research: Conflicts of Interest".

[2] Mullane K, Williams M, "Bias in research: the rule rather than the exception?" Elsevier Editors’ Update, September 2013; 40, 7-10.

[3] Brooks M, Free Radicals, Profile Books Ltd, London, 2011.

[4] Zietman AL, "Falsification, fabrication, and plagiarism: The unholy trinity of scientific writing", Int J Radiat Oncol Biol Phys 2013; 87, 225-7.

[5] San Francisco Declaration on Research Assessment.

[6] Stone R, Jasny B, "Scientific discourse: buckling at the seams", Science 2013; 342, 56-7.

[7] Mandavilli A, "Peer review: Trial by Twitter", Nature 2011; 469, 286-7.

[8] Huggett S, Lavelle L, "The ethics pitfalls that editors face", Elsevier Editors’ Update, September 2013; 40, 14-6.

[9] Committee on Publication Ethics (COPE), "Code of Conduct and Best Practice Guidelines for Journal Editors", March 2011.


Editor in the Spotlight – Dr Robert Strangeway



Journal of Atmospheric and Solar-Terrestrial Physics

Dr Robert Strangeway holds the position of Research Geophysicist at the Institute of Geophysics and Planetary Physics (IGPP) at UCLA (University of California, Los Angeles).

He has been joint Editor-in-Chief of the Journal of Atmospheric and Solar-Terrestrial Physics (JASTP) since 2012. The journal began in 1951 at the very beginning of what is termed the ‘space age’ and has grown to be the premier international home for research dedicated to the physics of the Earth's atmospheric and space environment. The journal publishes 12 volumes a year, has an Impact Factor of 1.417 and a 5-year Impact Factor of 1.625.

Q. What does being a journal editor mean to you and what do you find most rewarding about this role?
A.
Being a journal editor is an important task that contributes to the health of the research endeavor. A journal editor has a gate-keeper function, which I still consider to be an essential component of the scientific process, as is the peer-review process. It is important for a scientist to both be able to publish in journals that have a good scholarly reputation, and also know that papers published in a journal have passed through a reputable review process.

The most rewarding aspect of being an editor is in providing a mechanism whereby scientists from under-represented communities have the opportunity to publish in a journal that has an international reputation.

Q. What are your biggest challenges as Editor-in-Chief of Journal of Atmospheric and Solar-Terrestrial Physics? How do you overcome these challenges and what extra support can Elsevier provide?
A.
The biggest challenge as an editor is finding sufficient reviewers for manuscripts (see Question 3). Unfortunately, everyone is so busy that it can take months for reviews to be obtained. This is a source of frustration for authors, and a problem for the editors. In particular, some areas of research are relatively small, and only a small pool of reviewers is available. Multiple requests for reviews also frustrate the reviewers. The Elsevier ’Search for Reviewers’ tool helps keep track of reviewers who have recently provided a review, or declined a review. It would be useful if the tool also provided an ’uninvited’ entry for reviewers who do not respond to requests for reviews.

The second largest challenge concerns the breadth of topics covered by the journal. I think this is to the journal’s credit, but sometimes as editor I find myself assigning reviewers although I have little knowledge of the field, or any experience of the reviewers themselves. The ‘Find Reviewers’ tool provided by Elsevier helps fill that gap, although it sometimes provides incorrect email addresses and affiliations, especially for similar names in different disciplines.

The final challenge is what to do with contradictory or incomplete reviews. If two reviews contradict each other, as editor I often have the difficult choice of deciding which review carries more weight, while an incomplete review simply makes my editorial decision harder to justify.

Q. In many areas of research, the growth of paper submissions is outpacing the growth of qualified reviewers and resulting in pressure on the peer-review system. What do you think the solution to this problem is and how do you see the peer-review process changing in the future?
A. I have no obvious solution to this problem, as I think the two-reviewer peer-review process as it has evolved over the years is the best process we have. It is not perfect – it does depend on conscientious reviewers, and sometimes can be affected by inherent reviewer bias. But alternatives, such as single reviewers or editor pre-screening, make the review decision essentially dependent on a single point of view. As editor I do pre-screen papers, and I have rejected papers before sending them for review, but this only occurs for papers that are incorrect, or inappropriate for the journal. I do not pre-screen for scientific relevance, for example.

I know that there are advocates for ‘crowd-sourcing’ of reviews, where papers are essentially published without review, and the merit of the papers is determined by the readership. But that is what happens as part of the present publication process, albeit less visibly, in that unimportant papers are not cited. The question then becomes one of quality control – does the community at large want some form of filter on the material that is published? Right now that function is performed through the editorial review process.

Q. We have observed that researchers are increasingly accessing journal content online at an article level, i.e. the researcher digests content more frequently on an article basis rather than a journal basis. How do you think this affects the visibility of your journal among authors?
A. Access by articles, rather than by journals, is the wave of the future. That is how I personally access articles. I rely heavily on emails from the journals listing the most recently published articles. I no longer access a journal’s site and browse the table of contents.

Access at an article level is good for the journal. Table of contents emails, or links to cited articles, allow individuals to learn about the article, regardless of the journal. There is no journal bias.

Q. Recently, there have been many developments in open access particularly in the UK and Europe where, back in July 2012, the UK government endorsed the Finch Report recommendations for government-funded research to be made available in open access publications. The European Commission has since followed suit, making a similar announcement for an open access policy starting in 2014. How do you see these open access changes in your country? And how do you see them affecting authors who publish in your journal?
A. The United States is also moving towards open access, as new requirements are being developed that provide open access to articles generated through government support. This is very much in a state of flux, and it is not yet clear how this will be implemented. The fundamental issue is, of course, how the costs of publication are recovered by the journal. The concern is that open access requirements will require more of the costs of publication to be incurred by the authors, rather than the readers of the article. This may require different publication charge policies depending on whether or not open access is mandated.

Q. Researchers need to demonstrate their research impact, and they are increasingly under pressure to publish articles in journals with high Impact Factors. How important is a journal’s Impact Factor to you, and do you see any developments in your community regarding other research quality measurement?
A. Impact Factor is important, and I would like to see JASTP’s Impact Factor continue to improve. This requires a demonstration that the journal is following best practices in terms of reviewing and publishing articles. This is also facilitated by having Special Issues that include topical, and hence highly citable, papers. Impact Factor continues to be the primary means by which the members of my community assess journal importance.

Q. As online publishing techniques develop, the traditional format of the online scientific article will change. At Elsevier, we are experimenting with new online content features and functionality. Which improvements/changes would you, as an editor, find most important?
A. Elsevier is doing an excellent job in providing linked cross-references in articles. I would like to see this be available without requiring a subscription to the journal, but this may only happen in response to changes instigated by the open access requirement that is being imposed by the UK, EU, and USA. At a minimum, the reference list should be open access, with embedded links to the cited papers.

Q. Do you use social media or online professional networking in your role as an editor or researcher? Has it helped you and, if so, how?
A. I am of the generation that is more familiar with email as a primary means of electronic communication. I have a web presence, but it uses the relatively static ‘homepage’ concept, where updates require me to actively edit the content of my webpage, rather than simply post updates. While I have pages on a few social and professional networks, I have not yet taken advantage of the additional visibility provided by these networking sites.

I would consider including more information concerning my role as an editor in the professional networking sites to which I subscribe, but guidance from Elsevier would be useful. In particular, would Elsevier require notification or review of any information posted on a personal entry in a networking site? Guidance on how to connect the personal site to the journal would also be helpful.

Q. How do you see your journal developing over the next 10 years? Do you see major shifts in the use of journals in the future?
A. This is an important and difficult question. For JASTP in particular, I am looking forward to a continued improvement in the Impact Factor. For journals as a whole, the future depends very much on how they adapt to the open-access environment, as well as the tendency to electronically access separate articles from the journal. Journals have always been seen as an archiving medium. That role will remain. Similarly, I expect citation rates, and indices derived from citation rates, to continue as an important aspect of an individual’s promotion throughout their career. The standing of a journal is also important when assessing an individual’s publication record. But these are all based on the historical model of publication in scientific journals.

Adapting to the changing publishing environment is essential if the journals are also to be considered as the active source, rather than simply an archival record. Again, this is related to the adoption of an open-access environment, which in turn will require journals to adapt how publication charges are assessed.

Q. Do you have any tips or tricks to share with your fellow editors about being a journal editor?
A. The most basic tip, which I confess to not always following myself, is to find time each day to clear out the “Editor ‘to-do’ List” on the Editor page. This can back up quickly.

I don’t have any special tricks that I use during the editorial process. I scan the submitted articles, and always compare reviewers’ comments with the content. I don’t see any shortcut for that process.

I do wish I could find a trick that would enable me to find reviewers more quickly, as that is usually the most time-consuming aspect of the process.


The importance of author education



While increasingly effective tools to detect plagiarism and duplicate submissions may prove a strong deterrent to errant authors, there is another vital element required for any ethics armory – education.

At Elsevier, we are keen to ensure that those new to the academic community clearly understand the ethics standards necessary when compiling and submitting a manuscript.

We know this is an approach you also find important – a quick glance through the answers submitted to our quarterly Editor Feedback Program shows how highly you rate ‘supporting authors’.

In this article, we will explore two of the initiatives we have introduced to ensure students and young researchers can access this training; the Ethics in Research & Publication program and Publishing Connect author and reviewer training workshops.

Ethics in Research & Publication Program*


Early in 2012, a team of Elsevier colleagues gathered to work out how best to supplement the author training opportunities already on offer through projects such as Publishing Connect. The team, comprising employees with expertise in publishing ethics, author communications, and editor and librarian relations, took as one of its first steps the assembly of an independent panel of experts well versed in ethics issues. The result of this collaboration was Ethics in Research & Publication, an interactive website and program that emphasizes the individual researcher’s contribution to advancing science through integrity and good ethical standards. It also highlights the impact misconduct can have on the science community as a whole and on one’s career.

The program has a clear overarching message for its target audience – Make your research count. Publish ethically.

To make this message resonate, the team looked for creative ways to address the concerns of young researchers while conveying the wisdom of those who have been in their shoes.

Current Ethics Advisory Panel

Dr David Rew, Medical Subject Chair, Scopus Content Selection and Advisory Board and Consultant General Surgeon with Southampton University Hospitals, UK.

Professor Alexander T “Sandy” Florence, Editor-in-Chief, International Journal of Pharmaceutics and Emeritus Professor of Pharmacy, University of London.

Professor Margaret Rees, Secretary of the Committee on Publication Ethics (COPE), UK; Editor-in-Chief, Maturitas; and Emeritus Reader in Reproductive Medicine in Oxford.

For more on the panel, visit the website’s Experts’ Corner.

A major channel for the program has been the interactive website, which has been liked more than 450 times on Facebook and has received a lot of positive attention on Twitter as well. Features include:

  • An interactive quiz to test your ethics IQ
  • A toolkit with downloadable fact sheets and materials that answer the question, ‘What should you do to avoid misconduct in specific situations?’
  • A recorded version of a live webinar about publishing ethics
  • Interviews with victims of misconduct
The Ethics in Research & Publication website.

According to Catriona Fennell, Director of Publishing Services for STM Journals at Elsevier and one of the main drivers behind the program, “ethical issues are a shared problem for all involved in research and publishing. We felt our strongest impact would be in providing the tools to help researchers learn the ‘rules’ and how to comply with them.”

The program was launched with a series of workshops at the 2012 Euroscience Open Forum (ESOF) in Dublin, given by past advisory panel member, Ole G Evensen. In Evensen’s words: “Our goal is simple: to educate students on publishing ethics so well that no one can ever claim ‘I didn’t know better’.”

Ole Gunnar Evensen with Catriona Fennell at the Euroscience Open Forum conference in Dublin.

The interactive element of the program has continued with webinars (organized under the Publishing Connect umbrella) in September 2012, and January 2013, which together have recorded more than 1,500 views. During the last webinar there was live tweeting with the hashtag #PubEthics – a good example of the important role social media has come to play in the program. The webinars have been well received, with 97 percent of attendees agreeing they were satisfied, and 96 percent of attendees saying they would attend future webinars.

Throughout 2013, work has continued with further updates to the website – more webinars are also planned for the future.

Examples of live tweets posted during the latest ethics webinar.


Publishing Connect workshops and webcasts

Since the Publishing Connect program for authors and reviewers was launched in 2006, Elsevier publishers and journal editors have jointly hosted training workshops at hundreds of institutions and conferences worldwide.

While workshop topics span the full publishing process – from applying for funding to writing and submitting a manuscript – there is no doubt a core element of many events has been the module on ethics and plagiarism.

Addressing the issue at a grass roots level not only underpins Elsevier’s goal of supporting future authors on a wide range of training needs, but works towards achieving prevention rather than a cure.

In 2012, there were 350+ global workshops

Top 5 countries:

1. USA
2. China
3. Germany
4. India
5. UK

The publishing ethics issues covered in these workshops include:

  • Establishing authorship: definitions, corresponding authors, gift/ghost authorship
  • Handling authorship disputes: the role of the author and the editor
  • Plagiarism: definitions, allowances, understanding the ethical boundaries, detection technology, correct citation practice
  • Author responsibilities: originality, submission, conflicts of interest, fabrication, falsification, consequences

Research & Publishing Ethics Crib Sheet.

We recently created a new resource for early career researchers called Publishing Crib Sheets. These free-to-download posters include one entitled Research & Publishing Ethics, which contains information on types of authorship, handling disputes, what constitutes plagiarism and how it is detected, together with the key responsibilities of authors and the consequences of misconduct.

During or after each Publishing Connect workshop, participants are asked to complete a short survey. Results for 2012 show the workshops are delivering a much-needed service, with 94 percent of participants agreeing that they found them helpful. Additionally, 81 percent agreed that ‘Attending this seminar increased my understanding of publishing ethics’.

If you are interested in holding a Publishing Connect workshop at your institution, please contact your publisher for further information.

Publishing Connect training webcasts

The Publishing Connect program was recently extended to include bite-sized online training webcasts. Each webcast is up to 15 minutes long and can be viewed in the Publishing Connect training webcasts library. The latest additions to the channel are three new webcasts on research and publishing ethics and author responsibilities – more are in the pipeline. Since January 2012, the series has collectively garnered more than 280,000 views.

* This section is based on the Elsevier Connect article How to avoid misconduct in research and publishing.

Author biographies

Dr Inez van Korlaar
DIRECTOR OF PROJECT MANAGEMENT
Inez (@InezvKorlaar) joined Elsevier in 2006. After three years in publishing, she moved to the marketing communications department of STM Journals. In her current role she is responsible for global marketing communication projects, which include outreach to researchers in their role as authors. She has a PhD in health psychology from Leiden University in The Netherlands and is based in Amsterdam.

Hannah Foreman
HEAD OF RESEARCHER RELATIONS
Hannah joined Elsevier in 2007 as Marketing Communications Manager for journals in Physics and Astronomy. With more than 10 years’ experience in communications and relations roles she now leads the Researcher Relations team in Amsterdam. This team focuses on delivering information innovatively to editors, authors and reviewers of Elsevier journals, together with ensuring that Elsevier maintains its close partnerships with these vital communities. Hannah has a professional and academic background in European business and speaks four languages.


Talking to the media – who is responsible?



"My first thought is usually whether it is even appropriate for me to respond on behalf of the editor.  The answer here, of course, is that it depends."

Authors, editors, Elsevier…we all love the media when they want to write a positive, straightforward story about a new research finding that promotes a particular journal.

As an editor, you are probably proud of your role in deciding to publish the article, and welcome any corresponding increase in article submissions, citations and journal reputation that the added attention brings. Those calls from the media are always a pleasure to take and are usually redirected to the article authors who are best placed to answer questions about their research.

But what about when the media focus on something that went wrong? Or an issue that is complicated and not likely to reflect favorably on your journal?  Those calls usually pertain to retractions and publishing ethics, and more often than not they go to editors. They’re not as much fun. Some of those calls come straight to me at Elsevier, and whether or not they’re fun isn’t my concern. My first thought is usually whether it is even appropriate for me to respond on behalf of the editor. The answer here, of course, is that it depends.

Our approach

We begin with the belief that while the publisher is responsible for setting the aims and scope of a particular journal, editors are responsible for the journal’s contents. That means you are accountable for the vast majority of articles that don’t raise any particular questions of impropriety, but it also means you are accountable for the very rare articles that do. So, when a reporter is looking for further information on how a journal handled a particular paper, the journal’s editor is the primary, authoritative source.

We at Elsevier are here, however, to support our editors, and my team is happy to lend that support when it comes to managing media inquiries. There are also situations where we recommend that you pass the media inquiry to us to handle (always in tandem with the publishers). Here are some of the questions we ask when deciding who the appropriate person is to respond.

  • Is it an ongoing investigation? Although we know you would probably provide the same response that we are likely to, i.e. “it would be inappropriate for me to discuss an investigation that hasn’t been concluded”, these inquiries are still best referred to Elsevier.
  • Was Elsevier a key contributor to the decision? Retractions, for example, are usually initiated by the authors, though sometimes by editors without the author’s consent. In either case, Elsevier has a retractions committee that approves each editorial decision to retract. However, when it comes to communicating that decision to the journal’s community of authors, in most cases it is the authoritative voice of the editor they want to hear.
  • Are there any legal implications to responding? Sometimes, in highly charged cases, there could be either the existence, or threat, of legal action. These cases are always best referred to Elsevier so we can assume liability.
  • Does the issue span more than one journal? For example, a wide range of titles were affected by the recent ‘faking’ of reviewer identities in EES, our editorial submission system. In these types of cases, any media inquiries an editor receives should be referred to Elsevier, even if the question is about a paper in that editor’s journal.

Our best advice would be that you should always talk to your publishing contact about the inquiry; together you can decide whether or not Elsevier’s corporate media relations team should be involved. We can work together to make sure Elsevier, you as the editor, the reporters and the journal community at large are best served by receiving the most accurate information from the most appropriate source.

View Reller’s previous Editors’ Update article, Watching Retraction Watch, to discover what a new breed of journalist means for transparency and public trust in science.

Author biography

Tom Reller
VICE PRESIDENT AND HEAD OF GLOBAL CORPORATE RELATIONS
Reller (@TomReller) leads a global team of media, social and web communicators. Together, they work to build on Elsevier's reputation by promoting the company's numerous contributions to the health and science communities. Reller directs strategy, execution and problem-solving for external corporate communications, including media relations, issues management and policy communications, and acts as a central communications counsel and resource for Elsevier senior management. Additionally, he develops and nurtures external corporate/institutional relationships that broaden Elsevier's influence and generate good will, including partnerships developed through The Elsevier Foundation.


How CrossCheck can combat the perils of plagiarism

At Elsevier, we receive around a million articles per year for publication in our journals. Unfortunately, a small percentage fails to meet our ethics guidelines and nearly 50 percent of those cases are suspected plagiarism.

To help address this obvious pain point for our editors, in 2008 we joined CrossCheck™, a collaboration between major publishers and CrossRef® to prevent plagiarism, simultaneous submission and multiple publication. That enabled us to incorporate into our editorial workflows iThenticate, the software that powers CrossCheck.

For many journals, this software is now indispensable – more than 4,000 editors at 800 Elsevier journals have iThenticate accounts, and editor usage of the software is up 41 percent on last year. We expect that the upcoming integration of iThenticate into the Elsevier Editorial System (EES), which will make it possible to automatically run English-language submissions through the software, will see that usage continue to rise. The integration is currently being piloted and the EES team aims to roll it out to all journals by the beginning of next year.

Features of iThenticate

  • Prevents plagiarism by detecting textual similarities which could indicate misconduct.
  • Compares full-text manuscripts against a database of 38+ million articles from 175,000+ journals, books from 500+ publishers, and 20+ billion webpages.
  • Use can be tailored to meet a journal’s needs: screening at the submission phase, pre-acceptance phase, or on an ad-hoc basis when allegations are raised.

The main function of iThenticate is to identify textual overlap between a manuscript and CrossCheck’s growing database of published works and internet sources. Such software can only be as good as the database it uses, and this is a large part of the reason iThenticate is so successful – CrossCheck’s database is arguably the most complete and up-to-date of its kind, with major publishers and societies contributing full-text content to it.

Caption: The left pane shows an uploaded document while the right pane highlights sources in the CrossCheck database found to have overlapping text. A quick visual scan of the Similarity Report is usually the first step in analyzing the results. iThenticate currently accepts a wide range of file types, now up to 40MB in size: DOC, DOCX, XML, TXT, PDF, HTML, WPD, RTF.
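iThenticate’s matching technology is proprietary, but the basic idea of flagging shared word runs between a manuscript and an indexed source can be illustrated with a toy sketch. The 10-word minimum mirrors the report option (described below) for ignoring short, standard phrases; everything else here is an illustrative assumption, not iThenticate’s actual algorithm.

```python
def overlapping_spans(manuscript, source, min_words=10):
    """Return word runs of at least min_words shared by two texts.

    A toy illustration of textual-overlap screening, not iThenticate's
    real matching: production systems index millions of documents and
    cope with rewording, citations and formatting.
    """
    a = manuscript.lower().split()
    b = source.lower().split()
    # Index every min_words-long run of the source for O(1) lookup.
    source_grams = {tuple(b[i:i + min_words])
                    for i in range(len(b) - min_words + 1)}
    # Report each manuscript run that also appears in the source.
    return [" ".join(a[i:i + min_words])
            for i in range(len(a) - min_words + 1)
            if tuple(a[i:i + min_words]) in source_grams]
```

Raising `min_words` is exactly the noise-reduction trade-off discussed below: longer minimum matches mean fewer, more meaningful hits in the report.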


Prof Claes Wohlin

Editor-in-Chief of Information and Software Technology, Professor Claes Wohlin, has been using iThenticate since 2010. He said: “iThenticate helps in identifying textual similarity, but it is very important that the editor uses a sound judgment on the similarities found. It depends very much on whose text is reused and in which part of the paper. There’s a big difference between similarities in the research methodology descriptions and the actual research findings.”

What’s new in iThenticate

Based on your feedback, recent releases have improved functionality. For example, a common complaint was that short, standard phrases in the field could add noise to the Similarity Reports. Since May 2013, users can specify a minimum length for individual matches (e.g. greater than 10 words), which makes the reports easier to interpret and analyze. The latest release, on 24 September this year, lets users exclude the Abstract or Materials and Methods sections.

A new viewing mode, Document Viewer, retains the layout of the original document (including figures and equations), making it more straightforward to spot where the overlap is and navigate through the document efficiently. The results from this mode can also be saved and printed to simplify sharing between editors.

A frequent request from editors was to integrate iThenticate with EES to minimize the time needed to upload the files to the software.  We are pleased to report that by the beginning of next year we expect EES submissions to be automatically run through the software. EES will provide a direct link to the full CrossCheck report for each submission.

It’s encouraging to see that journals adopting a screening policy often observe an increase in desk-reject rates and faster decision times, along with an improvement in the quality of papers sent out for review. For example, at Journal of Materials Processing Technology, thanks to the huge efforts of a strong and dedicated editorial team, desk rejections for scope, quality and plagiarism are now at 78 percent, while editorial times from submission to first decision fell from 4.8 weeks in 2009 to 3.5 weeks in 2012.

Prof Richard Aron

Use of iThenticate can also lead to other, less obvious benefits. Journal of Mathematical Analysis and Applications Editor-in-Chief, Professor Richard Aron, has found that: “iThenticate helps not only in identifying plagiarism, but also in suggesting possible referees that have been overlooked, or at least not mentioned, in the citations.”

If you don’t have an iThenticate account but would be interested in benefitting from this service, please speak to your publishing contact.

More information on plagiarism detection can be found in PERK (Elsevier’s Publishing Ethics Resource Kit).

Tips for interpreting iThenticate results

  • Human interpretation is crucial to differentiate between:
    • paragraphs or sentences copied from properly referenced sources;
    • text copied from the author’s previous works (often in the Methods section); and
    • paragraphs or sentences copied from improperly or unreferenced sources.
  • Similarities discovered in the Results/Discussion sections can be more concerning than those found in Intro/Methods.
  • You should become suspicious if you discover:
    • Similar strings of sentences or small paragraphs. One may not be an issue, but several could signify a problem.
    • A couple of paragraphs containing identical material. This may indicate improper reuse and should be carefully checked.
    • As much as a full page of matching material. Proceed with extreme caution!

Ethics cases can be less obvious than they appear so whenever in doubt, check with your publishing contact to make sure you follow due diligence in any accusation of research or publishing malpractice.

The pie chart shows the types of ethics cases reported at Elsevier in 2012, as per figures submitted to STM, the International Association of Scientific, Technical & Medical Publishers.


Author biographies

Laura Schmidt
PUBLISHER, MATHEMATICS
Laura joined Elsevier in 2010 as a Managing Editor for a physics journal. She is currently a publisher for mathematics journals, and frequently works with editors to support and assist them in handling plagiarism and other misconduct cases.  Earlier, she held a postdoctoral research position at the University of Twente in The Netherlands after receiving her PhD in Physics from the University of Chicago in 2008.

Gaia Lupo
PUBLISHER, INDUSTRIAL AND MANUFACTURING ENGINEERING
Gaia joined Elsevier in 2011 as a Managing Editor after graduating from the University of Perugia in Italy with a PhD in Mathematics. Gaia is currently working as a publisher and is responsible for a portfolio of 16 journals across the areas of manufacturing processes and systems. Her role includes defining and implementing journals’ long-term strategies and being the primary contact for editors seeking advice on publishing and ethics issues.


Making the most of your COPE membership

In 2008, all Elsevier journals were enrolled in the Committee on Publication Ethics (COPE), so editors would have an alternative information resource when faced with research misconduct cases. In this interview, current COPE Chair, Dr Virginia Barbour, discusses recent changes to the organization and outlines some of the benefits that membership can bring.

When a handful of medical editors set up the Committee on Publication Ethics (COPE) back in 1997, they hoped that pooling their knowledge would help them tackle the ethics cases they were witnessing on their journals.

Fast forward 16 years and COPE can claim more than 8,700 members spanning a variety of disciplines across the globe.

While the organization has undergone tremendous change – particularly over the past five years – that original goal of editors offering their peers non-judgmental advice remains central to all COPE’s activities, says current Chair, Dr Virginia Barbour.

She explained: “COPE acted as a sort of support group for those early members and that really hasn’t changed. COPE provides the resources so that editors can make their own decisions – we aren’t here to tell them what to do.”

COPE at a glance:

“COPE provides advice to editors and publishers on all aspects of publication ethics and, in particular, how to handle cases of research and publication misconduct. It also provides a Forum for its members to discuss individual cases.

COPE does not investigate individual cases but encourages editors to ensure that cases are investigated by the appropriate authorities (usually a research institution or employer).” *

* Taken from the About COPE page on the organization’s website.

Dr Barbour became aware of COPE in 1999, when she was working on The Lancet in the role of Molecular Medicine Editor – The Lancet Editor-in-Chief, Richard Horton, was one of COPE’s founding members.

In 2004, she left to join the Public Library of Science (PLOS) and was invited to join COPE’s Council. Dr Barbour said: “At that time, we were launching PLOS Medicine. Before our first paper was published we encountered some ethics issues so I realized COPE’s help would be important.”

At that stage, COPE still had a fairly relaxed structure, with no formal constitution. In the years that followed, membership expanded, a constitution was established, internal communications evolved, and the Council became more global. A group of officers was appointed (all voluntary) – the Chair, a Vice Chair, Treasurer and Secretary – and paid staff were added. These have all become crucial to the smooth running of what is now quite a complex organization.

Since taking over the reins as Chair 18 months ago, one of Dr Barbour’s key aims has been to increase that internationalization. She said: “Until recently, London was the location for all our quarterly Forum meetings (where cases submitted to COPE are discussed). One of the first things I did was to hold two Forum meetings by webinar – opening up the opportunity for all COPE members to attend, wherever they are based.  The success of the virtual Forums has been such that we have decided to hold all of our quarterly Forums by webinar.  We will also be holding workshops around the globe where members can meet in person to discuss cases and publication ethics issues.  We feel that it is important to retain that personal contact with our members, as well as opening up our services to more of our global membership.”

Another two important steps have been the introduction of online consultation sessions (still in early testing) and an International Advisory Group.

The online sessions are designed to supplement the quarterly Forums; they will be held on a regular basis, dictated by the needs of members. Dr Barbour said: “Cases can be submitted in the usual way (via the COPE website) and we will post them on a secure section of the site. We will then hold a two-hour session where anyone from the Council can login and comment on them. As with the Forums, a written summary of feedback will be provided to the submitting editor.”

The International Advisory Group, which is in the process of being launched, comprises a worldwide panel of individuals experienced in publication ethics. Dr Barbour explained: “Although our current Council is global and very active, by necessity it can’t cover every area of the world. To remedy that, we have sought people who are interested in helping us think about ethics issues in their country; if an ethics issue arises that is of importance to their region, we will be able to call on their expertise.”

She added: “The kind of internationalization we are discussing can only be achieved with appropriate software and technology so another major focus has been the introduction of those tools.”

Dr Barbour has also steered COPE through a strategic review which involved having a “hard think” about what its principles should be. She said: “COPE’s primary purposes are now much clearer; we exist for the support and education of members and we enable them to solve cases – on their own. That last point is absolutely the thing that members appreciate.

“Another point I would like to make is that we are not a regulatory body – this isn’t the General Medical Council. We do get people writing to us about the behavior of editors. We do have a Code of Conduct and can work with editors to look at how they can better comply with it but we don’t feel it is our role to rule on an editor’s conduct from a regulatory point of view.”

What happens when a case is submitted to COPE

When editors approach COPE for advice on a case, the first step is to direct them to the resources on the COPE website. Dr Barbour said: “Many of the problems they experience we will have encountered before; for example, authorship issues are tremendously common.”

The website contains flowcharts to help editors make decisions on many publishing ethics dilemmas, such as ‘What to do if you suspect redundant (duplicate) publication’ and ‘What to do if you suspect a reviewer has appropriated an author’s idea or data’. There is also a database containing details of, and advice given on, the 500+ cases COPE has discussed since its inception in 1997.

Work is currently being carried out to increase the effectiveness of the database’s search function. COPE also hopes that an ongoing reclassification exercise will help it understand which areas of research misconduct are becoming more prevalent and require more focus.

Dr Barbour continued: “If an editor feels their case is not so simple, or they need a bit more support, e.g. they are a first-time editor, or are under pressure from someone, then we suggest they bring the case to one of our quarterly Forums, where it can be discussed by up to 60 editors. Another option shortly will be to submit it for an online consultation session.

“Both these avenues can lead to a divergence of members’ views – not in a combative way, but you will find one editor says ‘this has always helped me’ while another favors an alternative approach. Sometimes members will say ‘exactly the same thing happened to me’ and they can explain how they dealt with it. We collate all the feedback received and provide a written (anonymized) summary to the editor who submitted the case. It is up to them to decide on the next steps.”

Dr Barbour has seen firsthand the value that discussing a case at the Forum can bring. She explained: “At one North American Forum, a member mentioned that they were puzzled by the behavior of an author who had fabricated, or inaccurately reported, references on a paper. It sounded odd, but minor. Then another member said ‘that’s weird’ and related a similar story. It turned out that the reference fabrication was just one aspect of a wider case and between them they uncovered misconduct going back years.

“Similarly, last year at two or three Forums we heard about incidents where authors had fabricated reviewers. It was strange, nobody had ever heard of this happening before and then suddenly there were three cases in six months. That was sufficient for us to send a warning to all our members.”

Plans for the next few years include a focus on making COPE more proactive in leading debates on publication ethics. The first steps have already been taken with the introduction of a ‘discussion’ about a topical ethics issue at the start of each Forum. She said: “Our ultimate goal is to be an organization that leads the debate on publishing ethics.”

COPE – how it can help

The automatic COPE membership Elsevier extends to all its journals brings a number of benefits. As an editor you can:

  • Receive advice on individual (anonymized) cases from members of the Council and other COPE members at Forum meetings each quarter.
  • Access advice on a more regular basis via the new online consultation service.
  • Attend the COPE seminars (free for members) where real-life, anonymized cases are debated.
  • Access the recently revamped eLearning course, and invite co-editors to participate.
  • Use the ethics audit tool to see how well your journal matches COPE’s guidelines (log-in required).
  • Use the COPE logo in your journal.
  • Apply for COPE research grants.
  • Stand for election to COPE Council.
  • Receive the new eNewsletter, COPE Digest: Publication Ethics in Practice.
  • Use COPE’s range of sample letters (log in required).

You will also have access to a variety of resources available to members and non-members alike.

Contributor bio*

Dr Virginia Barbour
CHAIR OF COMMITTEE ON PUBLICATION ETHICS
Virginia Barbour joined The Lancet in 1999, becoming Molecular Medicine Editor in 2001. She joined the Public Library of Science in 2004 and was one of the three founding editors of PLOS Medicine. She was Chief Editor until September 2013 and is now Medicine Editorial Director for PLOS. She initially studied Natural Sciences at the University of Cambridge, and then Medicine at University College and Middlesex Hospital School of Medicine, London. After training in hematology at the Royal Free Hospital, London, she continued her studies at the Institute of Molecular Medicine in Oxford, before carrying out postdoctoral work in the Division of Experimental Hematology at St Jude Children's Research Hospital in Memphis, Tennessee. Alongside her role as Chair of COPE, Dr Barbour is a member of the Ethics Committee for the World Association of Medical Editors (WAME). She has participated in discussions on revisions to CONSORT statements, the QUOROM statement and was involved in the first meetings of the EQUATOR initiative.

* Dr Barbour was interviewed for this article by Linda Willems, Editor-in-Chief of Editors’ Update.


The art of detecting data and image manipulation

“… a false statement of fact, made deliberately, is the most serious crime a scientist can commit.” English chemist and novelist Charles Percy Snow (1905-1980)

Over the years, numerous initiatives have been launched to educate authors about the dangers of manipulating data and images in their journal submissions — in fact, we discuss two of our own programs in The importance of author education in this Ethics Special.

While many of these have met with success, there is no doubt this kind of behavior remains more common than we would wish. In this article, we focus on some of the tools and processes developed to detect data and image manipulation. Dr Jacques Piette, Editor of Biochemical Pharmacology, shares his eight-point plan for checking submitted Western Blots, while Dr John Dahlberg, of The Office of Research Integrity (ORI), talks about how his organization can help identify manipulation and offers insight into the techniques used by its investigators. Dr Dahlberg has also kindly offered to share with readers a program the ORI uses to identify potentially fabricated numbers; further details of which you will find below.

But most of all, we hope this article proves the starting point of a wider discussion on this topic — we want to hear your views. Please let us know your thoughts on how data and image manipulation can be better managed in your field by posting your comments below.

The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) offers the following guidance on graphics editing:

“For clarity, figures may be adjusted to better see the item being discussed as long as such changes do not obscure or eliminate information present in the original image. However any changes (brightness, contrast, color balance, etc.) must be made overall, and mentioned in the figure caption. An original image file must be retained in case it is required by the peer-review process. Do not remove or move anything in an image, or clean up an image.”

Tools to detect fraud at The Office of Research Integrity (ORI)

The Office of Research Integrity (ORI) is responsible for oversight reviews of investigations into allegations of research misconduct that involve research funded — at least in part — by agencies of the US Public Health Service.

Dr John Dahlberg

According to John Dahlberg, PhD, Deputy Director of the ORI, an oversight review is essentially a “de novo review of the institutional record” and is carried out by the ORI’s Division of Investigative Oversight (DIO): ten scientists and physician-researchers with a wide range of disciplinary backgrounds.

He said: “The pace at which they are being asked to examine research is increasing dramatically. Over the years, DIO employees have developed a number of computer-aided approaches to examining data and other research records to strengthen the evidence for research misconduct in cases where findings appear warranted.”

Here, Dr Dahlberg guides us through some of those tools and processes, many of which are available to the public, and shares some useful tips from the team.

ORI’s forensic image tools

Forensic droplets: First posted on the ORI website in 2005, droplets are small desktop applications for Adobe Photoshop that automatically process files dragged onto their icon. They are available to download from ORI’s website and allow you to quickly examine the details of a scientific image in Photoshop while reading the publication in full-text (HTML) or PDF form in a web browser.

The droplets have a variety of uses and can help you to:

  • Find out whether an image’s light or dark areas have been adjusted
  • Evaluate whether two images may have been obtained from a single source
  • Compare two images

Photoshop Actions: ORI also posted a number of Photoshop actions in 2005 and an advanced set of these has been developed for later Photoshop versions. The actions differ from the droplets in that they pause to allow the user to make a choice in how to proceed with the analysis of the image(s).

Other image tools used by the Division of Investigative Oversight (DIO):

Adobe Bridge: This software can generate libraries of images for rapid screening — images can be organized by date or file size, and the large thumbnail size allows careful viewing of each image. This is particularly useful when searching for sequential versions of files that have been modified, where they are likely to be very similar in size and their time-date stamps are closely spaced.

ImageJ: This program is available for a variety of platforms and can be freely downloaded from the National Institutes of Health (NIH) website. It is very versatile and the DIO finds it particularly useful for producing quantitative scans of gel bands, for example.
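ImageJ does this out of the box, but the quantitative scan it produces is, at its core, a column-summed intensity profile down a lane. A minimal pure-Python sketch of that core idea (assuming an 8-bit grayscale image supplied as a list of pixel rows; this is an illustration, not ImageJ’s implementation):

```python
def lane_profile(gray_rows, lane_left, lane_right):
    """Intensity profile down one gel lane.

    gray_rows: an 8-bit grayscale image as a list of pixel rows
    (0 = dark band, 255 = light background), so intensities are
    inverted before summing. Peaks in the returned profile mark
    bands; their areas give relative quantities. A bare-bones
    version of a gel-analysis scan, for illustration only.
    """
    return [sum(255 - px for px in row[lane_left:lane_right])
            for row in gray_rows]
```

Comparing the same lane's profile across purportedly independent experiments is one simple way such scans support a misconduct review.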

DIO has also discovered research misconduct in PowerPoint images by using the ‘Reset Picture’ tool. On numerous occasions, this has revealed the use of underlying images and, in several cases, those underlying images turned out to have been scanned from unrelated published papers. It is also possible to reset images in some PDF files viewed in Adobe Acrobat.

Examining questionable data

Review of questioned numbers: Research [1-4] has shown that people do a poor job when asked to write random numbers. James Mosimann, a biostatistician at ORI in the 1990s, recognized that if sets of numbers in a respondent’s notebook, purportedly transcribed from instruments such as scintillation counters or spectrophotometers, were unaccompanied by the original data printouts, they might have been fabricated. He also reasoned that while the digits on the left side of a number would be expected to be non-uniform (because they convey the results of the experiment), those in the right-most positions ought to be uniformly distributed. He developed a program to calculate chi-square values and corresponding probabilities based on the distribution of right-most digits in sufficiently large sets of numbers (>50 digits). Columns of numbers saved as a text file can be imported into his program. The DIO requires control data from similar unquestioned experiments carried out in the same laboratory. In quite a number of cases, the right-most digits of the control numbers have proved uniformly distributed while those of the questioned numbers have not.
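This is not Mosimann’s program (the ORI distributes that on request, as noted below), but the terminal-digit test it implements is simple enough to sketch. The critical-value comparison is standard chi-square machinery; the function and its name are illustrative assumptions.

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of right-most digits.

    `values` are numbers written as strings, exactly as recorded in
    the notebook. Returns (statistic, digit_count); compare the
    statistic against the chi-square critical value for 9 degrees of
    freedom (16.92 at p = 0.05). As in Mosimann's test, a
    sufficiently large set (>50 digits) is needed, plus control data
    from unquestioned experiments in the same laboratory.
    """
    last_digits = [v.strip()[-1] for v in values
                   if v.strip() and v.strip()[-1].isdigit()]
    n = len(last_digits)
    expected = n / 10  # uniform: each digit 0-9 equally likely
    counts = Counter(last_digits)
    statistic = sum((counts.get(str(d), 0) - expected) ** 2 / expected
                    for d in range(10))
    return statistic, n
```

A notebook column whose final digits cluster on a few values will produce a statistic far above the critical value, while genuine instrument readings should not.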

Although not publicly available, the ORI has kindly agreed to provide a copy of James Mosimann's program to interested editors along with instructions. It is usable in Windows through version 7, but does not load in Windows 8. If you would like to receive a copy, please contact Dr Dahlberg at john.dahlberg@hhs.gov.

Issues with spreadsheet files: ORI has made findings in several cases involving the discovery of embedded formulae in spreadsheets that calculate backwards; in other words, a formula is used to calculate the raw data value from the final claimed result. The formula in an Excel cell is visible in the formula bar when the cell is highlighted, while all of the formulae in a spreadsheet can be displayed in Excel (Microsoft Office 2007 version) by pressing “Ctrl + `” (control plus grave accent); pressing the same keys again restores the normal view. Even when formulae have been removed from a spreadsheet, the format of the numbers in the columns may be informative. Calculated values usually have long digit strings to the right of the decimal, while hand-entered values often do not; this can be revealed by setting the cell number format to ‘general’.
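The decimal-string heuristic in that last sentence takes only a few lines to sketch. The three-decimal threshold is an illustrative assumption, not an ORI rule:

```python
def looks_calculated(cell_text, max_entered_decimals=3):
    """Heuristic: flag values whose decimal expansion is suspiciously long.

    Hand-entered raw data rarely carry more than a few decimal places;
    a value produced by a back-calculating formula often carries a long
    digit string to the right of the decimal point (visible once the
    cell number format is set to 'general'). The threshold is an
    illustrative assumption, not an ORI rule.
    """
    text = cell_text.strip()
    if "." not in text:
        return False
    # Trailing zeros carry no precision information, so ignore them.
    decimals = text.split(".", 1)[1].rstrip("0")
    return len(decimals) > max_entered_decimals
```

A column of "raw" readings in which most cells trip this flag would, on this reasoning, merit a closer look at the file's history and any surviving formulae.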

Converting graphs back to spreadsheet values: ORI has frequently found it necessary to compare published graph data with raw notebook or computer data to determine if it has been reported accurately. Similarly, they can see if the published standard errors or standard deviations — expressed as error bars — are adequately reflective of the raw data. It is also often desirable to compare graphs published in different grant applications or papers that are labeled as coming from different experiments but which appear to have identical values. To accomplish this, DIO has used computer software [5] to convert images to spreadsheet values.

In several cases, ORI has determined that error bars seem improbably small, or are a fixed percentage of the experimental values. Fixed error bars at, say, 5 percent of the height of the histogram bars are not reflective of typical biological experiments, and warrant a review by the institution to determine whether the experiments were actually conducted as described.
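A first-pass screen for that fixed-percentage pattern can be sketched directly; the tolerance and the three-bar minimum below are illustrative choices, not ORI thresholds.

```python
def fixed_ratio_error_bars(means, errors, tolerance=0.005):
    """Flag error bars that are a near-constant fraction of each mean.

    Genuine biological replicates almost never yield errors that are
    exactly, say, 5 percent of every mean; a near-constant ratio
    suggests the bars were asserted rather than computed. The
    tolerance and three-bar minimum are illustrative assumptions.
    """
    ratios = [e / m for m, e in zip(means, errors) if m]
    if len(ratios) < 3:
        return False  # too few bars to judge
    return max(ratios) - min(ratios) < tolerance
```

A positive result is not proof of anything; like the terminal-digit test, it simply marks a figure as warranting institutional review.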

Forensic review of sequestered digital data: In recent years, DIO has increasingly relied on the forensic examination of sequestered digital data, particularly of hard drives. This is reflective of increasing reliance by the scientific community on storage of data on computers rather than in notebooks. Whenever possible, ORI advises institutions to acquire forensic copies of digital data, which may involve the expertise of IT personnel and special hardware and software. There are multiple advantages to acquiring image copies in comparison to simply copying files onto CDs or other media; for example, time-date stamps are accurately preserved and forensic software can recover erased files as long as they have not been overwritten by a more recently saved file.

Eight tips to prevent Western Blot manipulation 

Dr Jacques Piette

The Western Blot, a highly valuable technique for separating proteins by structure or size, is a widely used method. According to Dr Jacques Piette, Groupe Interdisciplinaire Génoprotéomique Appliquée Research Director at the Université de Liège, Belgium, and Editor of Elsevier’s Biochemical Pharmacology, it is also a method that is sadly misused, and vigilance is needed when evaluating these images [6].

Figure 1: An example of a Western Blot suffering overloading or over-exposure problems, and inappropriate gel cutting. The accompanying paper also lacked quantification and statistical analysis around the WB.

Dr Piette has highlighted eight key points to consider:

1. Pay attention to the overall quality of the Western Blot (WB). The bands should be well-marked. Do not accept a WB with fuzzy or smearing bands.

2. Do not accept a WB with over-loaded or over-exposed bands because they are impossible to quantify.

3. Request that the WBs be quantified and statistically analyzed.

4. Do not accept a WB where the samples to compare have been loaded on more than one gel.

5. Do not accept a WB without the proper loading controls:

  • They should not be over-exposed.
  • They should be made using proteins extracted under the same conditions as the analyzed proteins. For example, if a nuclear protein is analyzed, the loading control should be made with a nuclear protein and not with a cytoplasmic protein (quite often the case!).

6. Pay attention to the fraudulent use of the same loading controls in several different WBs.

7. Primary and secondary antibodies must be described in the Materials and Methods section. If the antibodies are not of commercial origin, their characterization must be described.

8. If there are doubts about a WB, do not hesitate to ask the authors to provide an image of the full WB.
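Points 2, 3 and 5 together imply a quantification step: each band of interest is normalized to its lane’s loading control before any statistics are run. A minimal sketch of that step (the function name is a hypothetical helper, not from Dr Piette’s list; intensities would come from densitometry of an unsaturated blot):

```python
def normalize_to_loading_control(target_bands, control_bands):
    """Divide each target-band intensity by its lane's loading control.

    Normalizing to the loading control cancels lane-to-lane loading
    differences before statistical analysis. Over-exposed controls
    cannot be quantified (point 2), so a zero control is an error.
    Illustrative sketch only.
    """
    if any(c == 0 for c in control_bands):
        raise ValueError("over-exposed or missing loading control")
    return [t / c for t, c in zip(target_bands, control_bands)]
```

Requiring authors to report these normalized values, with replicates and statistics, is exactly what point 3 asks editors to insist on.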

The Guides for Authors of many journals carry no information about submitting Western Blots; Biochemical Pharmacology is one of the few that do. If a journal receives a large number of Western Blots, the editor might consider amending its Guide accordingly. Any editors interested in working together on a common text on Western Blots should contact me, Anthony Newman, at a.newman@elsevier.com.

Author biography

Anthony Newman
SENIOR PUBLISHER, APPLIED BIOCHEMISTRY
In September 1987, Anthony moved from London to Amsterdam to join Elsevier. He has always been interested in ethics, and was one of the original project team that founded PERK (Publishing Ethics Resource Kit), and brought COPE into Elsevier. Apart from managing a dozen or more journals, he is also a member of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) Task Force on Ethics, where he recently published a white paper, and he has given workshops on publication ethics at various IFCC-sponsored events worldwide.
References

[1] Mosimann J E, Wiseman C V and Edelman R E, “Data Fabrication: Can People Generate Random Digits?”, Accountability in Research, 4:31-55, 1995.

[2] Mosimann J E and Ratnaparkhi M V, “Uniform occurrence of digits for folded and mixture distributions at finite intervals”, Communications in Statistics, 25(2):481-506, 1996.

[3] Mosimann J E, Dahlberg J E, Davidian N M and Krueger J W, “Terminal digits and the examination of questioned data”, Accountability in Research, 9:75-92, 2002.

[4] Dahlberg J E and Davidian N M, “Scientific forensics: how the Office of Research Integrity can assist institutional investigations of research misconduct during oversight review”, Sci. Eng. Ethics, 16:713-735, 2010.

[5] There are various programs that can be used, and although ORI cannot endorse any, it has used SigmaScanPro, sold by Systat Software Inc.

[6] Adam Marcus and Ivan Oransky, “Can We Trust Western Blots?”, Lab Times, 2-2012, p. 41.


Working together: a précis of roles and resources


When an ethics case arises on one of your journals, establishing who is responsible – and for what – may not seem clear cut. In truth, it isn’t; much will depend on the specifics of each situation including the type of case and its severity.

While creating a ‘one size fits all’ set of instructions may be a challenge, there are a few basic guidelines, which you will find outlined below. This article also highlights some of the resources Elsevier has available to support you when faced with ethical issues relating to journal articles.

Whose job is it anyway?

As Mark Seeley points out in his Guest Editorial in Part I of this Ethics Special, the journal editor plays a central role in resolving ethics allegations. The ultimate decision should be based on the editorial and scientific integrity of the article and the journal, and should not be swayed by business or legal concerns.  Elsevier also has a critical role to play in working through each ethics issue with the journal editor.  Elsevier’s role is to:

  • Guide – help the editor decide how to evaluate and investigate the allegation, and provide the best available tools and resources.
  • Support – help the editor implement editorial decisions.
  • Defend – stand behind the editor before and after the decision is made and implemented.

Elsevier has a variety of experts available to assist the editor in handling ethics disputes:

  • Journal publisher: The publisher provides first-line support on any journal matter, including questions relating to ethics.
  • Publishing ethics team: Elsevier has recently formed a small team of publishing ethics experts to support our journals. These experts will assist journal publishers and editors, expand the tools and resources available for identifying and resolving ethics issues, and support further author education aimed at preventing future ethics breaches.
  • Legal staff: Elsevier’s legal department is available to advise as needed with respect to issues of process and legal rules.
  • Corporate relations: Elsevier’s corporate relations team assists in handling media inquiries or other information requests related to ethics disputes and decisions. See Talking to the media – who is responsible? in this issue for further details.

In the end, the journal editor and publisher share a common goal: to resolve ethics issues in a way that upholds the reputation of the journal, ensures the integrity of the scientific record as reported in the journal, treats all parties fairly and efficiently, and effectively resolves the situation. Together, we will continue to do everything necessary to protect the record of science.

Importance of validating reviewers suggested by authors

There is one editor role in particular that we would like to take this opportunity to highlight. As part of the submission process for some Elsevier journals, authors are asked to suggest potential reviewers for their paper.  While this can be a great help in fields where editors struggle to find good reviewers, recently we have seen this practice lead to some unethical author behavior. There have been a few, rare cases of authors suggesting fictitious reviewers with fictitious email addresses. This ensures the authors receive the review request and gives them the opportunity to create their own reviews.

To help prevent this, it is essential that editors use Scopus to check the validity of reviewers suggested by authors.  Running through the checklist of questions below can also help to raise any potential red flags.

  • Is the institute listed against the reviewer’s name credible?
  • Is the email address provided that of an institute? A Hotmail email address, for example, may not necessarily be suspect but could be an additional alert if other information doesn’t add up.
  • Has a known reviewer suddenly switched from an institutional email address to a different one? It could be that the name is valid but the email address and reviewer account in EES are not.
  • Are there any indications of a conflict of interest? For example, a suggested reviewer having the same affiliation as the author?
  • Is the reviewer a subject expert? A quick check of the reviewer’s history in Scopus should answer this question.
  • Is the reviewer a regular co-author with the corresponding author? Again, a quick check of the reviewer’s history in Scopus should verify this.

Carrying out these simple checks will go some way towards ensuring fake reviewers are caught prior to being registered and invited.
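Two of the checklist items above — a non-institutional email domain and a shared affiliation — lend themselves to a quick automated pre-screen before the manual Scopus checks. The following is a minimal, hypothetical Python sketch (the function name and flag wording are illustrative, not part of any Elsevier system):

```python
# Illustrative pre-screen for author-suggested reviewers. It automates only
# the free-mail and shared-affiliation checks; credibility, subject expertise
# and co-authorship history still require a manual check in Scopus.
FREE_MAIL_DOMAINS = {"gmail.com", "hotmail.com", "yahoo.com", "outlook.com"}

def reviewer_red_flags(reviewer_email, reviewer_affiliation, author_affiliation):
    """Return a list of red-flag descriptions; an empty list means no automatic alerts."""
    flags = []
    # A free-mail address is not proof of fraud, only an extra alert.
    domain = reviewer_email.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_DOMAINS:
        flags.append("non-institutional email address")
    # A shared affiliation may indicate a conflict of interest.
    if reviewer_affiliation.strip().lower() == author_affiliation.strip().lower():
        flags.append("same affiliation as the corresponding author")
    return flags
```

Any reviewer for whom this returns a non-empty list would simply be routed to closer manual scrutiny, never rejected automatically.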

How can Elsevier help editors when publishing ethics cases arise?

To assist our journal editors in handling publishing ethics cases and to safeguard the scientific integrity of our journals, Elsevier makes available a wide variety of tools and resources.

PERK (Publishing Ethics Resource Kit)

Elsevier’s online PERK resource provides journal editors with a roadmap to take them through the entire process of resolving a complaint of an ethics breach. It includes:

  • General process guidelines, including a description of the decision-making process, due process for authors, when to involve the legal department, and discussions relating to potential remedies.
  • Decision trees for each type of ethics issue which guide the editor through the steps needed to resolve the ethics allegation.
  • Form letters to use in ethics-related correspondence.
  • FAQs on ethics issues and processes.
  • Links to third-party ethics resources.

COPE membership

Elsevier has enrolled its journal editors in COPE, the Committee on Publication Ethics, an independent organization consisting of more than 8,700 editors of peer-reviewed journals.  COPE provides an opportunity for editors to discuss issues relating to the integrity of the scientific record, and supports and encourages editors to report, catalogue and instigate investigations into ethics problems in the publication process.  Other resources and benefits include:

  • A COPE Forum at which editors may receive individual advice on resolving specific disputes from a committee of the organization’s members.
  • A database of all cases considered by the COPE Forum listed by the category of ethics breach (for example, duplicate submission), the advice given, and the outcome of cases - an extremely valuable resource for editors when deciding what to do in similar situations.
  • Newsletters and seminars on special topics.
  • A self-audit tool for use with individual journals.

Enlisting our journal editors in COPE ensures that they have an alternative source to refer to when dealing with publishing ethics issues. In this issue, Chair of COPE, Dr Virginia Barbour, explains in more detail how the organization can help.

CrossCheck

Elsevier journal editors may choose to employ CrossCheck™, third-party software provided by CrossRef® and iParadigms (iThenticate), which is used to discover similarities between submitted manuscripts and previously published journal articles. It draws on a database of more than 38 million articles from more than 175,000 journals produced by more than 500 participating publishers. Elsevier is working closely with the software vendor and other publishers on enhancements that will make CrossCheck an even more efficient and effective plagiarism detection tool for editors. You can read more about this valuable service in How CrossCheck can combat the perils of plagiarism in this issue.

Author education

As part of its commitment to help educate researchers and authors about scientific publishing issues, Elsevier has developed ethics training in collaboration with an independent panel of experts: the Ethics in Research & Publication program.  The program includes online education to teach the ‘ground rules’, as well as the consequences if they’re broken.  It also contains interviews, fact sheets, quizzes, and a Q&A.

In addition, Elsevier hosts more than 350 author and reviewer training workshops per year through the Publishing Connect program, as well as quarterly author webinars. You can find out more about all these projects in the article The importance of author education in this issue.

Author biographies

Linda Lavelle
GENERAL COUNSEL (NORTH AMERICA)
Linda is a member of Elsevier’s legal team, providing support and guidance for its companies, products and services. She is also responsible for Elsevier’s Global Rights-Contracts team, and is a frequent speaker on matters of publication ethics.  Linda earned her law degree from the University of Michigan and also has an MBA.  She joined Harcourt in 1995, which subsequently became part of Elsevier.  Before that time, she served in a law firm, and held a number of positions in the legal, scientific, and information publishing industry.

Mihail Grecea

Dr Mihail Grecea
EXPERT IN PUBLISHING ETHICS
Mihail holds the role of Expert in Publishing Ethics in Elsevier’s STM Journals group. He joined Elsevier in 2011 and was a Managing Editor for our journal Physics Letters A before taking on this new role in May this year. Mihail has a PhD in Physical Chemistry from the University of Leiden and before joining Elsevier was a postdoctoral researcher at the Materials Innovation Institute (M2i) and the Dutch Institute for Fundamental Energy Research (DIFFER). 


Welcome to Part II of our Ethics Special edition


When it comes to publishing ethics, one question in particular eludes a definitive answer: are these cases on the rise, or are we simply getting better at uncovering them?

In Part I of this Ethics Special, Elsevier’s Senior Vice President and General Counsel, Mark Seeley, was happy to go on record as a supporter of the hypothesis that they are indeed increasing. In his Guest Editorial he stated: "... I think the better view — one more consistent with the evidence on the number of retractions — is that we are seeing an actual rise in volume."

We were keen to hear your views and asked you to vote in our online Editors’ Update poll. Interestingly, the result was fairly evenly split – at the time of going to press, 54 percent of you had voted for a rise in publishing ethics cases, while 44 percent felt that software and experience were responsible for bringing more cases to light.

This poll has now been replaced by a question submitted by an Editors’ Update reader: Is the pressure of grants to publish driving the rise in unethical practices from authors? Please do take a few moments to visit the right-hand bar and let us know your thoughts.

One further request – the Editors’ Update website is currently running a short survey to help us improve our service. If you see the pop-up request below, I would be very grateful if you could take part. As an added incentive, for every completed survey we will donate US$2 to Book Aid International, which supports literacy, education and development in sub-Saharan Africa.
What will I find in this issue?

Before I outline the contents of this edition, I’d like to reflect on the feedback we have received on our Ethics Special Part I. Thanks to all of you who took the time to post comments – one article in particular, Bias in research: the rule rather than the exception?, sparked much discussion, while the most visited article proved to be Research misconduct – three editors share their stories.

And so, on to Part II… We begin with Working together: a précis of roles and resources, a scene-setter for the articles that follow. Find out more about the roles Elsevier and editors have to play and the range of support available to help you.

We then move to The art of detecting data and image manipulation in which we investigate a range of tools and processes. The article contains useful advice (and an offer of free software) from The Office of Research Integrity as well as an editor’s practical tips for checking Western Blots.

All Elsevier journals are enrolled in the Committee on Publication Ethics (COPE) and in Making the most of your COPE membership, current Chair, Dr Virginia Barbour, explains recent changes to the organization and outlines some of the benefits membership can bring.

For many journals, CrossCheck is now indispensable. In How CrossCheck can combat the perils of plagiarism we discover why editors wouldn’t be without this software and how integration into EES will further streamline the process of checking papers for plagiarism, simultaneous submission and multiple publication.

Talking to the media – who is responsible? asks Tom Reller, Elsevier’s Head of Media Relations. Media exposure for your journal may be welcome when the coverage is positive, but what about when journalists want to discuss publishing ethics cases? Reller outlines some scenarios and advises on whether Elsevier or the editor should respond.

If we want to reduce research misconduct incidents, education is key, and in The importance of author education we look at two of Elsevier’s early-career training initiatives – the Ethics in Research & Publication program and Publishing Connect author workshops.

Finally, no edition would be complete without our Editor in the Spotlight feature. This time, Dr Robert Strangeway, Research Geophysicist at University of California, Los Angeles, and joint Editor-in-Chief of the Journal of Atmospheric and Solar-Terrestrial Physics takes on our regular Q&A.

Looking ahead

Planning is already underway for our 2014 editions. As you can see from the Ethics Special Part I, articles written by editors are extremely popular so I would love to hear from you if there is a topic you are keen to write about. I would also welcome article ideas and any feedback you might have to share. You can email me at editorsupdate@elsevier.com.


Editor in the Spotlight – Margaret Rees of Maturitas


Maturitas was founded in 1978 and is the official journal of the European Menopause and Andropause Society (EMAS). It is also affiliated with the Australasian Menopause Society.

The journal Impact Factor of 2.844 ranks it 14th out of 77 journals in Obstetrics and Gynecology and 19th out of 46 in Geriatrics & Gerontology. Submissions to the journal are running at around 500 per year, with academic institutions annually downloading more than 381,000 articles on ScienceDirect and www.maturitas.org registering 238,765 article pageviews by subscribers. Maturitas Editor-in-Chief Professor Margaret Rees is a Reader Emeritus in Reproductive Medicine and a Fellow at St Hilda's College, University of Oxford. Her ethics experience is extensive: she is Secretary of the Committee on Publication Ethics (COPE), Chair Elect of the Association of Research Ethics Committees (AREC), and a member of the Open University Human Research Ethics Committee (HREC), the Central University of Oxford Ethics Committee (CUREC), and the Elsevier Ethics Committee.

Q. What does being a journal editor mean to you and what do you find most rewarding about this role?
A. I have been Editor-in-Chief of a journal since 1998. For the first 10 years I edited Menopause International and since 2008 I have edited Maturitas. Maturitas is a well-established international journal which allows regular interaction with cutting-edge researchers. The editors and editorial board provide an excellent multidisciplinary team. So what is most rewarding is being able to select high-quality articles for publication, thereby stimulating interest in the journal and encouraging the junior researchers, who are our future.

Q. What are your biggest challenges as Editor-in-Chief of Maturitas? How do you overcome these challenges and what extra support can Elsevier provide?
A. The biggest challenge I faced at the outset was that Maturitas was perceived as a women’s health journal, which restricted its focus. I therefore expanded its scope and broadened the editorial board, which is regularly refreshed by junior researchers who are encouraged to join. I also commission review articles to publicise the widened area of interest. Thus, Maturitas is now a multidisciplinary, international, peer-reviewed scientific journal that deals with midlife health and beyond. We publish original research, reviews, clinical trial protocols, consensus statements and guidelines. The scope encompasses all aspects of post-reproductive health in both genders, ranging from basic science to health and social care. Within the first year of my editorship, submissions increased by 50% and downloads by 70%. The Impact Factor has steadily increased and I now tweet via EMAS about selected articles. My main current challenge is to be able to publicize the journal more widely using social media: further assistance from Elsevier would be invaluable. In addition, the speed of manuscript processing and minor language editing could be improved. While I have a dedicated language editor, whom I selected, it would be inappropriate to use him for minor edits which could be undertaken by typesetters.

Q. In many areas of research, the growth of paper submissions is outpacing the growth of qualified reviewers and resulting in pressure on the peer-review system. What do you think the solution to this problem is and how do you see the peer-review process changing in the future?
A. There are various solutions to this perennial problem which I have deployed, thus reducing the average time to first decision to 21 days. In 2008 it was 61 days. Before passing on manuscripts to the editors, I routinely screen them, e.g. check for originality, look for text similarity with iThenticate, review the quality of the English and take into account ethical considerations etc. As Secretary of COPE and member of the Elsevier Ethics Committee, I am committed to high standards. I aim to provide constructive comments to authors for papers rejected outright so that poorly-presented papers, which nonetheless contain good science, can be reconsidered. This reduces the workload of both editors and reviewers. Thus, reviewers are not asked to look too frequently at papers for Maturitas. The pool of reviewers is increased by constant refreshing of the editorial board and asking junior researchers to review. Currently, we publish around 30% of unsolicited articles.

One specific problem with EES is that, historically, different journals have used different logon names and passwords. This has the potential to deter reviewers who are aware that other publishers use single logons and passwords for all their journals. I have not found the Elsevier process of consolidation smooth. It relies on reviewers hunting down their various user names and passwords. During this process some, including me, have been denied access to EES. I am now relieved to know that these problems are being resolved and users now have the option to forgo consolidating their accounts.

Q. We have observed that researchers are increasingly accessing journal content online at an article level, i.e. the researcher digests content more frequently on an article basis rather than a journal basis. How do you think this affects the visibility of your journal among authors?
A. Access of individual papers is becoming the norm, but visibility of the journal as a whole can be maintained through social media and commissioning timely, high-impact reviews.

Q. Recently, there have been many developments in open access, particularly in the UK and Europe where, back in July 2012, the UK government endorsed the Finch Report recommendations for government-funded research to be made available in open access publications. The European Commission has since followed suit, making a similar announcement for an open access policy starting in 2014. How do you see these open access changes in your country? And how do you see them affecting authors who publish in your journal?
A. Maturitas offers several open access options. Authors or their funders can choose to pay a publication fee to make an article open access. In addition, through Editor’s Choice, a new feature the journal has introduced, I summarize for the public each month the most important research published in the journal, and those papers are made openly available.

Q. Researchers need to demonstrate their research impact, and they are increasingly under pressure to publish articles in journals with high Impact Factors. How important is a journal’s Impact Factor to you, and do you see any developments in your community regarding other research quality measurements?
A. The Impact Factor remains the gold standard, but does not allow consideration of the size of the field and is inherently slow to respond to stimuli. Thus, changes in a journal’s focus will take a few years to become apparent. New metrics include paper views and Twitter as well as other social media tools which are more immediate, but this needs to be taken on board by funders as well as researchers. Elsevier could help editors by providing education on these new tools. The instruction ‘Print or share this page’ could be made more explicit. The Top 25 Hottest Articles for each quarter should be available during the following month.

Q. As online publishing techniques develop, the traditional format of the online scientific article will change. At Elsevier, we are experimenting with new online content features and functionality. Which improvements/changes would you, as an Editor, find most important?
A. Maturitas authors have the ability to provide AudioSlides. Better presentation of papers, with linked references in the margins, would help the reader. Those accessing the journal website would benefit from seeing preview content on the page, rather than having to click for it, to stimulate interest. Authors may wish to have links to their webpages. Also, the ‘most cited’ and ‘most read’ facilities should display numbers alongside each article title, rather than requiring a click on each abstract. Thus, as an editor, I would like ‘at a glance’, up-to-date tracking of citations and downloads for each article to better profile future commissioned reviews.

Q. Do you use social media or online professional networking in your role as an editor or researcher? Has it helped you and, if so, how?
A. I have recently started with Twitter and LinkedIn using the EMAS portal. Furthermore, since September 2012, I have publicized papers published in Maturitas in the monthly EMAS newsletter, which is opened by over 30,000 people; downloads increased by 70,000 in 2012. It would be helpful if Elsevier could provide a journal-focused helpline to encourage authors to use social media and have regular calls for papers.

Q. How do you see your journal developing over the next 10 years? Do you see major shifts in the use of journals in the future?
A. While journals will become more electronic, some readers prefer paper which can be read in the bath without mishap. Publicizing collections on various themes online is attractive as it allows authors and readers to see the range of a journal at a glance.

Q. Do you have any tips or tricks to share with your fellow editors about being a journal editor?
A. Becoming an editor of a journal is an exciting but daunting task, especially if you are working alone without day-to-day contact with editorial colleagues. The job requires constant attention to detail to ensure publication of high-quality, well-presented material, maintaining the integrity of the scientific record. It is important that editors act politely, fairly but firmly at all times. Speedy communication with authors and reviewers is essential. All Elsevier journals are members of COPE and its website is a source of useful advice. An editor should also act as an educator to authors and reviewers so that young researchers are encouraged to publish and be involved in the process. They are the editors of the future. An Editor-in-Chief needs to interact regularly with other editors and the editorial board, as well as journal and publishing managers. It is a team effort. Editors should not go out on a limb and difficult decisions should be made in consultation.


Lessons learnt at the 3rd World Conference on Research Integrity


The Canadian city of Montreal played host to the 3rd World Conference on Research Integrity in May this year and attendees enjoyed the luxurious problem of choosing from a packed program of fascinating sessions. This personal report is therefore not comprehensive but I hope that it gives you a flavour of the event.

As someone who deals with publishing and research ethics cases on a daily basis, I found it both depressing to hear other stakeholders report similar dilemmas and reassuring to find that Elsevier’s approach is largely aligned with that of others. However, I’d like to share with you my take on those speakers who inspired me to consider these issues on a deeper level or from an entirely new angle.

Scientists are human too!

If we are to meaningfully prevent research misconduct, we need to understand the underlying behavioral psychology that drives cheating in the first place.

Professor Fred Grinnell

Professor Fred Grinnell made the case that being a scientific maverick requires passion and a burning belief in one’s hypothesis, often in the face of an unbelieving community or elusive evidence. While such passion may drive discovery, it can also be interpreted as an inherent bias – an almost irrational belief that you are right. Professor Grinnell cited examples such as James Watson’s fascinating account of the emotional, competitive race to describe the structure of DNA: a far cry from the clinical objectivity that is often held up as the ideal for scientists.

Professor Dan Ariely made a related point about the irrationality of cheating. Behavioral economics traditionally saw cheating as a logical cost-benefit analysis: how likely am I to get caught versus how much can I benefit from the deceit. The latest research indicates that such decisions are not logical at all but emotional: all efforts to prevent or disincentivize misconduct need to recognize that emotion. For example, once someone has started to stray from the right path, they reach a crucial tipping point. If they can confess, wipe the slate clean and be rehabilitated before that point, there is hope. If the community doesn’t offer minor offenders opportunities for rehabilitation, they may feel that there is no open path back and descend into more serious offences.

He also spoke of conflicts of interest as an unavoidable fact of life: we should focus on recognizing and acknowledging them, rather than pretending they can be totally eliminated. For example, even a researcher’s most noble desire to help patients by completing a successful clinical trial can conflict with the best interests of an individual patient within that trial.

How to blow the whistle (or oboe) and still have a career afterwards

Elsevier is regularly approached by younger researchers seeking guidance on how to deal with everyday ethics issues – for example, inappropriate authorship, or perhaps they have witnessed the misappropriation of data. Our Ethics in Research & Publication program tries to help by providing them with tools to make the right decision, so I listened with great interest to two speakers who have decades of experience in ethics education.

Professor C K Gunsalus

Many speakers recommended Professor C K Gunsalus’ seminal guide: “How to blow the whistle and still have a career afterwards”. She advises young researchers to have a simple ‘script’ which they are comfortable with and ready to use should they need to confront a colleague’s unethical behavior, especially where there is a power imbalance, such as with a supervisor.

Professor Joan Sieber elegantly proposed the need for an even more subtle skill set than whistle-blowing: the art of blowing the oboe - in other words, handling ethics dilemmas in an effective but low-key manner that exposes the ‘blower’ to less personal risk.

Challenges facing publishers

Speakers from many publishing houses shared their experiences of developing publishing ethics policies in an ever-changing environment. In her talk on “Challenges of author responsibilities in collaborations”, Nature’s Dr Veronique Kiermer spoke of the two sides to authorship: on the one hand, it conveys credit but on the other, that credit comes with accountability. Elsevier’s own Mark Seeley highlighted the dilemma that editors and publishers face - sometimes we have to accept that we just don’t know what actually happened in the lab and may never know. While it is frustrating to make decisions without all the facts, we are committed to making the fairest decision based on the facts available to us.

Dr Bernd Pulverer

Dr Bernd Pulverer from the EMBO Journal cleverly presented real (anonymized) cases that initially looked extremely suspicious, only for a valid explanation to be found once the author was asked for more information. This was a perfect illustration of the need for editors to always give authors the benefit of the doubt and the right to respond: a need regularly reinforced to Elsevier by similar experiences.

For more coverage of the 3rd World Conference on Research Integrity you may wish to read the personal accounts of Liz Wager on Elsevier Connect and Alice Meadows on Scholarly Kitchen’s website.

The Conference has also led to the development of the Montreal Statement, a draft version of which is now available to view on the Conference website. It contains a series of recommendations for individual and collaborative research.

Co-chairs of the Conference: (L-R) Sabine Kleinert, The Lancet’s Senior Executive Editor and former Vice Chair of the Committee on Publication Ethics (COPE) and Melissa Anderson, Associate Dean of Graduate Education and Professor of Higher Education at the University of Minnesota.

Author biography

Catriona Fennell
DIRECTOR PUBLISHING SERVICES
Following graduation from University College Galway, Ireland, Catriona joined Elsevier as a Journal Manager in 1999. She later had the opportunity to support and train hundreds of editors during the introduction of the Elsevier Editorial System (EES). Since then, she has worked in various management roles in STM Journals’ Publishing and is now responsible for its author-centricity and publishing ethics programs.


The ethics pitfalls that editors face


When talk turns to matters of research misconduct, the author community is most commonly left to shoulder the blame. However, there are unethical practices of which journal editors may fall foul. In this article we examine two of the most common – undisclosed conflicts of interest and citation manipulation.

The complex world of conflicts of interests

It’s all too easy for the situation to arise — an editor receives a submitted manuscript to review that has links to a company or organization in which the editor has some interest. Or perhaps an editor wishes to publish their own research in their journal; in highly-specialized fields, there may be no appropriate alternative publications to choose from. In PERK, Elsevier’s Publishing Ethics Resource Kit, we refer editors to guidelines [1] issued by the International Committee of Medical Journal Editors. These advise that:

“Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues they might judge. Other members of the editorial staff, if they participate in editorial decisions, must provide editors with a current description of their financial interests (as they might relate to editorial judgments) and recuse themselves from any decisions in which a conflict of interest exists. Editorial staff must not use information gained through working with manuscripts for private gain. Editors should publish regular disclosure statements about potential conflicts of interests related to the commitments of journal staff.”

In its Code of Conduct for journal editors [2], COPE (the Committee on Publication Ethics) advises journal editors to establish systems for managing their own conflicts of interest, as well as those of their staff, authors, reviewers and editorial board members. It also recommends that journals introduce a declared process for handling submissions from the editors, employees or members of the editorial board to ensure unbiased review.

But even with these guidelines in place, deciding what constitutes a conflict of interest can be a subjective business. As a general rule of thumb, as an editor, your goal should always be to ensure that whatever action you take, it is transparent and is made free of actual or perceived bias.

Editors may also face challenges in maintaining the neutrality necessary for proper editorial decision-making. While you of course strive to be objective, you are likely very familiar with many of the individuals involved in research and publication in your field. As human beings, it may be difficult to remain completely impartial when dealing with, for example, a paper from a PhD student in your own lab, or a VIP with whom you are friendly. On the other hand, it is possible you may subconsciously disfavor submissions from individuals with whom you have had some kind of prior conflict — for example, someone who has failed to support you for funding or tenure, or someone who has rejected your own submission.

Another area of concern is when an editor succumbs to improper pressure when making an editorial decision. On a number of occasions, we have seen individuals or companies demand either that the editor publishes, or refrains from publishing, a particular paper. You should make editorial decisions based on editorial and scientific factors, not on political pressure or legal threats. It is our responsibility as publishers and journal owners to ensure that you feel confident enough to operate in this manner, by standing behind your reasonable decisions.

The ICMJE guidelines for conflicts of interest.

Citation ethics: a recapitulation of current issues*

Recent computational advances and the Internet have contributed to an increase in available content that some say has resulted in ‘information overload’ or ‘filter failure’. Scholarly communications have not escaped this trend, which is why journal performance indicators can play an important role in scientific evaluation as they provide systematic ways to compare journals. There are many different metrics available, using sources such as the relatively traditional counts of articles and citations, or the more recently available web usage or downloads. Altmetrics even make use, amongst other flavours of impact, of social media mentions. Using a variety of indicators helps yield a picture that is as thorough as possible, providing insights into the diverse strengths and weaknesses of any given journal [3,4], even though opinions on the appropriate use of journal-level bibliometrics indicators can be divided [5].

An example of the Altmetric.com donut which can be found on many Scopus articles.

Yet, journal performance metrics have long been used as prime measures in research evaluation, and many editors see it as part of their editorial duty to try to improve bibliometrics indicators and rankings for their journal [6]. The importance of these rankings, and how people perceive ethics misconduct, may be influenced by their geographical, cultural, academic, or even personal background.

As a consequence, a diversity of strategies and behaviors that endanger the validity of bibliometrics indicators has been observed:

  • Author self-citation, i.e. writing papers that cite articles previously authored, often with the intention of boosting one’s bibliometrics performance.
  • Journal self-citation, i.e. publishing papers that cite content previously published in the same journal. Journal level self-citations can be voluntary, for instance with an editorial citing several papers previously published in the journal, or coerced [7,8], for instance when an editor demands citations to previous journal content be added as a condition for publication.
  • Citation cartels [9], also called citation stacking, i.e. collusion across journals to inflate each other’s citations. This can even happen to a journal editor unknowingly - for instance, an author could also be an associate editor of Journal A, and include in their paper submitted to Journal B several gratuitous references to Journal A.
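The self-citation behaviors in this list can be quantified directly from citation data. As a hedged illustration only (the function name and the toy data are invented for this sketch, and real bibliometric services work from far richer records), a journal-level self-citation rate can be computed from a list of (citing journal, cited journal) pairs:

```python
def self_citation_rate(citations, journal):
    """Fraction of the citations *received* by `journal` that come from
    `journal` itself. `citations` is a list of
    (citing_journal, cited_journal) pairs."""
    received = [citing for citing, cited in citations if cited == journal]
    if not received:
        return 0.0
    return received.count(journal) / len(received)

# Toy data: journal A receives 4 citations, 2 of them from itself.
citations = [
    ("A", "A"), ("A", "A"), ("B", "A"), ("C", "A"),
    ("B", "B"), ("A", "B"),
]
print(self_citation_rate(citations, "A"))  # 0.5
```

An unusually high value of this ratio, relative to comparable journals, is one of the anomalous patterns that can trigger the kind of scrutiny described below; on its own it proves nothing, since some self-citation is entirely legitimate.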

These are problematic because citations are meant to provide scientifically-justifiable, useful references, which can then be used to calculate several performance indicators measuring scientific impact. Superfluous citations can distort the validity of these metrics, and that makes them unethical. Practical consequences for the journal in question can include damaged reputation: for instance, when this kind of activity results in an anomalous citation pattern, the journal runs the danger of being suppressed from the Thomson Reuters’ Journal Citation Report [10] and losing its Impact Factor for two or more years. The list of titles suppressed from the JCR seems to increase in length every year, with 66 journals for the most recent year [11]. However, we need to see this rise in context as it may not only be attributable to an increase in unethical behavior - various factors could be at play, including JCR coverage expansion or improvements to the data monitoring process.

* Note: This section is based on recent articles in Editors’ Update and Elsevier Connect.

Author biographies

Sarah Huggett
SENIOR PUBLISHING INFORMATION MANAGER
As part of the Scientometrics & Market Analysis team, Sarah provides strategic and tactical insights to colleagues and publishing partners, and strives to inform the bibliometrics debate through various internal and external discussions. Her specific interests are in communication and the use of alternative metrics such as SNIP and usage for journal evaluation. After completing an M.Phil in English Literature at the University of Grenoble (France), including one year at the University of Reading (UK) through the Erasmus programme, she moved to the UK to teach French at Oxford University. She joined Elsevier in 2006 and the Research Trends editorial board in 2009.

Linda Lavelle
GENERAL COUNSEL (NORTH AMERICA)
Linda is a member of Elsevier’s legal team, providing support and guidance for its companies, products and services. She is also responsible for Elsevier’s Global Rights-Contracts team, and is a frequent speaker on matters of publication ethics. Linda earned her law degree from the University of Michigan and also has an MBA. She joined Harcourt in 1995, which subsequently became part of Elsevier. Before that time, she served in a law firm, and held a number of positions in the legal, scientific, and information publishing industry.

References

[1] International Committee of Medical Journal Editors, “Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Ethical Considerations in the Conduct and Reporting of Research: Conflicts of Interest”.

[2] Committee on Publication Ethics (COPE), “Code of Conduct and Best Practice Guidelines for Journal Editors”. March 2011.

[3] Amin, M & Mabe, M (2000), “Impact Factors: use and abuse”, Perspectives in Publishing, number 1, http://cdn.elsevier.com/assets/pdf_file/0014/111425/Perspectives1.pdf

[4] Bollen J, Van de Sompel H, Hagberg A, Chute R (2009) A Principal Component Analysis of 39 Scientific Impact Measures. PLoS ONE 4(6): e6022. doi:10.1371/journal.pone.0006022, http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0006022

[5] San Francisco Declaration on Research Assessment http://am.ascb.org/dora/ and Elsevier’s view http://elsevierconnect.com/san-francisco-declaration-on-research-assessment-dora-elseviers-view/

[6] Krell, FK (2010), “Should editors influence journal impact factors?”, Learned Publishing, volume 23, issue 1, pages 59-62, DOI:10.1087/20100110, http://alpsp.publisher.ingentaconnect.com/content/alpsp/lp/2010/00000023/00000001/art00010

[7] Wilhite, AW & Fong, EA, (2012) “Coercive Citation in Academic Publishing”, Science, volume 335, issue 6068, pages 542–543, DOI: 10.1126/science.1212540, http://www.sciencemag.org/content/335/6068/542

[8] Cronin, B (2012), “Do me a favor”, Journal of the American Society for Information Science and Technology, volume 63, Issue 7, page 1281, DOI: 10.1002/asi.22716, http://onlinelibrary.wiley.com/doi/10.1002/asi.22716/abstract

[9] Davis, P (2012), “Citation Cartel Journals Denied 2011 Impact Factor”, Scholarly Kitchen blog, 29 June 2012, http://scholarlykitchen.sspnet.org/2012/06/29/citation-cartel-journals-denied-2011-impact-factor/

[10] http://wokinfo.com/media/pdf/jcr-suppression.pdf

[11] http://admin-apps.webofknowledge.com/JCR/static_html/notices/notices.htm#editorial_information


Research misconduct – three editors share their stories

We approached three leading editors with the following question: we know that the three most common forms of ethical misconduct are falsification, fabrication and plagiarism. Please share with us the impact these have had on submissions to your journal and how you have handled them. In their answers below they touch on the ethics challenges in their fields and how they are working to combat them.

Henrik Rudolph is Dean of Faculty Military Sciences for the Netherlands Defence Academy (NLDA). He has been Editor of Applied Surface Science - a journal devoted to applied physics and the chemistry of surfaces and interfaces - for more than eight years and Editor-in-Chief since 2011. During that time, he has handled several thousand manuscripts and become experienced both in the use of iThenticate (software for plagiarism detection and prevention) and the identification of suspicious manuscripts.

First of all, I prefer to talk about academic misconduct rather than ethical misconduct, since the latter is a much broader issue. It includes, for example, papers describing experiments that are prohibited by law (such as certain uses of lab animals) or that, because they rely on restricted materials, are impossible to repeat in a normal research environment.

The frequency of academic misconduct has been rather stable since Applied Surface Science started using EES in July 2005. Close to 10% of the papers we receive show some sign of academic misconduct, but since the total number of submissions is increasing, the absolute number is also rising. The most common issue we see is too large an overlap with previously published material, i.e. plagiarism. Cases are evenly divided between self-plagiarism and regular plagiarism. These submissions are most often identified in the editorial phase (by the managing editor or editor) and are rejected before they are sent out for review. iThenticate is an important instrument for detecting academic misconduct, but common sense is often just as important: do certain parts of the paper look much more polished language-wise than the rest? Has the spelling suddenly changed from UK English to US English? We have even had cases where authors have copied the spelling mistakes in the papers they have plagiarized. If it looks fishy it probably is fishy.
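iThenticate's matching technology is proprietary, but the basic idea of flagging textual overlap can be sketched in a few lines. The snippet below is an assumed, minimal illustration (not how iThenticate works): it scores the similarity of two passages as the Jaccard overlap of their word trigram sets, so identical wording scores 1.0 and unrelated wording scores 0.0.

```python
def ngrams(text, n=3):
    """Set of overlapping word n-grams in a text, case-folded."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(a, b, n=3):
    """Jaccard similarity of word-trigram sets: 1.0 = identical wording."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

submitted = "the oxide layer was annealed at 400 C for two hours"
published = "the oxide layer was annealed at 400 C for three hours"
print(overlap_score(submitted, published))  # high: near-verbatim reuse
print(overlap_score(submitted, "we report a novel graphene transistor"))
```

In practice a screening tool compares a submission against millions of indexed documents and reports the highest-scoring sources, leaving the editor to judge whether the overlap is legitimate (methods boilerplate, quoted text) or plagiarism.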

Another common issue is the reuse of figures from previously published work. This is much more difficult to detect, but it can often be found by comparing the figure captions. We have seen all kinds of manipulations to mislead the reader: turning the figure 90 degrees, cropping the figure differently or even showing the negative image. These issues are found by editors but also by great reviewers. I am afraid that what is detected is only the tip of the iceberg – we are simply not equipped to detect this kind of academic misconduct. Also, the human gut feeling plays an important role here: does the figure look like the rest of the figures in graphics style? Does the date imprinted in the picture (often done in our field of work) correspond with the rest of the figures, or is the figure much older than the rest? My colleague Professor Frans Habraken, who unfortunately passed away in 2011, was especially adept at detecting this kind of academic misconduct. He could spend a large portion of a day flipping, cropping and comparing figures.
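The flipping and rotating described above can, in principle, be mechanized. The toy sketch below (an assumption for illustration; the editors quoted here did this by eye, and real image-forensics tools use robust perceptual hashing rather than exact pixel comparison) represents a grayscale figure as a grid of pixel values and checks a candidate against the rotations, mirror and negative of a published image:

```python
def rotate90(img):
    """Rotate a grid of pixel rows 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def negative(img, maxval=255):
    """Invert pixel intensities (the 'negative image' trick)."""
    return [[maxval - p for p in row] for row in img]

def variants(img):
    """All four rotations of the image, its mirror, and its negative."""
    out = []
    for base in (img, [row[::-1] for row in img], negative(img)):
        cur = base
        for _ in range(4):
            out.append(cur)
            cur = rotate90(cur)
    return out

def possibly_reused(candidate, published):
    """True if candidate matches any simple transform of published."""
    return any(candidate == v for v in variants(published))

published = [[0, 50], [100, 255]]
tampered = rotate90(negative(published))   # negated, then rotated 90 degrees
print(possibly_reused(tampered, published))  # True
```

Cropping, rescaling and compression defeat such an exact comparison, which is why production tools hash local image features instead; but the sketch shows why a rotated or negated duplicate is trivially detectable once someone thinks to look.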

Reusing old figures can be (self-) plagiarism, but it can also be pure falsification. Once in a while we encounter submissions which claim to have observed certain phenomena and support this with old material. Falsification is the most difficult type of academic misconduct to detect. As long as the results look plausible and are in line with expectations we, as human beings, are willing to accept them. Requesting the raw data for all experiments in a submission would help us. While we currently don’t require this from authors, it would be a natural extension of working online. Cloud space is getting cheaper by the day and any given experiment in our field should not generate terabytes of information. This would make it possible to let statistical tools loose on the experimental results and editors and reviewers could look closely at the underlying data. Falsification is seemingly the least common form of academic misconduct, but that could be related to the difficulties in detecting it. We also enter a grey area: is it academic misconduct to leave out data or experiments that were not in line with expectations?
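One example of the statistical tools mentioned above is terminal-digit analysis, long used in misconduct investigations: the last digits of genuine measurements tend to be nearly uniformly distributed, while invented numbers often are not. The sketch below is an illustrative assumption (not a tool this journal uses); it computes a chi-square statistic for uniformity of final digits, to be compared against the critical value of about 16.92 for 9 degrees of freedom at the 0.05 level.

```python
from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of the final digits of a
    list of integer readings. Large values suggest the last digits are
    far from uniform, which can be a red flag for fabricated data."""
    digits = [str(v)[-1] for v in values]
    expected = len(values) / 10
    counts = Counter(digits)
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected
               for d in range(10))

# Invented readings that all end in 7 - grossly non-uniform last digits.
suspicious = [37, 47, 57, 27, 17, 97, 87, 77, 37, 47] * 5
print(terminal_digit_chi2(suspicious) > 16.92)  # True: flag for review
```

A flag from a screen like this is never proof of fabrication, only grounds for the editor to request the raw data, exactly as the text proposes.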

Besides the common forms of academic misconduct mentioned above, a more serious threat is emerging. The pressure on (young) academic staff to publish is huge. Often people are included as authors when they have contributed only marginally, if at all. This might sound like a rather innocent kind of altruism, but it is highly misleading for the reader and very irritating for an editor. Even worse are the cases where major contributors are left out as authors. Behind every single case reported (around 5-6 per year) there is some kind of conflict: either the author did not agree with the interpretation of the data or had a personal conflict with the corresponding author. Last, but not least, we also have cases where authors have been included without their knowledge. This is sometimes done out of gratitude - he/she helped me greatly - and sometimes as an acknowledgement of the accomplishment of an established expert in the field.

Occasionally, we see people publishing data which they were not allowed to publish, or should have asked permission to publish. In these cases there is most likely nothing wrong with the data or the submission, but the authors gave away something that was not theirs to give: the copyright to the paper. While these cases are few and far between, they always have legal aspects that are beyond the capacity of an average Editor (-in-Chief), so I would suggest the Editor (-in-Chief) contacts the legal department of Elsevier as soon as possible. Your publisher or other contact person at Elsevier will help you with this.

At Applied Surface Science, we have agreed that all cases of academic misconduct are handled by the Editor-in-Chief. This makes it simpler to stick to one line of action and ensures the Editor-in-Chief gains the experience necessary to handle the different kinds of academic misconduct we see. But no academic misconduct case is alike, so there is never a dull moment while investigating one. We keep track of the academic misconduct cases and put notes in the author profiles in EES. We even involve collaborators if there is reason to believe that it was a group issue rather than an individual rogue author. As Editor-in-Chief, I often kindly ask an author not to submit new papers to the journal for a while. This step is often taken when an author is caught for a second time. Repeat offenders are unfortunately rather common and it is therefore important to keep track of past transgressions.

Bottom line for detecting academic misconduct: don’t underestimate the stupidity of the transgressor and don’t underestimate your own ability to be misled.

Professor Ulrich Brandt is Deputy Chair of the Nijmegen Centre for Mitochondrial Disorders (NCMD) at Radboud University Medical Centre in The Netherlands. For many years he served as the Chair of the ethics committee of the Goethe-University in Frankfurt, Germany. He is also Editor-in-Chief of Biochimica et Biophysica Acta (BBA), comprising nine topical sections, and advises on many of the journal's publishing ethics cases.

The problem of publication ethics is not too big for our journal and our field, at least not if you consider the number of cases we are aware of, which is less than one per month. On the other hand, this is probably only the tip of the iceberg. While cases are sometimes identified by reviewers, most frequently they are discovered following complaints by colleagues or peers.

There is a certain upward trend in the number of cases and this probably has two causes: increased awareness and more people carrying their disputes with colleagues over to journal editors. The most common forms of research misconduct we see involve author disputes; (self-) plagiarism; manipulated figures; and improper citations. However, I am concerned about the fact that some improper things – for example, the pasting together of Western blots – are not even looked at as scientific misconduct by some people.

Recent publishing ethics cases we have dealt with include:

  • Complaints that a peer has not properly cited somebody’s work.
  • Complaints that a person who produced data presented in the paper was not properly acknowledged as an author or did not authorize the publication.
  • Repeated use of the same Figure panels – but with different labeling. We’ve also seen suspiciously similar data between different Figures (bands in blots, curves, etc…).
  • Self-plagiarism by publishing the same data in two languages without proper citation of the first publication.

The editor is not usually in the position to investigate these cases and therefore – except in clear cases of misconduct – can only moderate between the parties involved. If it can help to clarify the situation, we should confront the authors with the allegations and ask for original data. However, once there are indications of serious scientific misconduct, it is time to inform the organization of the corresponding author and ask for an investigation of the case. The verdict reached by the organization in question can help to inform your decision-making.

I think it is important to avoid getting involved in personal disputes, and to ignore anonymous complaints unless they are severe and immediately appear justified.

Apart from picking good and knowledgeable reviewers, there is little that can be changed in the peer-review system that would help with this problem. I don't think that ideas like publishing the reviews and reviewers' names of accepted papers will be helpful. Authors should know that their papers may be checked by anti-plagiarism software, because this will have a good deterrent effect.

I have found the resources Elsevier has available useful, for example, membership of COPE (the Committee on Publication Ethics) and PERK (the Publishing Ethics Resource Kit). Also, publicizing that Elsevier journals and editors are actively involved in such activities will make us less attractive to potential bogus authors.

Overall though, the problem of scientific misconduct cannot be solved by the journals. It is often a matter of the culture within a given scientific community.

Ben Martin is Professor of Science and Technology Policy Studies at the University of Sussex and an Editor on the journal Research Policy, which explores policy, management and economic studies of science, technology and innovation. He recently authored an extended editorial for his journal entitled Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment [1]. It discusses whether peer review is continuing to operate effectively in policing research misconduct in the academic world.

In my field, the problem of research misconduct is substantial and it is growing – perhaps that is also because we are becoming better at uncovering these cases. Typically, the cases we see involve self-plagiarism, redundant publication or duplicate submissions.

They are normally identified by alert reviewers, sometimes by editors, and occasionally with the benefit of information on the 'grapevine' from editors of other journals who have encountered problems with a particular individual.

The role of the editor in these cases is to oversee the process of investigation (including ensuring all the facts are double-checked independently), ask the author(s) to respond, and decide on the outcome and appropriate sanction.

I find following the COPE 'flowcharts' useful. I also consult the discussions of previous similar cases on COPE’s website. It is also important to check each step with other editors and with Elsevier and avoid the trap of becoming too upset by misbehaving authors – the danger is that you will then overreact.

If we want to solve these problems we need the academic community to be willing to discuss them openly - particularly about where the line between acceptable and unacceptable research behavior should be drawn. We also need more systematic training of young researchers with regard to such matters (what the rules are, what to do if they spot misconduct, the role of referees, editors and publishers etc…).*

* Note from Ed: In the November Part II of this Ethics Special, we will take a closer look at some of the activities already underway at Elsevier to help train early career authors and reviewers.

References

[1] Ben R Martin, “Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment”, Research Policy, Volume 42, Issue 5, June 2013.


Bias in research: the rule rather than the exception?

Dr Kevin Mullane and Dr Mike Williams, two of the editors of the Elsevier journal, Biochemical Pharmacology, discuss some of the causes and prevalence of bias in the fields of biomedical research - and the implications for the wider research community.

As the primary purpose of scientific publication is to share ideas and new results to foster further developments in the field, the increasing prevalence of fraudulent research and retractions is of concern to every scientist since it taints the whole profession and undermines the basic premise of publishing.

While most scientists tend to dismiss the problem as being due to a small number of culprits - a shortcoming inherent to any human activity - there is a larger issue on the fringes of deception that is far more prevalent and of equal concern, where the adoption of certain practices can blur the distinction between valid research and distortion – between "sloppy science", "misrepresentation", and outright fraud [1].

Bias in research, where prejudice or selectivity introduces a deviation in outcome beyond chance, is a growing problem, probably amplified by:

  • the competitive aspects of the profession with difficulties in obtaining funding;
  • pressures for maintaining laboratories and staff;
  • the desire for career advancement (‘first to publish’ and ‘publish or perish’); and, more recently,
  • the monetization of science for personal gain.

Rather than being "disinterested contributors to a shared common pool of knowledge" [2], some scientists have become increasingly motivated to seek financial rewards for their work through industrial collaborations, consultancy agreements and venture-backed business opportunities; even to the exclusion of concerns regarding the accuracy, transparency and reproducibility in their science.

Bias tends to be obscured by the sheer volume of data reported. The number of publications in Life Sciences has increased 44% in the last decade, and at least one leading biomedical journal now publishes in excess of 40,000 printed pages a year. Data is generally viewed as a "key basis of competition, productivity growth...[and]... innovation" [3], irrespective of its conception, quality, reproducibility and usability. Much of it, in the opinion of Sydney Brenner, has become "low input, high throughput, no output science" [4].

Indeed, while up to 80% of research publications apparently make little contribution to the advancement of science - "sit[ting] in a wasteland of silence, attracting no attention whatsoever" [5], it is disconcerting that the remaining 20% may suffer from bias as reflected in the increasing incidence of published studies that cannot be replicated [6,7] or require corrections or retractions [8], the latter a reflection of the power of the Internet.

Categories of bias

Although some 235 forms of bias have been analyzed, clustered and mapped to biomedical research fields [9], for the purposes of this brief synopsis, a cross-section of common examples is grouped into three categories:

1. Bias through ignorance can be as simple as not knowing which statistical test should be applied to a particular dataset, reflecting inadequate knowledge or scant supervision/mentoring. Similarly, inappropriately large effect sizes are frequently observed when the number of animals used in a study is small [10-13], only to disappear in follow-up studies that are more appropriately powered, or when replication is attempted in a separate laboratory; this may reflect ignorance of the importance of determining effect sizes and conducting power calculations [11,12,14].

The concern with disproportionately large effect sizes from small group sizes has been recognized by the National Institutes of Health (NIH) [15], which now mandates power calculations validating the number of animals necessary to determine if an effect occurs before funding a program. However, this necessitates preliminary, exploratory analyses replete with caveats, which might not get revisited, and is not a requirement with many other funding agencies. Too often studies are published with the minimal number of animals necessary to plug into a Student's t-test software program (n=3) or based on 'experience' or history. Replication of any finding as a standard component of a study is absolutely critical, but rare.
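The power calculations the NIH mandates can be approximated in a few lines. As a sketch under the usual normal-approximation assumption for a two-group comparison (the function name is ours), the sample size per group needed to detect a standardized effect size d (Cohen's d) at significance level alpha with a given power is n = 2((z_{1-alpha/2} + z_{1-power})/d)^2:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for a two-sample
    comparison to detect standardized effect size d (Cohen's d)."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

for d in (0.5, 1.0, 2.0):
    print(d, n_per_group(d))  # 0.5 -> 63, 1.0 -> 16, 2.0 -> 4
```

Even an enormous effect of d = 2 requires about 4 animals per group at 80% power, and a moderate effect of d = 0.5 requires over 60, which puts the habitual n = 3 into perspective.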

2. Bias by design reflects critical features of experimental planning ranging from the design of an experiment to support rather than refute a hypothesis; lack of consideration of the null hypothesis; failure to incorporate appropriate control and reference standards; and reliance on single data points (endpoint, time point or concentration/dose). Of particular concern is the failure to perform experiments in a blinded, randomized fashion, which can result in 3.2- and 3.4-fold higher odds, respectively, of observing a statistically significant result when compared to studies that were appropriately blinded or randomized [16]. While the impact of randomization might come as a surprise, since many animal studies are conducted in inbred strains with little heterogeneity, the opportunity to introduce bias into non-blinded experiments, even unintentionally, is very obvious. It is paramount that the investigator involved in data collection and analysis is unaware of the treatment schedule. How an outlier is defined and to be handled (e.g. dropped from the analysis), or what sub-groups are to be considered, must be established a priori and effected before the study is un-blinded. Despite its importance in limiting bias, one analysis of 290 animal studies [16] and another of 271 publications [15] revealed that 86-89% were not blinded.
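Blinded randomization of the kind described above is operationally simple: treatments are shuffled across subjects, the analyst sees only neutral group codes, and the code key is held back until the analysis plan (outlier rules, sub-groups) is fixed. A minimal sketch, with invented names and labels, assuming a third party holds the key:

```python
import random

def blinded_assignment(subjects, treatments, seed=None):
    """Randomize subjects across treatments; return (blinded, key).
    `blinded` maps each subject to a neutral group code, so the analyst
    never sees treatment names; `key` maps codes back to treatments and
    is revealed only after the analysis plan is fixed."""
    rng = random.Random(seed)
    # Balanced allocation, then shuffled so order carries no information.
    groups = [treatments[i % len(treatments)] for i in range(len(subjects))]
    rng.shuffle(groups)
    code_of = {t: f"GRP-{i}" for i, t in enumerate(treatments)}
    blinded = {s: code_of[g] for s, g in zip(subjects, groups)}
    key = {c: t for t, c in code_of.items()}
    return blinded, key

subjects = [f"animal-{i:02d}" for i in range(8)]
blinded, key = blinded_assignment(subjects, ["drug", "vehicle"], seed=1)
print(blinded)   # coded labels only - safe to hand to the analyst
print(key)       # held back until un-blinding
```

The point of the design is institutional, not computational: whoever collects and analyzes the data must only ever see the `blinded` mapping.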

Another important consideration in experimental design is the control of potentially confounding factors that can influence the experimental outcome indirectly. In the field of pharmacology, at a basic level this might include the importance of controlling blood pressure when conducting evaluations of compounds in preclinical studies of heart attack, stroke or thrombosis; or the recognition that most compounds lose specificity at higher doses; but consideration might also need to be given to other factors such as the significance of chronobiology (where, for example, many heart attacks occur within the first 3 hours of waking), referenced in [30].

3. Bias by misrepresentation. Researchers are an inherently optimistic group – the 'glass half full' is more likely brimming with champagne than tap water. Witness the heralding of the completion of the Human Genome Project, or the advent of gene therapy, stem cells, antisense, RNAi, or any "-omics" – all destined to have a major impact on eradicating disease in the near term. This tendency toward over-statement and over-simplification carries through to publications. The urge and rush to be first to publish a new "high-profile" finding can result in "sloppy science" [1], but more significantly can be the result of a strong bias [17]. Early replications tend to be biased against the initial findings – the 'Proteus phenomenon' – although that bias is smaller than in the initial study [17]. It is not clear which is more disturbing: the level of bias and selective reporting found to occur in the initial studies; the finding that ~70% of follow-on studies contradict the original observation; or that it is so common and well-recognized a phenomenon that it even has a name.

A recent evaluation of 160 meta-analyses involving animal studies covering six neurological conditions, most of which were reported to show statistically significant benefits of an intervention, found that the "success rate" was too large to be true and that only 8 of the 160 could be supported, leading to the conclusion that reporting bias was a key factor [18].

The retrospective selection of data for publication can be influenced by prevailing wisdom that promotes expectations of particular outcomes, or by the benefit of hindsight at the conclusion of a study, which allows an uncomplicated sequence of events to be traced and promulgated as the only conclusion possible.

While research misconduct in the form of overt fraud [1,19,20] and plagiarism [21] is a topic with high public visibility, it remains relatively rare in research publications, while data manipulation, data selection and other forms of bias are increasingly prevalent. Whether intentional, the result of inadequate training, or due to a lack of attention to quality controls, these practices foster an approach and attitude that blurs the distinction between necessary scientific rigor and deception, and probably contribute substantially to the poor reproducibility of biomedical research findings [6,7].

Scientific bias represents a proverbial "slippery slope", running from the subjectivity of "sloppy science" [1] and lack of replication [22], through the deliberate exclusion or non-reporting of data [6,7], to outright fabrication [19,20]. Plagiarism; distortion of data or its interpretation; physical manipulation of data, e.g. of western blots [23] or NMR spectra [24], to make the outcomes more visually appealing or obvious (often ascribed to the seductive simplicity of PowerPoint and the ease of manipulation with Photoshop); and blatant duplicity in the biopharma industry in the selective sharing of clinical trial outcomes [25], with inconclusive/negative trials often not reported [26]: all contribute to the expanding concerns regarding scientific integrity and transparency.

This issue obviously increases in importance as the outcomes of investigator bias drive the expenditure of millions of dollars on research programs that are progressed based on the data presented; as inappropriate New Chemical Entities are advanced into clinical trials, exposing patients to undue risk; and as unvalidated biomarkers are promoted to an anxious and misinformed public.

Correcting bias

With the increase in bias, data manipulation and fraud, the role of the journal editor has become more challenging, both from a time perspective and with regard to avoiding peer-review bias [27]. And while journals keep the barriers high [8,28], much of the process still depends on the integrity and ethics of the authors and their institutions. It is paramount that institutions, mentors and researchers promote high ethical standards, rigor in scientific thought and ongoing evaluations of transparency and performance that meet exacting guidelines. Clinical trials have to be registered before the study can begin, with a full protocol defining the size of the study, randomization, dosing, blinding and endpoints; at the conclusion of the study, every patient has to be accounted for and included in the analysis. A proposal has been made [29] that non-clinical studies should adopt the same standards and, while not a requirement, such guidelines provide a useful rule of thumb to consider when designing any study. These topics, and their impact on the translation of research findings to the clinic, will be discussed in greater detail in an upcoming article in Biochemical Pharmacology [30].

Author biographies

Kevin Mullane
CARDIOVASCULAR EDITOR, BIOCHEMICAL PHARMACOLOGY & PRESIDENT, PROFECTUS PHARMA CONSULTING INC.
Kevin’s main role has been as a drug hunter at multinational pharmaceutical (Wellcome, CIBA-Geigy) and biotechnology companies (Gensia, Chugai Biopharmaceuticals), before becoming President and CEO of Inflazyme Pharmaceuticals. He has since been an advisor to industry, academia, foundations and VC firms, evaluating technologies and developing translational opportunities. Kevin received his PhD from the University of London.

Michael Williams
COMMENTARIES EDITOR, BIOCHEMICAL PHARMACOLOGY & ADJUNCT PROFESSOR, DEPARTMENT OF MOLECULAR PHARMACOLOGY AND BIOLOGICAL CHEMISTRY, FEINBERG SCHOOL OF MEDICINE, NORTHWESTERN UNIVERSITY, CHICAGO.
Mike retired from the pharmaceutical industry in 2010 after 34 years in drug discovery research with Merck, CIBA-Geigy, Abbott and Cephalon. He has been actively involved with the biotech industry as a consultant, SAB member and executive (Nova, Genset, Adenosine Therapeutics, Antalium, Tagacept, Elan, Molecumetics) and has published extensively in the areas of pharmacology and drug discovery. He received his PhD and DSc degrees from the University of London in an era long before e-books could be downloaded.

References

[1] Stemwedel JD, “The continuum between outright fraud and "sloppy science": inside the frauds of Diederik Stapel (part 5)”, Scientific American June 26, 2013.

[2] Felin T, Hesterly WS, "The Knowledge-Based View, Nested Heterogeneity, And New Value Creation: Philosophical Considerations On The Locus Of Knowledge", Acad. Management Rev 2007, 32: 195–218.

[3] Manyika J, Chui M, Brown B, Bughin J, Dobbs R, Roxburgh C, Byers AH, “Big data: The next frontier for innovation, competition, and productivity“, McKinsey Global Institute, April 2011.

[4] Brenner S, “An interview with... Sydney Brenner”, Interview by Errol C. Friedberg, Nat Rev Mol Cell Biol 2008; 9:8-9.

[5] Mandavilli A, “Peer review: Trial by Twitter”, Nature 2011; 469: 286-7.

[6] Prinz F, Schlange T, Asadullah K, “Believe it or not: how much can we rely on published data on potential drug targets?”, Nature Rev Drug Discov 2011; 10: 712-3.

[7] Begley CG, Ellis LM, “Drug development: Raise standards for preclinical cancer research“, Nature 2012, 483, 531-533.

[8] Steen RG, Casadevall A, Fang FC, “Why has the number of scientific retractions increased?“, PLoS ONE 2013; 8: e68397.

[9] Chavalarias D, Ioannidis JPA, “Science mapping analysis characterizes 235 biases in biomedical research”, J Clin Epidemiol 2010; 63: 1205-15.

[10] Ioannidis JPA, “Why most published research findings are false“, PLoS Med 2005; 2: e124.

[11] Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, et al., “Power failure: why small sample size undermines the reliability of neuroscience”, Nat Rev Neurosci 2013; 14: 365-76.

[12] Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG, “Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments“, PLoS Med 2013: e1001489.

[13] Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR, “Publication bias in reports of animal stroke studies leads to major overstatement of efficacy“, PLoS Biol 2010; 8: e1000344.

[14] Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, et al., “Survey of the quality of experimental design, statistical analysis and reporting of research using animals“, PLoS One 2009; 4: e7824.

[15] Wadman M, “NIH mulls rules for validating key results”, Nature 2013; 500: 14-6.

[16] Bebarta V, Luyten D, Heard K, “Emergency medicine animal research: does use of randomization and blinding affect the results?”, Acad Emerg Med 2003; 10: 684-7.

[17] Pfeiffer T, Bertram L, Ioannidis JPA, “Quantifying selective reporting and the Proteus Phenomenon for multiple datasets with similar bias“, PLoS One 2011; 6: e18362.

[18] Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, et al., “Evaluation of excess significance bias in animal studies of neurological diseases“, PLoS Biol 2013; 11: e1001609.

[19] Kakuk P, “The Legacy of the Hwang Case: Research Misconduct in Biosciences”, Sci Eng Ethics 2009; 15: 645-62.

[20] Bhattacharjee Y. “The Mind of a Con Man“, New York Times Magazine April 26, 2013.

[21] “Science publishing: How to stop plagiarism”, Nature 2012; 481: 21-3.

[22] Oransky I, “The Importance of Being Reproducible: Keith Baggerly tells the Anil Potti story”, Retraction Watch, May 4, 2011.

[23] Rossner M, Yamada KM, “What's in a picture? The temptation of image manipulation”, J Cell Biol 2004;166:11-5.

[24] Smith III AB, “Data Integrity”, Org Lett 2013; 15: 2893-4.

[25] Eyding D, Lelgemann M, Grouven U, Harter M, Kromp M, Kaiser T et al., “Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials”, BMJ 2010;341:c4737.

[26] Doshi P, Dickersin K, Healy D, Vedula SW, Jefferson T, “Restoring invisible and abandoned trials: a call for people to publish the findings”, BMJ 2013; 346:f2865.

[27] Lee CJ, Sugimoto CR, Zhang G, Cronin B, “Bias in peer review”, J Amer Soc Info Sci Technol 2013; 64: 2-17.

[28] “Reducing our irreproducibility”, Nature 2013; 496: 398.

[29] Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG, “Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research“, PLoS Biol 2010; 8: e1000412.

[30] Mullane K, Winquist RW, Williams M, “The translational paradigm in drug discovery”, Biochemical Pharmacology, 2014.

Understanding and addressing research misconduct

At what point does an author’s behavior earn the label ‘unethical’? When does taking inspiration from another cross the line and become plagiarism? Deciding what constitutes research misconduct is never easy. In this article, Linda Lavelle, a General Counsel on Elsevier’s legal team, reflects on the challenges of defining and responding to these cases, while Publishing Director, Charon Duermeijer, focuses on the roles editors and publishers should play.

Dealing with ethics issues in journal publishing demands increasingly significant amounts of time and attention from scientific and medical journal editors. Our editors tell us that the seemingly ancillary responsibility of handling allegations of ethics breaches has become incredibly frustrating and time-consuming. In fact, when we host seminars for journal editors on a variety of publishing subjects, our ethics sessions (often titled ‘Liars, Cheats, and Thieves’) are consistently the best-attended presentations, and generally stimulate more discussion than any other publishing topic.

Why are ethics situations so challenging for editors?

Why are these cases so difficult for journal editors? Well, for one thing, some ethics issues fall into grey areas. While a situation involving data fabrication is clearly an ethics breach, cases such as disputed authorship or duplicate submission may be less clear-cut. While uncredited text constitutes copyright infringement (plagiarism) in most cases, it is not copyright infringement to use the ideas of another. The amount of text that constitutes plagiarism versus ‘fair use’ is also uncertain — under copyright law, this is a multi-prong test that involves subjective analysis and balancing (in other words, guessing!) — so there is often no clear answer as to when something does in fact constitute an ethics violation.

There are also widespread misconceptions about certain ethics issues, particularly in developing countries where education relating to publishing ethics may not be so freely available. We once had a situation where a young scientist insisted that the plagiarism allegation against him was unfounded. He told us that although most of the paper was in fact a word-for-word copy of an article authored by another scientist, his submission wasn’t plagiarism because he had changed the first sentence of every paragraph.

Sometimes ethics situations pose challenges because they arise from standards and rules that are different from journal to journal. For example, rules and practices relating to disclosing conflicts of interests, or what constitutes article authorship, may vary from discipline to discipline, or even from journal to journal — there are currently no uniform standards for conflict disclosure or authorship requirements that span all of scientific and medical publishing.

Another reason behind the difficulty in dealing with these situations is that editors are often not in a position to know what has really occurred. Does the person bringing the complaint to our attention have a personal bone to pick with the author accused of the ethics breach? In one situation, we didn’t initially realize that the ‘complainant’ was the ex-wife of the ‘complainee’; in another, a professor falsely accused a former colleague who, it turned out, had not supported him for tenure. When there are complaints about who should be an author, or about unethical research practices, how can a journal editor know what really went on behind the scenes at the institution during the research and writing of the paper? An editor does not have the resources to conduct police-like investigations, nor should that be the editor’s role in these cases. Nonetheless, the lack of information often puts the editor in a quandary when trying to decide the proper resolution.

One of the primary reasons editors find these cases so frustrating is that these types of breaches deeply offend their personal conviction in the integrity of scientific publishing. And, of course, ethics breaches have the potential to damage the reputation of their journal, to which they have dedicated so much time and commitment.

For many publishers and editors, news of ethical misconduct invokes a strong response. Our first instinct may well be that it should not be tolerated and the authors involved should be punished. But is that appropriate? Is punishment always the answer? Once we start diving into the actual facts, we typically find that there are many shades of grey in allegations of ethical misconduct. This is exactly where the role of the editor comes in.

There are various ways that publishers and editors are alerted to a case. Often referees discover an issue while reviewing a paper and bring it to the attention of the editor; subsequently, the issue is flagged with us, the publisher. At first, all concerned are disappointed that such a case has happened at all – and in our journal, no less! Personally, I feel thankful that a referee (or any other whistleblower) has helped the journal to rectify the situation and protected the scientific literature from the consequences of misconduct. It probably means that the editor has picked a very good and thorough referee.

As a first step in our systematic approach, we consult the PERK (Publishing Ethics Resource Kit) guidelines and point our editors in the right direction. Whatever the issue and final outcome, though, we as a publisher need the expert, in-depth knowledge of the scientific community – in this case, the editor and referee; this is truly a team effort. Publishers are usually not equipped with the exact scientific background to understand the shades of grey. We have to rely on the scientific knowledge of true insiders, which is why editors are ultimately responsible for the assessment of alleged misconduct.

If you are an experienced editor for one of our journals, you may have already dealt with one or more ethics cases and know the process well. Some of your fellow editors may be less experienced because they are new, or have simply been fortunate that, until now, their journal has not been affected. The ethics cases we deal with at Elsevier are very diverse, and you could say that some authors have become very creative in their actions! We had a recent case where a referee noted that the discussed results were remarkably similar to those produced by his PhD student not long ago, when they were all happily working in the same group. As the investigation continues, the editor is trying to find out what really happened and whether the accusation is correct. As emotions can run high, this requires a lot of patience and persistence, but mostly a neutral eye for the situation at hand. In another recent case, the corresponding author allegedly added co-authors who may not have contributed to that particular paper; the jury is still out on that one. The recently added invitation to all co-authors to add their ORCID iD to a submission in Elsevier's Editorial System (EES) is a first step to address such issues, as it alerts people that they have been listed as co-authors on a particular paper. Previously, co-authors were not consulted at all.

On occasion, editors can understandably get nervous about who is legally responsible for making the ultimate decision, but rest assured: Elsevier can offer legal support and has insurance in the unlikely case of litigation.

At Elsevier, we handle hundreds of publishing ethics cases a year. Investigations around the actual case have to be handled very carefully and it may take some time to come to a conclusion which is fair to all parties involved (in some cases even up to a few years). When the outcome of an investigation results in retracting a paper from ScienceDirect, the retraction may come to the attention of Retraction Watch or similar websites. It could very well be that you as the responsible editor will be contacted by a journalist for comment; please feel free to consult your publisher who can help you address this in a timely manner. In Part II of this Ethics Special (publication date November), our Vice President of Global Corporate Relations, Tom Reller, will offer some advice on dealing with these situations.

As you continue to grow into your role as an editor, you will likely become more familiar with the forms these cases can take. And although we may all feel instinctively disappointed by the alleged offence, we constantly need to consider whether the punishment (if proven guilty) is appropriate and proportionate; we aim to be consistent not only within a journal but across all Elsevier journals. After all, there are many shades of grey….

Author bios

Linda Lavelle
GENERAL COUNSEL (NORTH AMERICA)
Linda is a member of Elsevier’s legal team, providing support and guidance for its companies, products and services. She is also responsible for Elsevier’s Global Rights-Contracts team, and is a frequent speaker on matters of publication ethics. Linda earned her law degree from the University of Michigan and also has an MBA. She joined Harcourt in 1995, which subsequently became part of Elsevier. Before that time, she served in a law firm, and held a number of positions in the legal, scientific, and information publishing industry.

Charon Duermeijer
PUBLISHING DIRECTOR, PHYSICS
Since Charon joined Elsevier in 2000, she has held various publishing roles for Physics. In close collaboration with many editors around the world, she has worked on improving and setting the strategy for various journals. She holds a PhD in Geophysics from the Utrecht University in The Netherlands. Prior to Elsevier, she worked at Kluwer Academic Publishers and Goldfields of South Africa. She is currently responsible for the Physics team and their journals.

Mark Seeley

Guest Editorial: Elsevier General Counsel, Mark Seeley

Ethics in society and business has many meanings — conducting one’s self in a way that respects and recognizes others and their contributions, but also the notion of compliance with our societally-sanctioned behavioral processes (including laws and regulations). Ethics in scientific and medical publishing takes these meanings a step further — by touching on the relationships within the scholarly communication process involving the journal, the relevant scientific or medical community that a particular journal serves, and society at large, publishing ethics encompasses a complex set of relationships which are mutually reinforcing. Scholarly journals have enormous prestige due to the stewardship that such journals have exhibited, over the decades, concerning the ‘record of science’ and the trust that their respective communities and society as a whole have in that record.

In one sense, publishing ethics allegations diminish that historical reputation and trust — climate change ‘deniers’ have used some controversies concerning undeclared potential conflicts of interest to suggest that the fundamental research is flawed, even though there is little evidence of substantial scientific disagreement on the broad question of the impact of industrial activities on climate. On the other hand, increasing the transparency and visibility over processes for managing publishing ethics allegations could — and should — shore up the journal’s reputation.

Society has highly unrealistic expectations of scientists, journal editors, and journals. The concept of peer review, for example, is often portrayed extremely simplistically as a ‘quality testing’ process — and consequently society expects that if an article has been through the peer-review process it should be completely sound in its calculations and methods. The reality is that peer review is about some fairly subjective points — a paper’s potential impact and importance and how the paper fits into the theoretical developments in the relevant discipline. It is also sometimes about identifying true ‘outliers’ in research results or methods, thus giving the journal and the editor an opportunity to review these in more depth. It is not a ‘Consumer Reports’ second laboratory duplicating the purported results of a particular paper for testing and quality assurance purposes. And yet, society is not wholly wrong to expect that the scholarly community — and thus the journal — should be better able to recognize fraudulent results or plagiarism and report on such violations as responsibly as possible.

The responsibility for correcting the scientific record when ethics allegations are confirmed falls largely on the journal editor and the publishing team, often with support from the relevant research funders, universities and institutions, and other investigators or peer reviewers. While Elsevier is adding a small team of publishing ethics experts, and while Elsevier legal team members are often involved in highly contentious or ‘legalistic’ matters, the expertise of the journal editor is fundamental to any significant investigation or consideration. The editor has the appropriate grounding in the particular discipline, an understanding of the relevant expertise of various research teams, and a general sense of which facts are more likely correct than not. We must always keep in mind that the standard of ‘proof’ that we look for is a general ‘more likely than not’ standard, not a deeply rigorous, criminally-oriented, ‘beyond a shadow of doubt’ standard. One reason for this standard level is that we anticipate that the relevant scholarly community, as an informed reader/consumer of journal content, will come to its own conclusions as to the merits and relevance of particular allegations and claims, assuming the requisite transparency and disclosure.

Law is the profession of skeptics, and a skeptical point of view is often useful in judging the merits of competing narratives in a publishing ethics dispute. Law is supposed to teach us to be logical, to think through the alternatives and contrary points of view, and to disregard emotion and ‘threats’. There are sometimes vague and sometimes very specific threats that are made by complainants or subjects of inquiries about resorting to legal process and formal legal complaints. These are generally quite silly. The few courts that have been asked to opine about scholarly publishing complaints have generally been respectful of the scientific process and charmingly denigrating about the ability of the legal process to improve on the underlying scientific investigation or conclusion. But my team is prepared to help when complaints like this are made or when it is otherwise deemed useful by editors or publishing team members. We are happy to help, and we are quite passionate about publishing ethics issues and process.

For me personally, I have been involved in publishing ethics policy discussions, the drafting of policy and procedural documents, individual investigations of a ‘legalistic’ nature, and service on our formal ‘retractions panel’ for more than ten years now (along with my UK-based colleague, Senior Vice President of Research & Academic Relations, Mayur Amin). I have enjoyed my discussions with editors and our publishing team members on these issues, and I like to think I have helped to contribute to good policy-setting and reasonably professional retraction processes. I’m also fairly opinionated, so I wanted to conclude this introduction with some general comments and observations.

One can make the argument that the level of ethics issues has not significantly increased, but is simply more visible now. However, I think the better view — one more consistent with the evidence on the number of retractions — is that we are seeing an actual rise in volume. The number of formal retractions as recorded on Elsevier’s ScienceDirect platform has more than doubled between 2004 and today — in fact, for 2013 it looks as if we will have close to 200 retractions, which would be five times the number we had in 2004. I believe this is due to the ramp up of the pressure to publish and occasionally the pressure to take short-cuts — and I believe this pressure is increasing across the board, including in the rapidly-developing countries of the world where scientific research is exploding. As vehicles for investigation and comparisons, I applaud the efforts of COPE (the Committee on Publication Ethics) and CrossCheck (both of which you will hear more about in Part II of this Ethics Special), but would note that they are only vehicles and not a substitute for editorial judgment.

Researchers should be allowed the benefit of the doubt, particularly at an early point in their career, and we should accept that researchers will sometimes make mistakes at this stage which they can then learn from. This is why I am leery of the notion of ‘blacklisting’ an author or research group. I do not believe that authors should be given carte blanche when it comes to ethics violations — simply that we should be careful not to rush to judgment and be careful in recommending the appropriate sanction for the relevant degree of ethics violation.

I think that not all misconduct is equal — to me, outright fraud is the most dangerous and has the most impact on the community (as it may cut off otherwise promising research areas), and fraud allegations are relatively rare. I would distinguish plagiarism that involves taking credit for someone else’s research efforts from merely copying a stray paragraph or two — the latter is certainly wrong and deserves some form of censure, but the former is more inherently improper as a destabilization of the research environment. I think arguments about authorship, and particularly the idea of one author as opposed to all co-authors being identified in a retraction notice as the ‘culprit’, are unnecessary and ultimately do not advance science. Of course, I accept that a co-author who was an equal participant in a project and a paper deserves recognition for their contribution.

Elsevier has had to address some publication ethics issues over the past several years, but I believe we have addressed them head-on and pro-actively. Last year, there was some discussion about the ‘faking’ of peer-reviewer identities in our article submission system, which we acknowledged and dealt with by improving our system (unfortunately identity ‘theft’ in this sense is difficult to prevent entirely except by improving on personal security systems in areas like passwords). We have also had public controversies about a now-retired editor who accepted many of his own authored papers for his journal, and controversies about genetically modified foods and the raising of children by same-sex couples. In all of these controversies we have worked hard to achieve the ‘right’ resolution — first by emphasizing science (if appropriate scientific and publishing processes have been followed, then our view is that we should stick with the result, even if it is not ‘politically correct’) and second by emphasizing transparency and disclosure. Again, we trust the relevant scientific community to put things into context and judge things on their scientific merits.

By sticking with science and transparency, we can give assurance to society that our publishing processes are trustworthy — not that they are perfect but that they can be relied on. Science publishing, however, is not headline-oriented, short-term journalism — it is about the long-term process of building on discoveries and theories.

I hope you enjoy this special edition as I know I will!

Mark Seeley
Senior Vice President & General Counsel, Elsevier
Chair of the Copyright and Legal Affairs Committee, STM (International Association of Scientific, Technical & Medical Publishers)

Welcome to Part I of our Ethics Special edition

Publishing ethics, research misconduct… call it what you will it has become one of the greatest challenges many journal editors face today.

In fact, a growing number of you have been moved to pen editorials on the subject – two recent examples being Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment by Research Policy’s Professor Ben Martin and Falsification, Fabrication, and Plagiarism: The Unholy Trinity of Scientific Writing by Dr Anthony L Zietman, Editor-in-Chief of the International Journal of Radiation Oncology • Biology • Physics.

As Elsevier General Counsel, Linda Lavelle, notes in this issue, “when we host seminars for journal editors on a variety of publishing subjects, our ethics sessions (often titled 'Liars, Cheats and Thieves') are consistently the best-attended presentations….”

With research misconduct clearly such an area of concern, we have devoted the final two issues of 2013 to the topic.

This edition, Part I of our Ethics Special, moves from a broad overview of the current publishing ethics landscape to a more detailed examination of aspects such as bias and conflicts of interest. Part II, due for publication in early November, will take a closer look at the resources offered by Elsevier and the wider industry to support you when these cases arise.

What will I find in this issue?

Part I of our Ethics Special opens with a Guest Editorial by our SVP and General Counsel for the legal department, Mark Seeley. He reflects on the rise in publishing ethics cases and talks frankly about his own thoughts on how they should be addressed.

In Understanding and addressing research misconduct we hear from an Elsevier lawyer and a publisher about what constitutes research misconduct and the roles editors and publishers have to play once a case has been identified.

Two editors from the journal Biochemical Pharmacology explore research bias – and its implications – in Bias in research: the rule rather than the exception?.

We also hear from the editor community in Research misconduct – three editors share their stories. Our interview subjects discuss the ethics challenges in their fields and how they are working to combat them.

It’s not only authors who can find themselves crossing ethical boundaries and in The ethics pitfalls that editors face we examine two of the most common editor pitfalls – undisclosed conflicts of interest and citation manipulation.

Lessons learnt at the 3rd World Conference on Research Integrity highlights the key points one of Elsevier’s publishing ethics experts took home with her from this year’s World Conference on Research Integrity.

We complete the edition with Editor in the Spotlight – Professor Margaret Rees. As Editor-in-Chief of Maturitas and current Secretary of COPE (the Committee on Publication Ethics), she draws on her extensive ethics experience to answer our questions.

What does that leave for Part II?

The second part of our Ethics Special, scheduled for publication in early November, will contain a range of articles designed to keep you up to date with the publishing ethics support on offer. Features include an interview with the current Chair of COPE, tips on dealing with the media, information on how we are working with authors and reviewers to train them on good ethical practice and a range of practical advice (and an offer of free software!) from The Office of Research Integrity.

We really hope this edition answers some of your questions on this topic and perhaps inspires some new ones. As always, I really look forward to hearing your views and you can email me at editorsupdate@elsevier.com.

ElmaKleikamp

CrossCheck-EES integration go-live date announced

This autumn will see the integration of CrossCheck with Elsevier’s Editorial System (EES). We look at what this will mean for manuscripts.


Elma Kleikamp | Team Leader EES Application Management, Elsevier

Update, November 2013 - Since this Short Communication was published, there has been a change to the planned timeline for integration. The technology is currently being piloted with a group of journals and the EES team aims to roll it out to all journals by the beginning of 2014.

An integration of the plagiarism detection software CrossCheck with our Elsevier Editorial System (EES) is due to go live next month.

CrossCheck uses iThenticate originality detection software to identify text similarities which may indicate plagiarism. It does this by comparing manuscripts with both a web repository and the CrossCheck database, which contains more than 50 million published articles.
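
To give a feel for what a text-similarity check does, here is a deliberately simplified sketch based on overlapping word n-grams. It is purely illustrative: iThenticate's actual matching technology is proprietary and far more sophisticated, and the two sample texts are invented.

```python
def ngrams(text, n=5):
    """Return the set of word n-grams (overlapping runs of n words) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(manuscript, source, n=5):
    """Fraction of the manuscript's n-grams that also appear in the source."""
    m, s = ngrams(manuscript, n), ngrams(source, n)
    return len(m & s) / len(m) if m else 0.0

doc = "the quick brown fox jumps over the lazy dog near the river bank"
copy = "as noted the quick brown fox jumps over the lazy dog every day"
# 5 of the manuscript's 9 five-grams also appear in the source
print(f"{similarity(doc, copy):.0%}")
```

A real similarity report aggregates matches like this across millions of indexed sources and highlights the largest match from any single source, which is why a high score warrants human inspection rather than automatic rejection.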

Currently, the software needs to be operated separately from EES, which means editors wanting to use it to check manuscripts must log in to iThenticate and upload the papers before they can view the results.

The new CrossCheck-EES integration will benefit editors in several ways:

  • The article will automatically be uploaded to iThenticate at the submission stage. Editors will be able to access the similarity report by clicking a CrossCheck/iThenticate Results link in EES. They will no longer have to download/upload files themselves.
  • All editors who can view the EES article can access the similarity report for that article.
  • To help editors quickly identify articles that need further assessment, the ‘largest match from a single source’ value will be displayed beside the CrossCheck/iThenticate Results link.

The integration is scheduled to go live in October 2013.

Falsification, fabrication and plagiarism — the unholy trinity of scientific writing

Editor discusses one of the greatest challenges he and his colleagues face – “the adjudication of ethical integrity issues”.


Tom Reller | Vice President and Head of Global Corporate Relations, Elsevier

“One of the greatest, and sadly all too common, challenges facing a contemporary medical journal editor is the adjudication of ethical integrity issues,” opens the lead editorial by Dr Anthony L Zietman, Professor at Harvard Medical School and Editor-in-Chief of what’s known in the field as “The Red Journal.” The International Journal of Radiation Oncology • Biology • Physics is the official journal of the American Society for Radiation Oncology (ASTRO).

I was drawn to the article by a tweet from Dr Ivan Oransky, who noted that Dr Zietman had referenced the “somewhat addictive” blog Retraction Watch, and I agreed that interest in retractions is growing. As publishers, we spend a good deal of time managing interest in particular retractions and what they reveal about the state of publishing ethics today.

Anthony L Zietman, MD, FASTRO

No doubt, retractions are on the rise. Dr Zietman points out that “between 2001 and 2010, the number of manuscripts accepted by listed medical journals increased by 44%. The number of retracted papers over the same period, however, went up 19-fold!”

It’s generally understood that the Internet and increased readership has led to a rise in reported ethical issues, though it’s not quite clear if the violations themselves are increasing or just our ability to detect them.

Dr Zietman observes:

"There has always been pressure on investigators, but in a time of economic hardship these are amplified. The National Cancer Institute pay line, and that of granting agencies globally, is in sharp decline. The competition for the sparse funding that remains is intense and merit-based. Merit, however, is frequently quantified by numbers of publications, making this a vulnerable target for manipulation."

Then there is the question of how much time editors and publishers should spend investigating ethical inquiries. Dr Zietman outlines a realistic approach:

"If we were to go back in time and start retracting duplicate papers, we would have little time for anything else. We have, therefore, decided on a “statute of limitations” considering such behaviors conducted before 2004, when PubMed and the Web of Science brought cosmos to chaos, if not forgotten then, at least, forgiven. Duplicate publication after that date is grounds for a retraction."

He then describes the journal’s process for handling inquiries that fall within the statute: a series of steps provided by the Committee on Publication Ethics (COPE).

Dr Zietman further acknowledges the inherent limitation to handling ethical cases:

"We are not, however, the Federal Bureau of Investigation, and there is only so close to the truth that we can reach."

He closes by stating prevention is the best cure, as it “is far better to prick the conscience of the miscreant before the manuscript is ever submitted than to seek retraction after publication.” To that end, he pledges to define these sorts of issues more clearly on the journal website at the time of submission.

This article first appeared in Elsevier Connect.

Read the editorial

“Falsification, Fabrication, and Plagiarism: The Unholy Trinity of Scientific Writing,” Anthony L Zietman, MD, FASTRO, International Journal of Radiation Oncology • Biology • Physics, 1 October 2013.

Understanding ethics policy

Publishing ethics – how Elsevier can help

Of interest to: Journal editors (key), additionally authors and reviewers


Faking Peer Reviews

Someone found a way to infiltrate the Elsevier Editorial System; Tom Reller, Vice President of Global Corporate Relations at Elsevier, explains what happened and what we’ve done.


Tom Reller | Vice President of Global Corporate Relations, Elsevier

This article originally appeared in Elsevier Connect

Yesterday, Ivan Oransky of Retraction Watch reported that Elsevier Editorial System (EES), our online platform for managing the submission and peer-review process, had been hacked in November. His article, “Elsevier editorial system hacked, reviews faked, 11 retractions follow,” is an accurate account of what happened and a good example of the positive role Retraction Watch can play in monitoring the scientific literature. The Retraction Notices posted by the Elsevier journals themselves provided details about the falsified reports:

A referee’s report on which the editorial decision was made was found to be falsified. The referee’s report was submitted under the name of an established scientist who was not aware of the paper or the report, via a fictitious EES account. Because of the submission of a fake, but well-written and positive referee’s report, the Editor was misled into accepting the paper based upon the positive advice of what he assumed was a well-known expert in the field. This represents a clear violation of the fundamentals of the peer-review process, our publishing policies, and publishing ethics standards. The authors of this paper have been offered the option to re-submit their paper for legitimate peer review.

Online Elsevier Editorial System

What happened here is that in late October, one of the editors of Optics & Laser Technology (JOLT) alerted our EES team that reviewers for two of his assigned submissions had been invited, but not by him. Our team immediately launched an investigation and discovered that someone had been able to retrieve the EES username and password information for this editor.

Fake reviews are becoming an increasingly challenging issue for publishers, but one we’re prepared to confront. We participated in a story in The Chronicle of Higher Education back in September, also stemming from someone creating fake reviewer accounts. In that case, the editors noticed the reviews were coming in from emails with generic email contacts (i.e., yahoo or gmail) and not institutional emails. Here, it was clear the author himself had created the fake reviewer accounts.

What is Elsevier doing to protect EES users?

We regularly audit EES tools and processes to determine where improvements can be made. The major recommendations from the most recent audit prompted the introduction of a security change: User Profile Consolidation. Consolidated profiles in EES are protected from the malicious use that occurred in this scenario because the registered user has total control over the personal information in the user profile. More information about the benefits of User Profile Consolidation can be found on this Profile Consolidation FAQ.

In July, we ran a pilot to make user profile consolidation in EES available to almost 1,000 “very active” users. The first pilot was successful, with 90 percent of these pilot users consolidating approximately 4,000 entitlements. Pilot users were surveyed for feedback on the process, including the level of effort and the provision of help and support. This pilot ran for 10 weeks, and the process itself, the supporting documentation and the communication were improved prior to introducing a second pilot on October 10. This second pilot introduced user profile consolidation for 16,500 additional users and has also proven very successful.

After the successful pilots, user profile consolidation became available to all users on December 3. Elsevier encourages all EES users to complete this process as soon as possible; we’ve already seen more than 100,000 unique users consolidate their accounts. In the coming weeks, we will proactively support larger numbers of frequent users through this process as necessary.

In addition to User Profile Consolidation, we have implemented other changes that were recommended by Elsevier’s internal Security and Data Protection team, not all of which would be wise for us to discuss publicly. It has also been suggested that the new ORCID program has the potential to reduce this type of fraud.

The challenge for us is not so different from that of other companies, and that’s finding the right balance between security control and customer ease of use. One result of this is that editors may have to do more to keep their accounts safe — much like people have to do more to access their online bank accounts —though clearly, there are differences here. Another important aspect of fraud detection in academic publishing is that no matter how strong we make protocols and controls, there is always going to be a human element – a role for editors and publishers to flag when something looks out of line.

Scientific fraud and misconduct are a growing concern in the scientific community, and Elsevier devotes significant resources to confronting them. That includes an information security team that is acutely aware of the risks and vulnerabilities of any online system. The reality today is that hacking and spoofing can and will occur, though here we believe we acted quickly, that the impact is minimal and that we have taken the necessary steps to eliminate the threat posed, at least through this method.

We’ll be paying close attention to the discussion surrounding this incident and will try to address any questions that arise.

Impact Factor Ethics for Editors


How Impact Factor engineering can damage a journal’s reputation

The dawn of bibliometrics

We’ve all noticed that science has been accelerating at a very fast rate, resulting in what has been called ‘information overload’ and more recently ‘filter failure’. There are now more researchers and more papers than ever, which has heightened the importance of bibliometric measures. Bibliometrics as a field is a fairly new discipline, but it has seen impressive growth in recent years due to advances in computation and data storage, which have improved the accessibility and ease of use of bibliometric measures (for instance through interfaces such as SciVerse Scopus or SciVal). Bibliometrics are increasingly used as a way to systematically compare diverse entities (authors, research groups, institutions, cities, countries, disciplines, articles, journals, etc.) in a variety of contexts. These include an author deciding where to publish, a librarian working on changes in their library’s holdings, a policy maker planning funding budgets, a research manager putting together a research group, and a publisher or Editor benchmarking their journal against competitors.

Enter the Impact Factor

In this perspective, journal metrics can play an important role for Editors and we know it’s a topic of interest because of the high attendance at our recent webinar on the subject. There are many different metrics available and we always recommend looking at a variety of indicators to yield a bibliometric picture that is as thorough as possible, providing insights on the diverse strengths and weaknesses of any given journal1. However, we are well aware that one metric in particular seems to be considered especially important by most Editors: the Impact Factor. Opinions on the Impact Factor are divided, but it has now long been used as a prime measure in journal evaluation, and many Editors see it as part of their editorial duty to try to raise the Impact Factor of their journal2.
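
For readers less familiar with how the two-year Impact Factor is actually computed, the sketch below shows the standard calculation: citations received in a given year to items published in the two preceding years, divided by the number of citable items from those years. The journal figures are invented for illustration only.

```python
def impact_factor(citations_by_pub_year, citable_items_by_year, year):
    """Two-year Impact Factor for `year`: citations received in `year`
    to items published in the two preceding years, divided by the
    number of citable items published in those years."""
    cites = sum(citations_by_pub_year.get(y, 0) for y in (year - 1, year - 2))
    items = sum(citable_items_by_year.get(y, 0) for y in (year - 1, year - 2))
    return cites / items if items else 0.0

# Hypothetical journal: citations received in 2013, keyed by the
# publication year of the cited article, and citable items per year.
citations = {2011: 180, 2012: 140}
items = {2011: 95, 2012: 105}
print(impact_factor(citations, items, 2013))  # (180 + 140) / (95 + 105) = 1.6
```

Because both numerator and denominator are small for many journals, a handful of extra citations (or excluded items) can move the figure noticeably, which is exactly what makes it vulnerable to the ‘engineering’ practices discussed below.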

An Editor’s dilemma

There are various techniques through which this can be attempted, some more ethical than others, and it is an Editor’s responsibility to stay within the bounds of ethical behavior in this area. It might be tempting to try to improve one’s journal’s Impact Factor ranking at all costs, but Impact Factors are only as meaningful as the data that feed into them3: if an Impact Factor is exceedingly inflated as a result of a high proportion of gratuitous self-citations, it will not take long for the community to identify this (especially in an online age of easily accessible citation data). This realisation can be damaging to the reputation of a journal and its Editors, and might lead to a loss of quality manuscript submissions to the journal, which in turn is likely to affect the journal’s future impact. The results of a recent survey4 draw attention to the frequency of one particularly unethical editorial activity in business journals: coercive citation requests (Editors demanding authors cite their journal as a condition of manuscript acceptance).

Elsevier’s philosophy on the Impact Factor
“Elsevier uses the Impact Factor (IF) as one of a number of performance indicators for journals. It acknowledges the many caveats associated with its use and strives to share best practice with its authors, editors, readers and other stakeholders in scholarly communication. Elsevier seeks clarity and openness in all communications relating to the IF and does not condone the practice of manipulation of the IF for its own sake.”

This issue has already received some attention from the editorial community in the form of an editorial in the Journal of the American Society for Information Science and Technology5. Although some Elsevier journals were highlighted in the study, our analysis of 2010 citations to 2008-2009 scholarly papers (replicating the 2010 Impact Factor window using Scopus data) showed that half of all Elsevier journals have less than 10% journal self-citations, and 80% of them have less than 20% journal self-citations. This can be attributed to the strong work ethic of the Editors who work with us, and it is demonstrated through our philosophy on the Impact Factor (see text box on the right) and policy on journal self-citations (see text box below): Elsevier has a firm position against any ‘Impact Factor engineering’ practices.

So, what is the ethically acceptable level of journal self-citations?

There are probably as many answers to this question as there are journals. Journal self-citation rates vary between scientific fields, and a highly specialised journal is likely to have a larger proportion of journal self-citations than a journal of broader scope. A new journal is also prone to a higher journal self-citation rate as it needs time to grow in awareness amongst the relevant scholarly communities.
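
The self-citation rate itself is a simple share, which the following sketch makes concrete: of all citations a journal receives, what fraction originate from the journal itself? The journal names and counts are invented for illustration.

```python
def self_citation_rate(citing_sources, journal):
    """Fraction of citations to `journal` whose citing source is `journal`.

    `citing_sources` maps each citing journal to the number of citations
    it contributed to `journal` in the window of interest."""
    total = sum(citing_sources.values())
    return citing_sources.get(journal, 0) / total if total else 0.0

# Hypothetical incoming citations to "Journal A" by citing journal
incoming = {"Journal A": 120, "Journal B": 45, "Journal C": 35}
print(f"{self_citation_rate(incoming, 'Journal A'):.0%}")  # 120 / 200 = 60%
```

A rate like the 60% in this toy example would stand out against the figures quoted above (most Elsevier journals below 10–20%), but as the text stresses, the acceptable level depends heavily on the journal's field, scope and age.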

Elsevier’s policy on journal self-citations
“An editor should never conduct any practice that obliges authors to cite his or her journal either as an implied or explicit condition of acceptance for publication. Any recommendation regarding articles to be cited in a paper should be made on the basis of direct relevance to the author’s article, with the objective of improving the final published research. Editors should direct authors to relevant literature as part of the peer review process; however, this should never extend to blanket instructions to cite individual journals. […] Part of your role as Editor is to try to increase the quality and usefulness of the journal. Attracting high quality articles from areas that are topical is likely the best approach. Review articles tend to be more highly cited than original research, and letters to the Editor and editorials can be beneficial. However, practices that ‘engineer’ citation performance for its own sake, such as forced self-citation are neither acceptable nor supported by Elsevier.”

As mentioned in a Thomson Reuters report on the subject: “A relatively high self-citation rate can be due to several factors. It may arise from a journal’s having a novel or highly specific topic for which it provides a unique publication venue. A high self-citation rate may also result from the journal having few incoming citations from other sources. Journal self-citation might also be affected by sociological factors in the practice of citation. Researchers will cite journals of which they are most aware; this is roughly the same population of journals to which they will consider sending their own papers for review and publication. It is also possible that self-citation derives from an editorial practice of the journal, resulting in a distorted view of the journal’s participation in the literature.”6

Take care of the journal and the Impact Factor will take care of itself

There are various ethical ways an Editor can try to improve the Impact Factor of their journal. Through your publishing contact, Elsevier can provide insights as to the relative bibliometric performance of keywords, journal issues, article types, authors, institutes, countries, etc., all of which can be used to inform editorial strategy. Journals may have the option to publish official society communications, guidelines, taxonomies, methodologies, special issues on topical subjects, invited content from leading figures in the field, interesting debates on currently relevant themes, etc., which can all help to increase the Impact Factor and other citation metrics. A high quality journal targeted at the right audience should enjoy a respectable Impact Factor in its field, which should be a sign of its value rather than being an end in itself. Editors often ask me how they can raise their journal’s Impact Factor, but the truth is that as they already work towards improving the quality and relevance of their journal, they are likely to reap rewards in many areas, including an increasing Impact Factor. And this is the way it should be: a higher Impact Factor should reflect a genuine improvement in a journal, not a meaningless game that reduces the usefulness of available bibliometric measures.

References

1 Amin, M & Mabe, M (2000), “Impact Factors: use and abuse”, Perspectives in Publishing, number 1

2 Krell, FK (2010), “Should editors influence journal impact factors?”, Learned Publishing, Volume 23, issue 1, pages 59-62, DOI:10.1087/20100110

3 Reedijk, J & Moed, HF (2008), “Is the impact of journal impact factors decreasing?”, Journal of Documentation, Volume 64, issue 2, pages 183-192, DOI: 10.1108/00220410810858001

4 Wilhite, AW & Fong, EA, (2012) “Coercive Citation in Academic Publishing”, Science, Volume 335, issue 6068, pages 542–543, DOI: 10.1126/science.1212540

5 Cronin, B (2012), “Do me a favor”, Journal of the American Society for Information Science and Technology, early view, DOI: 10.1002/asi.22716

6 McVeigh, M (2002), "Journal Self-Citation in the Journal Citation Reports – Science Edition"

Author Biography

Sarah Huggett
PUBLISHING INFORMATION MANAGER, RESEARCH & ACADEMIC RELATIONS
As part of the Scientometrics & Market Analysis team, Sarah provides strategic and tactical insights to colleagues and publishing partners, and strives to inform the bibliometrics debate through various internal and external discussions. Her specific interests are in communication and the use of alternative metrics such as SNIP and usage for journal evaluation. After completing an M. Phil in English Literature at the University of Grenoble (France), including one year at the University of Reading (UK) through the Erasmus programme, Sarah moved to the UK to teach French at Oxford University before joining Elsevier in 2006.


Ethics and Plagiarism in Publishing – What Editors Should Know


A discussion of the challenges journal editors face with regards to ethical issues and plagiarism. Discover the role we play in solving them using tools and resources made available to you by Elsevier.

COPE also has an e-learning course on ethics available, aimed specifically at new editors. Learn more

Watching Retraction Watch


What a new breed of journalist means for transparency and public trust in science

"...a powerful metaphor for understanding their work as science critics is to see them as cartographers and guides..." Author Declan Fahy in Columbia Journalism Review

In Elsevier’s newsroom, we are responsible for working with the media on press coverage of research and responding to various inquiries pertaining to scientific or research misconduct. From this vantage point, we are observing a notable trend, and unfortunately that trend is that the growth in coverage of scientific misconduct is outpacing that of the research itself.

Traditional scientific coverage is slowing due to a fall in the number of professional science media working today. In 2009, Nature1 chronicled the situation, noting that the number of dedicated science sections in American newspapers fell from a peak of about 95 to 34 between 1989 and 2005. In the same survey, 26% of global journalists reported job losses, and of the remaining journalists, 59% had less time available per article.

The Nature article appeared in the same year that more than 1.5 million articles were published, a figure that is growing by 3-4% each year2. Meanwhile, research is becoming more technical, inter-disciplinary and global in nature. Consider that 25% of Elsevier’s journals fall into more than one subject collection area, and 35% of our papers include authors from different countries3.

In other words, when we need experienced traditional science media professionals most, we have fewer of them. However, the number of freelance science journalists and bloggers is increasing. In fact, as of 2009, of the 2,000 US-based National Association of Science Writers members, only 79 were full-time staff science writers for newspapers4. Further, the remaining science reporters cited getting more story leads from science bloggers1, which suggests science is still being covered, but by a new breed of reporter.

This shift from traditional reporters to freelancers and bloggers coincides with a greater ability to detect and report on academic misconduct. And there has been a lot of misbehavior to expose. A 2002 NIH-funded survey of several thousand scientists in the USA found that around one third admitted they had engaged in at least one sanctionable misbehavior in the prior three years5. We can also presume that with competition for research dollars, tenure, prestige, and patents increasing in line with our ability to detect misconduct, there will be even more.

Impact of the New Media Landscape

In a recent Columbia Journalism Review article, Skeptical of Science6, author Declan Fahy states, “among other new roles, journalists are becoming more critical of research.” He describes how journalists are being “undercut by the emergence of a new science media ecosystem in which scientific journals, institutions and individuals are producing original science content directly for non-specialist audiences.” The author notes that, “consequently, they need additional ways to attract readers and maintain their professional identity.”

Take Ivan Oransky, Executive Editor at Reuters Health, for example. Ivan is a seasoned health reporter who graduated from Harvard, obtained an MD from New York University, has written for The Lancet, and is a professor at NYU. Still, Ivan lived in relative obscurity until he launched his side job as a blogger at Retraction Watch, a new form of science blog with 150,000 page views per month and a mission to increase the transparency of the retraction process.

Now Ivan is himself the news. He speaks at conferences, appears on National Public Radio and Retraction Watch is routinely sourced by top-tier media. By looking into the stories behind retractions, Ivan, and his blog partner, Adam Marcus, have staked a new claim as the journalism community’s academic misconduct experts. They believe in keeping science honest and that if the research community reveals more of its own flaws, the level of trust in it will rise.

Figure 1: A blog entry on Retraction Watch.

 

Ivan and Adam represent the new breed of science watchdog that is able to promote the results of their own investigations quickly via the internet. Beyond them, there are scientists, both aggrieved and benevolent, who spend time identifying cracks in scientific literature and making them public. Either through Retraction Watch or their own blogs, skeptical scientists aided by the internet have helped shed new and bright lights on scientific misconduct in ways never before seen.

And Retraction Watch has no shortage of material to work with. A recent Nature article7 captured how an increase in withdrawn papers is highlighting weaknesses in the system for handling them; and it’s the handling of them that gets watchdogs’ attention: for example, ‘opaque retraction notices that don’t explain why a paper has been withdrawn.’

As publishers, we are mindful that the dramatic increase in access to scientific research in recent years has led to an increase in the level of critical review of these articles. The good news here is that these watchdogs are able to help editors keep scientists highly accountable for their work. The bad news is that they often print inaccurate and incomplete stories, which as a result can appear one-sided against those who don’t provide them with all the information they request. And they can take up an inordinate amount of editors’ time.

Are they entitled to your time?

For editors, the new breed of journalist is completely different and can be far more frustrating to work with than the traditional science reporter. The key is to spend time thinking about what the new media world means for your journal, and then develop your own approach to responding. Your publisher and Elsevier’s corporate media staff can work through this process with you. We can also help you manage the inquiries in a professional manner, i.e., making sure only credible inquiries are addressed, taking up a minimal amount of your time, and providing only appropriate, accurate information.

Increased skepticism and critical review in science can be complicated, combative, time consuming, and political. Many allegations and requests to investigate are legitimate, many not, but it’s become very clear that everything is more transparent in this new era. We have to acknowledge that every article and decision can be questioned. Promoting science and engendering trust require a new mindset that editors should become familiar with if they want their journal viewed favorably in the press – both traditional and new media.

The key is to remember that with the internet, the politicization of science, and the rise of ethics journalism, the conduct of science has more visibility than ever. In his Columbia Journalism Review article6, Fahy suggests that "a powerful metaphor for understanding their work as science critics is to see them as cartographers and guides, mapping scientific knowledge for readers, showing them paths through vast amounts of information, evaluating and pointing out the most important stops along the way."

The inquiries and coverage from science skeptics are not always comfortable, but in this new era of science media, they can be a positive development for retaining public trust.

Finding the silver lining - why the rise in blogging could prove good news for researchers

  • There are still highly qualified experts who can decipher highly technical research articles for an audience; only now, instead of working for a newspaper, they form or contribute to a blog or other form of online community. And while they may reduce an article to a tweet, more people than ever before see those tweets, and the links to their articles within them.
  • With the decline of full-time reporters, there's a vast rise in the number of freelance journalists and bloggers. Instead of sitting at a desk and accepting assignments from editors, this new breed of reporters is empowered to scout for their own stories and build new networks providing earlier insights into research projects. For scientists, freelance reporters are a lot easier to get to know than traditional reporters.
  • There are a number of organizations gaining prominence as intermediaries, including Sense About Science and the Science Media Centre, both based in the UK but expanding internationally. These organizations play critical and credible roles in helping the general public make sense of science research, and how it is covered in the press. Perhaps most importantly, they also help make sense of peer review, which is often the key element to discerning what is real science, and what isn't.

What do you think about the changing make-up of the media? You can share your thoughts by posting a comment below.

1 Science journalism: Supplanting the old media?, Geoff Brumfiel, 2009, Nature, Vol 458
2 The STM Report: An overview of scientific and scholarly journal publishing, Mark Ware and Michael Mabe, September 2009, International Association of Scientific, Technical & Medical Publishers
3 Elsevier SciVerse Scopus
4 Science Journalism in Crisis? – from the World Conference of Science Journalists 2009, Sallie Robins, 2009, The Euroscientist
5 Scientists behaving badly, B C Martinson and others, 2005, Nature, Vol 435
6 Skeptical of Science, Declan Fahy, September 2011, Columbia Journalism Review
7 Science publishing: The trouble with retractions, Richard Van Noorden, 2011, Nature, Vol 478

Author Biography

Tom Reller
VICE PRESIDENT AND HEAD OF GLOBAL CORPORATE RELATIONS
Tom is the primary media spokesman for Elsevier – responsible for the company’s relationships with media, analysts and other online/social media communities. He manages public relations programs and actively works with external organizations to help build Elsevier’s reputation and promote the many contributions Elsevier makes to Health and Science communities. These include partnerships developed through the Elsevier Foundation, where he is responsible for running programs benefitting the global nurse faculty profession.



Short Communications

  • Times Higher Education to partner with Elsevier on THE World University Rankings

    THE will use Scopus data alongside SciVal - Elsevier's research metrics analysis tool. Learn more

  • How an author workshop transformed one editor from skeptic to convert

    Professor Raymond Coleman, Editor-in-Chief of Elsevier’s Acta histochemica, believes these events help authors on the path to getting published. Learn more

  • Call for Elsevier Foundation nominations for 2015 physics and math awards

    The nomination deadline is Friday 17th October, 2014. Learn more

  • Elsevier is expanding its use of altmetrics

    Learn more about our plans to expand the use of altmetrics on our platforms and the other metrics-related projects we are working on. Learn more

  • Open access FAQs for editors now available

    This new resource on the editor pages of Elsevier.com is designed to provide you with answers to some of the most common open access questions. Learn more

  • What role should we play at conferences?

    We would like to hear your thoughts about how publishers can best support you at conferences. Learn more

  • Humanizing the values for publishing in India

    Dr. D Chandramohan argues that Indian researchers should be encouraged to prioritize Indian journals when choosing a home for their papers. Learn more

  • New program offers funding to research on evaluation metrics

    As interest in measurement metrics continues to grow, Elsevier launches a new program to fund research in this area. Learn more

  • Registrations are now open for the first Altmetrics Conference

    A conference dedicated to altmetrics - the first of its kind - will take place in London this September. Find out how you can register. Learn more

Other articles of interest

Webinars & webcasts

Discover our webinar archive. This digital library features both Elsevier and external experts discussing, and answering questions on, a broad spectrum of topics. Latest additions include:

  • How to make your journal stand out from the crowd (October 21st, 2014)

  • Trends in journal publishing (September 18th, 2014)

Learn more about our growing library of useful bite-sized webcasts covering a range of subjects relevant to your work as an editor, including ethics, peer review and bibliometrics.