
Issue 40 – September 2013

In Part I of our two-part Ethics Special we move from a broad overview of the current publishing ethics landscape to a more detailed examination of aspects such as plagiarism, bias and conflicts of interest.



Editor in the Spotlight – Margaret Rees of Maturitas


Maturitas was founded in 1978 and is the official journal of the European Menopause and Andropause Society (EMAS). It is also affiliated with the Australasian Menopause Society.

The journal Impact Factor of 2.844 ranks it 14 out of 77 journals in Obstetrics and Gynecology and 19 out of 46 journals in Geriatrics & Gerontology. Submissions to the journal are running at around 500 per year, with academic institutions annually downloading more than 381,000 articles on ScienceDirect and subscribers registering 238,765 article pageviews. Maturitas Editor-in-Chief Professor Margaret Rees is a Reader Emeritus in Reproductive Medicine and a Fellow at St Hilda's College, University of Oxford. Her ethics experience is extensive: she is Secretary of the Committee on Publication Ethics (COPE), Chair Elect of the Association of Research Ethics Committees (AREC), and a member of the Open University Human Research Ethics Committee (HREC), the University of Oxford Central University Research Ethics Committee (CUREC), and the Elsevier Ethics Committee.

Q. What does being a journal editor mean to you and what do you find most rewarding about this role?
A. I have been Editor-in-Chief of a journal since 1998. For the first 10 years I edited Menopause International and since 2008 I have edited Maturitas. Maturitas is a well-established international journal which allows regular interaction with cutting-edge researchers. The editors and editorial board provide an excellent multidisciplinary team. So what is most rewarding is being able to select high-quality articles for publication, thereby stimulating interest in the journal and encouraging the junior researchers, who are our future.

Q. What are your biggest challenges as Editor-in-Chief of Maturitas? How do you overcome these challenges and what extra support can Elsevier provide?
A. The biggest challenge I faced at the outset was that Maturitas was perceived as a women’s health journal, which restricted its focus. I therefore expanded its scope and broadened the editorial board, which is regularly refreshed by junior researchers who are encouraged to join. I also commission review articles to publicise the widened area of interest. Thus, Maturitas is now a multidisciplinary, international, peer-reviewed scientific journal that deals with midlife health and beyond. We publish original research, reviews, clinical trial protocols, consensus statements and guidelines. The scope encompasses all aspects of post-reproductive health in both genders, ranging from basic science to health and social care. Within the first year of my editorship, submissions increased by 50% and downloads by 70%. The Impact Factor has steadily increased and I now tweet via EMAS about selected articles. My main current challenge is to be able to publicize the journal more widely using social media: further assistance from Elsevier would be invaluable. In addition, the speed of manuscript processing and minor language editing could be improved. While I have a dedicated language editor, whom I selected, it would be inappropriate to use him for minor edits which could be undertaken by typesetters.

Q. In many areas of research, the growth of paper submissions is outpacing the growth of qualified reviewers, resulting in pressure on the peer-review system. What do you think the solution to this problem is and how do you see the peer-review process changing in the future?
A. There are various solutions to this perennial problem which I have deployed, reducing the average time to first decision to 21 days; in 2008 it was 61 days. Before passing manuscripts on to the editors, I routinely screen them: checking for originality, looking for text similarity with iThenticate, reviewing the quality of the English, and taking ethical considerations into account. As Secretary of COPE and a member of the Elsevier Ethics Committee, I am committed to high standards. I aim to provide constructive comments to authors for papers rejected outright so that poorly-presented papers which nonetheless contain good science can be reconsidered. This reduces the workload of both editors and reviewers, so reviewers are not asked to look too frequently at papers for Maturitas. The pool of reviewers is increased by constantly refreshing the editorial board and asking junior researchers to review. Currently, we publish around 30% of unsolicited articles.

One specific problem with EES is that, historically, different journals have used different logon names and passwords. This has the potential to deter reviewers who are aware that other publishers use single logons and passwords for all their journals. I have not found the Elsevier process of consolidation smooth. It relies on reviewers hunting down their various user names and passwords. During this process some, including me, have been denied access to EES. I am now relieved to know that these problems are being resolved and users now have the option to forgo consolidating their accounts.

Q. We have observed that researchers are increasingly accessing journal content online at an article level, i.e. the researcher digests content more frequently on an article basis rather than a journal basis. How do you think this affects the visibility of your journal among authors?
A. Access of individual papers is becoming the norm, but visibility of the journal as a whole can be maintained through social media and commissioning timely, high-impact reviews.

Q. Recently, there have been many developments in open access, particularly in the UK and Europe where, back in July 2012, the UK government endorsed the Finch Report recommendations for government-funded research to be made available in open access publications. The European Commission has since followed suit, making a similar announcement for an open access policy starting in 2014. How do you see these open access changes in your country? And how do you see them affecting authors who publish in your journal?
A. Maturitas offers several open access options. Authors or their funders can choose to pay a publication fee to make an article open access. In addition, through Editor's Choice, a new feature the journal has introduced, each month I summarize for the public the most important research published in the journal, and those papers are made freely available.

Q. Researchers need to demonstrate their research impact, and they are increasingly under pressure to publish articles in journals with high Impact Factors. How important is a journal’s Impact Factor to you, and do you see any developments in your community regarding other research quality measurements?
A. The Impact Factor remains the gold standard, but does not allow consideration of the size of the field and is inherently slow to respond to stimuli. Thus, changes in a journal’s focus will take a few years to become apparent. New metrics include paper views and Twitter as well as other social media tools which are more immediate, but this needs to be taken on board by funders as well as researchers. Elsevier could help editors by providing education on these new tools. The instruction ‘Print or share this page’ could be made more explicit. The Top 25 Hottest Articles for each quarter should be available during the following month.

Q. As online publishing techniques develop, the traditional format of the online scientific article will change. At Elsevier, we are experimenting with new online content features and functionality. Which improvements/changes would you, as an Editor, find most important?
A. Maturitas authors have the ability to provide AudioSlides. Better presentation of papers, with linked references in the margins, would help the reader. Those accessing the journal website would benefit from seeing preview content directly on the page, rather than having to click for it, to stimulate interest. Authors may wish to have links to their webpages. Also, the most cited and most read listings should display the numbers alongside each article title, rather than requiring a click on each abstract. Thus, as an editor, I would like 'at a glance', up-to-date tracking of citations and downloads for each article to better profile future commissioned reviews.

Q. Do you use social media or online professional networking in your role as an editor or researcher? Has it helped you and, if so, how?
A. I have recently started with Twitter and LinkedIn using the EMAS portal. Furthermore, since September 2012, I publicize papers published in Maturitas in the monthly EMAS newsletter which is opened by over 30,000 people - downloads increased by 70,000 in 2012. It would be helpful if Elsevier could provide a journal-focused helpline to encourage authors to use social media and have regular calls for papers.

Q. How do you see your journal developing over the next 10 years? Do you see major shifts in the use of journals in the future?
A. While journals will become more electronic, some readers prefer paper which can be read in the bath without mishap. Publicizing collections on various themes online is attractive as it allows authors and readers to see the range of a journal at a glance.

Q. Do you have any tips or tricks to share with your fellow editors about being a journal editor?
A. Becoming an editor of a journal is an exciting but daunting task, especially if you are working alone without day-to-day contact with editorial colleagues. The job requires constant attention to detail to ensure publication of high-quality, well-presented material, maintaining the integrity of the scientific record. It is important that editors act politely, fairly but firmly at all times. Speedy communication with authors and reviewers is essential. All Elsevier journals are members of COPE and its website is a source of useful advice. An editor should also act as an educator to authors and reviewers so that young researchers are encouraged to publish and be involved in the process. They are the editors of the future. An Editor-in-Chief needs to interact regularly with other editors and the editorial board, as well as journal and publishing managers. It is a team effort. Editors should not go out on a limb and difficult decisions should be made in consultation.


Lessons learnt at the 3rd World Conference on Research Integrity


The Canadian city of Montreal played host to the 3rd World Conference on Research Integrity in May this year and attendees enjoyed the luxurious problem of choosing from a packed program of fascinating sessions. This personal report is therefore not comprehensive but I hope that it gives you a flavour of the event.

As someone who deals with publishing and research ethics cases on a daily basis, I found it both depressing to hear other stakeholders report similar dilemmas and reassuring to find that Elsevier’s approach is largely aligned with that of others. However, I’d like to share with you my take on those speakers who inspired me to consider these issues on a deeper level or from an entirely new angle.

Scientists are human too!

If we are to meaningfully prevent research misconduct, we need to understand the underlying behavioral psychology that drives cheating in the first place.

Professor Fred Grinnell

Professor Fred Grinnell made the case that being a scientific maverick requires passion and a burning belief in one’s hypothesis, often in the face of an unbelieving community or elusive evidence. While such passion may drive discovery, it can also be interpreted as an inherent bias – an almost irrational belief that you are right. Professor Grinnell cited examples such as James Watson’s fascinating account of the emotional, competitive race to describe the structure of DNA: a far cry from the clinical objectivity that is often held up as the ideal for scientists.

Professor Dan Ariely made a related point about the irrationality of cheating. Behavioral economics traditionally saw cheating as a logical cost-benefit analysis: how likely am I to get caught versus how much can I benefit from the deceit? The latest research indicates that such decisions are not logical at all but emotional: all efforts towards preventing or disincentivizing misconduct need to recognise that emotion. For example, once someone has started to stray from the right path, they reach a crucial tipping point. If they can confess, wipe the slate clean and be rehabilitated before that point, there is hope. If the community doesn't offer minor offenders opportunities for rehabilitation, they may feel that there is no path back and descend into more serious offences.

He also spoke of conflicts of interest as an unavoidable fact of life: we should focus on recognizing and acknowledging them, rather than pretending they can be totally eliminated. For example, even a researcher’s most noble desire to help patients by completing a successful clinical trial can conflict with the best interests of an individual patient within that trial.

How to blow the whistle (or oboe) and still have a career afterwards

Elsevier is regularly approached by younger researchers seeking guidance on how to deal with everyday ethics issues - for example, inappropriate authorship, or misappropriation of data they have witnessed. Our Ethics in Research & Publication program tries to help by providing them with tools to make the right decision, so I listened with great interest to two speakers who have decades of experience in ethics education.

Professor C K Gunsalus

Many speakers recommended Professor C K Gunsalus’ seminal guide: “How to blow the whistle and still have a career afterwards”. She advises young researchers to have a simple ’script‘ which they are comfortable with and ready to use should they need to confront a colleague’s unethical behavior, especially where there is a power imbalance, such as with a supervisor.

Professor Joan Sieber elegantly proposed the need for an even more subtle skill set than whistle-blowing: the art of blowing the oboe - in other words, handling ethics dilemmas in an effective but low-key manner that exposes the ‘blower’ to less personal risk.

Challenges facing publishers

Speakers from many publishing houses shared their experiences of developing publishing ethics policies in an ever-changing environment. In her talk on “Challenges of author responsibilities in collaborations”, Nature’s Dr Veronique Kiermer spoke of the two sides to authorship: on the one hand, it conveys credit but on the other, that credit comes with accountability. Elsevier’s own Mark Seeley highlighted the dilemma that editors and publishers face - sometimes we have to accept that we just don’t know what actually happened in the lab and may never know. While it is frustrating to make decisions without all the facts, we are committed to making the fairest decision based on the facts available to us.

Dr Bernd Pulverer

Dr Bernd Pulverer from the EMBO Journal cleverly presented real (anonymized) cases that initially looked extremely suspicious, only for a valid explanation to be found once the author was asked for more information. This was a perfect illustration of the need for editors to always give authors the benefit of the doubt and the right to respond: a need regularly reinforced to Elsevier by similar experiences.

For more coverage of the 3rd World Conference on Research Integrity you may wish to read the personal accounts of Liz Wager on Elsevier Connect and Alice Meadows on Scholarly Kitchen’s website.

The Conference has also led to the development of the Montreal Statement, a draft version of which is now available to view on the Conference website. It contains a series of recommendations for individual and collaborative research.

Co-chairs of the Conference: (L-R) Sabine Kleinert, The Lancet’s Senior Executive Editor and former Vice Chair of the Committee on Publication Ethics (COPE) and Melissa Anderson, Associate Dean of Graduate Education and Professor of Higher Education at the University of Minnesota.

Author biography

Catriona Fennell

Following graduation from University College Galway, Ireland, Catriona joined Elsevier as a Journal Manager in 1999. She later had the opportunity to support and train hundreds of editors during the introduction of the Elsevier Editorial System (EES). Since then, she has worked in various management roles in STM Journals’ Publishing and is now responsible for its author-centricity and publishing ethics programs.


The ethics pitfalls that editors face


When talk turns to matters of research misconduct, the author community is most commonly left to shoulder the blame. However, there are unethical practices of which journal editors may fall foul. In this article we examine two of the most common – undisclosed conflicts of interest and citation manipulation.

The complex world of conflicts of interests

It’s all too easy for the situation to arise — an editor receives a submitted manuscript to review that has links to a company or organization in which the editor has some interest. Or perhaps an editor wishes to publish their own research in their journal; in highly-specialized fields, there may be no appropriate alternative publications to choose from. In PERK, Elsevier’s Publishing Ethics Resource Kit, we refer editors to guidelines [1] issued by the International Committee of Medical Journal Editors. These advise that:

“Editors who make final decisions about manuscripts must have no personal, professional, or financial involvement in any of the issues they might judge. Other members of the editorial staff, if they participate in editorial decisions, must provide editors with a current description of their financial interests (as they might relate to editorial judgments) and recuse themselves from any decisions in which a conflict of interest exists. Editorial staff must not use information gained through working with manuscripts for private gain. Editors should publish regular disclosure statements about potential conflicts of interests related to the commitments of journal staff.”

In its Code of Conduct for journal editors [2], COPE (the Committee on Publication Ethics) advises journal editors to establish systems for managing their own conflicts of interest, as well as those of their staff, authors, reviewers and editorial board members. It also recommends that journals introduce a declared process for handling submissions from the editors, employees or members of the editorial board to ensure unbiased review.

But even with these guidelines in place, deciding what constitutes a conflict of interest can be a subjective business. As a general rule of thumb, as an editor, your goal should always be to ensure that whatever action you take is transparent and free of actual or perceived bias.

Editors may also face challenges in maintaining the neutrality necessary for proper editorial decision-making. While you of course strive to be objective, you are likely very familiar with many of the individuals involved in research and publication in your field. Being human, you may find it difficult to remain completely impartial when dealing with, for example, a paper from a PhD student in your own lab, or a VIP with whom you are friendly. On the other hand, you may subconsciously disfavor submissions from individuals with whom you have had some kind of prior conflict — for example, someone who has failed to support you for funding or tenure, or someone who has rejected your own submission.

Another area of concern is when an editor succumbs to improper pressure when making an editorial decision. On a number of occasions, we have seen individuals or companies demand either that the editor publishes, or refrains from publishing, a particular paper. You should make editorial decisions based on editorial and scientific factors, not on political pressure or legal threats. It is our responsibility as publishers and journal owners to ensure that you feel confident enough to operate in this manner, by standing behind your reasonable decisions.

The ICMJE guidelines for conflicts of interest.

Citation ethics: a recapitulation of current issues*

Recent computational advances and the Internet have contributed to an increase in available content that some say has resulted in 'information overload' or 'filter failure'. Scholarly communications have not escaped this trend, which is why journal performance indicators can play an important role in scientific evaluation: they provide systematic ways to compare journals. There are many different metrics available, using sources such as the relatively traditional counts of articles and citations, or the more recently available web usage and download figures. Altmetrics even draw on social media mentions, amongst other flavours of impact. Using a variety of indicators helps yield a picture that is as thorough as possible, providing insights into the diverse strengths and weaknesses of any given journal [3,4], even though opinions on the appropriate use of journal-level bibliometrics indicators can be divided [5].

An example of the donut which can be found on many Scopus articles.

Yet, journal performance metrics have long been used as prime measures in research evaluation, and many editors see it as part of their editorial duty to try to improve bibliometrics indicators and rankings for their journal [6]. The importance of these rankings, and how people perceive ethics misconduct, may be influenced by their geographical, cultural, academic, or even personal background.

As a consequence, a diversity of strategies and behaviors that endanger the validity of bibliometrics indicators has been observed:

  • Author self-citation, i.e. writing papers that cite articles previously authored, often with the intention of boosting one’s bibliometrics performance.
  • Journal self-citation, i.e. publishing papers that cite content previously published in the same journal. Journal level self-citations can be voluntary, for instance with an editorial citing several papers previously published in the journal, or coerced [7,8], for instance when an editor demands citations to previous journal content be added as a condition for publication.
  • Citation cartels [9], also called citation stacking, i.e. collusion across journals to inflate each other’s citations. This can even happen to a journal editor unknowingly - for instance, an author could also be an associate editor of Journal A, and include in their paper submitted to Journal B several gratuitous references to Journal A.

These are problematic because citations are meant to provide scientifically-justifiable, useful references, which can then be used to calculate performance indicators measuring scientific impact. Superfluous citations distort the validity of these metrics, and that is what makes them unethical. Practical consequences for the journal in question can include a damaged reputation: for instance, when this kind of activity results in an anomalous citation pattern, the journal runs the danger of being suppressed from Thomson Reuters' Journal Citation Reports (JCR) [10] and losing its Impact Factor for two or more years. The list of titles suppressed from the JCR seems to grow every year, with 66 journals suppressed for the most recent year [11]. However, we need to see this rise in context, as it may not be attributable solely to an increase in unethical behavior - various factors could be at play, including JCR coverage expansion or improvements to the data monitoring process.
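The self-citation pattern described above is, in principle, straightforward to measure. As a minimal sketch (not any official JCR methodology - the journal names and data are hypothetical), a journal-level self-citation rate can be computed from a list of outgoing references:

```python
from collections import Counter

def self_citation_rate(journal, outgoing_refs):
    """Fraction of a journal's outgoing references that cite the
    journal itself, over some citation window."""
    if not outgoing_refs:
        return 0.0
    return Counter(outgoing_refs)[journal] / len(outgoing_refs)

# Hypothetical reference list drawn from one year of papers:
refs = ["Journal A", "Journal B", "Journal A", "Journal C"]
rate = self_citation_rate("Journal A", refs)  # 0.5
```

An unusually high rate, or a sudden jump, is only a signal for human investigation: legitimate factors such as a narrow specialty or a themed issue can also produce elevated self-citation.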

* Note: This section is based on recent articles in Editors’ Update and Elsevier Connect.

Author biographies

Sarah Huggett

As part of the Scientometrics & Market Analysis team, Sarah provides strategic and tactical insights to colleagues and publishing partners, and strives to inform the bibliometrics debate through various internal and external discussions. Her specific interests are in communication and the use of alternative metrics such as SNIP and usage for journal evaluation. After completing an M.Phil in English Literature at the University of Grenoble (France), including one year at the University of Reading (UK) through the Erasmus programme, she moved to the UK to teach French at Oxford University. She joined Elsevier in 2006 and the Research Trends editorial board in 2009.

Linda Lavelle

Linda is a member of Elsevier’s legal team, providing support and guidance for its companies, products and services. She is also responsible for Elsevier’s Global Rights-Contracts team, and is a frequent speaker on matters of publication ethics. Linda earned her law degree from the University of Michigan and also has an MBA. She joined Harcourt in 1995, which subsequently became part of Elsevier. Before that time, she served in a law firm, and held a number of positions in the legal, scientific, and information publishing industry.


[1] International Committee of Medical Journal Editors, "Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Ethical Considerations in the Conduct and Reporting of Research: Conflicts of Interest".

[2] Committee on Publication Ethics (COPE), "Code of Conduct and Best Practice Guidelines for Journal Editors", March 2011.

[3] Amin, M & Mabe, M (2000), "Impact Factors: use and abuse", Perspectives in Publishing, number 1.

[4] Bollen, J, Van de Sompel, H, Hagberg, A & Chute, R (2009), "A Principal Component Analysis of 39 Scientific Impact Measures", PLoS ONE, volume 4, issue 6, e6022, DOI: 10.1371/journal.pone.0006022.

[5] San Francisco Declaration on Research Assessment and Elsevier's view.

[6] Krell, FK (2010), "Should editors influence journal impact factors?", Learned Publishing, volume 23, issue 1, pages 59-62, DOI: 10.1087/20100110.

[7] Wilhite, AW & Fong, EA (2012), "Coercive Citation in Academic Publishing", Science, volume 335, issue 6068, pages 542-543, DOI: 10.1126/science.1212540.

[8] Cronin, B (2012), "Do me a favor", Journal of the American Society for Information Science and Technology, volume 63, issue 7, page 1281, DOI: 10.1002/asi.22716.

[9] Davis, P (2012), "Citation Cartel Journals Denied 2011 Impact Factor", Scholarly Kitchen blog, 29 June 2012.




Research misconduct – three editors share their stories


We approached three leading editors with the following question: we know that the three most common forms of ethical misconduct are falsification, fabrication and plagiarism. Please share with us the impact these have had on submissions to your journal and how you have handled them. In their answers below they touch on the ethics challenges in their fields and how they are working to combat them.

Henrik Rudolph is Dean of the Faculty of Military Sciences at the Netherlands Defence Academy (NLDA). He has been an Editor of Applied Surface Science - a journal devoted to applied physics and the chemistry of surfaces and interfaces - for more than eight years, and Editor-in-Chief since 2011. During that time, he has handled several thousand manuscripts and become experienced both in the use of iThenticate (software for plagiarism detection and prevention) and in the identification of suspicious manuscripts.

First of all, I prefer to talk about academic misconduct rather than ethical misconduct, since the latter is a much broader issue. It includes, for example, papers describing experiments that are prohibited by law (such as certain uses of lab animals) or that are impossible to repeat in a normal research environment because of their use of restricted materials.

The frequency of academic misconduct has been rather stable since Applied Surface Science started using EES in July 2005. Close to 10% of the papers we receive show some sign of academic misconduct, but since the total number of submissions is increasing, the absolute number is also rising. The most common issue we see is too large an overlap with previously published material, i.e. plagiarism. Cases are evenly divided between self-plagiarism and regular plagiarism. These submissions are most often identified in the editorial phase (by the managing editor or editor) and are rejected before they are sent out for review. iThenticate is an important instrument for detecting academic misconduct, but common sense is often an equally important tool: do certain parts of the paper look much more polished language-wise than the rest? Has the spelling suddenly changed from UK English to US English? We have even had cases where authors have copied the spelling mistakes in the papers they plagiarized. If it looks fishy, it probably is fishy.
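iThenticate's internals are proprietary, but the basic idea behind text-similarity screening can be illustrated with word n-gram ("shingle") overlap. The toy sketch below, with hypothetical function names, is only meant to show the principle, not any real tool's algorithm:

```python
def ngram_set(text, n=3):
    """All word n-grams (shingles) occurring in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(doc_a, doc_b, n=3):
    """Jaccard similarity of the two documents' shingle sets:
    1.0 for identical texts, near 0.0 for unrelated ones."""
    a, b = ngram_set(doc_a, n), ngram_set(doc_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

A score above some tuned threshold would flag the pair for human review - a flag, not a verdict, since quotations and methods boilerplate also produce legitimate overlap.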

Another common issue is the reuse of figures from previously published work. This is much more difficult to detect, but it can often be found by comparing the figure captions. We have seen all kinds of manipulations designed to mislead the reader: turning the figure 90 degrees, cropping it differently or even showing the negative image. These issues are found by editors, but also by great reviewers. I am afraid that what is detected is only the tip of the iceberg - we are simply not equipped to detect this kind of academic misconduct. The human gut feeling also plays an important role here: does the figure match the rest of the figures in graphics style? Does the date imprinted in the picture (often done in our field of work) correspond with the rest of the figures, or is the figure much older than the rest? My colleague Professor Frans Habraken, who unfortunately passed away in 2011, was especially adept at detecting this kind of academic misconduct. He could spend a large portion of a day flipping, cropping and comparing figures.

Reusing old figures can be (self-) plagiarism, but it can also be pure falsification. Once in a while we encounter submissions which claim to have observed certain phenomena and support this with old material. Falsification is the most difficult type of academic misconduct to detect. As long as the results look plausible and are in line with expectations we, as human beings, are willing to accept them. Requesting the raw data for all experiments in a submission would help us. While we currently don’t require this from authors, it would be a natural extension of working online. Cloud space is getting cheaper by the day and any given experiment in our field should not generate terabytes of information. This would make it possible to let statistical tools loose on the experimental results and editors and reviewers could look closely at the underlying data. Falsification is seemingly the least common form of academic misconduct, but that could be related to the difficulties in detecting it. We also enter a grey area: is it academic misconduct to leave out data or experiments that were not in line with expectations?
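As one example of the kind of statistical screening that access to raw data would enable (purely an illustrative sketch, not any journal's actual procedure), leading-digit frequencies in many kinds of measured data roughly follow Benford's law, and a large deviation can single a dataset out for closer human inspection:

```python
import math
from collections import Counter

def benford_deviation(values):
    """Mean absolute difference between the observed leading-digit
    frequencies of `values` and the Benford's-law expectation
    log10(1 + 1/d). Larger values mean a less Benford-like dataset;
    a prompt for scrutiny, never proof of misconduct."""
    digits = [int(str(abs(int(v)))[0]) for v in values if int(v) != 0]
    if not digits:
        return 0.0
    total = len(digits)
    counts = Counter(digits)
    return sum(abs(counts[d] / total - math.log10(1 + 1 / d))
               for d in range(1, 10)) / 9
```

Such screens only make sense for data types where Benford's law is expected to hold, which is one more reason they can support, but never replace, the editor's judgment.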

Besides the above-mentioned common forms of academic misconduct, a more serious threat is arising. The pressure on (young) academic staff to publish is huge. Often people are included as authors when they have contributed only marginally, if at all. This might sound like a rather innocent kind of altruism, but it is highly misleading for the reader and very irritating for an editor. Even worse are the cases where major contributors are left out as authors. Behind every single case reported (around 5-6 per year) there is some kind of conflict: either the author did not agree with the interpretation of the data, or had a personal conflict with the corresponding author. Last, but not least, we also have cases where authors have been included without their knowledge. This is sometimes done out of gratitude - he/she helped me greatly - and sometimes to borrow the standing of an established expert in the field.

Occasionally, we see people publishing data which they were not allowed to publish, or should have asked permission to publish. In these cases there is most likely nothing wrong with the data or the submission, but the authors gave away something that was not theirs to give away: the copyright to the paper. While these cases are few and far between, they always have legal aspects that are beyond the capacity of an average Editor(-in-Chief), so I would suggest the Editor(-in-Chief) contacts the legal department of Elsevier as soon as possible. Your publisher or other contact person at Elsevier will help you with this.

At Applied Surface Science, we have agreed that all cases of academic misconduct are handled by the Editor-in-Chief. This makes it simpler to stick to one line of action and ensures the Editor-in-Chief gains the experience necessary to handle the different kinds of academic misconduct we see. But no two academic misconduct cases are alike, so there is never a dull moment while investigating one. We keep track of the academic misconduct cases and put notes in the author profiles in EES. We even involve collaborators if there is reason to believe that it was a group issue rather than an individual rogue author. As Editor-in-Chief, I often kindly ask an author not to submit new papers to the journal for a while; this step is often taken when an author is caught for a second time. Repeat offenders are unfortunately rather common, so it is important to keep track of past transgressions.

Bottom line for detecting academic misconduct: don’t underestimate the stupidity of the transgressor and don’t underestimate your own ability to be misled.

Professor Ulrich Brandt is Deputy Chair of the Nijmegen Centre for Mitochondrial Disorders (NCMD) at Radboud University Medical Centre in The Netherlands. For many years he served as the Chair of the ethics committee of the Goethe-University in Frankfurt, Germany. He is also Editor-in-Chief of Biochimica et Biophysica Acta (BBA), comprising nine topical sections, and advises on many of the journal’s publishing ethics cases.

The problem of publication ethics is not too big for our journal and our field, at least not judging by the number of cases we are aware of, which is less than one per month. On the other hand, this is probably only the tip of the iceberg. While cases are sometimes identified by reviewers, most frequently they are discovered following complaints by colleagues or peers.

There is a certain upward trend in the number of cases, and this probably has two causes: increased awareness, and more people carrying their disputes with colleagues over to journal editors. The most common forms of research misconduct we see involve author disputes; (self-) plagiarism; manipulated figures; and improper citations. However, I am concerned that some improper practices – for example, the pasting together of Western blots – are not even regarded as scientific misconduct by some people.

Recent publishing ethics cases we have dealt with include:

  • Complaints that a peer has not properly cited somebody’s work.
  • Complaints that a person who produced data presented in the paper was not properly acknowledged as an author or did not authorize the publication.
  • Repeated use of the same figure panels – but with different labeling. We’ve also seen suspiciously similar data between different figures (bands in blots, curves, etc…).
  • Self-plagiarism by publishing the same data in two languages without proper citation of the first publication.

The editor is not usually in the position to investigate these cases and therefore – except in clear cases of misconduct – can only moderate between the parties involved. If it can help to clarify the situation, we should confront the authors with the allegations and ask for original data. However, once there are indications of serious scientific misconduct, it is time to inform the organization of the corresponding author and ask for an investigation of the case. The verdict reached by the organization in question can help to inform your decision-making.

I think it is important to avoid getting involved in personal disputes and ignore anonymous complaints, unless they are severe and immediately seem justified.

Apart from picking good and knowledgeable reviewers, there is little that can be changed in the peer-review system that would help with this problem. I don’t think that ideas like publishing the reviews and reviewers’ names for accepted papers will be helpful. Authors should know that their papers may be checked by anti-plagiarism software, because this has a good deterrent effect.

I have found the resources Elsevier has available useful, for example, membership of COPE (the Committee on Publication Ethics) and PERK (the Publishing Ethics Resource Kit). Also, publicizing that Elsevier journals and editors are actively involved in such activities will make us less attractive to potential bogus authors.

Overall though, the problem of scientific misconduct cannot be solved by the journals. It is often a matter of the culture within a given scientific community.

Ben Martin is Professor of Science and Technology Policy Studies at the University of Sussex and an Editor on the journal Research Policy, which explores policy, management and economic studies of science, technology and innovation. He recently authored an extended editorial for his journal entitled Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment [1]. It discusses whether peer review is continuing to operate effectively in policing research misconduct in the academic world.

In my field, the problem of research misconduct is substantial and it is growing – perhaps that is also because we are becoming better at uncovering these cases. Typically, the cases we see involve self-plagiarism, redundant publication or duplicate submissions.

They are normally identified by alert reviewers, sometimes by editors, and occasionally with the benefit of information on the 'grapevine' from editors of other journals who have encountered problems with a particular individual.

The role of the editor in these cases is to oversee the process of investigation (including ensuring all the facts are double-checked independently), ask the author(s) to respond, and decide on the outcome and appropriate sanction.

I find following the COPE 'flowcharts' useful. I also consult the discussions of previous similar cases on COPE’s website. It is also important to check each step with other editors and with Elsevier and avoid the trap of becoming too upset by misbehaving authors – the danger is that you will then overreact.

If we want to solve these problems we need the academic community to be willing to discuss them openly - particularly about where the line between acceptable and unacceptable research behavior should be drawn. We also need more systematic training of young researchers with regard to such matters (what the rules are, what to do if they spot misconduct, the role of referees, editors and publishers etc…).*

* Note from Ed: In the November Part II of this Ethics Special, we will take a closer look at some of the activities already underway at Elsevier to help train early career authors and reviewers.


[1] Ben R Martin, “Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment”, Research Policy, Volume 42, Issue 5, June 2013.


Bias in research: the rule rather than the exception?


Dr Kevin Mullane and Dr Mike Williams, two of the editors of the Elsevier journal Biochemical Pharmacology, discuss some of the causes and prevalence of bias in biomedical research - and the implications for the wider research community.

As the primary purpose of scientific publication is to share ideas and new results to foster further developments in the field, the increasing prevalence of fraudulent research and retractions is of concern to every scientist since it taints the whole profession and undermines the basic premise of publishing.

While most scientists tend to dismiss the problem as being due to a small number of culprits - a shortcoming inherent to any human activity - there is a larger issue on the fringes of deception that is far more prevalent and of equal concern, where the adoption of certain practices can blur the distinction between valid research and distortion – between "sloppy science", "misrepresentation", and outright fraud [1].

Bias in research, where prejudice or selectivity introduces a deviation in outcome beyond chance, is a growing problem, probably amplified by:

  • the competitive aspects of the profession with difficulties in obtaining funding;
  • pressures for maintaining laboratories and staff;
  • the desire for career advancement (‘first to publish’ and ‘publish or perish’); and, more recently,
  • the monetization of science for personal gain.

Rather than being "disinterested contributors to a shared common pool of knowledge" [2], some scientists have become increasingly motivated to seek financial rewards for their work through industrial collaborations, consultancy agreements and venture-backed business opportunities; even to the exclusion of concerns regarding the accuracy, transparency and reproducibility in their science.

Bias tends to be obscured by the sheer volume of data reported. The number of publications in Life Sciences has increased 44% in the last decade, and at least one leading biomedical journal now publishes in excess of 40,000 printed pages a year. Data is generally viewed as a "key basis of competition, productivity growth...[and]... innovation" [3], irrespective of its conception, quality, reproducibility and usability. Much of it, in the opinion of Sydney Brenner, has become "low input, high throughput, no output science" [4].

Indeed, while up to 80% of research publications apparently make little contribution to the advancement of science - "sit[ting] in a wasteland of silence, attracting no attention whatsoever" [5] - it is disconcerting that the remaining 20% may suffer from bias, as reflected in the increasing incidence of published studies that cannot be replicated [6,7] or require corrections or retractions [8], the latter a reflection of the power of the Internet.

Categories of bias

Although some 235 forms of bias have been analyzed, clustered and mapped to biomedical research fields [9], for the purposes of this brief synopsis, a cross-section of common examples is grouped into three categories:

1. Bias through ignorance can be as simple as not knowing which statistical test should be applied to a particular dataset, reflecting inadequate knowledge or scant supervision/mentoring. Similarly, the frequent occurrence of inappropriately large effect sizes observed when the number of animals used in a study is small [10-13], that subsequently disappear in follow-up studies that are more appropriately powered or when replication is attempted in a separate laboratory, may reflect ignorance of the significance of determining effect sizes and conducting power calculations [11,12,14].

The concern with disproportionately large effect sizes from small group sizes has been recognized by the National Institutes of Health (NIH) [15], which now mandates power calculations validating the number of animals necessary to determine whether an effect occurs before funding a program. However, this necessitates preliminary, exploratory analyses replete with caveats, which might not get revisited, and is not a requirement of many other funding agencies. Too often, studies are published with the minimal number of animals necessary to plug into a Student's t-test software program (n=3), or with group sizes based on 'experience' or history. Replication of any finding as a standard component of a study is absolutely critical, but rare.
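A rough illustration of the arithmetic involved: using the standard normal-approximation formula for a two-group comparison (a simplification; a t-based calculation gives slightly larger numbers), the sample size needed to detect even a medium effect is far larger than n=3. The helper function below is a sketch, not any funding agency's prescribed method:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate subjects needed per group for a two-sample comparison.

    Uses the normal-approximation formula
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist().inv_cdf          # standard normal quantile function
    z_alpha = z(1 - alpha / 2)        # two-sided significance threshold
    z_beta = z(power)                 # desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

For a "medium" effect (d = 0.5) at 80% power, this gives roughly 63 animals per group; a group of n=3 can only reliably detect enormous effects, which is why such large effects so often vanish on replication.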

2. Bias by design reflects critical features of experimental planning, ranging from the design of an experiment to support rather than refute a hypothesis; lack of consideration of the null hypothesis; failure to incorporate appropriate controls and reference standards; to reliance on single data points (endpoint, time point or concentration/dose). Of particular concern is the failure to perform experiments in a blinded, randomized fashion, which can result in 3.2- and 3.4-fold higher odds, respectively, of observing a statistically significant result compared to studies that were appropriately blinded or randomized [16]. While the impact of randomization might come as a surprise, since many animal studies are conducted in inbred strains with little heterogeneity, the opportunity to introduce bias into non-blinded experiments, even unintentionally, is very obvious. It is paramount that the investigator involved in data collection and analysis is unaware of the treatment schedule. How an outlier is defined and how it is to be handled (e.g. dropped from the analysis), and which sub-groups are to be considered, must be established a priori and applied before the study is un-blinded. Despite its importance in limiting bias, one analysis of 290 animal studies [16] and another of 271 publications [15] revealed that 86-89% were not blinded.

Another important consideration in experimental design is the control of potentially confounding factors that can influence the experimental outcome indirectly. In the field of pharmacology, at a basic level this might include the importance of controlling blood pressure when conducting evaluations of compounds in preclinical studies of heart attack, stroke or thrombosis; or the recognition that most compounds lose specificity at higher doses; but consideration might also need to be given to other factors such as the significance of chronobiology (where, for example, many heart attacks occur within the first 3 hours of waking), referenced in [30].

3. Bias by misrepresentation. Researchers are an inherently optimistic group – the 'glass half full' is more likely brimming with champagne than tap water. Witness the heralding of the completion of the Human Genome Project or the advent of gene therapy, stem cells, antisense, RNAi, any "-omics" - all destined to have a major impact on eradicating disease in the near-term. This tendency for over-statement and over-simplification carries through to publications. The urge and rush to be first to publish a new "high-profile" finding can result in "sloppy science" [1], but more significantly can be the result of a strong bias [17]. Early replications tend to be biased against the initial findings - the Proteus phenomenon - although that bias is smaller than in the initial study [17]. It is not clear which is more disturbing: the level of bias and selective reporting found to occur in the initial studies; the finding that ~70% of follow-on studies contradict the original observation; or that the phenomenon is so common and well-recognized that it even has a name.

A recent evaluation of 160 meta-analyses involving animal studies covering six neurological conditions, most of which were reported to show statistically significant benefits of an intervention, found that the "success rate" was too large to be true and that only 8 of the 160 could be supported, leading to the conclusion that reporting bias was a key factor [18].

The retrospective selection of data for publication can be influenced by prevailing wisdom promoting expectations for particular outcomes, or by the benefit of hindsight at the conclusion of a study, which allows an uncomplicated sequence of events to be traced and promulgated as the only conclusion possible.

While research misconduct in terms of overt fraud [1,19,20] and plagiarism [21] is a topic with high public visibility, it remains relatively rare in research publications while data manipulation, data selection and other forms of bias are increasingly prevalent. Whether intentional, the result of inadequate training or due to a lack of attention to quality controls, they foster an approach and attitude that blurs the distinction between necessary scientific rigor and deception, and probably contribute substantially to the poor reproducibility of biomedical research findings [6,7].

Scientific bias represents a proverbial "slippery slope", from the subjectivity of "sloppy science" [1] and lack of replication [22] to the deliberate exclusion or non-reporting of data [6,7] to outright fabrication [19,20]. Plagiarism, distortion of data or its interpretation, physical manipulation of data, e.g., western blots [23] or NMR spectra [24], to make the outcomes more visually appealing or obvious (often ascribed to the seductive simplicity of PowerPoint and the ease of manipulation with Photoshop), and blatant duplicity in the biopharma industry in the selective sharing of clinical trial outcomes [25], with inconclusive/negative trials often not reported [26], all contribute to the expanding concerns regarding scientific integrity and transparency.

This issue obviously increases in importance as the outcomes of investigator bias affect the expenditure of millions of dollars on research programs that are progressed based on the data presented; as inappropriate New Chemical Entities are advanced into clinical trials, exposing patients to undue risk; and as unvalidated biomarkers are promoted to an anxious and misinformed public.

Correcting bias

With the increase in bias, data manipulation and fraud, the role of the journal editor has become more challenging, both from a time perspective and with regard to avoiding peer-review bias [27]. And while journals can keep the barriers high [8,28], much of the process still depends on the integrity and ethics of the authors and their institutions. It is paramount that institutions, mentors and researchers promote high ethical standards, rigor in scientific thought and ongoing evaluations of transparency and performance that meet exacting guidelines. Clinical trials with a full protocol defining the size of the study, randomization, dosing, blinding and endpoints have to be registered before the study can begin, and, at the conclusion of the study, every patient has to be accounted for and included in the analysis. A proposal has been made [29] that non-clinical studies should adopt the same standards and, while not a requirement, such guidelines provide a useful rule of thumb when designing any study. These topics, and their impact on the translation of research findings to the clinic, will be discussed in greater detail in an upcoming article in Biochemical Pharmacology [30].

Author biographies

Kevin Mullane
Kevin’s main guise has been as a drug hunter at multinational pharmaceutical (Wellcome, CIBA-Geigy) and biotechnology companies (Gensia, Chugai Biopharmaceuticals), before becoming President and CEO of Inflazyme Pharmaceuticals. Subsequently he has been an advisor to industry, academia, foundations and VC companies, evaluating technologies and developing translational opportunities. Kevin received his PhD from the University of London.

Michael Williams
Mike retired from the pharmaceutical industry in 2010 after 34 years in drug discovery research with Merck, CIBA-Geigy, Abbott and Cephalon. He has been actively involved with the biotech industry as a consultant, SAB member and executive (Nova, Genset, Adenosine Therapeutics, Antalium, Tagacept, Elan, Molecumetics) and has published extensively in the areas of pharmacology and drug discovery. He received his PhD and DSc degrees from the University of London in an era long before e-books could be downloaded.


[1] Stemwedel JD, “The continuum between outright fraud and "sloppy science": inside the frauds of Diederik Stapel (part 5)”, Scientific American June 26, 2013.

[2] Felin T, Hesterly WS, "The Knowledge-Based View, Nested Heterogeneity, And New Value Creation: Philosophical Considerations On The Locus Of Knowledge", Acad. Management Rev 2007, 32: 195–218.

[3] Manyika J, Chui M, Brown B, Bughin J, Dobbs R, Roxburgh C, Byers AH, “Big data: The next frontier for innovation, competition, and productivity“, McKinsey Global Institute, April 2011.

[4] Brenner S, “An interview with... Sydney Brenner”, Interview by Errol C. Friedberg, Nat Rev Mol Cell Biol 2008; 9:8-9.

[5] Mandavilli A, “Peer review: Trial by Twitter”, Nature 2011; 469, 286-7.

[6] Prinz F, Schlange T, Asadullah K, “Believe it or not: how much can we rely on published data on potential drug targets?”, Nature Rev Drug Discov 2011; 10: 712-3.

[7] Begley CG, Ellis LM, “Drug development: Raise standards for preclinical cancer research“, Nature 2012, 483, 531-533.

[8] Steen RG, Casadevall A, Fang FC, “Why has the number of scientific retractions increased?“, PLoS ONE 2013: 8: e68397.

[9] Chavalarias D, Ioannidis JPA, “Science mapping analysis characterizes 235 biases in biomedical research”, J Clin Epidemiol 2010; 63: 1205-15.

[10] Ioannidis JPA, “Why most published research findings are false“, PLoS Med 2005: e124.

[11] Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, et al., “Power failure: why small sample size undermines the reliability of neuroscience”, Nat Rev Neurosci 2013; 14: 365-76.

[12] Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG, “Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments“, PLoS Med 2013: e1001489.

[13] Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR, “Publication bias in reports of animal stroke studies leads to major overstatement of efficacy“, PLoS Biol 2010; 8: e1000344.

[14] Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, et al., “Survey of the quality of experimental design, statistical analysis and reporting of research using animals“, PLoS One 2009; 4: e7824.

[15] Wadman M, “NIH mulls rules for validating key results”, Nature 2013: 500:14-6.

[16] Bebarta V, Luyten D, Heard K, “Emergency medicine animal research: does use of randomization and blinding affect the results?”, Acad Emerg Med 2003; 10; 684-7.

[17] Pfeiffer T, Bertram L, Ioannidis JPA, “Quantifying selective reporting and the Proteus Phenomenon for multiple datasets with similar bias“, PLoS One 2011; 6: e18362.

[18] Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, et al., “Evaluation of excess significance bias in animal studies of neurological diseases“, PLoS Biol 2013; 11: e1001609.

[19] Kakuk P, “The Legacy of the Hwang Case: Research Misconduct in Biosciences”, Sci Eng Ethics 2009: 645-62.

[20] Bhattacharjee Y. “The Mind of a Con Man“, New York Times Magazine April 26, 2013.

[21] “Science publishing: How to stop plagiarism”, Nature 2012; 481: 21-23.

[22] Ivan Oransky, “The Importance of Being Reproducible: Keith Baggerly tells the Anil Potti story”, Retraction Watch, May 4, 2011.

[23] Rossner M, Yamada KM, “What's in a picture? The temptation of image manipulation”, J Cell Biol 2004;166:11-5.

[24] Smith III AB, “Data Integrity”, Organic Letts 2013, 15: 2893-4.

[25] Eyding D, Lelgemann M, Grouven U, Harter M, Kromp M, Kaiser T et al., “Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials”, BMJ 2010;341:c4737.

[26] Doshi P, Dickersin K, Healy D, Vedula SW, Jefferson T, “Restoring invisible and abandoned trials: a call for people to publish the findings”, BMJ 2013; 346:f2865.

[27] Lee CJ, Sugimoto CR, Zhang G, Cronin B, “Bias in peer review”, J Amer Soc Info Sci Technol 2013: 64: 2-17.

[28] “Reducing our irreproducibility”, Nature 2013: 496: 398.

[29] Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG, “Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research“, PLoS Biol 2010; 8: e1000412.

[30] Mullane K, Winquist RW, Williams M, “The translational paradigm in drug discovery”, Biochemical Pharmacology, 2014.


Understanding and addressing research misconduct


At what point does an author’s behavior earn the label ‘unethical’? When does taking inspiration from another cross the line and become plagiarism? Deciding what constitutes research misconduct is never easy. In this article, Linda Lavelle, a General Counsel on Elsevier’s legal team, reflects on the challenges of defining and responding to these cases, while Publishing Director, Charon Duermeijer, focuses on the roles editors and publishers should play.

Dealing with ethics issues in journal publishing demands increasingly significant amounts of time and attention from scientific and medical journal editors. Our editors tell us that the seemingly ancillary responsibility of handling allegations of ethics breaches has become incredibly frustrating and time-consuming. In fact, when we host seminars for journal editors on a variety of publishing subjects, our ethics sessions (often titled ’Liars, Cheats, and Thieves‘) are consistently the best-attended presentations, and generally stimulate more discussion than any other publishing topic.

Why are ethics situations so challenging for editors?

Why are these cases so difficult for journal editors? Well, for one thing, some ethics issues fall into grey areas. While a situation involving data fabrication is clearly an ethics breach, cases such as disputed authorship or duplicate submission may be less clear. While uncredited text constitutes copyright infringement (plagiarism) in most cases, it is not copyright infringement to use the ideas of another. The amount of text that constitutes plagiarism versus ‘fair use’ is also uncertain — under copyright law, this is a multi-prong test that involves subjective analysis and balancing (in other words, guessing!) — so there is often no clear answer as to when something does in fact constitute an ethics violation.

There are also widespread misconceptions about certain ethics issues, particularly in developing countries where education relating to publishing ethics may not be so freely available. We once had a situation where a young scientist insisted that the plagiarism allegation against him was unfounded. He told us that although most of the paper was in fact a word-for-word copy of an article authored by another scientist, his submission wasn’t plagiarism because he had changed the first sentence of every paragraph.
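Anti-plagiarism tools catch exactly this tactic, because they compare overlapping word sequences throughout a document rather than first sentences. A toy version of such a check – the function names are illustrative, not any real tool's API – might look like this:

```python
def word_ngrams(text, n=3):
    """Set of overlapping n-word sequences in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(text_a, text_b, n=3):
    """Jaccard overlap of n-gram sets: 0.0 (no overlap) to 1.0 (identical)."""
    a, b = word_ngrams(text_a, n), word_ngrams(text_b, n)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

Rewriting one sentence per paragraph leaves almost all of the remaining n-grams intact, so the similarity score stays high and the copy is still flagged.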

Sometimes ethics situations pose challenges because they arise from standards and rules that are different from journal to journal. For example, rules and practices relating to disclosing conflicts of interests, or what constitutes article authorship, may vary from discipline to discipline, or even from journal to journal — there are currently no uniform standards for conflict disclosure or authorship requirements that span all of scientific and medical publishing.

Another reason behind the difficulty in dealing with these situations is that editors are often not in a position to know what has really occurred. Does the person bringing the complaint to our attention have some personal bone to pick with the author who is being accused of the ethics breach? In one situation, we didn’t initially realize that the ‘complainant’ was the ex-wife of the ‘complainee’; in another, a professor falsely accused a former colleague who, it turned out, had not supported him for tenure. When there are complaints regarding who should be an author, or about unethical research practices, how can a journal editor know what really went on behind the scenes at the institution during the research and writing of the paper? An editor does not have the resources to conduct police-like investigations, nor should that be the editor's role in these cases. Nonetheless, the lack of information often puts the editor in a quandary when trying to decide on the proper resolution.

One of the primary reasons editors find these cases so frustrating is that these types of breaches deeply offend their belief in the integrity of scientific publishing. And, of course, ethics breaches have the potential to damage the reputation of their journal, to which they have dedicated so much time and commitment.

For many publishers and editors, news of ethical misconduct invokes a strong response. Our first instinct may well be that it should not be tolerated and the authors involved should be punished. But is that appropriate? Is punishment always the answer? Once we start diving into the actual facts, we typically find that there are many shades of grey in allegations of ethical misconduct. This is exactly where the role of the editor comes in.

There are various ways that publishers and editors are alerted to a case. Often referees discover an issue while reviewing a paper and bring it to the attention of the editor. Subsequently, the issue is flagged with us, the publisher. At first, all concerned are disappointed that such a case has happened at all – and in our journal of all places! Personally, I feel thankful that a referee (or any other whistleblower) has helped the journal to rectify the situation and protected the scientific literature from the consequences of misconduct. It probably means that the editor has picked a very good and thorough referee.

As a first step in our systematic approach, we consult PERK (our Publishing Ethics Resource Kit) guidelines and point our editors in the right direction. Whatever the issue and final outcome, we as a publisher need the expert, in-depth knowledge of the scientific community, in this case the editor and referee; this is truly a team effort. Publishers are usually not equipped with the exact scientific background to understand the shades of grey. We have to rely on the scientific knowledge of true insiders, which is why editors are ultimately responsible for the assessment of alleged misconduct.

If you are an experienced editor for one of our journals, you may have already dealt with one or more ethics cases and know the process well. Some of your fellow editors may be less experienced because they are new, or have simply been fortunate that, until now, their journal has not been affected. The ethics cases we deal with at Elsevier are very diverse, and you could say that some authors have become very creative in their actions! We had a recent case where a referee noted that the discussed results were remarkably similar to those produced by his PhD student not too long ago, when they were all happily working in the same group. As the investigation continues, the editor is trying to find out what really happened and whether the accusation is correct. As emotions can run high, this requires a lot of patience and persistence, but mostly a neutral eye for the situation at hand. In another recent case, the corresponding author appears to have added co-authors who may not have contributed to this particular paper; the jury is still out on that one. The recently added invitation for all co-authors to add their ORCID ID to a submission in Elsevier's Editorial System (EES) is a first step in addressing such issues, as it alerts people that they have been listed as co-authors on a particular paper. Previously, co-authors were not consulted at all.

On occasion, editors can get understandably nervous about who is legally responsible for making the ultimate decision but rest assured, Elsevier can offer legal support and has insurance in the unlikely case of litigation. 

At Elsevier, we handle hundreds of publishing ethics cases a year. Investigations have to be handled very carefully and it may take some time, in some cases even a few years, to reach a conclusion that is fair to all parties involved. When the outcome of an investigation results in a paper being retracted from ScienceDirect, the retraction may come to the attention of Retraction Watch or similar websites. It could very well be that you, as the responsible editor, will be contacted by a journalist for comment; please feel free to consult your publisher, who can help you address this in a timely manner. In Part II of this Ethics Special (publication date November), our Vice President of Global Corporate Relations, Tom Reller, will offer some advice on dealing with these situations.

As you continue to grow into your role as an editor, you will likely become more familiar with the many variations these cases can take. And although we may all feel instinctively disappointed by an alleged offence, we constantly need to consider whether the punishment (if guilt is proven) is appropriate and proportionate; we aim to be consistent not only within a journal but across all Elsevier journals. After all, there are many shades of grey…

Author bios

Linda Lavelle

Linda is a member of Elsevier’s legal team, providing support and guidance for its companies, products and services. She is also responsible for Elsevier’s Global Rights-Contracts team, and is a frequent speaker on matters of publication ethics. Linda earned her law degree from the University of Michigan and also has an MBA. She joined Harcourt in 1995, which subsequently became part of Elsevier. Before that time, she served in a law firm, and held a number of positions in the legal, scientific, and information publishing industry.

Charon Duermeijer

Since Charon joined Elsevier in 2000, she has held various publishing roles for Physics. In close collaboration with many editors around the world, she has worked on improving and setting the strategy for various journals. She holds a PhD in Geophysics from the Utrecht University in The Netherlands. Prior to Elsevier, she worked at Kluwer Academic Publishers and Goldfields of South Africa. She is currently responsible for the Physics team and their journals.


Guest Editorial: Elsevier General Counsel, Mark Seeley


Ethics in society and business has many meanings — conducting oneself in a way that respects and recognizes others and their contributions, but also the notion of compliance with our societally sanctioned behavioral processes (including laws and regulations). Ethics in scientific and medical publishing takes these meanings a step further: by touching on the relationships within the scholarly communication process — involving the journal, the relevant scientific or medical community that a particular journal serves, and society at large — publishing ethics encompasses a complex set of mutually reinforcing relationships. Scholarly journals enjoy enormous prestige due to the stewardship they have exhibited, over the decades, concerning the ‘record of science’ and the trust that their respective communities and society as a whole have in that record.

In one sense, publishing ethics allegations diminish that historical reputation and trust — climate change ‘deniers’ have used some controversies concerning undeclared potential conflicts of interest to suggest that the fundamental research is flawed, even though there is little evidence of substantial scientific disagreement on the broad question of the impact of industrial activities on climate. On the other hand, increasing the transparency and visibility over processes for managing publishing ethics allegations could — and should — shore up the journal’s reputation.

Society has highly unrealistic expectations of scientists, journal editors, and journals. The concept of peer review, for example, is often portrayed extremely simplistically as a ‘quality testing’ process — and consequently society expects that if an article has been through the peer-review process it should be completely sound in its calculations and methods. The reality is that peer review is about some fairly subjective points — a paper’s potential impact and importance and how the paper fits into the theoretical developments in the relevant discipline. It is also sometimes about identifying true ‘outliers’ in research results or methods, thus giving the journal and the editor an opportunity to review these in more depth. It is not a ‘Consumer Reports’ second laboratory duplicating the purported results of a particular paper for testing and quality assurance purposes. And yet, society is not wholly wrong to expect that the scholarly community — and thus the journal — should be better able to recognize fraudulent results or plagiarism and report on such violations as responsibly as possible.

The responsibility for correcting the scientific record when ethics allegations are confirmed falls largely on the journal editor and the publishing team, often with support from the relevant research funders, universities and institutions, and other investigators or peer reviewers. While Elsevier is adding a small team of publishing ethics experts, and while Elsevier legal team members are often involved in highly contentious or ‘legalistic’ matters, the expertise of the journal editor is fundamental to any significant investigation or consideration. The editor has the appropriate grounding in the particular discipline, an understanding of the relevant expertise of various research teams, and a general sense of which facts are more likely correct than not. We must always keep in mind that the standard of ‘proof’ that we look for is a general ‘more likely than not’ standard, not a deeply rigorous, criminally-oriented, ‘beyond a shadow of doubt’ standard. One reason for this standard level is that we anticipate that the relevant scholarly community, as an informed reader/consumer of journal content, will come to its own conclusions as to the merits and relevance of particular allegations and claims, assuming the requisite transparency and disclosure.

Law is the profession of skeptics, and a skeptical point of view is often useful in judging the merits of competing narratives in a publishing ethics dispute. Law is supposed to teach us to be logical, to think through the alternatives and contrary points of view, and to disregard emotion and ‘threats’. There are sometimes vague and sometimes very specific threats that are made by complainants or subjects of inquiries about resorting to legal process and formal legal complaints. These are generally quite silly. The few courts that have been asked to opine about scholarly publishing complaints have generally been respectful of the scientific process and charmingly denigrating about the ability of the legal process to improve on the underlying scientific investigation or conclusion. But my team is prepared to help when complaints like this are made or when it is otherwise deemed useful by editors or publishing team members. We are happy to help, and we are quite passionate about publishing ethics issues and process.

I have personally been involved in publishing ethics policy discussions, the drafting of policy and procedural documents, individual investigations of a ‘legalistic’ nature, and service on our formal ‘retractions panel’ for more than ten years now (along with my UK-based colleague, Senior Vice President of Research & Academic Relations, Mayur Amin). I have enjoyed my discussions with editors and our publishing team members on these issues, and I like to think I have helped to contribute to good policy-setting and reasonably professional retraction processes. I’m also fairly opinionated, so I wanted to conclude this introduction with some general comments and observations.

One can make the argument that the level of ethics issues has not significantly increased, but is simply more visible now. However, I think the better view — one more consistent with the evidence on the number of retractions — is that we are seeing an actual rise in volume. The number of formal retractions recorded on Elsevier’s ScienceDirect platform has more than doubled between 2004 and today — in fact, for 2013 it looks as if we will have close to 200 retractions, five times the number we had in 2004. I believe this is due to the ramping up of the pressure to publish and, occasionally, the pressure to take short-cuts — and I believe this pressure is increasing across the board, including in the rapidly developing countries of the world where scientific research is exploding. I applaud the efforts of COPE (the Committee on Publication Ethics) and CrossCheck (both of which you will hear more about in Part II of this Ethics Special) as vehicles for investigation and comparison, but would note that they are only vehicles and not a substitute for editorial judgment.

Researchers should be allowed the benefit of the doubt, particularly at an early point in their career, and we should accept that researchers will sometimes make mistakes at this stage which they can then learn from. This is why I am leery of the notion of ‘blacklisting’ an author or research group. I do not believe that authors should be given carte blanche when it comes to ethics violations — simply that we should be careful not to rush to judgment and be careful in recommending the appropriate sanction for the relevant degree of ethics violation.

I think that not all misconduct is equal — to me, outright fraud is the most dangerous and has the most impact on the community (as it may cut off otherwise promising research areas), and fraud allegations are relatively rare. I would distinguish plagiarism that involves taking credit for someone else’s research efforts from merely copying a stray paragraph or two — the latter is certainly wrong and deserves some form of censure, but the former is more inherently improper as a destabilization of the research environment. I think arguments about authorship, and particularly the idea of singling out one author rather than all co-authors as the ‘culprit’ in a retraction notice, are unnecessary and ultimately do not advance science. Of course, I accept that a co-author who was an equal participant in a project and a paper deserves recognition for their contribution.

Elsevier has had to address some publication ethics issues over the past several years, but I believe we have addressed them head-on and pro-actively. Last year, there was some discussion about the ‘faking’ of peer-reviewer identities in our article submission system, which we acknowledged and dealt with by improving our system (unfortunately identity ‘theft’ in this sense is difficult to prevent entirely except by improving on personal security systems in areas like passwords). We have also had public controversies about a now-retired editor who accepted many of his own authored papers for his journal, and controversies about genetically modified foods and the raising of children by same-sex couples. In all of these controversies we have worked hard to achieve the ‘right’ resolution — first by emphasizing science (if appropriate scientific and publishing processes have been followed, then our view is that we should stick with the result, even if it is not ‘politically correct’) and second by emphasizing transparency and disclosure. Again, we trust the relevant scientific community to put things into context and judge things on their scientific merits.

By sticking with science and transparency, we can give assurance to society that our publishing processes are trustworthy — not that they are perfect but that they can be relied on. Science publishing, however, is not headline-oriented, short-term journalism — it is about the long-term process of building on discoveries and theories.

I hope you enjoy this special edition as I know I will!

Mark Seeley
Senior Vice President & General Counsel, Elsevier
Chair of the Copyright and Legal Affairs Committee, STM (International Association of Scientific, Technical & Medical Publishers)


Welcome to Part I of our Ethics Special edition


Publishing ethics, research misconduct… call it what you will, it has become one of the greatest challenges many journal editors face today.

In fact, a growing number of you have been moved to pen editorials on the subject – two recent examples being Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment by Research Policy’s Professor Ben Martin and Falsification, Fabrication, and Plagiarism: The Unholy Trinity of Scientific Writing by Dr Anthony L Zietman, Editor-in-Chief of the International Journal of Radiation Oncology * Biology * Physics.

As Elsevier General Counsel Linda Lavelle notes in this issue, “when we host seminars for journal editors on a variety of publishing subjects, our ethics sessions (often titled 'Liars, Cheats, and Thieves') are consistently the best-attended presentations….”

With research misconduct clearly such an area of concern, we have devoted the final two issues of 2013 to the topic.

This edition, Part I of our Ethics Special, moves from a broad overview of the current publishing ethics landscape to a more detailed examination of aspects such as bias and conflicts of interest. Part II, due for publication in early November, will take a closer look at the resources offered by Elsevier and the wider industry to support you when these cases arise.

What will I find in this issue?

Part I of our Ethics Special opens with a Guest Editorial by our Senior Vice President & General Counsel, Mark Seeley. He reflects on the rise in publishing ethics cases and talks frankly about how he believes they should be addressed.

In Understanding and addressing research misconduct we hear from an Elsevier lawyer and a publisher about what constitutes research misconduct and the roles editors and publishers have to play once a case has been identified.

Two editors from the journal Biochemical Pharmacology explore research bias – and its implications – in Bias in research: the rule rather than the exception?

We also hear from the editor community in Research misconduct – three editors share their stories. Our interview subjects discuss the ethics challenges in their fields and how they are working to combat them.

It’s not only authors who can find themselves crossing ethical boundaries and in The ethics pitfalls that editors face we examine two of the most common editor pitfalls – undisclosed conflicts of interest and citation manipulation.

Lessons learnt at the 3rd World Conference on Research Integrity highlights the key points one of Elsevier’s publishing ethics experts took home with her from this year’s World Conference on Research Integrity.

We complete the edition with Editor in the Spotlight – Professor Margaret Rees. As Editor-in-Chief of Maturitas and current Secretary of COPE (the Committee on Publication Ethics), she draws on her extensive ethics experience to answer our questions.

What does that leave for Part II?

The second part of our Ethics Special, scheduled for publication in early November, will contain a range of articles designed to keep you up to date with the publishing ethics support on offer. Features include an interview with the current Chair of COPE, tips on dealing with the media, information on how we are working with authors and reviewers to train them on good ethical practice and a range of practical advice (and an offer of free software!) from The Office of Research Integrity.

We really hope this edition answers some of your questions on this topic and perhaps inspires some new ones. As always, I really look forward to hearing your views and you can email me at


Short Communications

  • Clarification of our policy on prior publication

    Addressing a question that has arisen recently from both editors and authors Learn more

  • Beware of fraudulent emails requesting payment

    Paul Doda from Elsevier's Legal team warns against emails that appear to be from Elsevier requesting payment from authors. Learn more

  • Read Heliyon’s first papers

    Today the journal Heliyon celebrates a major milestone – the publication of its first papers. Learn more

  • EBioMedicine: The importance of collaboration

    What do you get when you mix the editorial leadership from Cell with the editorial leadership of The Lancet, and sprinkle it with the publishing expertise of Elsevier? Learn more

  • Elsevier launches the Green and Sustainable Chemistry Challenge

    The winning entries will receive funding for research projects designed to help solve today's most pressing global challenges in health, environment or engineering. Learn more

  • How Atlas can help you demonstrate the value of research in your journal

    This new virtual publication features summaries of research articles with social impact which are then promoted to a broad readership Learn more

  • Revamped journal visuals offer greater insight into journal performance

    Our Journal Insights tool now offers increased information on metrics, publication speed and 'reach' for participating titles. Learn more

  • Authors embrace Share Link – the free access service for new articles

    The initiative has now been rolled out to more than 2,000 journals and some links have attracted more than 1,000 clicks Learn more

  • Unleashing the power of academic sharing

    Elsevier updates its article-sharing policies, perspectives and services — and invites researchers and hosting platforms to work with us to develop innovative sharing options and guidelines Learn more

Other articles of interest

Webinars & webcasts

Discover our archive of webinars and training webcasts. Upcoming webinars, which are free to attend, will be listed below in due course.

To sign up for one or more of these events, visit our registration form.