17 Sep 2013
Research misconduct – three editors share their stories
We approached three leading editors with the following question: we know that the three most common forms of ethical misconduct are falsification, fabrication and plagiarism. Please share with us the impact these have had on submissions to your journal and how you have handled them. In their answers below they touch on the ethics challenges in their fields and how they are working to combat them.
Henrik Rudolph is Dean of Faculty Military Sciences for the Netherlands Defence Academy (NLDA). He has been Editor of Applied Surface Science - a journal devoted to applied physics and the chemistry of surfaces and interfaces - for more than eight years and Editor-in-Chief since 2011. During that time, he has handled several thousand manuscripts and become experienced both in the use of iThenticate (software for plagiarism detection and prevention) and the identification of suspicious manuscripts.
First of all, I prefer to talk about academic misconduct rather than ethical misconduct, since the latter is a much broader issue. It includes, for example, papers describing experiments that are prohibited by law (such as certain uses of lab animals) or that are impossible to repeat in a normal research environment because they rely on restricted materials.
The frequency of academic misconduct has been rather stable since Applied Surface Science started using EES in July 2005. Close to 10% of the papers we receive show some sign of academic misconduct, and since the total number of submissions is increasing, the absolute number is rising too. The most common issue we see is too large an overlap with previously published material, i.e. plagiarism. Cases are evenly divided between self-plagiarism and regular plagiarism. These submissions are most often identified in the editorial phase (by the managing editor or editor) and are rejected before they are sent out for review. iThenticate is an important instrument for detecting academic misconduct, but common sense is often just as important: do certain parts of the paper look much more polished, language-wise, than the rest? Has the spelling suddenly changed from UK English to US English? We have even had cases where authors copied the spelling mistakes of the papers they plagiarized. If it looks fishy, it probably is fishy.
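The kind of textual-overlap screening described above can be illustrated with a simple word n-gram comparison. This is only a minimal sketch of the general idea, not how iThenticate actually works; all function names here are my own.

```python
def ngrams(text, n=3):
    """Split text into overlapping word n-grams (shingles)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, published, n=3):
    """Jaccard similarity between the n-gram sets of two texts.

    A score near 1.0 means heavy verbatim overlap; a high score only
    flags the text for a human editor to inspect, it proves nothing.
    """
    a, b = ngrams(submission, n), ngrams(published, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# Two near-identical experimental descriptions score high:
score = overlap_score(
    "the thin film was deposited by pulsed laser deposition on a silicon substrate",
    "the thin film was deposited by pulsed laser deposition on a quartz substrate",
)
```

In practice a screening tool compares a submission against millions of indexed documents and reports the overlapping passages, but the underlying signal is this kind of shared-shingle ratio.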
Another common issue is the reuse of figures from previously published work. This is much more difficult to detect, but it can often be found by comparing the figure captions. We have seen all kinds of manipulations designed to mislead the reader: turning the figure 90 degrees, cropping it differently or even showing the negative image. These issues are found by editors, but also by great reviewers. I am afraid that what we detect is only the tip of the iceberg – we are simply not equipped to catch this kind of academic misconduct. Here, too, gut feeling plays an important role: does the figure match the graphic style of the rest of the figures? Does the date imprinted in the picture (often done in our field of work) correspond with the rest of the figures, or is the figure much older than the rest? My colleague Professor Frans Habraken, who unfortunately passed away in 2011, was particularly adept at detecting this kind of academic misconduct. He could spend a large part of a day flipping, cropping and comparing figures.
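The flipping-and-comparing that Professor Habraken did by hand can be partly automated: generate the simple transforms of an archived figure (rotations, mirror image, photographic negative) and check whether a suspect figure matches any of them. The sketch below works on a grayscale image as a plain 2-D list of pixel values; it is an illustration of the idea only, and notably does not handle cropping, which is much harder.

```python
def rotate90(img):
    """Rotate a 2-D pixel grid 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def mirror(img):
    """Mirror the grid left-to-right."""
    return [row[::-1] for row in img]

def invert(img, maxval=255):
    """Photographic negative of a grayscale grid."""
    return [[maxval - p for p in row] for row in img]

def variants(img):
    """All four rotations of the image, of its mirror, and of its negative."""
    out = []
    for base in (img, mirror(img), invert(img)):
        cur = base
        for _ in range(4):
            out.append(cur)
            cur = rotate90(cur)
    return out

def possibly_reused(candidate, archived):
    """True if candidate exactly matches a simple transform of an archived figure."""
    return any(candidate == v for v in variants(archived))
```

Real forensic tools work on perceptual hashes rather than exact pixel equality, so that recompression and resizing do not hide a match; the brute-force comparison over transforms is the same.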
Reusing old figures can be (self-) plagiarism, but it can also be pure falsification. Once in a while we encounter submissions which claim to have observed certain phenomena and support this with old material. Falsification is the most difficult type of academic misconduct to detect. As long as the results look plausible and are in line with expectations we, as human beings, are willing to accept them. Requesting the raw data for all experiments in a submission would help us. While we currently don’t require this from authors, it would be a natural extension of working online. Cloud space is getting cheaper by the day and any given experiment in our field should not generate terabytes of information. This would make it possible to let statistical tools loose on the experimental results and editors and reviewers could look closely at the underlying data. Falsification is seemingly the least common form of academic misconduct, but that could be related to the difficulties in detecting it. We also enter a grey area: is it academic misconduct to leave out data or experiments that were not in line with expectations?
Besides the common forms of academic misconduct mentioned above, a more serious threat is arising. The pressure on (young) academic staff to publish is huge. Often people are included as authors when they have contributed only marginally, if at all. This might sound like a rather innocent kind of altruism, but it is highly misleading for the reader and very irritating for an editor. Even worse are the cases where major contributors are left out as authors. Behind every reported case (around 5-6 per year) there is some kind of conflict: either the author did not agree with the interpretation of the data, or there was a personal conflict with the corresponding author. Last, but not least, we also have cases where authors have been included without their knowledge. This is sometimes done out of gratitude - he/she helped me greatly - and sometimes as an acknowledgement of the accomplishments of an established expert in the field.
Occasionally, we see people publishing data which they were not allowed to publish, or should have asked permission to publish. In these cases there is most likely nothing wrong with the data or the submission, but the authors gave away something that was not theirs to give away: the copyright to the paper. While these cases are few and far between, they always have legal aspects that are beyond the capacity of an average Editor(-in-Chief), so I would suggest the Editor(-in-Chief) contacts the legal department of Elsevier as soon as possible. Your publisher or other contact person at Elsevier will help you with this.
At Applied Surface Science, we have agreed that all cases of academic misconduct are handled by the Editor-in-Chief. This makes it simpler to stick to one line of action and ensures the Editor-in-Chief gains the experience necessary to handle the different kinds of academic misconduct we see. But no two academic misconduct cases are alike, so there is never a dull moment while investigating one. We keep track of the academic misconduct cases and put notes in the author profiles in EES. We even involve collaborators if there is reason to believe that it was a group issue rather than an individual rogue author. As Editor-in-Chief, I often kindly ask an author not to submit new papers to the journal for a while. This step is usually taken when an author is caught for a second time. Repeat offenders are unfortunately rather common, so it is important to keep track of past transgressions.
Bottom line for detecting academic misconduct: don’t underestimate the stupidity of the transgressor and don’t underestimate your own ability to be misled.
Professor Ulrich Brandt is Deputy Chair of the Nijmegen Centre for Mitochondrial Disorders (NCMD) at Radboud University Medical Centre in the Netherlands. For many years he served as the Chair of the ethics committee of the Goethe-University in Frankfurt, Germany. He is also Editor-in-Chief of Biochimica et Biophysica Acta (BBA), comprising nine topical sections, and advises on many of the journal’s publishing ethics cases.
The problem of publication ethics is not too big for our journal and our field, at least not judging by the number of cases we are aware of, which is fewer than one per month. On the other hand, this is probably only the tip of the iceberg. While cases are sometimes identified by reviewers, most are discovered following complaints by colleagues or peers.
There is a certain upward trend in the number of cases, and this probably has two causes: increased awareness and more people carrying their disputes with colleagues over to journal editors. The most common forms of research misconduct we see involve author disputes; (self-) plagiarism; manipulated figures; and improper citations. However, I am concerned that some improper practices – for example, the pasting together of Western blots – are not even regarded as scientific misconduct by some people.
Recent publishing ethics cases we have dealt with include:
- Complaints that a peer has not properly cited somebody’s work.
- Complaints that a person who produced data presented in the paper was not properly acknowledged as an author or did not authorize the publication.
- Repeated use of the same Figure panels – but with different labeling. We’ve also seen suspiciously similar data between different Figures (bands in blots, curves, etc…).
- Self-plagiarism by publishing the same data in two languages without proper citation of the first publication.
The editor is not usually in the position to investigate these cases and therefore – except in clear cases of misconduct – can only moderate between the parties involved. If it can help to clarify the situation, we should confront the authors with the allegations and ask for original data. However, once there are indications of serious scientific misconduct, it is time to inform the organization of the corresponding author and ask for an investigation of the case. The verdict reached by the organization in question can help to inform your decision-making.
I think it is important to avoid getting involved in personal disputes and ignore anonymous complaints, unless they are severe and immediately seem justified.
Apart from picking good and knowledgeable reviewers, there is little that can be changed in the peer-review system that would help with this problem. I don’t think that ideas such as publishing the reviews and reviewers’ names for accepted papers will be helpful. Authors should know that their papers may be checked by anti-plagiarism software, because this has a good deterrent effect.
I have found the resources Elsevier has available useful, for example, membership of COPE (the Committee on Publication Ethics) and PERK (the Publishing Ethics Resource Kit). Also, publicizing that Elsevier journals and editors are actively involved in such activities will make us less attractive to potential bogus authors.
Overall though, the problem of scientific misconduct cannot be solved by the journals. It is often a matter of the culture within a given scientific community.
Ben Martin is Professor of Science and Technology Policy Studies at the University of Sussex and an Editor on the journal Research Policy, which explores policy, management and economic studies of science, technology and innovation. He recently authored an extended editorial for his journal entitled “Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment”. It discusses whether peer review is continuing to operate effectively in policing research misconduct in the academic world.
In my field, the problem of research misconduct is substantial and it is growing – perhaps that is also because we are becoming better at uncovering these cases. Typically, the cases we see involve self-plagiarism, redundant publication or duplicate submissions.
They are normally identified by alert reviewers, sometimes by editors, and occasionally with the benefit of information on the 'grapevine' from editors of other journals who have encountered problems with a particular individual.
The role of the editor in these cases is to oversee the process of investigation (including ensuring all the facts are double-checked independently), ask the author(s) to respond, and decide on the outcome and appropriate sanction.
I find following the COPE 'flowcharts' useful. I also consult the discussions of previous similar cases on COPE’s website. It is also important to check each step with other editors and with Elsevier and avoid the trap of becoming too upset by misbehaving authors – the danger is that you will then overreact.
If we want to solve these problems we need the academic community to be willing to discuss them openly - particularly about where the line between acceptable and unacceptable research behavior should be drawn. We also need more systematic training of young researchers with regard to such matters (what the rules are, what to do if they spot misconduct, the role of referees, editors and publishers etc…).*
* Note from Ed: In the November Part II of this Ethics Special, we will take a closer look at some of the activities already underway at Elsevier to help train early career authors and reviewers.
Ben R Martin, “Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment”, Research Policy, Volume 42, Issue 5, June 2013.