17 Sep 2013
Kevin Mullane & Michael Williams
Bias in research: the rule rather than the exception?
Dr Kevin Mullane and Dr Mike Williams, two of the editors of the Elsevier journal Biochemical Pharmacology, discuss some of the causes and the prevalence of bias in biomedical research - and the implications for the wider research community.
As the primary purpose of scientific publication is to share ideas and new results to foster further developments in the field, the increasing prevalence of fraudulent research and retractions is of concern to every scientist since it taints the whole profession and undermines the basic premise of publishing.
While most scientists tend to dismiss the problem as the work of a small number of culprits - a shortcoming inherent to any human activity - there is a larger issue on the fringes of deception that is far more prevalent and of equal concern, where the adoption of certain practices can blur the distinction between valid research and distortion - between "sloppy science", "misrepresentation", and outright fraud [1].
Bias in research, where prejudice or selectivity introduces a deviation in outcome beyond chance, is a growing problem, probably amplified by:
- the competitive aspects of the profession with difficulties in obtaining funding;
- pressures for maintaining laboratories and staff;
- the desire for career advancement (‘first to publish’ and ‘publish or perish’); and, more recently,
- the monetization of science for personal gain.
Rather than being "disinterested contributors to a shared common pool of knowledge" [2], some scientists have become increasingly motivated to seek financial rewards for their work through industrial collaborations, consultancy agreements and venture-backed business opportunities, sometimes to the exclusion of concerns regarding the accuracy, transparency and reproducibility of their science.
Bias tends to be obscured by the sheer volume of data reported. The number of publications in the Life Sciences has increased 44% in the last decade, and at least one leading biomedical journal now publishes in excess of 40,000 printed pages a year. Data is generally viewed as a "key basis of competition, productivity growth...[and]... innovation" [3], irrespective of its conception, quality, reproducibility and usability. Much of it, in the opinion of Sydney Brenner, has become "low input, high throughput, no output science" [4].
Indeed, while up to 80% of research publications apparently make little contribution to the advancement of science - "sit[ting] in a wasteland of silence, attracting no attention whatsoever" [5] - it is disconcerting that the remaining 20% may suffer from bias, as reflected in the increasing incidence of published studies that cannot be replicated [6,7] or that require corrections or retractions [8], the latter a reflection of the power of the Internet.
Categories of bias
Although some 235 forms of bias have been analyzed, clustered and mapped to biomedical research fields [9], for the purposes of this brief synopsis a cross-section of common examples is grouped into three categories:
1. Bias through ignorance can be as simple as not knowing which statistical test should be applied to a particular dataset, reflecting inadequate knowledge or scant supervision/mentoring. Similarly, the frequent occurrence of inappropriately large effect sizes when the number of animals used in a study is small [10-13], effects that subsequently disappear in follow-up studies that are more appropriately powered or when replication is attempted in a separate laboratory, may reflect ignorance of the importance of determining effect sizes and conducting power calculations [11,12,14].
The concern with disproportionately large effect sizes from small group sizes has been recognized by the National Institutes of Health (NIH) [15], which now mandates power calculations validating the number of animals necessary to detect an effect before funding a program. However, this necessitates preliminary, exploratory analyses replete with caveats, which might not be revisited, and it is not a requirement of many other funding agencies. Too often, studies are published with the minimal number of animals necessary to plug into a Student's t-test software program (n=3), or with group sizes chosen on the basis of 'experience' or history. Replication of any finding as a standard component of a study is critical, but rare.
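To make the arithmetic behind this point concrete, here is a minimal sketch of a normal-approximation power calculation for a two-group comparison, using only the Python standard library. The function names and defaults (alpha = 0.05, power = 0.80) are illustrative choices, not drawn from any of the cited studies:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate animals per group needed to detect a standardized
    effect (Cohen's d) in a two-sided, two-sample comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

def detectable_effect(n, alpha=0.05, power=0.80):
    """Smallest standardized effect reliably detectable with n per group."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return sqrt(2 / n) * (z_alpha + z_beta)

# A "medium" effect (d = 0.5) needs about 63 animals per group...
print(n_per_group(0.5))                  # → 63
# ...while n = 3 can only detect effects larger than ~2.3 standard deviations.
print(round(detectable_effect(3), 1))    # → 2.3
```

A t-distribution correction would add a few more animals at small n, but the point stands either way: an honest power calculation exposes how implausibly large an effect must be before n=3 can support it.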
2. Bias by design reflects critical features of experimental planning, ranging from the design of an experiment to support rather than refute a hypothesis; lack of consideration of the null hypothesis; failure to incorporate appropriate controls and reference standards; and reliance on single data points (endpoint, time point or concentration/dose). Of particular concern is the failure to perform experiments in a blinded, randomized fashion, which can result in 3.2- and 3.4-fold higher odds, respectively, of observing a statistically significant result compared to studies that were appropriately blinded or randomized [16]. While the impact of randomization might come as a surprise, since many animal studies are conducted in inbred strains with little heterogeneity, the opportunity to introduce bias into non-blinded experiments, even unintentionally, is obvious. It is paramount that the investigator involved in data collection and analysis is unaware of the treatment schedule. How an outlier is defined and handled (e.g. dropped from the analysis), and what sub-groups are to be considered, must be established a priori and effected before the study is un-blinded. Despite its importance in limiting bias, one analysis of 290 animal studies [16] and another of 271 publications [14] revealed that 86-89% were not blinded.
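As a sketch of how blinded randomization can be built into a study before any data are collected, consider the following hypothetical helper (the function name and group codes are illustrative): animals are shuffled into treatment arms, the analyst sees only opaque group codes, and the code-to-treatment key is held by a third party until un-blinding.

```python
import random

def blind_randomize(animal_ids, treatments, seed):
    """Randomly assign animals to treatments, returning (1) a coded
    assignment for the analyst and (2) a key held by a third party."""
    rng = random.Random(seed)
    ids = list(animal_ids)
    rng.shuffle(ids)
    # Balanced allocation: equal group sizes, membership set by the shuffle.
    per_group = len(ids) // len(treatments)
    key = {}          # code -> actual treatment (withheld from the analyst)
    assignment = {}   # animal -> opaque group code (all the analyst sees)
    for i, treatment in enumerate(treatments):
        code = f"group-{chr(ord('A') + i)}"
        key[code] = treatment
        for animal in ids[i * per_group:(i + 1) * per_group]:
            assignment[animal] = code
    return assignment, key

assignment, key = blind_randomize(range(12), ["vehicle", "drug"], seed=42)
# The analyst works only with coded, balanced groups.
print(sorted(set(assignment.values())))   # → ['group-A', 'group-B']
```

The design choice matters more than the code: because the outlier rules and sub-group definitions are fixed, and the key is escrowed, before the first measurement, the analyst has no opportunity to steer the result.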
Another important consideration in experimental design is the control of potentially confounding factors that can influence the experimental outcome indirectly. In the field of pharmacology, at a basic level this might include the importance of controlling blood pressure when evaluating compounds in preclinical studies of heart attack, stroke or thrombosis, or the recognition that most compounds lose specificity at higher doses; but consideration might also need to be given to other factors such as the significance of chronobiology (where, for example, many heart attacks occur within the first 3 hours after waking).
3. Bias by misrepresentation. Researchers are an inherently optimistic group - the 'glass half full' is more likely brimming with champagne than tap water. Witness the heralding of the completion of the Human Genome Project, or the advent of gene therapy, stem cells, antisense, RNAi and any "-omics" - all destined to have a major impact on eradicating disease in the near term. This tendency for over-statement and over-simplification carries through to publications. The urge and rush to be first to publish a new "high-profile" finding can result in "sloppy science" [1], but more significantly can be the result of a strong bias [10]. Early replications tend to be biased against the initial findings - the Proteus phenomenon - although that bias is smaller than in the initial study [17]. It is not clear which is more disturbing: the level of bias and selective reporting found in the initial studies; the finding that ~70% of follow-on studies contradict the original observation; or that the phenomenon is so common and well-recognized that it even has a name.
A recent evaluation of 160 meta-analyses of animal studies covering six neurological conditions, most of which were reported to show statistically significant benefits of an intervention, found that the "success rate" was too large to be true: only 8 of the 160 could be supported, leading to the conclusion that reporting bias was a key factor [18].
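The excess-significance logic behind such evaluations can be sketched as a comparison of the observed number of "positive" studies against the number expected given the studies' statistical power. The power value and counts below are invented for illustration and are not taken from the cited analysis:

```python
from math import sqrt

def excess_significance_z(observed, n_studies, assumed_power):
    """Z-score for whether the observed count of statistically significant
    studies exceeds what the assumed power predicts (binomial model)."""
    expected = n_studies * assumed_power
    sd = sqrt(n_studies * assumed_power * (1 - assumed_power))
    return (observed - expected) / sd

# Hypothetical field: 160 studies, each with ~30% power, yet 120 report a
# significant benefit. Honest reporting would predict only about 48.
z = excess_significance_z(observed=120, n_studies=160, assumed_power=0.30)
print(round(z, 1))   # → 12.4, far beyond any plausible chance fluctuation
```

A z-score this extreme implies that the literature contains many more "successes" than the underlying experiments could realistically deliver - the statistical signature of selective reporting.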
The retrospective selection of data for publication can be influenced by prevailing wisdom that promotes expectations of particular outcomes, or by the benefit of hindsight at the conclusion of a study, which allows an uncomplicated sequence of events to be traced and promulgated as the only conclusion possible.
While research misconduct in terms of overt fraud [1,19,20] and plagiarism [21] is a topic with high public visibility, it remains relatively rare in research publications, while data manipulation, data selection and other forms of bias are increasingly prevalent. Whether intentional, the result of inadequate training, or due to a lack of attention to quality controls, these practices foster an approach and attitude that blurs the distinction between necessary scientific rigor and deception, and probably contribute substantially to the poor reproducibility of biomedical research findings [6,7].
Scientific bias represents a proverbial "slippery slope", from the subjectivity of "sloppy science" [1] and lack of replication [22], to the deliberate exclusion or non-reporting of data [6,7], to outright fabrication [19,20]. Plagiarism; distortion of data or its interpretation; physical manipulation of data, e.g., western blots [23] or NMR spectra [24], to make the outcomes more visually appealing or obvious (often ascribed to the seductive simplicity of PowerPoint and the ease of manipulation with Photoshop); and blatant duplicity in the biopharma industry in the selective sharing of clinical trial outcomes [25], with inconclusive or negative trials often going unreported [26] - all contribute to the expanding concerns regarding scientific integrity and transparency.
This issue grows in importance as the outcomes of investigator bias affect the expenditure of millions of dollars on research programs progressed on the basis of the data presented; as inappropriate New Chemical Entities are advanced into clinical trials, exposing patients to undue risk; and as unvalidated biomarkers are promoted to an anxious and misinformed public.
With the increase in bias, data manipulation and fraud, the role of the journal editor has become more challenging, both from a time perspective and with regard to avoiding peer-review bias [27]. And while editors can keep the barriers high [8,28], much of the process still depends on the integrity and ethics of the authors and their institutions. It is paramount that institutions, mentors and researchers promote high ethical standards, rigor in scientific thought, and ongoing evaluations of transparency and performance that meet exacting guidelines. Clinical trials must be registered before the study can begin, with a full protocol defining the size of the study, randomization, dosing, blinding and endpoints; and, at the conclusion of the study, every patient has to be accounted for and included in the analysis. A proposal has been made [29] that non-clinical studies should adopt the same standards and, while not a requirement, such guidelines provide a useful rule of thumb when designing any study. These topics, and their impact on the translation of research findings to the clinic, will be discussed in greater detail in an upcoming article in Biochemical Pharmacology [30].
CARDIOVASCULAR EDITOR, BIOCHEMICAL PHARMACOLOGY & PRESIDENT, PROFECTUS PHARMA CONSULTING INC.
Kevin’s main guise has been as a drug hunter at multinational pharmaceutical (Wellcome, CIBA-Geigy) and biotechnology companies (Gensia, Chugai Biopharmaceuticals), before becoming President and CEO of Inflazyme Pharmaceuticals. Subsequently he has been an advisor to industry, academia, foundations and VC companies, evaluating technologies and developing translational opportunities. Kevin received his PhD from the University of London.
COMMENTARIES EDITOR, BIOCHEMICAL PHARMACOLOGY & ADJUNCT PROFESSOR, DEPARTMENT OF MOLECULAR PHARMACOLOGY AND BIOLOGICAL CHEMISTRY, FEINBERG SCHOOL OF MEDICINE, NORTHWESTERN UNIVERSITY, CHICAGO.
Mike retired from the pharmaceutical industry in 2010 after 34 years in drug discovery research with Merck, CIBA-Geigy, Abbott and Cephalon. He has been actively involved with the biotech industry as a consultant, SAB member and executive (Nova, Genset, Adenosine Therapeutics, Antalium, Tagacept, Elan, Molecumetics) and has published extensively in the areas of pharmacology and drug discovery. He received his PhD and DSc degrees from the University of London in an era long before e-books could be downloaded.
[1] Stemwedel JD, "The continuum between outright fraud and 'sloppy science': inside the frauds of Diederik Stapel (part 5)", Scientific American, June 26, 2013.
[2] Felin T, Hesterly WS, "The knowledge-based view, nested heterogeneity, and new value creation: philosophical considerations on the locus of knowledge", Acad Management Rev 2007; 32: 195-218.
[3] Manyika J, Chui M, Brown B, Bughin J, Dobbs R, Roxburgh C, Byers AH, "Big data: the next frontier for innovation, competition, and productivity", McKinsey Global Institute, April 2011.
[4] Brenner S, "An interview with... Sydney Brenner. Interview by Errol C. Friedberg", Nat Rev Mol Cell Biol 2008; 9: 8-9.
[5] Mandavilli A, "Peer review: trial by Twitter", Nature 2011; 469: 286-7.
[6] Prinz F, Schlange T, Asadullah K, "Believe it or not: how much can we rely on published data on potential drug targets?", Nat Rev Drug Discov 2011; 10: 712-3.
[7] Begley CG, Ellis LM, "Drug development: raise standards for preclinical cancer research", Nature 2012; 483: 531-3.
[8] Steen RG, Casadevall A, Fang FC, "Why has the number of scientific retractions increased?", PLoS ONE 2013; 8: e68397.
[9] Chavalarias D, Ioannidis JPA, "Science mapping analysis characterizes 235 biases in biomedical research", J Clin Epidemiol 2010; 63: 1205-15.
[10] Ioannidis JPA, "Why most published research findings are false", PLoS Med 2005; 2: e124.
[11] Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, et al., "Power failure: why small sample size undermines the reliability of neuroscience", Nat Rev Neurosci 2013; 14: 365-76.
[12] Henderson VC, Kimmelman J, Fergusson D, Grimshaw JM, Hackam DG, "Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments", PLoS Med 2013; 10: e1001489.
[13] Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR, "Publication bias in reports of animal stroke studies leads to major overstatement of efficacy", PLoS Biol 2010; 8: e1000344.
[14] Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, et al., "Survey of the quality of experimental design, statistical analysis and reporting of research using animals", PLoS ONE 2009; 4: e7824.
[15] Wadman M, "NIH mulls rules for validating key results", Nature 2013; 500: 14-6.
[16] Bebarta V, Luyten D, Heard K, "Emergency medicine animal research: does use of randomization and blinding affect the results?", Acad Emerg Med 2003; 10: 684-7.
[17] Pfeiffer T, Bertram L, Ioannidis JPA, "Quantifying selective reporting and the Proteus phenomenon for multiple datasets with similar bias", PLoS ONE 2011; 6: e18362.
[18] Tsilidis KK, Panagiotou OA, Sena ES, Aretouli E, Evangelou E, et al., "Evaluation of excess significance bias in animal studies of neurological diseases", PLoS Biol 2013; 11: e1001609.
[19] Kakuk P, "The legacy of the Hwang case: research misconduct in biosciences", Sci Engineer Ethics 2009: 645-62.
[20] Bhattacharjee Y, "The mind of a con man", New York Times Magazine, April 26, 2013.
[21] "Science publishing: how to stop plagiarism", Nature 2012; 481: 21-3.
[22] Oransky I, "The importance of being reproducible: Keith Baggerly tells the Anil Potti story", Retraction Watch, May 4, 2011.
[23] Rossner M, Yamada KM, "What's in a picture? The temptation of image manipulation", J Cell Biol 2004; 166: 11-5.
[24] Smith AB III, "Data integrity", Organic Letts 2013; 15: 2893-4.
[25] Eyding D, Lelgemann M, Grouven U, Harter M, Kromp M, Kaiser T, et al., "Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials", BMJ 2010; 341: c4737.
[26] Doshi P, Dickersin K, Healy D, Vedula SW, Jefferson T, "Restoring invisible and abandoned trials: a call for people to publish the findings", BMJ 2013; 346: f2865.
[27] Lee CJ, Sugimoto CR, Zhang G, Cronin B, "Bias in peer review", J Amer Soc Info Sci Technol 2013; 64: 2-17.
[28] "Reducing our irreproducibility", Nature 2013; 496: 398.
[29] Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG, "Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research", PLoS Biol 2010; 8: e1000412.
[30] Mullane K, Winquist RW, Williams M, "The translational paradigm in drug discovery", Biochemical Pharmacology, 2014.