Improving Systems to Promote Research Integrity

Introduction

The integrity of research is fundamental to the advancement of knowledge, the public’s support for research, and the autonomy of the academic profession [1]. Research integrity rests on adherence to the core values of objectivity, honesty, openness, fairness, accountability, and stewardship [2]. Ethical scientific conduct is multidimensional: researchers depend on reliable results and on public support, and the public in turn relies on scientific progress, which can be imperiled and harmed by unethical scientific activities [1].

Smith defines research misconduct as fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results [3]. In 1981, Dr. John R. Darsee, a cardiology research fellow, admitted to falsifying data in most of his research. Prof. Eugene Braunwald, his world-renowned mentor, ordered the withdrawal of all of Dr. Darsee’s work from various scientific meetings and peer-reviewed publications and had to notify the funding agency, the National Institutes of Health (NIH), of the dishonest work [4]. In response to the rampant research misconduct of the 1970s and 1980s, the US Congress passed the Health Research Extension Act in 1985; the oversight offices it mandated were reorganized in 1992 into what is now known as the Office of Research Integrity (ORI). Its primary role is to prevent research fraud, promote research integrity through oversight and education, and review institutional findings and recommendations [5]. ORI’s misconduct case summaries, along with the specific administrative actions imposed, are published on its official website [6]. Of note is the case of Dr. Yibin Lin, a postdoctoral fellow found to have falsified, fabricated, and plagiarized data in six papers and eight manuscripts. He assembled paragraphs of text, tables, and figures from previous publications and manuscripts to improve his citation metrics. Dr. Lin voluntarily agreed to a ten-year exclusion from contracting with US government agencies [7]. Similarly, a survey by Bouter and colleagues identified the major research misbehaviors in modern research, such as selective reporting, selective citing, and flaws in quality assurance and mentoring [8]. Such research dishonesty has been demonstrated even among faculty staff and scholars across all science disciplines [9,10].

Two significant contributors that drive researchers to engage in research fraud are competitiveness and the pressure to acquire grants and to publish in top-tier journals [8,9]. Peer review, long regarded as a gauge of journal integrity, has itself recently been challenged [3]. Smith contends that peer review is prone to flaws and abuses, such as the “power” of a name, bias against negative results, and lapses in objectivity [3].

Overall, preventive measures have been formulated and implemented to address these research misbehaviors. Professional, legal, and peer sanctions have been applied and supported [10]. Notably, various system-level approaches are gaining interest, from protocol registration to open data and blind review [11,12].

This report elaborates on the risk factors that lead researchers to engage in unethical research practices and discusses published preventive measures that benefit all concerned stakeholders.

 

Challenges, Drivers, and Adverse Outcomes of Unethical Research Practices

Research activity is often placed on a pedestal of professional need. The honesty and integrity of scientists are widely believed to be threatened by pressure to publish, unsupportive research environments, and other structural, sociological, and psychological factors, such as academic advancement, job security, promotion to a higher level of training, and securing research funds [13-15].

Although monetary prizes were initially introduced as markers of prestige [16], they are now being used as incentives to attract young research aspirants, to sustain research interest and output, or as cash rewards for exemplary research performance. Quan and colleagues described the landscape of cash-per-publication reward policies in China, where rewards have been rising and range from USD 30 to USD 165,000 for a single publication. Most rewards go to the first author, with little regard for authorship requirements, particularly when papers are published in prestigious journals (e.g., Nature, Science) [17]. The negative impact of these monetary reward policies is apparent in the failure to recognize exemplary research published in less prominent journals, disregard for authorship rules, abuse of bibliometric indicators, and displaced academic goals [17,18]. Furthermore, Hvistendahl reported rampant academic fraud, including plagiarism, academic dishonesty, ghostwritten papers, and fake peer-review scandals, in many Chinese publications [19].

Another critical stumbling block and a significant driver of research misconduct is the difficulty of publishing negative results [20-24]. Under pressure to acquire grants and to publish in high-impact journals, researchers fall prey to fabricating, falsifying, or distorting data in order to land in top-tier journals [20,22,24]. Fanelli reported a 22% increase from 1990 to 2007 in the frequency of papers reporting positive results in most high-yield countries [21]. In their analysis of unpublished and published papers in the social sciences, Franco and colleagues found that studies with strong results are the ones most likely to be written up, submitted, and accepted for publication. The misconception that studies yielding null data have little publication potential has led to the shelving and abandonment of interesting research work [24]. This predicament undermines the replicability assumption of science, since replication cannot be meaningful without the acknowledgment of failed replications. Moreover, blocking the publication and interpretation of null results may further entrench questionable research practices [22].

Recently, the heightened publication of papers with positive results has created conspicuous concern in the research community [11,25-28]. Driven by the publish-or-perish culture and the other pressures described above, researchers engage in unethical research practices to propel professional growth. HARKing (Hypothesizing After the Results are Known), analytic flexibility, and data dredging are interrelated practices in which design and data are manipulated post hoc to achieve statistical significance and hypothesis fit, compounded by selective writing and reporting [11,25-28]. Such approaches dramatically increase, while understating, the risk of false positives, a problem most severe in studies with small samples and imprecise variables [29].
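To illustrate the statistical mechanism behind this inflation, the minimal simulation below (the sample size, the number of candidate outcomes, and the significance threshold are illustrative assumptions, not values taken from the cited studies) shows how testing many outcomes on purely null data and reporting whichever one crosses p < 0.05 drives the chance of a spurious “discovery” far above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def dredged_study(n_per_group=20, n_outcomes=10, alpha=0.05):
    """One simulated 'study' with no true effect: two groups, many candidate
    outcomes, and a report of any outcome that happens to reach p < alpha."""
    for _ in range(n_outcomes):
        control = rng.normal(size=n_per_group)    # null data, no real effect
        treatment = rng.normal(size=n_per_group)  # null data, no real effect
        if stats.ttest_ind(control, treatment).pvalue < alpha:
            return True  # at least one "significant" finding to write up
    return False

n_studies = 5000
false_positive_rate = np.mean([dredged_study() for _ in range(n_studies)])
print(f"Share of null studies yielding a 'positive' result: {false_positive_rate:.2f}")
# Analytically, 1 - 0.95**10 ≈ 0.40: roughly 40% of null studies produce a
# publishable-looking finding, versus 5% for a single pre-specified test.
```

Pre-registration and registered reports, discussed below, counter exactly this mechanism by fixing the outcomes and analyses before the data are seen.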

 

Measures to Promote Medical Research Integrity

Research reproducibility refers to the ability of a researcher to duplicate the results of a prior study using the same materials as the original investigator [30-32]. As discussed by Ioannidis and by Goodman and colleagues, several questionable data-manipulation practices can be checked and addressed by systematically studying the research of others (meta-research), including the complexity of designs and measurement tools, statistical criteria, heterogeneity of experimental results, incentives, reporting, and claims that are likely false [30,31]. Fallacious research practice has also been examined through related frameworks such as Bayesian statistics. Using Bayesian hypothesis testing, Johnson traced the root cause of non-reproducibility to the conduct of significance tests at inappropriately high levels of significance, and he proposed modifications of the common standards of evidence to reduce the rate of non-reproducible scientific research by a factor of 5 or greater [23,32].
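As a worked illustration of Johnson’s argument (the prior odds below are an illustrative assumption; the approximate correspondence between p-values and Bayes factors follows his paper), the basic relation between prior odds, the Bayes factor, and posterior odds shows why a result that barely clears p = 0.05 constitutes weak evidence:

```latex
\[
\underbrace{\frac{P(H_1 \mid \mathrm{data})}{P(H_0 \mid \mathrm{data})}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(\mathrm{data} \mid H_1)}{P(\mathrm{data} \mid H_0)}}_{\text{Bayes factor } \mathrm{BF}_{10}}
\;\times\;
\underbrace{\frac{P(H_1)}{P(H_0)}}_{\text{prior odds}}
\]
% Illustrative numbers: if only 1 in 10 tested hypotheses is true (prior odds 1:10)
% and a result at the p = 0.05 boundary corresponds to a Bayes factor of roughly 3-5,
% the posterior odds are only about 0.3-0.5, so the "discovery" remains more likely
% false than true. Demanding Bayes factors of about 25-50 (roughly p <= 0.005, the
% standard Johnson proposed) raises the posterior odds to about 2.5-5.
```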

The tripartite (administration-researcher-journal) drivers of research misconduct rest heavily on the role of the journal itself. Journal publishing companies have recently mushroomed, driven by the ease of online publication platforms and made attractive by the expanded worldwide exposure of the Open Access model [33-35]. As the phenomenon spiraled, Jeffrey Beall compiled a list implicating some journals and publishers as predatory, preying on researchers eager to propel stagnant professional and academic careers [33]. These journals attract submissions through aggressive emailing and advertising, offering high acceptance rates at the expense of proper peer review, so the quality of the papers they publish is questionable [33,34]. Recently, Krawczyk and Kulczycki have criticized Beall’s list for generally equating open access with being predatory [35]. Their study showed that the major themes by which Beall characterized predatory journals are also widely present in publications not on his list. Overgeneralizing the flaws of some open access journals to the entire open access movement has fostered unjustified prejudice toward open access in the academic community [35]. Nonetheless, Richtig and his group have proposed an algorithm to discriminate between open-access journals that are potentially suitable for article submission and predatory journals, as shown in Figure 1 [34].
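For illustration only, the sketch below encodes the general kind of checklist such a decision aid captures. The specific criteria and the scoring rule are assumptions drawn from commonly cited markers of journal legitimacy (indexing, COPE membership, fee transparency); they are not Richtig’s published algorithm, for which readers should consult Figure 1 and the original paper.

```python
from dataclasses import dataclass

@dataclass
class JournalProfile:
    """Facts an author can verify before submitting (hypothetical fields)."""
    indexed_in_recognized_database: bool   # e.g. MEDLINE, DOAJ, Web of Science
    cope_member: bool                      # publisher follows COPE guidelines
    transparent_fees: bool                 # APCs stated clearly before submission
    verifiable_editorial_board: bool       # named editors with real affiliations
    credible_peer_review: bool             # described process, realistic timelines
    unsolicited_spam_invitation: bool      # aggressive e-mail solicitation

def screen_journal(j: JournalProfile) -> str:
    """Return a coarse recommendation based on simple red/green flags."""
    red_flags = sum([
        not j.indexed_in_recognized_database,
        not j.cope_member,
        not j.transparent_fees,
        not j.verifiable_editorial_board,
        not j.credible_peer_review,
        j.unsolicited_spam_invitation,
    ])
    if red_flags == 0:
        return "likely suitable for submission"
    if red_flags <= 2:
        return "verify further before submitting"
    return "likely predatory: avoid"

# Example: a journal that spams authors and hides its fees and editorial board.
suspect = JournalProfile(False, False, False, False, False, True)
print(screen_journal(suspect))  # -> "likely predatory: avoid"
```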

The ultimate concern with unethical research is that it endangers public health. A systematic approach to research integrity must therefore be observed and strictly followed [11,12,36,37]. Optimal interventions must understand and harness the motives of the various stakeholders in scientific research, who differ in the extent to which they are interested in promoting publishable, fundable, translatable, or profitable results. These approaches are summarized in Figure 2. The interventions address specific sources of research misconduct, such as vague protocols, copious data collection, flexible data analysis, and distorted reporting; proposed solutions include study registration, published protocols, open data, teams of rivals, registered reports, and blind analysis [11,37].
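Of these solutions, blind analysis may be the least familiar. The minimal sketch below (the data, column names, and label-scrambling scheme are illustrative assumptions, not a prescription from the cited papers) shows one common form: analysts freeze their pipeline on data whose group labels have been scrambled, and unblind only after the analysis plan is locked.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical two-arm study: 200 participants, one continuous outcome.
df = pd.DataFrame({
    "arm": rng.choice(["treatment", "control"], size=200),
    "outcome": rng.normal(loc=0.0, scale=1.0, size=200),
})

def blind_labels(data: pd.DataFrame, seed: int) -> pd.DataFrame:
    """Return a copy with the arm labels randomly permuted, so analysts can
    settle exclusions, models, and outlier rules without seeing real group
    differences."""
    blinded = data.copy()
    blinded["arm"] = np.random.default_rng(seed).permutation(blinded["arm"].to_numpy())
    return blinded

# Develop and freeze the analysis on blinded data; any apparent group
# difference here is an artifact of the scrambled labels.
blinded = blind_labels(df, seed=7)
print(blinded.groupby("arm")["outcome"].mean())

# Only after the analysis plan is locked are the true labels used.
print(df.groupby("arm")["outcome"].mean())
```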

Whether regarded as driver or victim, the researcher stands at the center of the research misconduct pandemonium [38-40]. Limited command of the English language appears to be a significant factor in committing plagiarism: in two reports involving Chinese researchers, plagiarism dominated the interview findings as a determinant of unethical research practice [38,39]. Interestingly, cultural traits have also been implicated in shaping perceptions of research dishonesty [39]. Satalkar and Shaw examined the factors and circumstances that shape researchers’ understanding of research integrity. Their results showed that early education, moral values inculcated by the family, and participation in team sports were the earliest influences on researchers’ notions of honesty, integrity, and fairness [40]. Notably, researchers’ personality traits, including their degree of ambition and internal moral compass, were perceived as critical in determining the importance they attributed to conducting research to high ethical standards [40]. Nonetheless, respondents agreed that education and training on research integrity, together with more precise working definitions and guidelines, are critical to preventing unacceptable research practices [38,39].

Lastly, researchers’ active participation in promoting research integrity has been perceived to be as important as the impact of their work on the issue [12]. At the 6th World Conference on Research Integrity (WCRI), researchers who commit to robust, rigorous, and transparent practices were recognized through the Hong Kong Principles (HKPs) (Figure 3). Five principles were introduced: responsible research practices, transparent reporting, open science (open research), valuing a diversity of types of research, and recognizing all contributions to research and scholarly activity. The principles cover both exploratory and confirmatory research and analysis, focusing on rewarding behaviors that strengthen research integrity and on avoiding harmful research practices. If implemented, the HKPs could play a critical role in the evidence-based assessment of researchers, put research rigor at the heart of assessment, and open up research to the broader benefit of society [12].

 

Conclusion and Insights

Players in the research community, namely the institution, mentors, researchers, and the journal, have compromised the integrity of research practice under pressure to reach and sustain a certain level of prestige, recognition, and promotion. Data manipulation, from design to publication, significantly threatens the end recipient of research: the public. Researchers’ cultural backgrounds and personal traits have also been shown to influence unethical research practices. System approaches to mitigating research malpractice and promoting research integrity continue to evolve, targeting the core of the problem directly: the researchers themselves.

  1. National Institutes of Health. What is research integrity? [Internet]. U.S. Department of Health and Human Services; [cited 2022 Oct 12]. Available from: https://grants.nih.gov/policy/research_integrity/what-is.htm
  2. Smith R. Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine. 2006 Apr;99(4):178-82.
  3. Butterfield F. Harvard suspends doctor for fraud. New York Times. 1981 Dec 16.
  4. Office of Research Integrity. Historical background [Internet]. U.S. Department of Health and Human Services. Available from: https://ori.hhs.gov/historical-background
  5. Steneck NH. Introduction to the Responsible Conduct of Research. Office of Research Integrity; 2007.
  6. Office of Research Integrity. Case summary: Lin, Yibin [Internet]. U.S. Department of Health and Human Services; [cited 2022 Oct 12]. Available from: https://ori.hhs.gov/content/case-summary-lin-yibin
  7. Bouter LM, Tijdink J, Axelsen N, Martinson BC, Ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Research integrity and peer review. 2016 Dec;1(1):1-8.
  8. Holtfreter K, Reisig MD, Pratt TC, Mays RD. The perceived causes of research misconduct among faculty members in the natural, social, and applied sciences. Studies in Higher Education. 2020 Nov 1;45(11):2162-74.
  9. Pratt TC, Reisig MD, Holtfreter K, Golladay KA. Scholars’ preferred solutions for research misconduct: results from a survey of faculty members at America’s top 100 research universities. Ethics & Behavior. 2019 Oct 3;29(7):510-30.
  10. Gorman DM, Elkins AD, Lawley M. A systems approach to understanding and improving research integrity. Science and Engineering Ethics. 2019 Feb;25(1):211-29.
  11. Moher D, Bouter L, Kleinert S, Glasziou P, Sham MH, Barbour V, et al. The Hong Kong Principles for assessing researchers: Fostering research integrity. PLoS Biol [Internet]. 2020;18(7):e3000737. Available from: http://dx.doi.org/10.1371/journal.pbio.3000737.
  12. Ayodele FO, Yao L, Haron H. Promoting ethics and integrity in management academic research: Retraction initiative. Science and engineering ethics. 2019 Apr;25(2):357-82.
  13. Fanelli D, Costas R, Larivière V. Misconduct policies, academic culture and career stage, not gender or pressures to publish, affect scientific integrity. PloS one. 2015 Jun 17;10(6):e0127556.
  14. Edwards MA, Roy S. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science. 2017;34. https://doi.org/10.1089/ees.2016.0223
  15. Zuckerman H. The proliferation of prizes: Nobel complements and Nobel surrogates in the reward system of science. Theoretical medicine. 1992 Jun;13(2):217-31.
  16. Quan W, Chen B, Shu F. Publish or impoverish: An investigation of the monetary reward system of science in China (1999-2016). Aslib Journal of Information Management. 2017; 69:486-502.
  17. Hosseini M, Lewis J. The norms of authorship credit: Challenging the definition of authorship in The European Code of Conduct for Research Integrity. Accountability in research. 2020 Feb 17;27(2):80-98.
  18. Hvistendahl M. China’s publication bazaar. Science. 2013;342:1035-9.
  19. Lehrer D, Leschke J, Lhachimi S, et al. Negative results in social science. European Political Science. 2007;6:51-68.
  20. Fanelli D. Negative results are disappearing from most disciplines and countries. Scientometrics. 2012; 90:891-904.
  21. Ferguson CJ, Heene M. A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science. 2012; 7:555-61.
  22. Johnson VE. Revised standards for statistical evidence. Proc Natl Acad Sci U S A. 2013;110:19313-7.
  23. Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: unlocking the file drawer. Science. 2014;345:1503-5.
  24. Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev. 1998; 2:196-217.
  25. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009;4(5):e5738. https://doi.org/10.1371/journal.pone.0005738
  26. Ioannidis JP. Meta-research: Why research on research matters. PLoS biology. 2018 Mar 13;16(3): e2005468.
  27. Genova G, de la Vara JL. The problem is not professional publishing, but the publish-or-perish culture. Sci Eng Ethics. 2019; 25:617-9.
  28. Erasmus A, Holman B, Ioannidis JPA. Data-dredging bias. BMJ Evidence-Based Medicine. 2022;27:209-11.
  29. Goodman SN, Fanelli D, Ioannidis JPA. What does research reproducibility mean? Sci Transl Med. 2016;8:341-53.
  30. Ioannidis JP. Meta-research: Why research on research matters. PLoS biology. 2018 Mar 13;16(3): e2005468.
  31. Van Bavel JJ, Mende-Siedlecki P, Brady WJ, Reinero DA. Contextual sensitivity in scientific reproducibility. Proceedings of the National Academy of Sciences. 2016 Jun 7;113(23):6454-9.
  32. Beall J. Predatory publishers are corrupting open access. Nature. 2012; 489:179.
  33. Richtig G, Berger M, Lange-Asschenfeldt B, Aberer W, Richtig E. Problems and challenges of predatory journals. J Eur Acad Dermatol Venereol [Internet]. 2018;32(9):1441–9. Available from: http://dx.doi.org/10.1111/jdv.15039.
  34. Krawczyk F, Kulczycki E. How is open access accused of being predatory? The impact of Beall’s lists of predatory journals on academic publishing. The Journal of Academic Librarianship. 2021 Mar 1;47(2):102271.
  35. Mejlgaard N, Bouter LM, Gaskell G, Kavouras P, Allum N, Bendtsen A. Research integrity: nine ways to move from talk to walk. Nature. 2020;586:358-60.
  36. Ioannidis JP. How to make more published research true. Revista Cubana de Información en Ciencias de la Salud (ACIMED). 2015 Feb 23;26(2):187-200.
  37. Yi N, Nemery B, Dierickx K. Perceptions of plagiarism by biomedical researchers: an online survey in Europe and China. BMC Medical Ethics. 2020;21:44-60.
  38. Li D, Cornelis G. Differing perceptions concerning research misconduct between China and Flanders: a qualitative study. Accountability in Research. 2021 Feb 17;28(2):63-94.
  39. Satalkar P, Shaw D. How do researchers acquire and develop notions of research integrity? A qualitative study among biomedical researchers in Switzerland. BMC Medical Ethics. 2019 Dec;20(1):1-2.

 


 

Figure 1. Decision algorithm that can be used by authors to discriminate between open access journals and predatory journals. COPE, Committee on Publication Ethics. Adapted from Richtig G, Berger M, Lange-Asschenfeldt B, et al. Problems and challenges of predatory journals. J Eur Acad Dermatol Venereol. 2018;32:1441-9.

 


 

Figure 2. Specific interventions to improve and promote research integrity and quality. Adapted from Gorman DM, Elkins AD, Lawley M. A systems approach to understanding and improving research integrity. Science and Engineering Ethics. 2019;25:211-29.

 

 

 

 

Figure 3. Indicators of responsible research practices. Adapted from Moher D, Bouter L, Kleinert S, et al. The Hong Kong Principles for assessing researchers: Fostering research integrity. PLoS Biol. 2020;18: e3000737. https://doi.org/10.1371/journal.pbio.3000737

 

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, which permits use, share — copy and redistribute the material in any medium or format, adapt — remix, transform, and build upon the material, as long as you give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not use the material for commercial purposes. If you remix, transform, or build upon the material, you must distribute your contributions under the same license as the original. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/4.0/.