Dose Response. 2019 Apr-Jun; 17(2): 1559325819853669.
Published online 2019 Jun 5. doi: 10.1177/1559325819853669
PMCID: PMC6557026
PMID: 31217756

Interests, Bias, and Consensus in Science and Regulation

Abstract

Scientists are human. As such, they are prone to bias based on political and economic interests. While conflicts of interest are usually associated with private funding, research funded by public sources is also subject to special interests and therefore prone to bias. Such bias may lead to consensus not based on evidence. While appealing to scientific consensus is a legitimate tool in public debate and regulatory decisions, such an appeal is illegitimate in scientific discussion itself. We provide examples of decades-long scientific consensus on erroneous hypotheses. For policy advice purposes, a scientific statement or model should be considered as the subject of proper scientific consensus only if shared by those who would directly benefit from proving it wrong. Otherwise, specialists from adjacent fields of science and technology should be consulted.

Keywords: regulatory science, incentives, radiation, secondhand smoke, LNT

Introduction

Toxicology influences government regulation and lifestyle in society. Its influence is more extensive than that of most other branches of science. One reminder is the changed understanding of the health consequences of smoking, which not only drastically reduced the number of smokers but also led many countries to serious cultural and behavioral changes, including bans on smoking in public places. Another example is ionizing radiation. In the first half of the 20th century, X-rays and radioactive substances were frequently used and occasionally misused; for example, X-ray shoe fitters were found in shoe shops, as mentioned in passing in The Laws of Nature by Peierls.1 When scientists became aware of radiation-induced mutagenesis, and later of radiation-induced carcinogenesis, radiation regulations became more and more stringent, giving rise to a number of ethical issues.2 More generally, the activities of the Food and Drug Administration, with its yearly budget of US$5 billion (not to mention the implementation costs of the corresponding regulations), are based on the toxicological sciences.

In recent years, the field of regulatory toxicology has expanded significantly.3 In the 1970s and 1980s, many countries legislated series of environmental, health, and safety laws. As a consequence, regulatory agencies increasingly rely on toxicological science to quantify potential new hazards, for example, nonionizing radiation.4

Regulation policy, including toxicological regulation, is based on scientific findings. However, due to the biases described below, regulators may disregard or ignore evidence that contradicts a regulation rather than change the regulation to be consistent with the evidence. To become the basis of regulation, a scientific finding should be accepted by a majority of experts and become an actual matter of consensus. Appealing to scientific consensus is an adequate tool in policy-making and public debate. However, appeals to consensus often occur in scientific discussion itself, which is absolutely unacceptable, as we show below. We also analyze the interests and biases that can lead to consensus on erroneous findings.

Consensus in Science

Consensus has no value in a scientific argument; only experimental evidence matters. As Galileo Galilei stated,5 “In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.” The physician, film producer, and writer Michael Crichton put it as follows:

[T]he work of science has nothing whatever to do with consensus. Consensus is the business of politics. Science, on the contrary, requires only one investigator who happens to be right, which means that he or she has results that are verifiable by reference to the real world. In science, consensus is irrelevant. What are relevant are reproducible results. The greatest scientists in history are great precisely because they broke with the consensus.6

Psychologist Daniel Kahneman explains that scientists tend to experience what he calls “theory-induced blindness”: once a theory is accepted and used as a thinking tool, it is extraordinarily difficult to notice its flaws. Even when one comes upon an observation that does not fit the theory, one assumes that there must be an explanation that was somehow missed.7 Therefore, no consensus of experts can be an argument in scientific discussion. We illustrate this statement with 3 examples of decades-long scientific consensus over erroneous findings.

One such example (somewhat related to toxicology) is understanding the nature of peptic ulcers. For decades, there was scientific consensus that ulcers were caused primarily by stress and spicy food. Correspondingly, the treatment was either dietary or surgical. It was not until 1982 that Barry Marshall and Robin Warren developed their hypothesis related to the bacterial cause of peptic ulcers, a hypothesis that was ultimately acknowledged with the Nobel Prize in Physiology or Medicine of 2005.8

Another example (in the field of chemistry) is quasicrystals. Back in 1850, Auguste Bravais mathematically proved that crystals—solid structures exhibiting long-range order—can have axes of symmetry of second, third, fourth, and sixth order only; namely, only rotations by 180°, 120°, 90°, or 60° can be present.9 For about 150 years, every student of materials science was taught that fifth-order symmetry is impossible and can never be seen in diffraction patterns. Yet, in 1982, Dan Shechtman reported evidence of a fifth-order axis—evidence that was acknowledged in 2011 by the Nobel Prize in Chemistry.10 A posteriori, Bravais’ proof was and remains correct. However, it assumes that a structure exhibiting long-range order necessarily has translational symmetry, that is, that the entire crystal lattice can be generated from a single unit cell by successive translations into adjacent positions. The quasi-periodic materials discovered by Shechtman exhibit long-range order—therefore, they yield crystalline-like diffraction patterns. But they lack translational symmetry, and thus Bravais’ theorem is inapplicable to them.
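Bravais’ restriction can be reproduced in a few lines. The following is a standard textbook sketch in our own notation (not taken from the cited memoir): a rotation mapping a lattice with translational symmetry onto itself must have an integer matrix in a lattice basis, hence an integer trace.

```latex
% Crystallographic restriction (sketch). Let R be a rotation by \theta = 2\pi/n
% that maps the lattice onto itself. In a lattice basis, R has integer entries,
% so its trace is an integer; in 2-D (or in the invariant plane in 3-D):
\operatorname{tr} R = 2\cos\theta \in \mathbb{Z}
\quad\Longrightarrow\quad
2\cos\tfrac{2\pi}{n} \in \{-2,-1,0,1,2\}
\quad\Longrightarrow\quad
n \in \{1, 2, 3, 4, 6\}.
% For n = 5, one gets 2\cos 72^\circ = (\sqrt{5}-1)/2 \approx 0.618, not an
% integer -- and the integer-trace step is exactly what fails once
% translational symmetry is dropped.
```

This makes explicit why the theorem does not apply to Shechtman’s quasi-periodic materials: the integer-trace argument relies on the translational (lattice) basis that they lack.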

The last example (in the field of biology) is epigenetics, which can be defined as the study of “heritable changes that alter gene expression without changing the primary DNA sequence.”11 Introduced essentially by Lamarck as inheritance of features acquired during an organism’s life span, epigenetics fell out of scientific favor following the work of Charles Darwin and, for most of its history, has been considered pseudo-science. In the USSR of the 1930s, Trofim Lysenko, a favorite of Stalin, brought the area of epigenetics further into disrepute by continually making up false data and propaganda.12 This process culminated in 1948, when the entire field of genetics was outlawed in the USSR, with all geneticists being dismissed from their posts and some imprisoned. It took scientists several decades to realize that, in spite of the wrongdoing of Lysenko and Stalin, epigenetics was a real science. The recognition finally came with the Nobel Prizes in Physiology or Medicine of 2006, to Andrew Z. Fire and Craig C. Mello, and of 2012, to Sir John B. Gurdon and Shinya Yamanaka.13

Interests and Bias

Erroneous scientific consensus can arise spontaneously without any apparent interests involved, as in the cases of peptic ulcers and quasicrystals. However, there may be different interests that can lead to bias, and bias can ultimately lead to consensus. It has already been mentioned in the literature that decision-makers are both human and political: they are subject to hazard perception biases and political pressures. It has been pointed out that such biases and pressures lead regulators to solutions that are inefficient from a public welfare perspective.14 Both decision-makers and others, including scientists, are human and political. Moreover, people are also “economical”: they respond to incentives and act in their interests. We proceed now to analyze these interests.

Political Interests

The case of epigenetics shows how political pressure, even if applied in the opposite direction, can contribute to scientific consensus: Stalin’s terror, exercised in favor of epigenetics, led scientists on emotional grounds to ridicule it (in the USSR, genetics was politically rehabilitated in the 1960s, after Stalin’s death in 1953, and since then epigenetics has been considered pseudo-science in the USSR as well).

Political interests may be perfectly legitimate, as with attempts to protect the public from the influence of toxic agents. Nevertheless, political bias in science, including the precautionary principle “to be on the safe side,” is unacceptable. As stated by Moghissi et al,15

[t]he fundamental flaw in such an approach is the confusion between the role of the scientific community and the role of regulators and other policy makers. The scientific community must provide the regulators with accurate scientific information including the level of maturity of each scientific issue. It is the task of the regulators to consider the level of maturity of each scientific item and be protective in their decisions if the needed scientific information is less than adequate.

Political interests can be found in 2 important cases directly related to toxicology. The first case involves the assessment of the health consequences of secondhand tobacco smoke (SHS). There is no controversy about the grave health consequences of smoking; therefore, reducing the number of smokers (preferably to zero) is undoubtedly an important policy goal (fair disclosure: none of the authors has ever smoked). This legitimate policy goal, however, may have given rise to illegitimate political pressure and to research bias when assessing SHS damage, just as the economic incentives of tobacco industry–funded research may have given rise to research bias in the opposite direction. The 2 interested parties—researchers funded by the government and those funded by the tobacco industry—have expressed opposite views on SHS toxicity.16 While professional discussion of this subject is far beyond the scope of this article (none of the authors has expertise in any area of chemical toxicology), the very fact of bias, in one of the parties or in both, seems rather obvious.

The second case involves the assessment of the health consequences of low-dose ionizing radiation. While there is no doubt that high-dose radiation kills acutely and probably causes some lethal cancer in survivors, there is extensive controversy over the health effects of low-dose and low-dose-rate radiation. The linear no-threshold model (LNT) of radiation carcinogenesis assumes that any dose of radiation, no matter how small, is carcinogenic.17 The LNT was adopted during the Cold War era, and it has been shown that the desire to stop the nuclear arms race was a major driver of its adoption.18 While there is no doubt that nuclear war is calamitous and that stopping the nuclear arms race is a perfectly legitimate political goal, accepting an extremely controversial scientific theory (LNT) on political grounds is ethically challenging. Our very skeptical position on LNT is described in a recent paper.19
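For concreteness, the LNT assumption can be written in one line. The notation below is ours and schematic; the cited literature uses various dose-response parameterizations.

```latex
% LNT (schematic): excess cancer risk is proportional to absorbed dose D,
% with some risk coefficient \alpha > 0 and no threshold dose:
R_{\text{excess}}(D) = \alpha D \qquad \text{for all } D \ge 0,
% so R_{\text{excess}}(D) > 0 for every D > 0. A threshold model would instead
% posit R_{\text{excess}}(D) = 0 for all doses below some D_{0} > 0.
```

The controversy over low-dose effects concerns precisely whether this proportionality, extrapolated downward from high-dose data, holds at small $D$.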

Economic Interests

It is well accepted that a scientist cannot be a perfect, interest-free intellectual machine. For this reason, every potential conflict of interest, even a remote one, should be disclosed. The accepted practice is to disclose funding sources and relevant affiliations. Research conclusions benefiting the funding agency should be viewed with increased scrutiny because of probable and often even unintentional bias. The first discussion of unintentional bias due to conflict of interest can probably be traced to the Pentateuch (Exodus 23:8, Deuteronomy 16:19).

The above policy of increased scrutiny is fully implemented when research is funded by private sources. We would like to mention, however, that in the case of competitive private funding, even without increased scrutiny, the large number of different funding sources with different (and often conflicting) interests makes significant bias rather improbable.

When it comes to public funding, the authors are unaware of increased scrutiny. Moreover, it is often explicitly assumed that public funding is free from interests and possible bias.20 We are going to challenge this assumption. First, we should mention that public funding is none other than governmental funding, namely, part of taxpayers’ money is allocated to certain projects by appropriate government officials. Thus, decisions on policy and funding are taken by human beings, and as such, government officials cannot be perfect, interest-free decision-making machines seeking the public welfare even against their own personal interests. This is reinforced by another obvious observation: while personal interests are usually more or less clear to a person, “public welfare” is rarely obvious and is usually the subject of hot debate.

Regarding personal interests, most people—even if they are government officials—want to have stable salaries, so they are not interested in reducing public spending since such reduction endangers their positions. Many are interested in career promotion, and persons pursuing promotion are interested in widening the field of their discretionary power21 and increasing the budget they redistribute.22

Finally, it should be stressed that even perfectly interest-free, totally altruistic officials pursuing only the public welfare will act in exactly the same direction of increasing authority and budget, provided they believe that they understand the public welfare properly.

To summarize, it is completely natural to expect human behavior from human beings, even if they are government officials. As human beings, government officials are interested in gaining more discretionary power and in redistributing more funds. Therefore, they can be expected to be biased in their decisions. In the economics literature, this expectation is known as the Niskanen model.23

The trend of being “on the safe side” regarding toxicological or other hazards objectively serves the above-mentioned aims of more regulation and more budget.24 Speaking of SHS and radiation, economic interests act in the same direction as political interests—toward accepting SHS toxicity and the LNT model of radiation risk.

The trend toward more regulation and a larger budget seems to be balanced by the general public’s desire to have fewer restrictions and pay less tax, but often this balance does not work. In fact, the average person is rationally ignorant if a particular issue does not seem important enough,25 so people are ready to rely on expert opinions and do not object to expanding regulation and public spending.

In a democratic society, the interests of officials cannot be eliminated.26 However, they should be properly acknowledged and mitigated by proper transparency and independent scientific scrutiny.

Interests and Partisan Views on SHS and LNT

There have been claims of scientific consensus regarding the toxicity, in general, and the carcinogenicity, in particular, of SHS.27 However, one should note that this consensus was essentially achieved only after the Center for Indoor Air Research (CIAR), funded by the tobacco industry, was dissolved in 1998 as part of the Tobacco Master Settlement Agreement.28 Before then, many articles published in high-ranking medical journals claimed a lack of clear evidence for SHS toxicity. For example, Matanoski et al29 analyzed the observation that nonsmoking wives of smoking husbands had higher lung cancer rates than nonsmoking wives of nonsmoking husbands. The authors concluded that “women who were exposed to husbands who smoked were more likely to be older, have lower education, live in the city, and have other health behaviors that could increase their risk of lung cancer compared with nonsmoking women with husbands who did not smoke.”

The CIAR dissolution followed extensive litigation by 46 states against the main tobacco manufacturers. While fully adequate for resolving public issues, litigation can hardly be considered a legitimate scientific argument. Moreover, with respect to achieving the perfectly legitimate goal of eliminating smoking, the pressure to accept SHS toxicity (the CIAR dissolution) may have the opposite effect in the long term, as in the case of epigenetics.

Regarding toxicity of low-dose ionizing radiation, LNT is presently the most widely used model for radiation risk assessment. There have even been claims regarding scientific consensus with respect to LNT (see, for example, Boice30) despite increasing criticism of the model (see, for example, Feinendegen et al31 and Calabrese32). Unlike the SHS issue, ionizing radiation research has been funded mostly, if not exclusively, by public sources. However, here, too, we find interested parties such as radiation safety specialists (indifferent to results of radiation applications) and radiation oncologists (directly interested in efficacy of radiation treatment). From the point of view of possible bias, it is extremely interesting to compare 2 different views of LNT in 2 papers from the same compilation, the UpToDate online clinical decision support resource (www.UpToDate.com). The paper on radiation risks of medical imaging is written by specialists in radiation safety.17 In this paper, LNT is exclusively used for assessment of carcinogenicity despite mentioning in passing that other models exist. The paper on radiation therapy is written by a radiation oncologist.33 Radiation doses in radiotherapy are several orders of magnitude higher than in medical imaging, and adverse side effects of radiation are common, so one could reasonably expect an extensive discussion of radiation carcinogenesis. However, radiation carcinogenesis (secondary malignancies) is discussed only briefly, while LNT is not mentioned at all.

Scientific Consensus in Policy-Making

Although consensus is not an argument in scientific discussion, expert opinions surely matter in policy. The question that we attempt to answer here is which state of scientific discussion should be considered scientific consensus for policy-making purposes.

We suggest the following formula: There is proper consensus regarding a scientific statement if this statement is backed by those who would directly benefit from proving it wrong. For example, the toxicity of smoking should be considered a proper scientific consensus since it has not been challenged by tobacco industry–funded research; the toxicity of ionizing radiation in high doses should be considered a proper consensus since it is accepted by radiation oncologists, who are interested in applying higher radiation doses to their patients.

If, however, all consensus experts benefit from the results of their consensus, we suggest that specialists from adjacent fields of research should be consulted for policy advice. For example, regarding health effects of low-dose ionizing radiation (including assessing risks associated with medical imaging, nuclear power, and more), specialists in radiation oncology should be consulted to balance the possible bias of specialists in radiation safety.

Conclusions

Scientists are human, so they are prone to bias due to political and economic interests. Research funded by public sources is also subject to special interests and therefore prone to bias. Such bias can even lead to consensus not based on evidence. Claims for scientific consensus-based spending and regulation should be subject to special scrutiny.

Consensus is not an argument in scientific discussion; only experimental evidence matters. There are examples of decades-long scientific consensus on erroneous hypotheses. For policy advice purposes, any scientific statement or model should be considered as the subject of proper scientific consensus only if it is shared by those who would directly benefit from proving it wrong. Otherwise, specialists from adjacent fields of science and technology should be consulted.

Acknowledgments

The authors would like to thank Prof. Avi Caspi (Jerusalem College of Technology) for his encouragement of this study. The authors acknowledge fruitful discussions with Prof. Gregory Falkovich (Weizmann Institute of Science), Prof. Eli Sloutskin (Bar-Ilan University), Mr Yaakov Socol (Hebrew University Medical School), Mrs Raisa Stroug (Technion—Israel Institute of Technology), and Prof. Alexander Vaiserman (Institute of Gerontology, Kiev, Ukraine). The authors would also like to thank Prof. Ludwig Feinendegen (Heinrich-Heine-University Düsseldorf) and Dr Bobby R. Scott (Lovelace Respiratory Research Institute) for their interest in this work and important feedback.

Footnotes

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported in part by the Jerusalem College of Technology Grant No. 5969.

ORCID iD: Yehoshua Socol https://orcid.org/0000-0003-4167-248X

References

1. Peierls RE. The Laws of Nature. New York, NY: Charles Scribner’s Sons; 1956:90. [Google Scholar]
2. Socol Y, Dobrzyński L, Doss M, et al. Commentary: ethical issues of current health-protection policies on low-dose ionizing radiation. Dose Response. 2013;12(2):342–348. [PMC free article] [PubMed] [Google Scholar]
3. Snow SJ, Henriquez AR, Costa DL, Kodavanti UP. Neuroendocrine regulation of air pollution health effects: emerging insights. Toxicol Sci. 2018;164(1):9–20. [PMC free article] [PubMed] [Google Scholar]
4. Shahin S, Banerjee S, Swarup V, Singh SP, Chaturvedi CM. 2.45-GHz microwave radiation impairs hippocampal learning and spatial memory: involvement of local stress mechanism-induced suppression of iGluR/ERK/CREB signaling. Toxicol Sci. 2018;161(2):349–374. [PubMed] [Google Scholar]
5. Misner CW, Thorne KS, Wheeler JA. Gravitation. San Francisco, CA: W. H. Freeman; 1973:38. [Google Scholar]
6. Barrio JR. Consensus science and the peer review. Mol Imaging Biol. 2009;11(5):293. [PMC free article] [PubMed] [Google Scholar]
7. Kahneman D. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux; 2011:277. [Google Scholar]
8. Marshall B. Helicobacter connections—Nobel Lecture, December 8, 2005. 2005. https://www.nobelprize.org/uploads/2018/06/marshall-lecture.pdf. Accessed March 18, 2019.
9. Bravais A. Mémoire sur les systèmes formés par les points distribués régulièrement sur un plan ou dans l’espace [Memoir on the systems formed by points regularly distributed on a plane or in space]. J Ecole Polytech. 1850;19:1–128. (English: Memoir 1, Crystallographic Society of America, 1949). [Google Scholar]
10. Shechtman D. The discovery of quasi-periodic materials—Nobel Lecture. 2011. https://www.nobelprize.org/uploads/2018/06/shechtman-lecture_slides.pdf. Accessed January 1, 2019.
11. Figueroa ME. Principles of epigenetics. UpToDate®. Wolters Kluwer, Alphen aan den Rijn. 2018. https://www.uptodate.com/contents/principles-of-epigenetics. Accessed March 18, 2019.
12. Spector T. No Nobel, but epigenetics finally gets the recognition it deserves. The Conversation. 2013. http://theconversation.com/no-nobel-but-epigenetics-finally-gets-the-recognition-it-deserves-18970. Accessed March 18, 2019.
13. Esteller M. Cancer, epigenetics and the Nobel Prizes. Mol Oncol. 2012;6(6):565–566. [PMC free article] [PubMed] [Google Scholar]
14. Viscusi WK, Hamilton JT. Are risk regulators rational? Evidence from hazardous waste cleanup decisions. Am Econ Rev. 1999;89(4):1010–1027. [Google Scholar]
15. Moghissi AA, Calderone R, Azam F, et al. Regulating ionizing radiation based on metrics for evaluation of regulatory science claims. Dose Response. 2018;16(1):1559325817749413. [PMC free article] [PubMed] [Google Scholar]
16. Muggli ME, Forster JL, Hurt RD, Repace JL. The smoke you don’t see: uncovering tobacco industry scientific strategies aimed against environmental tobacco smoke policies. Am J Public Health. 2001;91(9):1419–1423. [PMC free article] [PubMed] [Google Scholar]
17. Lee CI, Elmore JG. Radiation-related risks of imaging, UpToDate®. Wolters Kluwer, Alphen aan den Rijn. 2017. https://www.uptodate.com/contents/radiation-related-risks-of-imaging. Accessed March 18, 2019.
18. Calabrese EJ. The road to linearity: why linearity at low doses became the basis for carcinogen risk assessment. Arch Toxicol. 2009;83(3):203–225. [PubMed] [Google Scholar]
19. Yanovskiy M, Shaki YY, Socol Y. Ethics of adoption and use of the linear no-threshold model. Dose Response. 2019;17(1):1559325818822602. [PMC free article] [PubMed] [Google Scholar]
20. Resnik DB. Financial interests and research bias. Perspect Sci. 2000;8(3):255–285. [Google Scholar]
21. Jasay A. The State. Indianapolis, IN: Liberty Fund; 1985. http://oll.libertyfund.org/titles/jasay-the-state. Accessed March 18, 2019. [Google Scholar]
22. Niskanen W. Bureaucracy and Representative Government. Chicago, IL: Aldine-Altherton; 1971. [Google Scholar]
23. Stevens JB. The Economics of Collective Choice. New York, NY: Routledge; 2018. [Google Scholar]
24. Socol Y, Yanovskiy M, Zatcovetsky I. Low-dose ionizing radiation: scientific controversy, moral-ethical aspects and public choice. Int J Nucl Gov Econ Ecol. 2013;4:59–75. [Google Scholar]
25. Downs A. An Economic Theory of Democracy. New York, NY: Harper and Row; 1957. [Google Scholar]
26. Stigler GJ. The theory of economic regulation. Bell J Econ. 1971;2(1):3–12. [Google Scholar]
27. US Department of Health and Human Services. The Health Consequences of Smoking—50 Years of Progress: A Report of the Surgeon General. Atlanta, GA: US Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health; 2014. https://www.ncbi.nlm.nih.gov/pubmed/24455788. Accessed March 18, 2019. [Google Scholar]
28. The master settlement agreement: an overview. 2018. https://publichealthlawcenter.org/sites/default/files/resources/MSA-Overview-2018.pdf. Accessed January 18, 2019.
29. Matanoski G, Kanchanaraksa S, Lantry D, Chang Y. Characteristics of nonsmoking women in NHANES I and NHANES I epidemiologic follow-up study with exposure to spouses who smoke. Am J Epidemiol. 1995;142(2):149–157. [PubMed] [Google Scholar]
30. Boice JD., Jr The linear nonthreshold (LNT) model as used in radiation protection: an NCRP update. Int J Radiat Biol. 2017;93(10):1079–1092. doi: 10.1080/09553002.2017.1328750. [PubMed] [Google Scholar]
31. Feinendegen LE, Pollycove M, Neumann RD. Hormesis by low dose radiation effects: low-dose cancer risk modeling must recognize up-regulation of protection In: Baum RP, ed. Therapeutic nuclear medicine. Berlin, Germany: Springer; 2013:789–805. [PMC free article] [PubMed] [Google Scholar]
32. Calabrese EJ. The linear No-Threshold (LNT) dose response model: a comprehensive assessment of its historical and scientific foundations. Chem Biol Interact. 2019;301:6–25. [PubMed] [Google Scholar]
33. Mitin T. Radiation therapy techniques in cancer treatment, UpToDate®. Wolters Kluwer, Alphen aan den Rijn. 2017. https://www.uptodate.com/contents/radiation-therapy-techniques-in-cancer-treatment. Accessed March 18, 2019.
