Here’s the headline in tomorrow’s New York Times: “Teleporters invented! Trains, planes and automobiles headed for the junk heap!” People everywhere will remember this as a seminal event in human history. Where were you when teleporters were invented? But first, people everywhere will debate. Some will leap to the defense of the millions who will lose their jobs in auto manufacturing, airports, and road repair. Others will apply the policy brakes. How do we know this technology is even safe? How will we regulate its use? Governments will get involved in these questions, while industry-funded skeptics will start shooting the messenger and spreading disinformation in order to reduce our consumer confidence. Religions will be caught in a bind because if people can be teleported, how will we know if souls are being teleported as well? How will we even know if we’re abandoning souls? Privacy and security issues will multiply as homeowners, stores, banks, art museums and militaries try to grapple with how to protect themselves. And brushing all this talk aside, researchers and entrepreneurs everywhere will start racing for commercial opportunities, like teleporting cancer cells out of the body, teleporting islands of ocean plastic into recycling centers, and teleporting nuclear waste up to orbiting spaceships that can be flown into the sun.
There are many reasons why we react to science the way we do. Science often has tremendous impact, both good and bad, immediate and long-term, narrow and widespread, subtle and dramatic. But our psychological makeup sets the stage for how we react to these conversations: our personal understanding of the science in question, our trust in science, our trust in the institutions behind a new scientific discovery, our aversion to risk, how we react to feedback from others, our religious, political, social, and economic biases (which result from still other kinds of influences), what kind of “priming” we’ve been exposed to (like a Joe Rogan podcast filled with vaccine disinformation versus a Star Trek episode where Dr. McCoy cures a mysterious disease in 45 minutes), and much more. Over the past ten years, the US National Academies of Sciences has explored this issue in a brilliant series of conferences and reports on the science of science communication (see the NAS link in the additional reading section below). Some of the Academies’ findings reinvent the wheel from psychology research, some are novel, but a lot of this material—old and new—revolves around the concept of “belief perseverance.”
Belief perseverance is when our preexisting misconceptions and preconceived notions hinder the assimilation of new information. People tend to cling to their beliefs, even when confronted with evidence to the contrary. This creates a cognitive bias that prevents us from accepting new ideas, leading to a fragmented understanding of new concepts.
For psychology researchers, belief perseverance is actually a combination of several other phenomena, like belief confirmation (our tendency to cherry-pick information that reinforces our existing beliefs and ignore the rest), anchoring (where we tend to stick with our first impression), priming (the Joe Rogan thing), and other biases like overconfidence. The availability (or lack thereof) of evidence, plus our fundamental human tendency to try to make sense of the world based on what we know, even if we know very little (part of the Dunning-Kruger effect, which we’ve written about previously), are also part of this complex equation. Belief perseverance can be especially hard to overcome in science because scientific claims are often hard to argue for: the concepts can be difficult to grasp, the evidence can seem flimsy to the naked eye, and common sense might tell us that something else is true.
Battling over our beliefs is a staple of human history, so it should come as no surprise that it’s also been a staple of debate over science and science policy. In fact, we’ve seen lots of this kind of debate both inside science and at the intersection of science and society.
Perhaps the most famous example from inside science involves the perseverance of geocentricity. For centuries, natural philosophers were stuck on the idea that the Earth needed to sit at the center of the universe. This view made sense because everything in the sky appears to revolve around the Earth; mathematical models during the Middle Ages did an adequate though imperfect job of predicting the positions of the planets (and these predictions were important for needs like timekeeping and astrology forecasts); and the style of education for the first millennium and a half of Western history focused on learning whatever Aristotle said was true (and with regard to geocentricity, Aristotle said that if the Earth were rushing through space, then we would detect its motion). The search for actual truth wasn’t commonplace yet, and existing explanations were working just fine, thank you. A geocentric universe was also important to church doctrine in the West, which held that the heavens were fixed and perfect and that man must necessarily be at the center of creation.
This wasn’t just a mindset, mind you. Up until post-Reformation Europe (the early 1500s), most of the population of Europe was Catholic, and the Catholic Church dominated European life and society. So if the church said the Earth was at the center of the universe, then the Earth was at the center of the universe. Or else.
As evidence mounted during the early years of the Renaissance (not just in Europe but throughout the world) that the Earth might in fact be orbiting the sun, the pushback was widespread and furious. Offending scientists like Galileo were silenced, their works were banned, and evidence from disbelieving scientists was twisted and misinterpreted to align with belief. Eventually, the weight of the evidence became too much and the need for accurate measurements (chiefly to support exploration) became too great, and the sun was finally put at the center of our solar system where it belonged. But all told, this battle between science and belief perseverance took almost 300 years to play out: from 1543, when Copernicus first published his theory of a heliocentric universe, to 1822, when the Catholic Church finally allowed this knowledge to be published in textbooks.
Another famous example from inside science came when Einstein published his paper on special relativity in 1905. Einstein offered an explanation of the true nature of space and time, except his universe wasn’t filled with ether. There had never been experimental confirmation of ether anyway, but many physicists were stuck on the idea because it made intuitive sense. How else could light move through space if it wasn’t moving through some kind of medium (like a wave through water)? In his 1915 paper on general relativity, Einstein further alienated his peers by overturning fundamental concepts set forth by the heretofore peerless Isaac Newton.
The early resistance to Einstein’s ideas ran the gamut from scientific skepticism to antisemitism. It wasn’t until experimental proof of Einstein’s ideas began to emerge—first from the total solar eclipse of 1919—that the world suddenly realized Einstein had been right all along. But even Einstein got stuck in the same rut a few years later when he refused to believe in the emerging science of quantum physics because it just didn’t fit his more deterministic worldview. Einstein’s famous quote, “God does not play dice,” was a complaint that the uncertainty at the heart of quantum mechanics simply couldn’t be correct; the theory produced interesting results, but it couldn’t possibly describe how the universe actually worked.
So, belief perseverance happens all the time inside science. And it disrupts from many directions: preventing both scientists and the public from accepting new facts, preventing scientists from even searching for new facts, making scientists overconfident in their views and prone to picking out only the facts that align with their beliefs, and leading both scientists and policymakers to bend offending facts to fit their preferred beliefs.
These same dynamics also happen outside science at the intersection of science and society. Consider evolution, for example. When Charles Darwin first introduced his revolutionary idea to the public in his 1859 book, On the Origin of Species, Darwin became an international sensation. This wasn’t only because Darwin’s work completely transformed our understanding of biology and unlocked new fields of research and exploration. His work also fanned the flames of belief perseverance. A great many scientists, politicians, journalists and regular citizens at the time, particularly in the US, saw in Darwin’s findings not the lesson that all humans are alike, but support for their biases about the superiority of Caucasians. Through Darwin’s work, the imprimatur of science was given to government policies that were heretofore only built on racist beliefs—Jim Crow laws and Supreme Court decisions like Plessy v. Ferguson that subjugated and terrorized millions of Black Americans for generations because Blacks were now officially, “scientifically,” deemed inferior; doctors who sterilized the infirm under the guise of eugenics; and German army officers who rationalized the murder of millions in their quest to build a “master race.” There is, of course, nothing in Darwin’s work or subsequent work of any scientific merit whatsoever that suggests humans differ by race—only our own distortions filtered through our own biases and perceptions. In the early history of evolution, then, this science was readily accepted by (and distorted by) the public to the extent that it neatly aligned with their existing belief systems.
Until it didn’t. Darwin’s Theory of Evolution also threatens religious beliefs, so now that we’re done actively promoting white nationalism in America (at least as official government policy) and have moved on to the Christian nationalism phase of our history, a number of US states—twenty over the last several decades—have tried to remove evolution from schoolbooks, or to “balance” this science with the religious perspective of intelligent design. Here, belief perseverance is working in the opposite direction. Because these state legislators don’t understand how evolution can be reconciled with their personal religious beliefs, or they can’t see evolutionary change happening in humans in real time and therefore don’t think it’s real, or they don’t understand how firmly established evolutionary theory has become in biology—or all of the above—there is simply no room for this science in their worldview. Their pushback is both religious—God created man on the sixth day, end of story—and also pockmarked with misunderstanding: How could humans have possibly evolved from chimps if chimps are still around? We didn’t (if you haven’t already read his book, Stephen Jay Gould’s Full House offers an excellent and very readable explanation of evolution; a link is included below in the additional reading section).
Climate change is another prominent example of how belief perseverance collides with science. Like evolution, the direct evidence for climate change (at least on human time scales) is hard to detect. There is no immediate and visible connection between the smoke that spews from a tailpipe in Alabama and the flooding that happens in Maine 50 years later. And even as the evidence of and impacts from climate change continue to mount, opinions about whether this is a human-caused problem are firmly entrenched and increasingly polarized. Skeptics are, unfortunately, highly versed in the climate disinformation rhetoric that has poisoned our national dialogue over the past 40 years. They might say, for example, that the earth warms and cools in long cycles and that there’s no way to know whether we’re causing this latest heat cycle, despite decades of evidence to the contrary and the overwhelming consensus among climate scientists that warming is human-caused. Or they will note that the northeastern US has been experiencing record snowfalls over the last several years and that another ice age is coming, not global warming, but will ignore the massive heat waves that have been covering our planet in recent years and also ignore the fact that these record snowfalls are probably due to a breakdown in the polar vortex caused by global warming. Or they will say a warming planet will be good for us because more land will become available and more crops will grow, even though this warming will also cause large parts of our planet to become unlivable, as well as cause mass migrations and food shortages on a scale never before seen in human history. Or they might dismiss climate change warnings because they distrust the government. If the government thinks climate change is important, then they’re going to be against it, because their belief system is tied up in anti-government animus.
On top of all this belief perseverance, even when former climate change skeptics do finally accept that climate change is real and urgent—that our coastlines are flooding, our warming seas are killing marine life, and violent weather events are becoming increasingly common—the solutions proposed by policymakers don’t seem viable enough for some to get on board with policy change. Electric cars and solar panels are too expensive to buy, the battery technology needed to store power from wind and solar farms is too immature, and our entire world is too dependent on oil and gas to change this formula in time to make a difference. What good does it do to worry about the problem, then, if there isn’t a solution?
Organic food is another example of where our science and belief systems collide. Grocery stores are filled with thousands of labels that appeal to our desires for safe and healthy foods: low-fat, low-carb, sugar-free, high-fiber, heart-healthy, all-natural, free-range, hormone-free, nothing artificial, non-GMO. You label it, we’ll buy it. But because there is so little regulation of these labeling claims, because each claim is the buzzword for a multi-billion-dollar market segment—even entire brands and grocery store chains like Sprouts and Whole Foods—and because some of these claims involve scientific evaluation, the government is starting to take notice. The US Department of Agriculture is currently evaluating the labeling standards for deciding when something can honestly be called organic, and when this claim is just a marketing sham.
According to Pew surveys, most Americans think organic food is better for them, even if science has yet to back up this belief. Some research shows potential benefit from these foods; other research does not. Most of the studies conducted to date on these questions have not been rigorous (few compare outcomes against control groups who ate “normal” food, for example) and do not involve large numbers of observations. But the positive inferences—that organics may have lower levels of pesticide residue, for example—have been amplified through a great deal of promotional literature on the benefits of organic diets. Even trusted medical sites like the Mayo Clinic simply list the potential benefits of organic foods rather than question the science behind these claims. So, whether we are medical researchers, medical providers, or simply consumers who think it makes sense that organic foods are a good idea for whatever reason, then we’re going to accept the science that says organics are good for us, reject the science that says meh, and as a society, continue to spin out a whole ecosystem of industry and regulation promoting and supporting our belief system. Even if we think we’re being pranked—and in this case, it’s easy for manufacturers to add some kind of “includes organic” or “non-GMO” label on their product in order to appeal to the buying public, even if that product is kitty litter (true story)—our belief in organics will not be shaken. And this belief may extend to similar unproven health food claims as well: If we’re into organics, then we’re also more inclined to buy GMO-free products because we believe there is such a thing as corn and wheat that has never been genetically modified (in fact, though, these crops, among many others, have been selectively cross-bred for centuries to develop varieties that thrive in particular environments, that are more resistant to drought and insects, and that taste better and have better colors).
The same phenomenon of belief perseverance that causes us to cling to our biases even in the absence of evidence or when faced with disproving evidence can also result in us adopting science-sounding ideas that seem right but are actually just pseudoscience. Pseudoscience, it has been said, takes the form of science without the substance of science. While science relies on a systematic and evidence-based approach to understanding the natural world, pseudoscience lacks the rigorous methodology and empirical support necessary to be considered a valid scientific pursuit. In its most common form, pseudoscience is used for financial and political gain. Over time, the sheer volume and visibility of products, services and notions that claim to be scientific but actually aren’t erode our trust in real science and also erode our understanding of science. Distinguishing between science and pseudoscience is important for promoting sound decision-making and advancing reliable knowledge in society.
| Characteristic | Science | Pseudoscience |
| --- | --- | --- |
| Rigorous study design, observation, experimentation and analyses | Yes | No |
| Conclusions subject to peer review and other scrutiny | Yes | No |
| Theories and explanations built on existing scientific knowledge and consistent with well-established principles | Yes | No |
| Hypotheses and theories are testable and falsifiable, meaning they can be proven wrong based on empirical evidence | Yes | No |
| Scientists actively seek to disprove hypotheses through experimentation to ensure the validity of findings | Yes | No |
| Ideas not widely accepted until they have withstood extensive scrutiny from the scientific community and have been supported by reproducible evidence | Yes | No |
| Conclusions are most often expressed as hedges (e.g., this data “may” explain this phenomenon, usually not “does” or “must”) | Yes | No |
| Leads to numerous practical applications and technological advancements that have improved human life | Yes | No |
| Theories often have predictive power, allowing scientists to make accurate predictions and develop new hypotheses based on existing knowledge | Yes | No |
| Makes spectacular claims: Guaranteed relief, leaves an invisible veil on the skin to protect its youthful radiance, optimizes the release of energy, 9 out of 10 doctors recommend, improves memory by 20 percent, etc. | No | Yes |
| Only seeks confirmations, not contradictions, and ignores or denies contradictory information | No | Yes |
| Takes the form of science without being scientific: Homeopathy, chiropractic, naturopathy, osteopathy, magnet therapy, colloidal silver, crystal healing, detoxification, acupuncture, astrology, numerology, aromatherapy, intelligent design, etc. | No | Yes |
| Often used in partisan rhetoric (such as all rhetoric supporting antivax, anti-mask, climate change denialism, etc.) | No | Yes |
| Appeals to emotional or personal beliefs rather than being grounded in evidence and logic | No | Yes |
| Ideas gain popularity despite the absence of support among experts in the relevant fields | No | Yes |
Belief perseverance pops up in several other ways as well—in superstitions, racism, and grudges, but also in friendship, fandom, and love. It’s a powerful cocktail of cognitive and behavioral science.
But at least with regard to science communication, the key outputs of belief perseverance are barriers and reactions like skepticism, disbelief, denial, rationalization, groupthink, confusion, disinformation, and elevating pseudoscientific “sounds reasonable” ideas to the same plane of credibility as actual science. Clearly then, belief perseverance is an important concept in science communication. Understanding how we might be able to overcome belief perseverance—or at least put a dent in it—will be explored in part 2 of this essay.