Psychologist Greta Arancia Sanna asks what keeps us clinging to misinformation, and whether psychology can help us loosen our grip.
17 July 2025
The other day, I was in the back of a cab on my way to King's Cross, staring out at the quiet hum of London in the early morning light. To keep myself awake, I struck up a conversation with the driver, commenting on the Senegalese election updates playing softly on the radio. He immediately lit up, eager to talk about his home country, which he had left seven years ago to study pharmacology. We chatted about the highs and lows of integrating into a new country, the challenges of being away from home, and eventually, the conversation turned to the pandemic. And then he said it.
He told me he didn't believe in the Covid-19 vaccine. That it had catastrophic side effects, that people were dying from it, that the government was covering it up. He insisted the vaccine hadn't been tested long enough, that it caused autism, and that he would never take it.
I was caught off guard. Here was someone educated in pharmacology, a field deeply rooted in scientific evidence, and yet he was repeating widely debunked claims. I wanted to counter with data, peer-reviewed studies, reports from medical institutions. But before I could, King's Cross came into view.
As I boarded the Eurostar, I couldn't stop thinking about that conversation. Was he simply irrational? Or was something else at play?
Can we really believe what we want?
Beliefs have fascinated researchers for years, especially when they appear to be resistant to change. We often assume people believe what is convenient for them and refuse to update their views when faced with valid evidence. But can we choose what to believe? If I asked you to close your eyes and convince yourself that the Earth is flat, you'd probably find it impossible. Even if you repeated it over and over, you couldn't truly believe something you know to be false.
And yet, in other domains, particularly those tied to politics, health, or identity, beliefs can seem remarkably entrenched. Climate change deniers, for example, are often accused of ignoring overwhelming scientific evidence. But misinformation isn't limited to extreme cases. Many of us have, at some point, unknowingly believed false claims, like the myth that sugar makes kids hyperactive or that we only use 10% of our brains. These ideas persist, not because people deliberately choose to ignore the facts, but because repetition and familiarity can make falsehoods feel intuitively true (Lewandowsky et al., 2017).
The Continued Influence Effect (CIE) suggests that people continue to rely on misinformation even after it has been corrected, because once misinformation fills a gap in our mental model, it's hard to erase. If we accept that a widely publicised claim is false, we are left without a satisfying alternative explanation, and the original account, even if incorrect, remains cognitively comfortable (Rich & Zaragoza, 2016).
The role of source credibility in belief updating
As part of my PhD research at University College London (UCL), published in Cognition (Sanna & Lagnado, 2025), I investigated how the perceived reliability of a source affects belief revision. What if people don't reject misinformation because they are incapable of updating their beliefs, but because they don't trust the correction?
In my study, participants were presented with a vignette about a fictional political candidate named Henry Light. Initially well-regarded, Henry is later accused of taking a bribe. Eventually, this accusation is corrected by one of six possible sources: three seen as reliable, including a prosecutor and a government report, and three viewed as unreliable, including a political satire news channel and a celebrity.
Participants were more likely to dismiss the bribery claim when the correction came from a trusted source. When the retraction came from a less reliable one, they held onto the original accusation. In other words, rejecting a correction from a dubious source can be a reasonable judgment, not an irrational one.
In a second version of the study, all sources were identified as prosecutors, but some were described as experienced and reputable, while others lacked experience and had records of professional misconduct. Again, participants only altered their beliefs when the correction came from the more credible source. Moreover, when the initial misinformation came from a highly trusted prosecutor and the correction from a less reliable one, participants continued to rely on the original claim to inform their judgement.
These findings challenge the assumption that people cling to misinformation because they're biased or irrational. Instead, they seem to apply a consistent logic to belief updating, weighing new information according to who delivers it and to the evidence they have already gathered about the claim. In this vein, while people might increase their belief that the bribery occurred if a reliable prosecutor makes the accusation, they might also lower their estimate of the prosecutor's reliability if she makes a statement they find unlikely given what they already know; for example, if they had every reason to believe the politician would not take a bribe.
This rational approach to belief updating is echoed by recent studies using computational models. For example, Connor Desai and colleagues (2020) used Bayesian network models to show that individuals update their beliefs in a way that reflects the perceived diagnosticity of the evidence, rather than simply rejecting inconvenient truths. This was found to be the case even in highly politicised settings (Ecker et al., 2021).
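To make this logic concrete, here is a minimal sketch in Python of how a Bayesian reasoner might jointly update belief in the bribery claim and trust in the prosecutor asserting it. It is written in the spirit of the source-credibility models cited above (e.g. Harris et al., 2016; Merdes et al., 2021), but it is not the model used in any of these studies, and all probabilities are invented purely for illustration.

```python
# Minimal sketch of joint Bayesian updating over two binary variables:
# whether the claim is true and whether the source is reliable.
# All numbers below are illustrative assumptions, not values from the
# studies discussed in the text.

from itertools import product

def update(prior_claim, prior_reliable):
    """Posterior P(claim) and P(reliable) after the source asserts the claim."""
    # Assumption: a reliable source mostly tracks the truth;
    # an unreliable one asserts the claim more or less at random.
    def p_assert(claim, reliable):
        if reliable:
            return 0.9 if claim else 0.1
        return 0.5

    joint = {}
    for claim, reliable in product([True, False], repeat=2):
        prior = (prior_claim if claim else 1 - prior_claim) * \
                (prior_reliable if reliable else 1 - prior_reliable)
        joint[(claim, reliable)] = prior * p_assert(claim, reliable)

    z = sum(joint.values())
    post_claim = (joint[(True, True)] + joint[(True, False)]) / z
    post_reliable = (joint[(True, True)] + joint[(False, True)]) / z
    return post_claim, post_reliable

# A trusted prosecutor alleging bribery raises belief in the claim (~0.86):
print(update(prior_claim=0.5, prior_reliable=0.9))

# If the claim seems very implausible to begin with, the same accusation
# instead erodes trust in the prosecutor (reliability falls from 0.9 to ~0.72):
print(update(prior_claim=0.05, prior_reliable=0.9))
```

In this toy setup, the same piece of testimony can either shift belief towards the claim or undermine trust in the source, depending on the listener's priors; this is the sense in which holding onto a prior belief after a dubious correction can be rational rather than stubborn.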
How can we tackle misinformation?
If we want to tackle misinformation, we need to move beyond simply providing factual corrections. Instead, we should focus on making corrections persuasive by considering:
Source credibility – Are corrections coming from sources people already trust?
Worldview – Is the message aligned with individuals' pre-existing beliefs?
Alternative explanations – If misinformation is filling a knowledge gap, what better explanation can we offer?
Recent studies on AI-driven belief correction show that when people engage in personalised, evidence-based dialogue, even with a machine, they are willing to revise entrenched views (Costello et al., 2024). This suggests that the format and tone of a correction matter as much as the facts themselves.
Interventions could also focus on building long-term trust in institutions. A global survey found that 83% of respondents believe scientists should communicate their work to the public, and 52% favour greater involvement in policymaking (Cologna et al., 2025). This points to a strong public appetite for science engagement. Efforts like community science events, transparent policymaking, and accessible communication can help bridge the credibility gap.
Is there a psychological vaccine against untruths?
One promising strategy is 'prebunking': pre-emptively exposing people to the techniques used to spread misinformation. Much like a vaccine, these interventions help people build mental defences before they encounter false claims. For example, Lewandowsky and Van Der Linden (2021) developed short videos that warn viewers about misleading tactics like emotional manipulation or false consensus. These videos have been shown to reduce susceptibility to misinformation.
Researchers have also designed interactive tools like the Go Viral! game, where players take on the role of a misinformer to learn how fake news spreads. By actively participating in the creation of misinformation in a controlled setting, players become better at spotting deceptive tactics in real life. This 'active inoculation' approach has been shown to significantly reduce belief in false claims. That said, one key limitation of these findings is that, while such techniques might reduce individuals' susceptibility to fake news, they don't necessarily improve discernment between true and fake news, which could foster scepticism towards all news, regardless of its veracity. These initiatives need not only to reduce belief in fake news, but also to avoid undermining belief in true news.
When it comes to misinformed beliefs, it is crucial not to be persuaded by overly simplistic initiatives that treat information as wholly beneficial or wholly harmful. Beliefs are not independent entities but parts of complex, interconnected networks, and no one-size-fits-all solution will solve all our problems. Which brings us to our final point.
Critical thinking and epistemic humility
Beyond prebunking, we should support broader efforts to foster critical thinking and what psychologists call epistemic humility: the ability to recognise our own limits and remain open to the possibility that we might be wrong (Karabegovic & Mercier, 2024).
One effective method is early education in media literacy. Programmes like the Stanford Civic Online Reasoning curriculum teach students to evaluate online information by checking other sources and reading laterally (McGrew et al., 2019). Brief interventions, as short as one hour, have been shown to improve students' ability to resist misinformation.
These skills are not just about identifying errors. They build a mindset of curiosity, reflection, and self-correction; key traits in an age of information overload. Cultivating these habits early could help create a generation better equipped to navigate a complex and often misleading digital world (Lewandowsky et al., 2017).
Thinking back to my conversation with the cab driver, I doubt anything I said in that moment would have convinced him to get vaccinated. But I also don't believe his mind is beyond changing. His scepticism came not from blind denial, but from a lack of trust in the sources promoting vaccination. If he were presented with persuasive, credible evidence from someone he trusted, I believe he could change his mind.
And if that's the case, perhaps the way we fight misinformation needs to change too.
Greta Sanna is a PhD student in Experimental Psychology working with the Psychology and Language Sciences department (PALS) at UCL.
Further Reading
Cologna, V., Mede, N. G., Berger, S., Besley, J., Brick, C., Joubert, M., ... & Metag, J. (2025). Trust in scientists and their role in society across 68 countries. Nature Human Behaviour, 1-18.
Connor Desai, S. A., Pilditch, T. D., & Madsen, J. K. (2020). The rational continued influence of misinformation. Cognition, 205, 104453. https://doi.org/10.1016/j.cognition.2020.104453
Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714), eadq1814. https://doi.org/10.1126/science.adq1814
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford University Press.
Guillory, J. J., & Geraci, L. (2010). The persistence of inferences in memory for younger and older adults: Remembering facts and believing inferences. Psychonomic Bulletin & Review, 17(1), 73–81. https://doi.org/10.3758/PBR.17.1.73
Harris, A. J. L., Hahn, U., Madsen, J. K., & Hsu, A. S. (2016). The Appeal to Expert Opinion: Quantitative Support for a Bayesian Network Approach. Cognitive Science, 40(6), 1496–1533. https://doi.org/10.1111/cogs.12276
Johnson, H. M., & Seifert, C. M. (1994). Sources of the continued influence effect: When misinformation in memory affects later inferences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(6), 1420–1436.
Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114(1), 3–28.
Karabegovic, M., & Mercier, H. (2024). The reputational benefits of intellectual humility. Review of Philosophy and Psychology, 15(2), 483-498.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
Lewandowsky, S., & Van Der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348-384.
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond Misinformation: Understanding and Coping with the "Post-Truth" Era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369.
Lewandowsky, S., Stritzke, W. G. K., Oberauer, K., & Morales, M. (2005). Memory for fact, fiction, and misinformation. Psychological Science, 16(3), 190–195. https://doi.org/10.1111/j.0956-7976.2005.00802.x
Madsen, J. (2016). Trump supported it?! A Bayesian source credibility model applied to appeals to specific American presidential candidates' opinions. Cognitive Science. https://www.semanticscholar.org/paper/Trump-supportedit!-A-Bayesian-source-credibilityMadsen/ccdbba50f93e81fdcb88b94828d68a760568d367
McGrew, S., Breakstone, J., Ortega, T., Smith, M., & Wineburg, S. (2019). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 47(2), 165–193. https://doi.org/10.1080/00933104.2019.1586611
Merdes, C., von Sydow, M., & Hahn, U. (2021). Formal models of source reliability. Synthese, 198(23), 5773–5801. https://doi.org/10.1007/s11229-020-02595-2
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(3), 303–330. https://doi.org/10.1007/s11109-010-9112-2
Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62–74. https://doi.org/10.1037/xlm0000155
Sanna, G. A., & Lagnado, D. (2025). Belief updating in the face of misinformation: The role of source reliability. Cognition, 258, 106090. https://doi.org/10.1016/j.cognition.2024.106090
Sommer, J., Musolino, J., & Hemmer, P. (2024). Updating, evidence evaluation, and operator availability: A theoretical framework for understanding belief. Psychological Review, 131(2), 373.