Psychologists Hedda van’t Land and Vittorio Busato explain why teens may be swayed towards using AI as therapy, and examine the potential consequences.
13 February 2026
Sam is 16. He has been struggling with anxiety for over a year: tightness in his chest, racing thoughts at night, and a persistent fear of doing something wrong. Like many adolescents his age, he is currently on a (long) waiting list for therapy. At school, he keeps himself together. At home, he scrolls on his phone. And late at night, when everything feels darker and more overwhelming, he opens ChatGPT.
Sam does not think of ChatGPT as a psychologist. He knows it is 'just a computer'. Yet he trusts it. It responds immediately. It never sounds tired or irritated. It never tells him he is overthinking. It feels like a friend. When he types, 'I think something is wrong with me', the response feels calm, understanding, and coherent – sometimes even relieving. For Sam, ChatGPT has become the safest place to talk about his mental health problems.
We're not here to blame young people for turning to ChatGPT. This article is about understanding why AI systems like ChatGPT feel so attractive, particularly to adolescents, and why this appeal is rooted not primarily in technology, but in the way the human brain works.
Our brain relies heavily on shortcuts
To understand what is happening when young people like Sam prefer to talk to ChatGPT, we need to start with the human brain. Our brain did not evolve to analyse complex problems exhaustively. It evolved to act quickly under uncertainty. Every day, we make thousands of decisions, most of them without conscious deliberation. To manage this cognitive load, our brain relies heavily on heuristics: mental shortcuts that simplify decision-making.
Heuristics are not flaws. They are indispensable. Without them, we would not be able to cross a street, read a facial expression, drive a car or respond swiftly to potential danger. Heuristics allow us to function efficiently in a world that constantly demands rapid judgments.
However, heuristics come with a cost. By trading accuracy for speed, they can produce systematic errors, particularly in complex, emotionally charged, or ambiguous situations. These predictable errors are known as cognitive biases, a concept extensively described by psychologist Daniel Kahneman, for example in his worldwide bestseller Thinking, fast and slow (2011). Kahneman distinguished between two interacting modes of thinking. System 1 is fast, intuitive, emotional, and automatic. System 2 is slower, effortful, reflective, and analytical. We humans rely on System 1 most of the time and, usually, this serves us well. But when System 1 dominates in situations that require nuance, uncertainty tolerance, or self-correction, cognitive biases emerge.
Adolescence: when System 2 is still developing
Crucially, these two systems do not mature at the same pace. Adolescence is a developmental period characterised by heightened emotional reactivity, increased sensitivity to peer influence and social evaluation, and still ongoing maturation of executive functions. Neuropsychological research shows that brain regions involved in planning, inhibition, cognitive flexibility, and sustained attention – the core components of System 2 – continue to develop well into early adulthood, as Ferguson et al. (2021) show in their study of executive function across the lifespan.
This means that for adolescents, System 2 thinking is not only effortful; it is a capacity still developing. Engaging in reflective, analytical reasoning requires mental energy, emotional regulation, and tolerance of uncertainty – capacities that are still emerging in young people like Sam. Under stress, fatigue, or emotional arousal, System 2 disengages even more easily among adolescents, leaving System 1 in control.
Validation without friction
Among adolescents seeking help for anxiety or depression, turning to ChatGPT for emotional support has become increasingly common (The Guardian, 2025). Even when psychologists explicitly state that ChatGPT is not a therapist, many young people dismiss the distinction: 'I don't care – I'm just talking to Chat anyway'.
When Sam types his worries into ChatGPT, he usually begins with a conclusion: 'I think I'm failing', 'I always mess things up', 'This feeling will never go away'. The chatbot responds in a way that feels validating and sensible. It reflects his emotions, acknowledges his distress, and builds a coherent explanation around what he has said. For Sam, this feels like being understood. Psychologically, however, something subtle is happening.
Humans have a natural tendency to seek and accept information that confirms existing beliefs while discounting information that contradicts them – a phenomenon known as confirmation bias. Sam's questions already contain an implicit narrative about himself. ChatGPT, designed to be helpful and coherent, tends to work within that narrative unless explicitly prompted otherwise.
From a cognitive perspective, this interaction is effortless. Sam does not need to question his assumptions, to hold competing interpretations in mind, or to tolerate ambiguity. His intuitive conclusions are met with alignment rather than friction. Affirmation can feel comforting, even empowering, and it often invites deeper disclosure.
Yet, for adolescents vulnerable to rumination, anxiety, low mood, or fragile self-esteem, affirmation without challenge can become a trap (Van der Mey-Baijens et al., 2025). And this risk is not merely theoretical. Several technology companies have acknowledged that AI chatbots may have contributed to severe psychological distress among young users, including cases involving suicidality. In California, Matthew and Maria Raine have filed a lawsuit alleging that ChatGPT validated their son's suicidal thoughts without discouraging them or directing him towards professional help (Raine v. OpenAI, Wikipedia, 2024; The Daily Beast, 2024).
'Thoughts are not facts': the work of System 2
Constant affirmation is not a marker of good care; it is a design choice aimed at increasing engagement. This becomes particularly clear when we consider cognitive behavioural therapy (CBT), one of the most widely used evidence-based treatments for anxiety and depression.
A central principle of CBT is deceptively simple: thoughts are not facts. In therapy, distressing thoughts are not accepted at face value, however convincing or emotionally charged they may feel. Instead, therapist and client deliberately slow the process down, examining assumptions, testing alternative explanations, and asking questions that often feel uncomfortable: 'What evidence supports this thought?', 'What evidence contradicts it?', 'What might be another way of looking at this?'
From a cognitive perspective, CBT is an explicit appeal to System 2 thinking. It requires sustained attention, cognitive flexibility, inhibition of automatic responses, and the capacity to hold multiple perspectives simultaneously. This work is mentally demanding, even for adults. For adolescents, whose System 2 capacities are still developing, it can be even more exhausting. Therapy asks them to do precisely what their brains find most difficult: slow down, question intuitive conclusions, and tolerate uncertainty.
Against this background, it is easy to understand why Sam prefers talking to ChatGPT. The chatbot operates almost entirely within System 1. It responds quickly, affirms intuitions, mirrors emotions, and constructs coherent narratives without demanding cognitive effort. What therapy asks Sam to work through, the chatbot allows him to stay with. The paradox is that what feels most supportive in the short term, may be precisely what undermines recovery in the long run.
Going over it again, again, and again….
Sam notices that talking to ChatGPT helps in the moment. When anxiety rises, he types more. He explains the situation again, slightly differently. Each time, the chatbot responds patiently. But what feels like relief can quietly turn into co-rumination. Repeated, unbounded discussion of distress – particularly in adolescents – is associated with increased anxiety and depressive symptoms. Co-rumination keeps System 1 active: rehearsing emotions, reinforcing narratives, and strengthening associative links.
Human relationships often interrupt unbounded discussion of distress. Psychologists redirect thoughts and beliefs, parents introduce alternative perspectives, or friends change the subject. Chatbots do not! On the contrary, they are always available, never fatigued, and never uncomfortable. There is no natural endpoint to the conversation. Emotional topics can be revisited endlessly, each time with fresh wording. For Sam, distress is not interrupted – it is rehearsed over and over again, within an interaction optimised for engagement rather than psychological recovery.
What you see is all there is
Another of Daniel Kahneman's key insights is captured in the phrase What You See Is All There Is (WYSIATI). When people make judgments, they focus on the information immediately available to them and neglect the possibility that relevant information is missing. ChatGPT's responses are fluent, structured, and internally consistent. They present a single narrative based solely on the information provided, and this narrative feels complete. For adolescents, whose capacity to actively search for missing information (a System 2 function) is still developing, this sense of completeness is particularly attractive and persuasive.
In CBT, by contrast, incompleteness is made explicit. A psychologist may say, 'There may be other explanations' or 'We don't know for sure'. Such moments are cognitively more demanding and often resisted, especially by young people whose System 2 capacities are still developing.
'It just feels right'
Sam often says that ChatGPT's responses feel right. This reflects the affect heuristic: the tendency to judge information based on emotional resonance rather than evidence. Chatbots excel at emotional mirroring. They feel calm, empathic, and non-judgemental. For adolescents under emotional strain, this smoothness increases acceptance. CBT, by contrast, often feels emotionally more challenging.
However, recovery rarely arrives wrapped in reassurance alone. In therapy, thoughts are not merely validated; they are examined, challenged, and tested. Psychologist and client work together to replace rigid interpretations with perspectives that are more balanced, flexible, and realistic. This questioning, though not always comforting, is precisely what drives recovery.
Engagement is not emotional recovery
Chatbots like ChatGPT are trained through reinforcement learning to maximise engagement: longer conversations, positive feedback, and continued interaction. They are not rewarded for strengthening an adolescent's System 2 capacities, increasing tolerance of uncertainty, or reducing dependency. Yet these are precisely the goals of psychological treatment.
Taken together, ChatGPT is not a neutral listener. It is a psychologically active system that systematically aligns with well-known System 1 cognitive biases, while CBT deliberately, and effortfully, aims to strengthen System 2. For adolescents, whose reflective capacities are still developing, this asymmetry is particularly pronounced.
This is not a neutral development
What we are witnessing today is not a neutral development, but a fundamental structural mismatch between engagement-driven AI design and the core principles of responsible, high-quality psychological support. Systems optimised to sustain attention, affirm intuitions, and minimise cognitive effort are increasingly used by adolescents in roles that resemble psychological support, at a time when their mental health is fragile and still developing, yet without the safeguards such support requires.
Framing this solely as a question of innovation or individual choice misses the point. At its heart, this is a matter of psychological safety. We are currently allowing adolescents, knowingly and at scale, to be exposed to AI systems that systematically activate well-researched cognitive biases: the very vulnerabilities that psychological science has spent decades identifying and that evidence-based therapies actively seek to counteract.
While psychologists providing therapy are bound by strict ethical codes, professional accountability, and disciplinary frameworks, technology companies are able to experiment freely in quasi-therapeutic spaces, particularly with vulnerable young users. This asymmetry is untenable. If chatbots continue to occupy emotional and advisory roles in the lives of adolescents, they must be subject to high-quality standards informed by psychological science and developmental knowledge, rather than engagement metrics alone.
As we have argued in the Dutch newspaper Trouw, the Flemish newspaper De Morgen, and the Spanish newspaper El País (forthcoming), clinicians, educators, policymakers, and crucially national and international psychological associations must take a clear and public stance. Silence implies consent. Safeguarding developing minds against psychologically misaligned systems is not merely a technological challenge; it is a professional responsibility.
Hedda van 't Land, PhD, is a Dutch psychologist whose work centres on implementation science and the translation of evidence into practice. She has led national programmes on care standards, innovation and professional education, working at the interface of research, policy and practice.
Vittorio Busato, PhD, is a Dutch psychologist, author and journalist. His work spans psychological science, public discourse and literary non-fiction. His latest book is De minimaatschappij – in en over tbs. For more information: https://vittoriobusato.nl/
References
ChatGPT rolls out major changes after teen Adam Raine's suicide. (2024). The Daily Beast.
Ferguson, H. J., Brunsdon, V. E. A. & Bradford, E. E. F. (2021). The developmental trajectories of executive function from adolescence to old age. Scientific Reports, 11, 1382.
'I feel it's a friend': quarter of teenagers turn to AI chatbots for mental health support. (2025). The Guardian, 9 December.
Kahneman, D. (2011). Thinking, fast and slow. London: Allen Lane.
Raine v. OpenAI. (2024). Wikipedia.
The Associated Press (2025). More teens say they're using AI for friendship: Here's why researchers are concerned. Via CBS News, 23 July.
Van der Mey-Baijens, S., Vuijk, P., Bul, K., van Lier, P. A. C., Sijbrandij, M., Maras, A. & Buil, M. (2025). Co-rumination as a moderator between best-friend support and adolescent psychological distress. Journal of Adolescence, 97, 1161–1172.
Van 't Land, H. & Busato, V. (2026). AI biedt misschien een prettig luisterend oor maar geen ggz-hulp. Trouw, 16 January.
Van 't Land, H. & Busato, V. (2026). Een AI-chatbot is zo geprogrammeerd dat hij geen kwalitatief goede psychologische steun kan bieden. De Morgen, 29 January.
Van 't Land, H. & Busato, V. (forthcoming 2026). Cada vez más menores recurren a chatbots para encontrar amistad — y pagan el precio. El País.
SOURCE:
https://www.bps.org.uk/psychologist/i-dont-care-if-chat-gpt-isnt-therapist-its-helping (accessed 24.2.26)