
‘Could vs should’ in AI therapy

Barry Orr ponders how industrial action by psychological therapists might be considered, or averted.

16 October 2023


Like it or loathe it, Artificial Intelligence is increasingly part of the landscape across human life – including psychological therapies. Papers and scholarly debates have considered both current and potential roles of AI in therapy (e.g. Abrams, 2023; Horn & Weisz, 2020; Jackson, 2023; Knox et al., 2023). Here, I turn to some less-discussed boundaries and concerns around AI.

In particular, I look to the entertainment industry for the possible consequences of disrespecting or ignoring such boundaries. Are we psychologists sleepwalking towards legal or industrial action over AI? In the words of Dr Ian Malcolm from the film Jurassic Park, are our scientists ‘so preoccupied with whether or not they could, that they didn't stop to think if they should’?
From entertainment to elsewhere

Recent strikes by writers’ and actors’ unions have partly centred on AI concerns – for example, the prospect that future programming is made and released as AI depictions of scanned likenesses or digitally created actors (Fitzgerald, 2023). Here, and in parts of the IT industry – whether for profit, to display innovation, or for other reasons – some appear to want AI use to become the norm. Therapists might be wise to pay attention to the outcomes of these strikes, especially since entertainment law rulings could set relevant precedents.

In therapy innovation, we appear to be moving, or at least aspiring in some quarters, towards a Turing-test-passing AI virtual therapist. Imagine a therapist capable of being ‘customised’, like a computer game avatar or today’s ChatGPT prompt. A digitally ‘resurrected’ facsimile of Freud, Horney or Beck, or a therapist ‘in the style of’ a living giant of the field, is possible.

Maybe that, in itself, seems useful and unproblematic to you. But how far could, and should, we take AI in catering for preferences? Imagine a digital therapist whose gender, race, age or disability status could be specified: might that foster discriminatory attitudes? Legally, might such preferences be ruled as concerning a ‘computer’ rather than a person, leaving any discrimination claim difficult even to consider? A challenging issue, but one seemingly overlooked to date.
Mechanised assumptions

Ironically, many therapies already have traditions of increasing ‘mechanisation’ or ‘manualisation’, with varied views on the merits of this over the decades (e.g. Carey et al., 2020). Mechanised elements have likely also aided the marketability of therapies, as well as replication research (Leichsenring et al., 2017). There is then the prospect of some practitioners seeking titles and/or status from being the first to ‘pioneer’ or patent forms of ‘replicable’ AI-based therapies. Perhaps by claiming empirically validated treatment (EVT) ‘superiority’ with AI?

Again, maybe you would argue that if mechanised/manualised elements have always been part of therapy, there’s no issue here. But I would counter that spontaneous, ‘here and now’, human elements have always been central to classical therapies, EVT-‘validated’ or not. Spontaneity has been strongly suggested as playing a crucial role in therapy (e.g. Yalom, 2002). Can digital displays of past human interactions, or programmed randomness/‘spontaneity’ by AI, really compete? Is such spontaneity even replicable?

Perhaps also relevant here is that previous attempts to port therapy to computers have typically reported limited efficacy and low adherence rates to date (Musiat et al., 2022). Yet 55 per cent of respondents in one recent study expressed a preference for an AI therapist (Aktan et al., 2022) – though this was a non-clinical sample. Would we still see that preference under double-blind-type conditions, where patients did not know whether or not they were being treated by AI?
Training in human relations, exclusively by machine modelling?

Now consider some areas – social anxiety, forms of autism, agoraphobia, and so on – where part of what a good therapist is seeking to do is improve socialisation with others. Such clients may be particularly likely to express a preference for therapy via AI; but is it really wise to entrust something so intrinsically social and human to machines?

There’s also the issue of how different people respond at different times. Secondary and tertiary care patients have not necessarily been considered separately from primary care patients where AI is concerned. What if the former were to sustain a significant psychotic episode, a severe self-harming relapse, a suicide attempt, or an undiagnosed brain injury? If any or all of these presented, perhaps in co-morbid fashion, can we know exactly how AI would respond to such complex factors if it were the sole therapeutic lead? There has already been at least one case in which a ‘large language model’ was implicated in a suicide (Xiang, 2023), as well as reports of chatbot involvement in an assassination attempt on the Queen.

There are also considerations inherent to pretty much any technology. Accessing AI interventions relies on access to electronic devices. The inaccessibility of automated therapy interventions, for some patients, is already an issue. Patients who lack IT or mobile app access or expertise, who have a relevant disability, or who have limited financial means, have always been at risk of being left behind by services of various kinds. Would that accessibility problem persist, or worsen, as the use of AI expands?

As for IT security, consider the possibility of hackers infiltrating AI software to witness and/or manipulate therapeutic treatments. Could we see ransom demands, or the selling of patient disclosures and data – for money, revenge, control? This topic appears to have received little discussion or consideration yet, but may be of critical importance.

Then there are the human relations of the therapists themselves. Workers in some fields have already been replaced by AI automation (Hunt et al., 2022). For some, that alone may already have come at too high a cost to their well-being (Field, 2023). Will they end up being offered therapy or other intervention by the very AI that has replaced them?
Consult early – prevention could be better than cure

To my knowledge, NHS Trusts and Health Boards have not yet launched mass public consultations around many of these issues. I would be interested to hear from readers whether I am wide of the mark there, or whether you have other concerns over the expansion of AI in psychology that I have not covered here.

To be absolutely clear, I am no Luddite. I am not averse to some AI assistance in human therapies; it has already been introduced in some areas, as a therapist adjunct, within non-acute care services. Perhaps, if there is unequivocal health service and public support for AI, voices like mine will inevitably end up minimised or disregarded. It would be natural to perceive self-preservation in what I am saying; I accept that. But again, I encourage you to look at what is happening in other fields. AI implementation is facing legal challenges, for example from authors claiming plagiarism of their works by AI developers (e.g. Vynck, 2023). If such challenges succeed, AI may change, or ultimately not be implemented at all, regardless of the interests of that particular field.

Any linked industrial action would hopefully be a last resort. But if we don’t heed the warnings of Dr Ian Malcolm and myself, might we find ourselves at a ‘point of no return’? Can we ‘roll back’ to ‘effective enough’ human therapists, potentially augmented by careful and ethical use of AI as a tool, if many of us have been replaced entirely?

Dr Barry Orr is a clinical psychologist, formerly of the NHS, and currently joining the Te Whatu Ora health board (Health New Zealand).
Key sources

Abrams, Z. (2023). AI is changing every aspect of psychology. Here’s what to watch for.

Aktan, M. E., Turhan, Z., & Dolu, İ. (2022). Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Computers in Human Behavior, 133, 107273.

Carey, T. A., Griffiths, R., Dixon, J. E., & Hines, S. (2020). Identifying Functional Mechanisms in Psychotherapy: A Scoping Systematic Review. Frontiers in Psychiatry, 11, 291.

Field, M. (2023, September 18). Home workers will be first to lose jobs to AI, Oxford study warns. The Telegraph.

Fitzgerald, T. (2023). SAG 2023 Strike May Hinge On Same Issue As The Writers’ Strike: AI. Forbes.

Horn, R. L., & Weisz, J. R. (2020). Can Artificial Intelligence Improve Psychotherapy Research and Practice? Administration and Policy in Mental Health and Mental Health Services Research, 47(5), 852–855.

Hunt, W., Sarkar, S., & Warhurst, C. (2022). Measuring the impact of AI on jobs at the organization level: Lessons from a survey of UK business leaders. Research Policy, 51(2), 104425.

Jackson, C. (2023). The big issue: The brave new world of AI therapy.

Knox, B., Christoffersen, P., Leggitt, K., Woodruff, Z., & Haber, M. H. (2023). Justice, Vulnerable Populations, and the Use of Conversational AI in Psychotherapy. The American Journal of Bioethics, 23(5), 48–50.

Leichsenring, F., Abbass, A., Hilsenroth, M. J., Leweke, F., Luyten, P., Keefe, J. R., Midgley, N., Rabung, S., Salzer, S., & Steinert, C. (2017). Biases in research: Risk factors for non-replicability in psychotherapy and pharmacotherapy research. Psychological Medicine, 47(6), 1000–1011.

Musiat, P., Johnson, C., Atkinson, M., Wilksch, S., & Wade, T. (2022). Impact of guidance on intervention adherence in computerised interventions for mental health problems: A meta-analysis. Psychological Medicine, 52(2), 229–240.

Vynck, G. D. (2023, September 20). ‘Game of Thrones’ author and others accuse ChatGPT maker of ‘theft’ in lawsuit. Washington Post.

Xiang, C. (2023). ‘He Would Still Be Here’: Man Dies by Suicide After Talking with AI Chatbot, Widow Says. Vice.

Yalom, I. D. (2002). The gift of therapy: Reflections on being a therapist. Piatkus.

