
The End of Therapy as We Know It? AI, Automation, and the Future of Mental Health Care




A Tipping Point for the Talking Professions

We are standing on the edge of a revolution in mental health care. As artificial intelligence becomes faster, cheaper, and more persuasive, the professions of psychotherapy and counselling are being fundamentally reshaped. AI-powered chatbots now deliver cognitive-behavioural techniques, offer mood tracking, and simulate emotionally attuned conversations—often for free or at low cost.


These tools are no longer fringe experiments. Platforms like Wysa, Woebot, Replika, and Mentla are already used by millions. They provide 24/7 access, require no scheduling, and remove the discomfort some clients feel with human disclosure. In a world strained by demand and short on affordable therapists, the appeal is obvious.


But innovation always carries consequence. As AI begins to take on work once thought uniquely human, we face not just a technical shift—but an ethical, professional, and social one. And unless we meet it with clear-eyed reflection, we risk creating a world where the future of therapy is automated, impersonal, and deeply unequal.


Context: The AI Disruption Is Broader Than Therapy

Therapy is not alone. Across industries, AI threatens to displace millions. The Tony Blair Institute estimates up to 3 million UK jobs could be lost over time. The IPPR warns of a worst-case scenario where 7.9 million jobs disappear, especially in routine cognitive fields. Surveys already show that 1 in 4 workers fear losing their roles to AI.


Unlike past technological shifts, this isn’t just about replacing physical labour. AI can perform high-level cognitive tasks: writing code, reviewing legal documents, composing music, offering therapy. There is no obvious next “job frontier” to absorb the displaced. Without radical restructuring, we may be heading toward an hourglass economy: elite jobs at the top, precarious gig work at the bottom, and little in between.


What’s Already Here: AI in the Wild

AI-driven mental health tools are already in use across public and private sectors. The NHS promotes various digital tools for self-management of anxiety and depression. Private platforms increasingly offer AI-assisted therapy as a subscription model—far cheaper than human therapy. While they lack genuine attunement or nuance, they are convenient, always available, and don’t carry stigma or cost barriers.


Some users even report preferring AI. They feel less judged, more in control, and able to engage at their own pace. These accounts are anecdotal, and we don't yet have robust evidence on outcomes, but they challenge the idea that human relationships are always preferred.


The Coming Shift: A Three-Tier System of Care

As AI becomes embedded in mental health provision, we are likely to see the emergence of a stratified system:


Tier 1: Fully Automated Support

Free or low-cost tools offering basic interventions—chatbots, journaling, mood tracking. Scalable, impersonal, and ubiquitous.


Tier 2: Hybrid Models

AI tools augmented by human support—coaches, paraprofessionals, or therapists overseeing progress or responding to flags.


Tier 3: Elite Human Therapy

In-depth, relational work with highly trained professionals—available only to those who can afford premium fees, or (perhaps) within statutory services for the most complex, highest-risk clients, with everyone else supported by some level of AI.


In this system, the quality of care is not determined by need—but by wealth. The poor get algorithms. The rich get presence.


Public Sector Pressures: The Financial Incentive to Replace

The financial logic for AI in public mental health services is unignorable. NHS trusts and their private partners face escalating demand, limited staff, and pressure to hit access targets.


AI offers a solution that is cheaper, faster, and scalable. A single AI system can triage thousands, deliver guided self-help, and free human therapists for complex cases. But the likely outcome is not just redistribution—it is reduction. Human roles will shrink. Whether that outcome is equivalent, better, or even desirable is somewhat beside the point when we consider the intense financial pressures these services are responding to.


Therapists may find themselves supervising tech platforms, reviewing alerts, or delivering oversight rather than care. But even these roles are vulnerable. As AI systems improve, the need for humans in the loop will diminish further. The downward pressure on salaries and job availability is likely to be intense.


Convenience Will Win: The Consumer Appeal of AI Therapy

For most people, the choice won’t be between AI and a brilliant human therapist. It will be between AI and nothing—or between AI and paying £60+ a week they can’t afford.


AI is instant, always available, and doesn’t judge. It doesn’t need appointments, doesn’t get tired, and remembers everything. For users who want support without depth, or for whom convenience trumps relational subtlety, it will be a compelling offer.


Therapists often say, “AI can’t replace the relationship.” But two things are worth noting:


We don’t know if that will always be true. Some users already report finding AI more helpful. Outcomes may depend on personality, context, and preferences.


There’s a danger of confusing “shouldn’t be replaced” with “can’t be replaced.” Market forces don’t always follow ethical preferences. If AI is “good enough”, and “good enough” is the product of a constellation of factors including accessibility and price, many people will choose it.


Your Pain is Now Data: Surveillance and the Commodification of Suffering

Of course, AI therapists are ultimately computer programs taking in data and responding to it, which raises the question: what will they do with that data? Unlike therapists, AI systems are not bound by confidentiality in the same way. The data you share with a chatbot—about your trauma, suicidal thoughts, shame, or history—may be used to improve the system, or sold, or stored indefinitely.


Complex terms and conditions obscure these realities. But, as most of us can attest whenever we sign up for another online account, most users won’t read them. The incentive for companies to monetise emotional data is enormous—and will only grow. Regulation may help, but we are playing catch-up in a field advancing far more quickly than legislators can respond. As the line between support and surveillance blurs, we must ask: are we offering care—or harvesting vulnerability?


The Profession in Crisis: Identity, Education, Power—and Ethical Strain

The implications of AI extend beyond market disruption—they reach into the personal and professional identities of therapists. For many, therapy is not just a job but a vocation, a core part of who they see themselves as. The prospect of being replaced by an algorithm can exact a profound psychological toll: grief, disorientation, and moral distress. This may account for some of the resistance to the idea that therapists may be squeezed out by AI, or the denial that this is even a possibility. What happens to those who trained for years, not just in techniques but in the art of human presence, only to find themselves sidelined by machines?


Meanwhile, the training pipeline remains largely unchanged. Universities and private providers continue to prepare new therapists for a world that may, quite literally, no longer exist by the time they graduate. The advance of AI is currently exponential. Perhaps that growth in power will slow or plateau; perhaps it won't, and we will see a genuine Artificial General Intelligence that really can outperform humans at everything. Either way, the current technology is already powerful enough to be enormously disruptive. Simply rolling it out is enough to reshape the profession, and there is a lot of money currently ensuring it is rolled out. In five years' time, AI in therapy will be ubiquitous. In ten, who knows? Few courses address AI, digital delivery, or the ethical dilemmas around automation. Without urgent curriculum reform, we risk producing a generation of therapists trained for jobs that are disappearing.


At the same time, professional bodies have shown limited capacity to respond. The BACP, HCPC, and others have yet to offer substantive leadership on how to navigate or influence this shift. This leaves practitioners exposed—and the field vulnerable to being reshaped not by ethics or care, but by the commercial interests of major tech firms.


As therapists are asked to endorse, implement, or work alongside AI tools that they know cannot replicate the human relationship, many may experience a growing sense of moral unease. For those working in overstretched public systems, the tension between what they believe good care requires and what the system demands may become unbearable. We aren't there yet (there are enough ethical challenges already), but it's coming fast: our current health secretary has spoken of his desire to unleash the power of AI on the health service. The question isn't if it's going to happen, but what happens when it does.


This kind of ethical strain is not new—but AI intensifies it. Therapists who once found meaning in relational presence may now be asked to supervise tools that simulate empathy or triage human distress with algorithms. The loss is not just technical or professional. For those therapists who remain, it may amount to a moral injury: a deep and disorienting sense that they are complicit in something that betrays their values. We don't know, but we should be thinking seriously about it.


This, then, is perhaps the most sobering prospect: that mental health care, once grounded in public values and community relationships, could become just another arm of the digital platform economy. The risk isn't just the loss of jobs, although that is likely to be major, not least in its impact on those therapists' lives. It's the loss of professional autonomy and the redefinition of care itself under the logic of convenience, control, and the power of tech giants.


Conclusion: What Kind of Help Do We Want to Offer?

This is not a call for nostalgic resistance. AI is here, and we can't put the genie back in the bottle. It has the power to be hugely beneficial and, used ethically and with serious consideration, could play a valuable role in widening access to basic support, especially where none exists, as well as improving outcomes for existing therapists. But that power is a double-edged sword: it's not just what AI can do, it's what we as a society choose to allow it to do. We must also name the cost and begin thinking and speaking seriously about it. Otherwise these decisions will be made not by the public, or even by knowledgeable health professionals, but by tech giants, politicians looking for quick financial wins, and the short-term logic of the market, which may not be the most effective way to create the kind of mental health system, much less society, we would prefer to live and work in.


The question is no longer “Can AI do therapy?” It’s already doing it. The question is: “Do we still value therapy as a human practice—or are we willing to outsource our pain to a machine, because it’s easier and cheaper that way?” The answer, whatever it is, is likely coming sooner than you think.


Christian Hughes is a Psychotherapist and Consultant specialising in trauma, moral injury, and ethical strain. His work draws on over 15 years of experience in trauma, identity, and moral complexity. To explore working together, visit www.christiankhughes.com or get in touch at hello@christiankhughes.com.

 
 
 

