
AI for mental health now includes not just dispensing advice but also assessing advice given by human therapists.
In today’s column, I examine a somewhat startling reversal, namely that AI mental health apps are beginning to be used to assess human therapists on their prowess in dispensing psychological guidance. This alternative usage contrasts with AI’s more familiar role of providing mental health advice directly to end-users who are patients or clients.
The idea is that if someone is using a human therapist for psychological analysis and guidance, perhaps leveraging AI to ensure that the human therapist is satisfactorily doing their job would be a prudent course of action. AI can readily double-check any mental health professional and indicate if the sage advice being proffered is offbeat or awry. AI serves as a guardian angel for people seeking human-to-human therapy.
Well, not everyone is happy with this emerging approach, especially some psychiatrists, psychologists, and therapists. Why are they displeased? The answer is straightforward. Nobody necessarily wants to be scrutinized by a lifeless AI that watches and evaluates their every utterance and action. To them, the practice seems unfair, impractical, and wholly out of place.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas also arise in these endeavors. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.
When AI Is On The Other Foot
The mainstay of using AI for mental health guidance consists of everyday people asking generative AI or large language models (LLMs) questions pertaining to their mental health acuity. It is abundantly easy to do. There are reportedly 400 million weekly active users of ChatGPT by OpenAI, of which a notable proportion is likely at times seeking mental health guidance from the popular generative AI app. See my analysis concerning the population-level aspects at the link here.
People still do make use of human therapists. We haven’t yet crossed over into relying solely on AI for mental health advice. There is a rise in people seeking human-based mental health therapy, and the marketplace for human therapists is strong and growing. Demand for such services outstrips the supply of available human therapists.
A surprising twist entails putting the shoe on the other foot.
Here’s what I mean. Those who are engaged in human-to-human therapy are at times turning to AI to gauge the effectiveness and sensibility of what their human therapist is telling them. If their human therapist is saying that this or that is their likely issue or that they ought to do one thing or another, people are conferring with AI as a kind of second consultation.
This makes seemingly good sense. Why should you depend totally on whatever your psychologist or therapist tells you? Indeed, you might normally ask a friend or family member whether the therapeutic advice being offered is reasonably sound. But doing so reveals your intimate discussions, plus a fellow human might blast you or make you feel embarrassed about seeking a therapist.
Voila, bring the matter to AI and use the AI to be your helpful sounding board.
Boom, drop the mic.
Aboveboard Or Under The Table
There are two major ways that this added scrutiny occurs:
- (1) Aboveboard approach. An explicitly aboveboard use of AI as a mental health advisement double-checker.
- (2) Secreted approach. A hidden or under-the-table use of AI as a mental health advisement evaluator.
In the first instance, a client or patient confers with their human therapist and indicates that the use of AI is a desired element in the traditional therapist-client relationship. The conventional dyad or duo of a therapist-client relationship is gradually changing to be a triad, consisting of a therapist-AI-client relationship. See my detailed depiction of what this new triad involves at the link here.
One aspect of the triad is that the client or patient will likely use the AI as a second opinion.
Whatever advice the human therapist gives will be a topic of discussion when the person independently confers with AI. This might involve getting clarification about the human-devised advice. As an example, perhaps the human therapist was somewhat rushed due to time constraints during a therapy session. The client can readily devote gobs of time to the AI and ask for clarification and detailed explanations.
That’s the aboveboard method.
The under-the-table or hidden approach consists of a client or patient who opts to secretly interact with AI and tries to gauge what their human therapist is advising. The person doesn’t reveal to their therapist that this is taking place. The belief is that if the therapist knew about it, the therapist would potentially get upset or insist that the person stop using AI.
Meanwhile, the client or patient feels comforted that they have AI in their back pocket, presumably serving as a sensibility monitor for the musings of their human therapist.
Too Many Cooks In The Kitchen
An immediate outcry by therapists is that this leaning into AI is a blatant usurping of the valued therapist-patient relationship.
First, people tend not to realize that there is a likely lack of privacy associated with their use of generative AI. As I’ve covered extensively, the AI makers usually stipulate in their online licensing agreement that anything you enter into the AI is available for them to inspect. They can have their AI team look at your seemingly private thoughts. They can even reuse those missives as part of their ongoing data training of the AI. Your privacy is loosey-goosey; see my in-depth explanation at the link here.
Second, AI could easily mess up whatever therapy plan the human therapist is undertaking with their client. It goes like this. The therapist is trying to take you in the direction of A. You consult with AI, and it asserts that B is a much better direction. So, you go back to your human therapist and engage in a dialogue about pursuing B rather than A. The next thing you know, your therapy devolves into an arcane debate over the practice and philosophical underpinnings of psychology and therapeutic methodologies.
This does little to advance your actual mental health progression.
Third, AI can prod people into believing that their therapist is doing a bad job or that the therapist is unprofessional. A person armed with such AI advice would undoubtedly confront their therapist. A kind of third party is now intruding into the therapist-patient relationship. It used to be that the third party was perhaps a friend or family member. Now it is the unfettered disruption of AI.
All in all, using AI in this manner is intrusive, disrupts the therapy that is underway, and diverts vast amounts of attention and energy away from resolving mental health concerns and otherwise improving mental health.
The Jedi Fight Back
Wait a second, comes the counterargument, you are looking only at the downsides of this weighty topic. Give the upsides a fighting chance.
First, AI can serve as a handy tool for clients. Generative AI is typically available at a low cost or possibly for free, plus it can be accessed anywhere and at any time. Human therapists cost money. Human therapists aren’t available around the clock, since the cost of that kind of access would generally be impractical. Therapists would be wise to realize that AI is here and rising in use.
Do not deny AI; instead, incorporate AI into the therapy process. See my discussion on this at the link here.
Second, it is incumbent on human therapists to proactively anticipate the use of AI by their clients or patients. Waiting until a client fesses up about AI usage is a signal that the therapist has already lost the initiative. Letting a client go behind your back will inevitably undercut the therapist-patient relationship. Be open. Explain what AI can and cannot do. Put the matter squarely in the visible sphere.
Third, human therapists need to be prepared to respond to the AI indications in a measured and thoughtful fashion. If all you do is raise your eyebrows and bellow that the AI is a wasteland and should be ignored, the odds are that the client’s use of AI will simply go underground. Dismissing AI out of hand is not prudent. In fact, human therapists ought to have used AI themselves so that they can explain firsthand to their clients why the AI might be off-base. They know this because they’ve seen it with their own eyes.
AI Misused And Misapplied
Without sliding precariously into a doom and gloom perspective, there are serious and sobering worries about how far this use of AI might be extended.
One possibility is that healthcare providers opt to implement AI as a means of monitoring their human therapists. The temptation is mighty. Just let AI work automatically and behind the scenes to ensure that the therapists are supposedly doing the right thing. The AI can assess recorded transcripts of therapist-patient sessions (and can even do so live by streaming the audio or video of a session). The AI can also evaluate text messages between the therapists and their respective clients. And so on.
This can be done readily and without much hassle. Merely feed the data into the AI and ask it to assess the therapists. Voila, you’ve got a highly manageable way of keeping your therapists in line.
Whereas delving into the advice-giving of therapists used to be logistically complex and arduous, AI makes this easy-peasy.
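To illustrate just how little effort such monitoring would take, here is a minimal sketch of the workflow described above. It is purely my own illustration, not any provider’s actual system: it assumes the OpenAI Python SDK, an assumed model name, and a hypothetical transcript file named session_transcript.txt, and it deliberately ignores the consent, privacy, and accuracy concerns raised throughout this column.

```python
# Illustrative sketch only: feed a therapy session transcript to a general-purpose
# LLM and ask it to assess the therapist. Assumes the OpenAI Python SDK is installed,
# an API key is configured, and "session_transcript.txt" (hypothetical) exists.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

with open("session_transcript.txt", "r", encoding="utf-8") as f:
    transcript = f.read()

instructions = (
    "You are reviewing a therapy session transcript. "
    "Summarize the guidance the therapist gave, flag anything that seems "
    "questionable or outside accepted practice, and rate the overall quality "
    "of the therapist's advice on a scale of 1 to 5, with a brief rationale."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any capable chat model would do
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": transcript},
    ],
)

# The model's assessment of the therapist, as free-form text.
print(response.choices[0].message.content)
```

The particulars matter less than the takeaway: a couple of dozen lines of glue code is all it takes, which is precisely why the questions listed next deserve attention.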
Lots of potential problems are bound to surface with AI serving as an automated assessor of therapy:
- Does the AI appreciate the big picture of what a therapist is seeking to do?
- Will the AI apply selective biases against the therapist?
- Can the AI misjudge the therapist and reach unfair and possibly outright incorrect conclusions?
- In what ways will the therapist be able to refute the AI assessments?
- How much time will be diverted to battles with AI assessments versus actively performing therapy with clients?
- Etc.
AI For Mental Health Is Here To Stay
Pandora’s box has already been opened, in the sense that AI is here and will continue to be here. I mention this since therapists who adopt the classic head-in-the-sand posture on AI in mental health therapy are facing a losing proposition. Period, end of story.
Not only is AI going to be avidly used by people to seek out mental health advice, but you can bet your bottom dollar that the use of AI for gauging therapists is equally going to rise. It is as obvious as the air we breathe.
The gist is that therapists and mental health professionals must take the bull by the horns. Do not just let AI usage in the mental health realm be a happenstance affair. Be ready for clients who opt to use AI for their mental health advice.
Similarly, and perhaps unexpectedly, also be prepared for clients who clandestinely use AI to double-check the advice they are getting from human therapists. Do not get caught off guard. Have your eyes and ears open. Prepare for the one-two punch of a client touting what AI told them, and especially what the AI told them about you and your sage advice on mental health.
Remember the famous line by Carl Jung: “Everything that irritates us about others can lead us to an understanding of ourselves.” There is a chance that the AI feedback actually has some merit, so don’t just tilt at windmills.
Whether you welcome it or not, you are moving into an era of the therapist-AI-patient relationship. Best to get your act together accordingly. Good luck and good speed.