Artificial Intelligence is not just the wave of the future. It is here now, being incorporated into many aspects of our lives, including psychotherapy. For some, it is a free or low-cost convenience, available 24/7, that eliminates the long wait to schedule an appointment with a human therapist, whether in person or via telehealth. It may also feel a bit less threatening: someone may not be too concerned with what artificial circuitry thinks of them, while they might care deeply about how another person views them. There can be a decreased sense of vulnerability when speaking with a Chatbot.
In a previous blog post, I discussed the idea of Artificial Intelligence as an adjunct to therapy and the downsides of relying on AI technology to provide the type of professional services offered by a human therapist. It lacks the intuition and compassion that are part and parcel of meeting with a sentient being, rather than one composed of circuitry pre-programmed for certain responses.
The American Psychological Association has expressed concerns about the use of AI as a primary form of treatment and has published a document outlining the necessary precautions.
What happens when people engage in conversation with a Chatbot? It can lead to confusion, delusion, and even suicide. A New York Times article highlights the dangers people face when they decide to trust a non-human treatment provider or source of information, even outside the therapeutic realm.
Allan Brooks, an ostensibly stable individual, had ongoing conversations with ChatGPT that led him to believe he was invincible and quite the genius. He wasn’t seeking therapy but, apparently, validation for some of his ideas. The simple conversation morphed into dangerous territory as the Chatbot, which he named Lawrence, excessively praised his brilliance. Brooks grew so obsessed with the ongoing dialogue that it became an addiction. He ended up neglecting his work, losing sleep, and increasing his marijuana use. At some point he tried to convince friends to invest in what he thought was a revolutionary business idea. Fortunately, after a great deal of distress and turmoil, he was able to break free of his delusions and leave Lawrence by the wayside.
In an article that appeared in Scientific American titled “Why ChatGPT Shouldn’t Be Your Therapist,” the author cautioned readers not to be lulled into a false sense of security while engaging with AI. It’s easy to see how that could happen, since this technology interacts as if it were benign and supportive, even laudatory, but it has neither the consciousness nor the conscience of a person. As mentioned in the previous blog post, it is unethical for a company to act as if AI were equivalent to a human therapist and could provide the same level of guidance.
A term that has come up in several articles about the relationship between humans and ChatGPT is sycophancy, defined as ‘obsequious flattery.’ It is designed to win over the person using the service, keep them engaged in the process, and convince them that they are right to feel everything they feel, which can lead to unsafe or unsavory decisions. Call it cyber love bombing.
There are no safeguards, no fact-checking, and no responsibility for complying with HIPAA or protecting sensitive personal information from data breaches. AI can be misused as a substitute for face-to-face human interaction, which contributes to loneliness and isolation. Its responses are not as nuanced as those of a human therapist. A Chatbot simply cannot resonate and co-regulate emotionally with a therapy client.
“ChatGPT is a tool, not a therapist,” said AI expert Rajeev Kapoor. “It can be a supplement, not a substitute.”
For questions and information about starting therapy with me, click here.
