According to a recent Forbes piece, a New York Times article titled “The Ghost In The Therapy Room” (July 24, 2025) highlighted a sobering reality in mental health care: therapists are human and can die, often leaving clients unprepared. Ethics guidelines require psychologists to plan for client transition upon their death or incapacitation, but the major professional associations don’t monitor compliance, and many therapists push this “chore” aside. That creates a potential crisis for clients, who may be left without care or access to their records. In stark contrast, generative AI and LLMs used for mental health advising present an illusion of permanence, theoretically available to users “forever.” ChatGPT alone claims an estimated 400 million weekly active users, and a significant share of them are believed to use it for mental health guidance, drawn by its 24/7 availability and persistent memory of conversations.
The messy, human problem
Here’s the thing the article gets right: the human side of this is genuinely messy. A therapist’s sudden absence isn’t just a scheduling problem. It’s an emotional landmine. Clients can find out through the grapevine, feel abandoned, and have no roadmap for what’s next. And even the best-laid plans—like a “professional will” or PRID document—can fall apart. The designated replacement therapist might not be a good fit, or they might ignore the previous notes entirely. The promise of continuity in human therapy is fragile. It relies on professionalism, planning, and a bit of luck. So yeah, from a pure logistics standpoint, it’s a weak spot.
The seductive, dangerous AI illusion
Now, enter the AI pitch. It’s always there. It never sleeps, never retires, and certainly never dies. You can drop a conversation and pick it up years later, and it’ll remember. That’s powerful. On that basic level, it offers a peace of mind no human ever could. But this is where we need to be massively skeptical. The “forever” in “AI therapist lasts forever” is a corporate promise, not a guarantee. What happens when the startup shuts down? When the model is deprecated? When the subscription service changes its privacy policy? Your “forever” therapist could vanish with a press release, not a funeral. And then what? You’re back to square one, but with all your most vulnerable data in the hands of a defunct entity.
Continuity isn’t just memory
This is the core flaw in the pro-AI argument: it confuses data persistence with therapeutic continuity. Sure, an LLM can recall the text of your last chat. But does it understand the context the way a human would? Therapy isn’t just a data log; it’s a lived, evolving relationship with nuance, body language, and shared history. A human therapist’s notes, even if scattered, are interpreted by another trained human who understands the craft. An AI’s “memory” is just pattern recognition over old tokens. It might seamlessly continue a conversation, but is it progressing the therapy, or just autocompleting a tragic dialogue? The article mentions AI soon having “near-infinite memory.” I’m not sure that’s a feature we should celebrate in this context. It’s just a bigger trap.
So what’s the real choice?
Basically, we’re comparing two different kinds of risk. The human risk is abrupt, emotional, and logistical; it is a rupture in care. The AI risk is subtler: false permanence, data exploitation, and a therapeutic relationship that’s a mile wide and an inch deep. The Forbes piece gingerly points out that AI “seemingly lasts forever,” and that’s the keyword: seemingly. It’s a facade. Choosing an AI therapist for its immortality is like choosing a car because it’ll never need an oil change: you’re ignoring the fact that the whole engine might be proprietary and non-serviceable. The human model, for all its mortal fragility, at least operates within a framework of professional ethics and human connection. Sometimes, dealing with an ending is better than trusting a promise that was too good to be true from the start.
