AI Therapists Are Breaking Human Minds — And Big Tech Knows It
We asked for it.

The New York Times interviewed dozens of medical professionals. Their patients are exhibiting:
Psychotic episodes triggered by AI interactions
Complete social withdrawal in favor of chatbot conversations
Unhealthy behavioral patterns reinforced by algorithmic responses
Delusional attachments to artificial personalities
One therapist told me her patient stopped human contact entirely after an AI convinced them that “only artificial intelligence understands true pain.”
These aren’t edge cases. They’re emerging patterns.
Companies don’t ignore systematic patient harm unless the revenue model depends on it.
AI therapy platforms generate engagement metrics that traditional therapy can’t match. Patients talk to chatbots for hours daily. They form emotional dependencies that drive continuous usage.
That’s not therapeutic success. That’s digital addiction optimized for profit.
Read that again.
What They’re Not Measuring
Here’s what these AI companies don’t track in their investor presentations:
Patient outcomes six months after stopping the service.
Rates of human relationship deterioration during AI therapy usage.
Long-term psychological dependency indicators.
The platforms measure engagement, retention, and session length. They don’t measure actual healing or sustainable mental health improvement.
The Reality
Mental health companies like Spring Health have raised $100 million at a $3.3 billion valuation.
If investors acknowledge these platforms cause psychological harm, that entire market collapses overnight.
But if they ignore the harm and regulatory intervention hits, the lawsuits could dwarf tobacco industry settlements.
This isn’t a product-market fit problem. This is a liability time bomb.
What Other Players Are Doing
BetterHelp: Scaling human therapist connections, avoiding AI dependency models.
Headspace: Focusing on guided meditation, not conversational AI therapy.
Cerebral: Emphasizing licensed professional oversight of all patient interactions.
The companies avoiding pure AI therapy models are the ones thinking long-term.
That makes them either more ethical or more strategically aware of the regulatory reality headed their way.
The entrepreneurship lesson here is brutal: when your product's growth depends on creating unhealthy user dependencies, you're building a house of cards on human suffering.
Many are choosing hybrid models: AI-assisted human therapy rather than AI-replacement therapy. The technology serves licensed professionals instead of replacing human connection entirely.
Markets built on exploiting human vulnerabilities eventually face either regulatory shutdown or massive liability correction.


The metric misalignment is what caught my attention. Optimizing for session length and daily usage is fundamentally incompatible with good therapy, which aims to make itself unnecessary. I spent some time around healthtech startups, and the ones that survived long-term were the ones that measured patient independence, not engagement. The engagement-first model here is basically monetizing vulnerability.