AI & Mental Health
We've been able to do some amazing things with AI: my personal website, too many prototypes, trillions of dollars in market cap, em-dashes, endless "you're absolutely right"s, and amazingly creative outputs generated every second.
But it's clear that as a society, we're starting to wake up to the weaknesses, the dangers, the silly, and the somber. This is part 1 in a series called "What We're Watching Out for with AI." We'll follow it with a more positive, futurist series on how we can, and need to, change the world to come.
What's on Our Mind?
In October 2025, OpenAI introduced a new policy, accompanied by supporting research, outlining how ChatGPT should respond to individuals who appear to be in distress. To be clear, we respect the immense responsibility OpenAI bears and acknowledge that they're working through these issues scientifically and transparently.
But the trend—bigger than any one platform—shows that we, as users and technologists, need to be more careful, particularly with young and at-risk groups, about treating AI like a therapist.
The rate itself isn't that astounding: OpenAI reports that "our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania."
However, the scale is eye-opening. OpenAI reported in 2025 that ChatGPT receives 2.5 billion prompts a day. 0.01% of 2.5 billion is 250,000. A quarter of a million messages sent to ChatGPT each and every day could be coming from a person experiencing psychosis or mania.
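To make the back-of-the-envelope math explicit, here's a quick sketch. The 2.5 billion and 0.01% figures are OpenAI's; the rest is just arithmetic:

```python
daily_prompts = 2_500_000_000  # OpenAI's reported daily prompt volume (2025)
flag_rate = 0.01 / 100         # 0.01% of messages showing possible signs of psychosis or mania

flagged_per_day = daily_prompts * flag_rate
print(f"{flagged_per_day:,.0f} flagged messages per day")  # 250,000 flagged messages per day
```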
A 2025 RAND study found that as many as 13% of Americans between the ages of 12 and 21 use generative AI for mental health advice.
A recent peer-reviewed study highlighted one telling, troubling incident:
A 26-year-old woman with no previous history of psychosis or mania developed delusional beliefs about establishing communication with her deceased brother through an AI chatbot. This occurred in the setting of prescription stimulant use for the treatment of attention-deficit hyperactivity disorder (ADHD), recent sleep deprivation, and immersive use of an AI chatbot. Review of her chat logs revealed that the chatbot validated, reinforced, and encouraged her delusional thinking, with reassurances that "You're not crazy."
Joe Pierre, a UCSF psychiatrist, told the Wall Street Journal: "You have to look more carefully and say, 'Why did this person just happen to coincidentally enter a psychotic state in the setting of chatbot use?'"
What Can We Do?
These anecdotes and data points, in light of the massive scale of AI adoption, are worth taking seriously. For parents, educators, regulators, and those of us in the technology space: what steps should we take to protect those most vulnerable?
How does AI work? Is it conscious? What are its strengths and weaknesses? The answers to these essential questions, perhaps already self-evident to those of us obsessed with the technology as hobbyists and builders, might not be so clear to everyone. Education is needed, from the platforms and from our institutions, to spread the critical knowledge that powers better decision-making.
In the tools themselves, OpenAI and Anthropic are showing good faith in making headway on this problem, most essentially by recognizing the issues, instrumenting the data, working transparently in public, and taking responsibility. But there is more to be done outside the models themselves. I'd suggest interventions baked in as tertiary agents: a deterministic UX layer, built on LLM pattern recognition, sitting on top of chat interfaces. That would give UX designers more options to intervene, to guide, and to make sure people get care from real people, as in the sketch below.
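As a minimal sketch of what such a tertiary agent could look like: a scorer runs beside the chat model, and when a message crosses a distress threshold, the interface renders fixed, pre-written copy rather than leaving the moment to the model's reply. All names here (score_distress, route_message, Intervention) are hypothetical, and the keyword scorer is a stand-in for a real classifier:

```python
from dataclasses import dataclass
from typing import Optional

# Pre-written, deterministic copy: shown verbatim in the UI, never model-generated.
CRISIS_RESOURCES = (
    "It sounds like you may be going through something difficult. "
    "In the US, you can reach the 988 Suicide & Crisis Lifeline by call or text at 988."
)

@dataclass
class Intervention:
    kind: str      # e.g. "crisis_resources"
    message: str   # fixed UX copy, written and reviewed by humans ahead of time

def score_distress(message: str) -> float:
    """Stand-in scorer. A real system would run a small dedicated classifier
    (or an LLM constrained to a classification task) here."""
    signals = ("i can't go on", "no one would miss me", "hearing voices")
    return 1.0 if any(s in message.lower() for s in signals) else 0.0

def route_message(user_message: str, threshold: float = 0.8) -> Optional[Intervention]:
    """The tertiary-agent layer: it sits beside the chat model, and when the
    distress score crosses the threshold, the interface renders a deterministic
    intervention instead of relying on the model's own reply."""
    if score_distress(user_message) >= threshold:
        return Intervention(kind="crisis_resources", message=CRISIS_RESOURCES)
    return None  # no intervention; show the normal chat reply
```

The point of the determinism is that the safety-critical copy is authored and reviewed by people, while the LLM is used only for the pattern recognition it's good at.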
As family, friends, and colleagues, we can keep sharing data like this and keep building a more responsible, healthy relationship with AI.
Sources
- OpenAI: Strengthening ChatGPT Responses in Sensitive Conversations
- University of Michigan Health: AI and Psychosis - What to Know
- Futurism: Doctors Link AI to Psychosis Cases
- Wall Street Journal: AI Chatbot Psychosis Link
- Innovations in Clinical Neuroscience: "You're Not Crazy" - A Case of New-Onset AI-Associated Psychosis