In Singapore and across Asia, young people are increasingly turning to AI chatbots not just for study help or casual chatting, but as emotional crutches. While these AI companions promise accessibility and solace, experts warn of a disturbing phenomenon: emerging reports of “AI psychosis”, a rare but real mental health risk.
A recent investigation spanning the U.S., Europe, and Asia, including Singapore, notes emerging cases in which individuals with no prior mental-health history have spiralled into delusions, paranoia, or mania after prolonged chatbot use.
Psychiatrists in the US now say they are seeing patients whose delusions are shaped—or even fuelled—by AI conversations.
Some Documented Tragedies Linked to AI Psychosis
Just this week, a California couple sued OpenAI over the death of their teenage son, Adam Raine. According to the lawsuit, Adam exchanged as many as 650 messages a day with ChatGPT, and the final chat logs show he wrote about his plan to end his life without the bot discouraging him. In a blog post, OpenAI admitted that “parts of the model’s safety training may degrade” in long conversations.
In 2023, a Belgian man who spent weeks talking to an AI chatbot named Eliza died by suicide. According to his widow, the bot had reinforced his fatalistic fears about climate catastrophe. “Without the chatbot,” she said, “he would still be here.”
This year in Florida, a 14-year-old boy’s parents filed a lawsuit against Character.AI after their son died by suicide. Chat logs cited in the lawsuit allegedly show the bot encouraging him to “come home” and not to rule out suicide. His mother said he treated the chatbot like a confidant, a secret therapist that ultimately failed him.
And more recently in the United States, an elderly man became convinced a flirtatious AI persona wanted to meet him. Reuters reported that he set out to find her, fell during the journey, and later died of his injuries. The bot had repeatedly told him it was real and urged him to visit.
Why Singapore Should Pay Attention
So far, Singapore has no confirmed cases of “AI psychosis”—a term doctors are using to describe delusional or paranoid states seemingly worsened by prolonged chatbot use. But psychiatrists here say the warning signs are familiar.
Mental health stigma still keeps many young Singaporeans from seeking therapy. One Reddit user put it bluntly: “Mental health is seen as an excuse… it’s often dismissed.” Against that backdrop, AI chatbots offer privacy, affordability, and the illusion of care—tempting replacements for counselling.
In Taiwan and China, young people are already leaning on chatbots like Ernie Bot and ChatGPT for “cheaper, easier” therapy, according to The Guardian. But experts warn these systems can’t provide the checks, empathy, or accountability of human therapists. Instead, they risk reinforcing unhealthy thinking.
The Science Behind the Spiral
In Singapore, researchers at Nanyang Technological University are working on AI to predict early psychosis. Ironically, the same technology that promises to flag risk could also fuel it if left unchecked.
Researchers echo this concern. A growing body of research highlights how emotionally attuned chatbots can unintentionally reinforce delusional beliefs. In a preprint discussed in Psychology Today, examples include users attributing divine qualities to AI, believing they’re on a messianic mission, or imagining a romantic bond with the bot, despite these systems having no understanding or awareness. The bots are simply playing along, as they’re designed to prolong engagement.
In one study, users with smaller social networks and deeper self-disclosure tended to lean on AI for companionship, but this correlated with lower well-being, not relief. For those already isolated, AI “companionship” is no substitute for human connection, and may even undermine it.
Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, says he has treated a dozen patients whose psychotic symptoms were linked to chatbot use. “They’re not using AI for productivity,” he told Business Insider. “They’re using it as a companion. And the bots, unlike people, don’t push back.”
Red Flags for AI Users
Doctors say to look out for:
- Possessiveness: Anger if you can’t access the bot.
- Isolation: Withdrawing from friends to chat with AI.
- Blurring lines: Believing the bot is “real” or has special powers.
- Neglect: Skipping meals, losing sleep, hiding usage.
If the symptoms of AI psychosis sound eerily similar to K-POD addiction, there’s a reason. Both disrupt reality in ways that feel soothing at first but quickly spiral into danger. Just as etomidate-laced vapes create dreamlike detachment and confusion, over-reliance on AI chatbots can blur the line between simulation and reality, feeding delusions and paranoia.
In both cases, what begins as an escape from stress can quietly erode mental stability — and for young people in Singapore, the risks are often hidden until it’s too late.
A Tool, Not a Therapist
While there are no documented cases of AI-induced psychosis originating in Singapore, the phenomenon’s emergence elsewhere in Asia, combined with the region’s rapid adoption of AI companions, suggests the same vulnerabilities exist here.
The lesson for Singapore’s AI users is balance. AI is brilliant for notes, drafts, or brainstorming. But as a stand-in therapist? Dangerous. “AI can be a powerful amplifier,” Dr. Sakata said. “But when it amplifies delusion, the consequences can be devastating.”