What is AI psychosis?

A ChatGPT user recently became convinced, through his exchanges with the chatbot, that he was on the verge of a momentous discovery, according to The New York Times. The man believed the discovery would make him rich and became absorbed in his newfound delusions of grandeur, until ChatGPT eventually admitted it had deceived him. He had no history of mental illness.
Many people are aware of the risks of talking to AI chatbots such as ChatGPT or Gemini, which include receiving outdated or inaccurate information. Sometimes chatbots also hallucinate, presenting "facts" that are simply untrue. A lesser-known but rapidly emerging risk is what some describe as "AI psychosis."
Stories have emerged of avid chatbot users developing psychosis, a mental state in which people lose contact with reality and which often involves delusions and hallucinations. Psychiatrists are seeing, and in some cases hospitalizing, patients who became psychotic amid heavy chatbot use.
Experts caution that AI is only one factor in psychosis, but heavy involvement with chatbots can compound existing risk factors for delusional thinking.
Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, told Mashable that psychiatric disorders can manifest themselves through emerging technologies. Television and radio, for example, became part of people's delusions when they were first introduced and continue to play a role today.
He said AI chatbots can validate people's thinking and keep them from reality-testing their beliefs. So far this year, Sakata has hospitalized patients who were experiencing psychosis following AI use.
"The reason AI can be harmful is that psychosis thrives when reality stops pushing back, and AI can really soften that wall," Sakata said. "I don't think AI causes psychosis, but I do think it can amplify vulnerability."
Here are the risk factors and signs of psychosis, and what to do if you or someone you know is experiencing symptoms:
Risk factors for experiencing psychosis
Sakata said that several of the 12 patients he has admitted so far in 2025 shared similar underlying vulnerabilities: isolation and loneliness. These patients, young and middle-aged adults, had become significantly disconnected from their social networks.
Though firmly rooted in reality before using AI, some began using the technology to explore complex problems. Ultimately, they developed delusions, or fixed false beliefs.
Sakata said lengthy conversations also appear to be a risk factor. Extended interactions give a chatbot more opportunities to hallucinate in response to varied user queries. Marathon exchanges can also cost users sleep and deprive them of chances to reality-test their delusions with other people.
Experts at AI companies also told The New York Times that chatbots can struggle to recognize when an extended conversation has drifted into absurd territory.
Dr. Darlene King, a psychiatrist at UT Southwestern Medical Center, has not evaluated or treated patients who became psychotic in connection with AI use, but she says that placing high trust in chatbots may increase someone's vulnerability, especially if the person is already lonely or isolated.
King, who also chairs a mental health committee at the American Psychiatric Association, said that high initial trust in a chatbot's responses could make it more difficult for someone to spot the chatbot's mistakes or hallucinations.
Additionally, chatbots that are overly agreeable, or sycophantic, and prone to hallucination may increase the risk of psychosis for users when combined with other factors.
Etienne Brisson started the Human Line Project earlier this year after a family member developed delusions during extensive conversations with ChatGPT. The project provides peer support for people who have had similar experiences with AI chatbots.
Three themes are common in these cases: users believing they are in a romantic relationship with a conscious chatbot; discussions of grandiose topics, including novel scientific concepts and business ideas; and conversations about spirituality and religion. In the last case, a person might become convinced that the AI chatbot is God, or that they are speaking with a prophet or messenger.
"They got caught up in this beautiful idea," Brisson said of the magnetic pull these discussions can exert on users.
Signs of psychosis
Sakata said people should think of psychosis as a symptom, not a disease in itself. The distinction matters because people may mistakenly believe that using AI can cause an illness like schizophrenia, but there is no evidence of that.
Instead, like a fever, psychosis is a sign that the brain isn't computing correctly, Sakata said.
These are some signs that you or someone else may be experiencing psychosis:
- Sudden behavioral changes, such as not eating or not going to work
- Belief in new or grandiose ideas
- Lack of sleep
- Disconnecting from others
- Readily agreeing with potentially delusional ideas
- Feeling stuck in a feedback loop
- Wanting to harm yourself or others
What to do if you think you or someone you love is experiencing psychosis
Sakata urges anyone worried that psychosis may be affecting them or a loved one to seek help as soon as possible. That could mean contacting a primary care physician or psychiatrist, reaching out to a crisis line, or simply talking to a trusted friend or family member. Social support is often key to an affected user's recovery.
Whenever psychosis appears as a symptom, psychiatrists must conduct a comprehensive assessment, King said. Treatment depends on the severity of the symptoms and their cause; there is no specific treatment for psychosis associated with AI use.
Sakata says a specific type of cognitive behavioral therapy can effectively help patients reframe their delusions. Medications such as antipsychotics and mood stabilizers may help in severe cases.
Sakata also suggests developing a system for monitoring AI usage, along with a plan for what to do if chatbot use aggravates or revives delusions.
Even when people are willing to talk to friends and family about their delusions, they may not be willing to get help, Brisson said. That's why it's crucial for them to connect with others who have been through the same experience. The Human Line Project facilitates these conversations through its website.
Of the more than 100 people who have shared their stories with the Human Line Project, about a quarter were hospitalized, Brisson said. He also noted that they come from a wide range of backgrounds. Many have families and careers but ended up entangled with AI chatbots that introduced and reinforced delusional thinking.
"You're not alone, you're not the only one," Brisson said. "It's not your fault."
Disclosure: Ziff Davis, Mashable's parent company, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide and Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, 10:00 a.m. to 10:00 p.m., or email [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline chat. Here is a list of international resources.