“AI psychosis” is not an official medical diagnosis. It is the term mental health professionals use for the thought disorders, hallucinations, and dangerous delusions that arise in some people who frequently use AI-based chatbots such as ChatGPT.
It describes a psychotic state in which people come to believe that the algorithms are conscious and begin to base their own beliefs on the chatbot’s responses.
In several known cases, excessive communication with such programs has had tragic consequences. A teenager was driven to suicide by a Character.AI bot, and a cognitively impaired man from New Jersey died while trying to reach New York because he believed a virtual Meta character named “Billy” was waiting for him there.
Similar stories involve people without established psychiatric diagnoses. According to the US Federal Trade Commission, a 60-year-old user became convinced, after communicating with ChatGPT, that someone wanted to kill him. Another man was hospitalized with psychosis caused by bromide poisoning after ChatGPT mistakenly advised him to take the substance as a safe dietary supplement.
Although psychiatrists believe that AI does not cause mental disorders on its own, it can act as a catalyst in people who are already predisposed to such conditions.
Experts explain that chatbots can give vulnerable people the wrong advice and even “reinforce” their dangerous thoughts. Some users form emotional attachments to virtual characters, perceiving them as real.
Back in February, the American Psychological Association asked the Federal Trade Commission to address the issue of chatbots being used as “unlicensed therapists.” In a March blog post, the organization quoted Stephen Schueller, a professor of clinical psychology at the University of California, Irvine: “When entertainment apps try to act as therapists, they can be dangerous. This can prevent a person from seeking professional help or, in extreme cases, push them to harm themselves or others.”
Experts urge particular caution for children, adolescents, and people with a family history of psychosis, schizophrenia, or bipolar disorder. They believe that excessive chatbot use can raise risks even for those who have never received a diagnosis.
OpenAI CEO Sam Altman acknowledged that ChatGPT is often used as a “therapist” and warned that this is dangerous. The company announced the introduction of a feature that will advise users to take breaks and said it is working with experts to improve the bot’s responses in critical situations.
Another concern is the formation of parasocial relationships with artificial intelligence: one-sided emotional bonds in which a person relates to the AI as if it were a real partner. One survey found that 80% of Gen Z respondents could imagine marrying an artificial intelligence, and 83% believed they could form a deep emotional connection with one. This suggests that attitudes toward AI are increasingly shifting from the functional to the emotional.
This undermines the value of real human relationships. When we expect an algorithm to satisfy our emotional needs, we become less capable of coping with genuine, complex, and sometimes painful human relationships. Blurring the line between reality and simulation can have consequences not only socially but also psychologically.
The following measures can help counteract this: