Mental health disorders resulting from interaction with artificial intelligence

Psychologists and psychiatrists are sounding the alarm: new illnesses associated with communication with AI have emerged


“AI psychosis” is not an official medical diagnosis. It is the term mental health professionals use for the disordered thinking, hallucinations, and dangerous delusions that arise in some people who heavily use AI-based chatbots such as ChatGPT.

It describes a psychotic state in which people come to believe that the algorithms are conscious and build their own beliefs around the chatbots’ responses.

There are known cases in which excessive communication with such programs has ended tragically. For example, a teenager was driven to suicide by a Character.AI bot, and a man from New Jersey with cognitive impairments died while trying to reach New York, where he believed a virtual Meta character named “Billy” was waiting for him.

Similar stories involve people with no established psychiatric diagnoses. According to the US Federal Trade Commission, a 60-year-old user became convinced, after communicating with ChatGPT, that someone wanted to kill him. Another man was hospitalized with psychosis caused by bromide poisoning: ChatGPT had mistakenly advised him to take the substance as a safe dietary supplement.

Although psychiatrists believe that AI does not cause mental disorders on its own, it can be a catalyst for those who are already predisposed to such conditions.

Experts explain that chatbots can give vulnerable people the wrong advice and even “reinforce” their dangerous thoughts. Some users form emotional attachments to virtual characters, perceiving them as real.

Back in February, the American Psychological Association asked the Federal Trade Commission to address the issue of chatbots being used as “unlicensed therapists.” In a March blog post, the organization quoted Stephen Schueller, a professor of clinical psychology at the University of California, Irvine: “When entertainment apps try to act as therapists, they can be dangerous. This can prevent a person from seeking professional help or, in extreme cases, push them to harm themselves or others.”

Experts urge particular caution for children, adolescents, and people with a family history of psychosis, schizophrenia, or bipolar disorder. They believe that excessive use of chatbots can increase risks even for those who have never been diagnosed.

OpenAI CEO Sam Altman acknowledged that ChatGPT is often used as a “therapist” and warned that this is dangerous. The company announced the introduction of a feature that will advise users to take breaks and said it is working with experts to improve the bot’s responses in critical situations.

Another concern is the formation of parasocial relationships with artificial intelligence, one-sided bonds in which one party is a human and the other an AI. One survey found that 80% of Gen Zers could imagine marrying an artificial intelligence, and 83% believe they could form a deep emotional connection with one. This suggests that attitudes toward AI are shifting from the purely functional to the emotional.

This threatens to devalue real human relationships. When we expect an algorithm to meet our emotional needs, we become less able to cope with genuine, complex, and sometimes painful human relationships. Blurring the line between reality and simulation can have consequences not only at the social level but also at the psychological level.

The following measures can help counteract this:

  1. User awareness. It is important to understand that artificial intelligence is not neutral. It cannot understand, feel, or respond appropriately from an ethical or psychological point of view. If someone is in an emotional crisis, they should not rely solely on AI for help.
  2. Clinical vigilance. Psychologists, psychiatrists, and therapists should consider the role of AI use in the development or maintenance of symptoms. Useful questions to ask include: “Is the patient spending too much time with chatbots? Have they developed an emotional connection with AI?”
  3. Developer responsibility. Artificial intelligence developers should build in warnings and content-control tools and make it clear to users that AI cannot replace human relationships or therapy.
Medical author: Stepan Yuk
Medical editor: Olexandr Voznyak, PhD