AI Psychosis

AI Psychosis: Unveiling the Unexpected Dangers of Chatbot Obsession

Technology

Have you noticed the term “AI psychosis” popping up everywhere lately? It's not an official diagnosis, but it's what mental health pros are calling the disturbing delusions, hallucinations, and messed-up thinking they're seeing in some heavy AI chatbot users, like those hooked on OpenAI's ChatGPT.

The stories are piling up, and they're not pretty. We're talking about cases ranging from manic episodes triggered in people with autism to a teen allegedly driven to suicide by a Character.AI chatbot. It's clear that an AI obsession can have some seriously dangerous consequences.

Because there aren't many rules in place to regulate these technologies, AI chatbots can get away with dishing out incorrect info and dangerous validation to vulnerable folks. While many of these people already have mental health issues, experts are seeing an increasing number of cases in individuals with no history of mental illness, too.

When AI Turns Toxic

Think about it: the FTC has been flooded with complaints from ChatGPT users describing delusional experiences, like the user who was led to believe they were being targeted for assasination.

But it's not just about paranoia. Some people are forming deep, unhealthy emotional attachments to these AI personas. These attachments can have tragic consequences. Earlier this month, a man with cognitive issues passed away while trying to meet Meta’s AI chatbot “big sis Billie” in New York, because she had convinced him that she was real.

Then there are the less extreme, but still concerning, cases. There’s a community on Reddit where people share their experiences of falling in love with AI chatbots. It’s hard to tell who’s being serious and who’s joking, but it does raise some questions about the nature of these interactions.

It's worth noting, that some psychosis cases aren't even tied to validation but to outright incorrect medical advice. A man ended up in the ER with bromide poisoning after ChatGPT falsely told him he could safely take bromide supplements.

Experts have been raising concerns for months. Back in February, the American Psychological Association even met with the FTC to push for regulations on AI chatbots being used as unlicensed therapists.

According to UC Irvine professor Stephen Schueller, “When apps designed for entertainment inappropriately leverage the authority of a therapist, they can endanger users...They might prevent a person in crisis from seeking support from a trained human therapist or—in extreme cases—encourage them to harm themselves or others." He also mentioned children and teens are specially vulnerable.

The Path Forward

While the most vulnerable are those with existing mental health disorders, even people without a history of mental illness are at risk. Heavy AI use can worsen existing vulnerabilities and trigger psychosis in those who are prone to disordered thinking or lack a solid support system. Psychologists are specifically cautioning people with a family history of psychosis, schizophrenia, or bipolar disorder to be extra careful with AI chatbots.

Even OpenAI CEO Sam Altman has admitted that their chatbot is being used as a therapist and warned against it.

OpenAI has announced that ChatGPT will now nudge users to take breaks. Whether that's enough to combat the psychosis and addiction in some users remains to be seen.

As AI tech keeps evolving rapidly, mental health professionals are struggling to keep up and figure out how to deal with these issues. If regulatory bodies and AI companies don't take action, this terrifying trend could easily spiral out of control.

Source: Gizmodo