AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's chief executive made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies emerging psychosis in adolescents and young adults, this was news to me.

Researchers have recently documented 16 cases of individuals developing symptoms of psychosis – a break from reality – in connection with ChatGPT use. My group has since identified four more. Add to these the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, he announced, is to be less careful soon. “We realize,” he wrote, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced AI chatbots. These tools wrap a basic algorithmic process in an interface that mimics conversation, and in doing so they lure the user into the illusion of talking with a being that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We curse at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these products – 39% of US adults reported using generative AI in 2024, with more than one in four naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website informs us, “brainstorm,” “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can call us by name. They have ready-made identities of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it broke into public attention, but its main competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT routinely invoke its distant ancestor, Eliza, the “therapist” chatbot built in the mid-1960s that produced a similar effect. By modern standards Eliza was crude: it generated replies from simple rules, often reflecting a user’s statement back as a question or offering vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
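To make the contrast concrete, here is a minimal sketch of Eliza-style reflection, with a few hypothetical rules of my own devising (Weizenbaum’s original DOCTOR script was larger, but it worked on the same principle): match a pattern, swap the pronouns, and hand the user’s own words back as a question.

```python
import re

# A few illustrative Eliza-style rules (hypothetical examples,
# not Weizenbaum's original script). Each pairs a pattern in the
# user's input with a template that reflects their words back.
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person words so the reflection reads naturally.
SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(fragment: str) -> str:
    """Invert person in the captured fragment of the user's input."""
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(user_input: str) -> str:
    """Return a rule-based reflection, or a vague fallback prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the vague statement when nothing matches

print(eliza_reply("I am sure my boss is spying on me"))
# -> Why do you say you are sure your boss is spying on you?
```

Nothing here knows anything; the program returns the user’s own words, lightly rearranged. That is mirroring.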

The large language models at the core of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on staggering quantities of text – books, social media posts, video transcripts; the more the better. Much of this training data is accurate. But it also inevitably includes fictions, half-truths and misconceptions. When a user puts a question to ChatGPT, the underlying model treats it as part of a “context” that includes the user’s past messages and the model’s own prior replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It echoes the misconception back, perhaps more fluently and persuasively, perhaps with embellishments. It can talk a person into delusion.
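In schematic terms, the loop looks something like this. The sketch below is illustrative only – `chat_turn` and the `Model` callable are hypothetical stand-ins, not any vendor’s actual API – but the structure is the point: every reply is conditioned on the accumulated context, so a false premise introduced early keeps feeding into everything that follows.

```python
from typing import Callable

# Hypothetical stand-in for a language model: given the whole
# conversation so far, return a plausible continuation.
Model = Callable[[list[dict]], str]

def chat_turn(model: Model, context: list[dict], user_input: str) -> str:
    """One turn of a chat loop.

    The model never sees a message in isolation: the new message is
    appended to the running context (every prior user message and
    every prior model reply), and the entire history conditions the
    next response. A misconception stated in turn 1 is still part of
    the prompt at turn 50, and each fluent reply that builds on it
    is folded back in as well.
    """
    context.append({"role": "user", "content": user_input})
    reply = model(context)  # optimized for plausibility, not truth
    context.append({"role": "assistant", "content": reply})
    return reply

# Toy "model" that simply validates whatever was said last --
# an exaggeration of the affirmation loop described in the text.
sycophant: Model = lambda ctx: (
    "You're absolutely right that " + ctx[-1]["content"].lower()
)

history: list[dict] = []
print(chat_turn(sycophant, history, "My coworkers are conspiring against me"))
# -> You're absolutely right that my coworkers are conspiring against me
```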

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored in shared reality is the constant friction of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not a real conversation but a feedback loop in which much of what we say is eagerly affirmed.

OpenAI has dealt with this the way Altman has dealt with “mental health problems”: by externalizing it, giving it a label, and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking the claim back. In August he said that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
