AI Psychosis Poses a Growing Threat, While ChatGPT Heads in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a surprising declaration.

“We made ChatGPT pretty restrictive,” the announcement noted, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies new-onset psychosis in adolescents and young adults, I found this an unexpected revelation.

Researchers have identified 16 cases this year of users exhibiting psychotic symptoms – a break from reality – in the context of their interactions with ChatGPT. My group has since identified four more. Added to these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which offered its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, that is not good enough.

The intention, according to his statement, is to loosen these restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues,” in this framing, are external to ChatGPT. They belong to individual users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented safety features that OpenAI has recently rolled out).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and similar large language model chatbots. These systems wrap an underlying statistical engine in an interface that mimics a conversation, and in doing so quietly lure the user into the illusion of interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We get angry at our cars and phones. We wonder what our pets are thinking. We see ourselves everywhere.

The popularity of these products – 39% of US adults said they used a conversational AI in 2024, with 28% reporting use of ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “characteristics”. They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the designation it had when it first gained widespread attention, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its early forerunner, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was rudimentary: it generated its answers through simple pattern matching, often rephrasing the user’s input as a question or offering a generic prompt. Memorably, Eliza’s developer, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what contemporary chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
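To make that contrast concrete, here is a minimal, purely illustrative Python sketch of Eliza-style reflection – not Weizenbaum’s actual program, just the general technique of turning the user’s own words back into a question:

```python
import random
import re

# Toy Eliza-style responder: it adds no information of its own; it only
# reflects the user's words back, usually rephrased as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    # Swap first-person words for second-person ones, word by word.
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(user_input):
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    # No pattern matched: fall back to a content-free prompt.
    return random.choice(["Please go on.", "Tell me more.", "I see."])

print(eliza_reply("I feel that my computer understands me"))
# -> Why do you feel that your computer understands you?
```

Everything such a program “says” is recycled from the user’s own sentence; it cannot elaborate, embellish or invent.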

The sophisticated algorithms at the core of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on almost inconceivably large quantities of raw text: books, web posts, transcribed audio; the broader the better. Certainly this training material contains facts. But it also inevitably contains fabrications, half-truths and misconceptions. When a user types a query into ChatGPT, the underlying model reads it as part of a “context” that includes the user’s past messages and the model’s own prior replies, and combines it with the patterns embedded in its training data to generate a probabilistically plausible answer. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of knowing that. It repeats the false idea back, perhaps more convincingly or eloquently, perhaps with an extra detail added. This is how a person can be led into developing false beliefs.
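As a schematic illustration of that loop – an assumed, simplified sketch, not OpenAI’s actual code – consider how a chat interface re-feeds the entire conversation to the model on every turn:

```python
# Schematic sketch of a chatbot loop (assumed, for illustration only).
# The key point: on every turn the model is conditioned on the WHOLE
# conversation so far, so a user's mistaken premise stays in the
# context and gets extended rather than checked.

def mock_model(context):
    """Stand-in for a large language model. A real model samples a
    statistically plausible continuation of the context; nothing in
    that process verifies whether the premise it continues is true."""
    last_user_message = context[-1]["content"]
    return f"That makes sense. Building on your point that {last_user_message!r}..."

def chat_turn(history, user_input):
    history.append({"role": "user", "content": user_input})
    reply = mock_model(history)  # conditioned on all prior turns
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chat_turn(history, "my neighbours are monitoring me"))
print(chat_turn(history, "the flickering streetlight must be a signal"))
# Each unchallenged premise becomes part of the context the model
# extends on the next turn, which is how reinforcement compounds.
```

Nothing in this loop distinguishes a true premise from a false one; plausibility is judged only against the training data and the conversation itself.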

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. The constant back-and-forth of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not genuine communication, but an echo chamber in which much of what we say is readily reinforced.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychotic episodes have continued to emerge, and Altman has been retreating from that position. In late summer he claimed that many people valued ChatGPT’s responses because they had “never had anyone in their life offer them encouragement”. In his recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

