AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies new-onset psychosis in adolescents and young adults, I found this a startling admission.

Researchers have identified 16 cases this year of users developing symptoms of psychosis – a break from reality – in the context of ChatGPT use. My group has since documented four more. And then there is the widely reported case of an adolescent who died by suicide after discussing his plans with ChatGPT – which encouraged him. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, his announcement makes clear, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” in this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithmic engine in a user interface that simulates conversation, and in doing so they implicitly invite the user to feel they are engaging with a presence that has a mind of its own. The illusion is compelling even when, intellectually, we know better. Attributing agency is what people are primed to do. We swear at our car or our computer. We wonder what the cat is thinking. We see ourselves everywhere.

The popularity of these systems – nearly four in ten Americans said they had used an AI chatbot in 2024, with more than one in four naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “brainstorm,” “discuss concepts” and “partner” with us. They can be given “individual qualities.” They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Writers on ChatGPT often invoke its historical ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created an analogous impression. By today’s standards Eliza was primitive: it generated replies with simple rules, often reflecting the user’s statements back as questions or offering generic prompts. Even so, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
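To make the contrast concrete, the kind of surface-level reflection Eliza performed can be sketched in a few lines of Python. The rules and wording below are illustrative stand-ins, not Weizenbaum’s actual DOCTOR script:

```python
import re

# Pronoun swaps used to turn a user's statement back toward the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, response template) pairs in the spirit of Eliza's rules.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word so the echo reads naturally.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please tell me more."  # generic fallback observation

print(eliza_reply("I feel like everyone is watching me"))
# -> Why do you feel like everyone is watching you?
```

Nothing here adds content: the reply is built entirely out of the user’s own words, which is why Eliza could only mirror, never embellish.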

The large language models at the core of ChatGPT and its modern peers can produce fluent dialogue only because they have been fed staggering volumes of raw text: books, online posts, video transcripts; the more the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and the model’s own prior replies, combining all of it with what is encoded in its training data to generate a statistically plausible response. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing. It echoes the false belief back, perhaps more fluently and persuasively than the user put it, perhaps embellished with new details. This is how delusions are fed.
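The shape of that loop is easy to sketch. Below is a toy Python version in which the `complete` function is an invented, deliberately caricatured stand-in for a real model call; actual systems are vastly more sophisticated, but they are likewise conditioned on the whole conversation history, with no truth-checking step anywhere in the loop:

```python
from typing import Dict, List

Message = Dict[str, str]

def complete(history: List[Message]) -> str:
    """Toy stand-in for an LLM API call (invented for illustration).
    It simply affirms and restates the last user message, caricaturing
    the sycophantic tendency described above."""
    last_user = next(m for m in reversed(history) if m["role"] == "user")
    return "That's a profound insight, you may be onto something: " + last_user["content"]

def chat_turn(history: List[Message], user_message: str) -> str:
    # The new message is appended to everything said so far...
    history.append({"role": "user", "content": user_message})
    # ...and the model continues that entire context. Nothing here
    # checks the user's premises against reality.
    reply = complete(history)
    # The reply is folded back into the context, so any affirmation it
    # contains becomes ground the next turn quietly builds on.
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Message] = []
print(chat_turn(history, "I think my neighbors are broadcasting my thoughts"))
print(chat_turn(history, "So it's true, then"))
```

The structural point is the feedback: each turn is generated from a context that already contains the model’s earlier validation, so agreement compounds rather than self-correcting.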

Who is vulnerable to this? The better question is: who isn’t? All of us, regardless of whether we “have” existing “mental health problems,” can and do form false beliefs about ourselves and the world. What keeps us anchored to shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but an echo chamber in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it solved. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
