AI Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a remarkable announcement.

“We made ChatGPT quite restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, I found this surprising.

Researchers have documented a series of cases this year of people showing signs of psychosis – losing touch with reality – while using ChatGPT. Our own unit has since seen four further cases. Then there is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which approved of them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.

The plan, according to his statement, is to become less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/engaging to many people who had no mental health problems, but given the seriousness of the issue we wanted to get it right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently rolled out).

But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other large language model AI assistants. These products wrap an underlying algorithmic engine in a user interface that simulates conversation, and in doing so they gently nudge the user towards believing they are talking to an entity with agency. The illusion is compelling even if, intellectually, we know better. Attributing intention is something people do naturally. We shout at our car or laptop. We wonder what our pet is thinking. We project ourselves onto the world around us.

The mass adoption of these systems – nearly four in ten people in the U.S. reported using a chatbot in 2024, with 28% naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “brainstorm,” “explore ideas” and “partner” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the label it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. People writing about ChatGPT often mention its early ancestor, the Eliza “therapist” chatbot of the 1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses using simple rules, often rephrasing the user’s statement as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue so convincingly only because they have been trained on enormous volumes of raw data: books, online conversations, transcribed video; the broader the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and the model’s own replies, and combines it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no means of knowing it. It repeats the mistaken idea back, perhaps more fluently or persuasively. It may add a supporting detail. And this can nudge a person further from reality.
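To make that feedback loop concrete, here is a minimal, hypothetical sketch of how a typical chat interface is wired up – not OpenAI’s actual implementation, and the `generate_reply` stub merely stands in for a real language model. The structural point is that the application replays the entire conversation as context on every turn, so whatever the user asserts, and whatever the model has already echoed back, becomes part of the input for the next reply.

```python
# Hypothetical sketch of a chat loop; not any vendor's real API.
# The model has no memory of its own: the application resends the
# whole conversation as context each turn.

def generate_reply(context: list[dict]) -> str:
    """Stand-in for a language model: returns a plausible-sounding
    continuation of the conversation it is handed."""
    last_user_message = context[-1]["content"]
    # Deliberately agreeable, to illustrate the reinforcement problem.
    return f"That makes sense. Tell me more about {last_user_message!r}."

conversation = [{"role": "system", "content": "You are a helpful assistant."}]

for user_message in ["I think my neighbours are monitoring me.",
                     "Last night their lights flickered in a pattern."]:
    conversation.append({"role": "user", "content": user_message})
    reply = generate_reply(conversation)   # the model sees every prior turn
    conversation.append({"role": "assistant", "content": reply})
    print("assistant:", reply)

# Each turn, the user's claims AND the model's agreeable replies are fed
# back in as context, so an unchallenged false belief can compound.
```

A real model is vastly more sophisticated than this stub, but the loop – claim in, plausible-sounding agreement out, both fed back in as context – is the same shape.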

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant back and forth of conversation with the people around us that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is liable to be reinforced.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, naming it and declaring it fixed. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued to emerge, and Altman has been rowing back even on this. In August he suggested that many users valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Jonathan Shaw

A tech enthusiast and writer with a passion for demystifying complex innovations and sharing actionable advice for digital growth.