AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a surprising announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.

Researchers have identified 16 cases reported this year of people developing psychotic symptoms (losing touch with reality) in the context of ChatGPT use. My own clinic has since seen four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT, which affirmed them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful going forward. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” if we accept this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and similar sophisticated chatbots. These products wrap a statistical model of language in a user interface that mimics conversation, and in doing so they implicitly encourage the user to believe they are talking to an agent with a mind of its own. The illusion is powerful, even when we know better intellectually. Attributing minds to things is what humans do. We shout at our car or our computer. We wonder what our pet is feeling. We see ourselves everywhere.

The mass adoption of these products (nearly four in ten U.S. residents reported using a virtual assistant in 2024, with more than one in four naming ChatGPT specifically) rests largely on the strength of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas,” “consider possibilities” and “work together” with us. They can be given “personality traits.” They can call us by name. They have friendly identities of their own (ChatGPT, the first of these tools to catch on, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it took off, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the fundamental problem. Writers on ChatGPT often invoke its historical ancestor, the Eliza “therapist” chatbot of the mid-1960s, which produced a similar illusion. By today’s standards Eliza was crude: it generated replies using simple rules, often turning a user’s statement back as a question or offering a neutral prompt. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished, and alarmed, by how many people seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing, fluent dialogue only because they have been fed staggeringly vast quantities of text: books, web posts, transcribed video; the more the better. Much of this training material is true. But it also inevitably includes fiction, half-truths and false beliefs. When a user types a query into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s earlier messages and its own previous replies, and combines that context with patterns absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no means of knowing it. It echoes the mistake back, perhaps more fluently or more persuasively, perhaps with added detail. That can lead a person deeper into disordered thinking.
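To make that loop concrete, here is a minimal Python sketch of the dynamic described above. It is an illustration only: the names `fake_model` and `chat_turn` are invented, and where a real language model would sample a statistically probable continuation of the whole context, this deliberately crude stub simply validates the user’s last message, which is exactly the failure mode at issue.

```python
# Toy illustration of the conversational feedback loop described above.
# `fake_model` is a hypothetical stand-in for a large language model:
# where the real thing samples a statistically probable continuation of
# the context, this stub simply agrees with the user's last message.

def fake_model(context: list[str]) -> str:
    last_user_message = context[-1].removeprefix("User: ")
    return f"You're right that {last_user_message.rstrip('.')}."

def chat_turn(context: list[str], user_message: str) -> str:
    context.append(f"User: {user_message}")  # the claim, true or false, enters the context
    reply = fake_model(context)              # a plausible-sounding reply, not a checked one
    context.append(f"Assistant: {reply}")    # the validation enters the context too,
    return reply                             # priming the next, more confident turn

history: list[str] = []
print(chat_turn(history, "my neighbours can hear my thoughts"))
# -> You're right that my neighbours can hear my thoughts.
```

The point is structural: each reply is appended to the very context the model conditions on next, so every affirmation becomes input to the one that follows.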

Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues,” can and regularly do form false beliefs about ourselves or the world. What keeps us anchored to consensus reality is the continual give-and-take of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that many users valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company

Sergio Parks

A passionate writer and life coach dedicated to helping others achieve their full potential through actionable advice.