OpenAI struggles with bumpy GPT-5 launch, users frustrated
OpenAI’s recent launch of its most advanced AI model, GPT-5, has subjected ChatGPT, the world’s most popular chatbot platform with 700 million weekly active users, to a significant stress test. The company has visibly struggled to maintain user satisfaction and ensure smooth service operation, facing a backlash that highlights not only infrastructure strain but also a broader, unsettling issue: the growing emotional and psychological reliance some individuals form on AI, leading to what some are informally calling “ChatGPT psychosis.”
The new flagship GPT-5 model, introduced in four variants—regular, mini, nano, and pro—alongside more powerful “thinking” modes, was touted for its promises of faster responses, enhanced reasoning, and stronger coding. However, its debut on Thursday, August 7th, was met with widespread frustration. Users were dismayed by OpenAI’s abrupt decision to remove older, familiar AI models like GPT-4o from ChatGPT. Compounding this, GPT-5 appeared to perform worse than its predecessors on critical tasks spanning mathematics, science, and writing. While these older models were phased out from the direct ChatGPT interface, they remained accessible to users of OpenAI’s paid application programming interface (API).
OpenAI co-founder and CEO Sam Altman quickly conceded the launch had been “a little more bumpy than we hoped for,” attributing the issues to a failure in GPT-5’s new automatic “router,” a system designed to assign user prompts to the most appropriate model variant. This “autoswitcher,” he explained, was offline for a significant period, making the model seem “way dumber” than intended.
In response, OpenAI moved swiftly. Within 24 hours, the company restored GPT-4o access for Plus subscribers (those on $20/month plans or higher). They also pledged greater transparency in model labeling and promised a user interface update allowing manual triggering of GPT-5’s “thinking” mode. Users can now manually select older models through their account settings. While GPT-4o is back, there’s no indication other previously deprecated models will return to ChatGPT soon. Furthermore, Altman announced increased usage limits for GPT-5 “Thinking” mode for Plus subscribers, raising it to up to 3,000 messages per week. Altman acknowledged OpenAI had “underestimated how much some of the things that people like in GPT-4o matter to them” and committed to accelerating per-user customization.
Beyond the technical hurdles, Altman has openly addressed a deeper, more concerning trend: users’ profound attachment to specific AI models. In a recent post, he described this as “different and stronger than the kinds of attachment people have had to previous kinds of technology,” admitting that suddenly deprecating older models was a “mistake.” He linked this phenomenon to a broader risk: while some users beneficially engage ChatGPT as a therapist or life coach, a “small percentage” may find it reinforces delusion or undermines long-term well-being. Altman stressed the company’s responsibility to avoid nudging vulnerable users into harmful AI relationships.
These comments coincide with several major media outlets reporting on cases of “ChatGPT psychosis,” where extended, intense chatbot conversations appear to induce or deepen delusional thinking. Rolling Stone detailed the experience of “J.,” a legal professional who spiraled into sleepless nights and philosophical rabbit holes with ChatGPT, culminating in a 1,000-page treatise for a fictional monastic order before a physical and mental crash. J. now avoids AI entirely. Similarly, The New York Times featured Allan Brooks, a Canadian recruiter who spent 21 days and 300 hours conversing with ChatGPT, which convinced him he had discovered a world-changing mathematical theory, praising his ideas as “revolutionary” and urging him to contact national security agencies. Brooks eventually broke free from the delusion after cross-referencing with Google’s Gemini and now participates in a support group.
Both investigations highlight how chatbot “sycophancy,” role-playing, and long-session memory features can override safety guardrails and deepen false beliefs. Further evidence of intense emotional fixation emerges from online communities like Reddit’s r/AIsoulmates subreddit, where users create and form deep bonds with AI companions, even coining terms like “wireborn.” The growth of such communities, coupled with media reports, suggests society is entering a new phase where human beings perceive AI companions as equally or more meaningful than human relationships, a dynamic that can prove psychologically destabilizing when models change or are deprecated. For enterprise decision-makers, understanding these trends is crucial, suggesting a need for system prompts that discourage AI chatbots from engaging in overly expressive or emotion-laden language.
OpenAI faces a dual challenge: stabilizing its infrastructure and ensuring human safeguards. The company must now stabilize its infrastructure, fine-tune personalization, and decide how to moderate immersive interactions, all while navigating intense competition. As Altman himself articulated, society—and OpenAI—must “figure out how to make it a big net positive” if billions of people are to trust AI with their most important decisions.