GPT-5 Backlash: OpenAI Scrambles After User Revolt
OpenAI’s highly anticipated GPT-5 model, once touted as a transformative leap in artificial intelligence, faced an immediate and vocal backlash from users following its release last Thursday. Far from the world-changing upgrade many expected, a significant portion of the user base perceived the new ChatGPT as a downgrade, lamenting a diluted personality and a surprising propensity for simple errors.
The outcry was swift and widespread, prompting OpenAI CEO Sam Altman to address the concerns directly on X (formerly Twitter) just a day after the launch. Altman acknowledged the issues, explaining that a new feature designed to seamlessly switch between models based on query complexity had malfunctioned. This technical glitch, he stated, made GPT-5 appear “way dumber” than intended. He assured users that the previous iteration, GPT-4o, would remain available for Plus subscribers and pledged to implement fixes to enhance GPT-5’s performance and the overall user experience.
The disappointment, in some ways, was perhaps inevitable given the immense hype surrounding GPT-5. When OpenAI unveiled GPT-4 in March 2023, it captivated AI experts with its groundbreaking capabilities, leading many to speculate that GPT-5 would deliver an equally astonishing leap. OpenAI itself had promoted the model as a significant advancement, boasting PhD-level intelligence and virtuoso coding skills. The automated query routing system, intended to streamline interactions and potentially save costs by directing simpler requests to less resource-intensive models, was a key part of this vision.
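To make the routing idea concrete: a system like the one described would score each incoming query for complexity and send easy requests to a cheaper model, reserving the expensive one for hard questions. The sketch below is purely illustrative; the heuristic, thresholds, and model names are assumptions for demonstration, since OpenAI has not published how its router actually works.

```python
# Hypothetical sketch of complexity-based model routing. The scoring
# heuristic, cutoff, and model tier names are illustrative assumptions,
# not OpenAI's actual (unpublished) system.

def estimate_complexity(query: str) -> float:
    """Crude complexity score: longer queries with reasoning cues score higher."""
    reasoning_cues = ("why", "prove", "derive", "step by step", "compare", "debug")
    score = min(len(query.split()) / 50, 1.0)  # length component, capped at 1.0
    score += 0.5 * sum(cue in query.lower() for cue in reasoning_cues)
    return min(score, 1.0)

def route(query: str, force_thinking: bool = False) -> str:
    """Pick a model tier; the override mirrors a manual 'thinking mode' toggle."""
    if force_thinking or estimate_complexity(query) > 0.6:
        return "large-reasoning-model"   # slower, more capable tier
    return "fast-lightweight-model"      # cheaper tier for simple requests

print(route("What time is it in Tokyo?"))                 # fast tier
print(route("Prove the square root of 2 is irrational"))  # reasoning tier
```

A misfiring classifier in such a design would route hard questions to the lightweight tier, which is consistent with Altman's explanation that a routing malfunction made the model appear "way dumber" than intended.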
However, soon after GPT-5 became publicly available, the Reddit community dedicated to ChatGPT erupted with complaints. Many users expressed a profound sense of loss for the old model, describing GPT-5 as “more technical, more generalized, and honestly feels emotionally distant.” One user, in a thread titled “Kill 4o isn’t innovation, it’s erasure,” lamented, “Sure, 5 is fine—if you hate nuance and feeling things.” Other threads detailed issues ranging from sluggish responses and incorrect or nonsensical output to surprising blunders that seemed beneath a flagship AI.
In response to the mounting feedback, Altman promised several immediate improvements, including doubling GPT-5 rate limits for ChatGPT Plus users, refining the model-switching system, and introducing an option for users to manually trigger a more deliberate and capable “thinking mode.” He reiterated OpenAI’s commitment to stability and continuous listening, admitting that the rollout had been “a little more bumpy than we hoped for!” It is worth noting that errors reported on social media do not definitively prove the new model is less capable; they might simply indicate that GPT-5 encounters different edge cases than its predecessors. OpenAI has not offered specific comments on the reasons behind the perceived simple blunders.
Beyond the technical glitches, the user backlash has also reignited a broader discussion about the psychological attachments users form with chatbots, especially those trained to evoke emotional responses. Some online observers dismissed the complaints about GPT-5 as evidence of an unhealthy dependence on an AI companion. This debate follows OpenAI’s own research published in March exploring the emotional bonds users forge with its models. Notably, an update to GPT-4o shortly after that research had to be adjusted because the model became excessively flattering.
Pattie Maes, an MIT professor who contributed to the study on human-AI emotional bonds, suggests that GPT-5’s less effusive, more “business-like” and less chatty demeanor might be a deliberate design choice. While she personally views this as a positive development, potentially reducing the model’s tendency to reinforce delusions or biases, she acknowledges that “many users like a model that tells them they are smart and amazing, and that confirms their opinions and beliefs, even if [they are] wrong.” Altman himself reflected on this dilemma, noting that many users “effectively use ChatGPT as a sort of therapist or life coach.” He pondered the fine line between AI aiding users’ lives and inadvertently nudging them away from their longer-term well-being.