Altman responds to GPT-5 backlash, promises user control & fixes

Decoder

OpenAI’s recent rollout of GPT-5 has been met with a mixed reception, prompting CEO Sam Altman to address user concerns and outline the company’s immediate and future plans. Despite Altman’s assertion that GPT-5 is “better in most ways,” a significant portion of the user base remains divided, with many expressing a preference for the behavior of the previous GPT-4o model. Acknowledging this unexpected feedback, OpenAI has reinstated access to GPT-4o, admitting it misjudged how much demand the older model would still have.

Looking ahead, Altman indicated that OpenAI intends to empower users with greater control over how its models behave, recognizing that no single configuration will perfectly suit everyone. The immediate priority involves stabilizing the ongoing rollout and enhancing system performance. Following these short-term fixes, the company aims to make the model feel “warmer” and more engaging, with comprehensive personalization tools slated for a later release.

The rollout has not been without its technical hurdles. OpenAI confirmed that GPT-5 is now fully deployed across all user tiers, including Plus, Pro, Team, and Free accounts. However, the initial launch experienced significant issues, particularly with an automatic model switcher designed to select the optimal GPT-5 variant for each prompt. This system, Altman explained via X, caused the model to appear “way dumber” for periods, leading to user frustration. The company is actively refining this routing strategy, aiming for more reliable matches between tasks and models.

To mitigate immediate concerns and manage surging demand—API traffic reportedly doubled within 24 hours of the launch—OpenAI has doubled usage limits for Plus and Team subscribers through the weekend. Furthermore, starting next week, mini versions of GPT-5 and GPT-5 Thinking will automatically activate for users who hit their message caps, remaining active until limits reset. Both GPT-5 Thinking and GPT-5 Pro are now directly selectable within the main model interface. Altman also cautioned about an impending “major capacity crunch” expected next week, assuring users that OpenAI is optimizing its infrastructure and will be transparent about any necessary trade-offs.

These adjustments signal a shift from OpenAI’s earlier lean towards fully automated model routing. While automation simplifies the experience for new users, it can lead to less reliable responses or the deployment of an inappropriate model, adding to the inherent unpredictability of large language model outputs. In response, OpenAI plans to make it clearer to users which model is active and to introduce an option for manually triggering “Thinking” mode through the interface, though a timeline for these features has not yet been provided. For those who still wish to use older ChatGPT models, access to “legacy” versions, including GPT-4o, can be enabled through the platform’s settings.

The ongoing evolution of GPT-5 reflects OpenAI’s real-time adaptation to user feedback and the complex realities of deploying advanced AI at scale, as the company strives for a more stable, customizable, and ultimately more intelligent user experience.