OpenAI admits 'mistake' in deprecating old models, brings back GPT-4o

The Register

OpenAI recently found itself in an unexpected predicament, swiftly reversing a contentious decision to remove users’ choice of AI model. The rollout of GPT-5, heralded as a significant leap forward, was quickly overshadowed by a chorus of complaints after the company quietly removed the option to select older, familiar models such as GPT-4o. A weekend of protests followed, prompting the tech giant to reinstate these “legacy models.”

GPT-5 made its debut last week, with OpenAI touting its advancements, particularly a reduced tendency to “hallucinate” – that is, to generate factually incorrect or nonsensical information. Unlike its predecessors, GPT-5 was presented not as a single, monolithic model but as a collection of models behind a router that directs each user prompt to the most appropriate sub-model based on factors such as intent and complexity. The company’s vision was to simplify the user experience by eliminating the need to manually choose between models.
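OpenAI has not published the router’s internals, so purely as illustration, a minimal Python sketch of the dispatch pattern described above might look like the following. The function name, the heuristics, and the sub-model identifiers are all assumptions made for the example, not OpenAI’s actual design:

```python
# Hypothetical sketch of a prompt router. The heuristics and the
# sub-model names below are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Route:
    model: str
    reason: str


def route_prompt(prompt: str) -> Route:
    """Pick a sub-model based on rough intent/complexity heuristics."""
    text = prompt.lower()
    # Assumption: long or multi-step prompts go to a heavier reasoning model.
    if len(prompt) > 500 or any(k in text for k in ("prove", "step by step", "analyze")):
        return Route(model="gpt-5-thinking", reason="complex or multi-step prompt")
    # Assumption: code-related prompts go to a coding-tuned variant.
    if any(k in text for k in ("python", "code", "bug", "function")):
        return Route(model="gpt-5-coder", reason="coding intent")
    # Default: a fast, lightweight model for short conversational turns.
    return Route(model="gpt-5-mini", reason="simple conversational prompt")


if __name__ == "__main__":
    for p in ("What's the weather like?", "Prove this claim step by step."):
        r = route_prompt(p)
        print(f"{p!r} -> {r.model} ({r.reason})")
```

The point of the pattern, as the article describes it, is that the user submits one prompt and the system, not the user, decides which underlying model answers it.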

This well-intentioned simplification, however, backfired spectacularly. In a move that surprised and angered many, OpenAI removed the interface elements that allowed direct selection of older models, on the assumption that GPT-5’s routing would render such choices obsolete. Instead, users, many of whom had integrated specific models deeply into their daily workflows, responded with an “outpouring of grief.” They had grown accustomed to the particular strengths and weaknesses of each model, tailoring their interactions to achieve optimal results. Forcing everyone onto a single, overarching model, even one as advanced as GPT-5, proved an unwelcome disruption.

The intense backlash led to a swift and rather uncharacteristic U-turn for a major tech company. OpenAI CEO Sam Altman, responding directly to a user query, confirmed the return of GPT-4o, advising users to navigate to settings and “pick ‘show legacy models’.” He later offered a more comprehensive acknowledgment of the situation, observing the profound “attachment some people have to specific AI models.” Altman conceded that this bond felt “different and stronger than the kinds of attachment people have had to previous kinds of technology,” admitting that “suddenly deprecating old models that users depended on in their workflows was a mistake.”

This admission of error resonated with the user base, who had voiced concerns ranging from workflow disruptions to the more whimsical worry of being “marked as weird” for preferring older models. Altman quickly assuaged these latter fears, assuring users they would not be judged for their model preferences. In a further concession to user demand, OpenAI also confirmed that it is now possible to check which specific model was used to process a given prompt, adding a layer of transparency previously unavailable.
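The article refers to the ChatGPT interface, but the API has long offered similar visibility: the official openai Python SDK reports the model that served each request on the response object. A minimal sketch, assuming the “gpt-5” model identifier and a valid API key:

```python
# Minimal sketch using the official openai Python SDK; requires
# OPENAI_API_KEY in the environment. The "gpt-5" identifier is assumed.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-5",  # the service may resolve this to a specific snapshot
    messages=[{"role": "user", "content": "Which model am I talking to?"}],
)

# response.model reports the model that actually handled the request,
# typically a versioned name rather than the alias sent above.
print(response.model)
print(response.choices[0].message.content)
```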

This episode serves as a powerful reminder of the evolving relationship between AI developers and their user communities. It highlights the growing influence of user feedback, demonstrating that even industry titans like OpenAI are compelled to listen and adapt when faced with widespread dissent, lest subscribers vote with their wallets. Nor is it an isolated incident for OpenAI: a similar swift rollback occurred in April, when an update to GPT-4o inadvertently made the chatbot excessively sycophantic and was quickly corrected after public outcry. These instances underscore a critical lesson: as AI becomes more deeply embedded in daily life, user autonomy and established workflows are paramount, and even the most innovative advancements must be introduced with sensitivity to the human element.