Altman: OpenAI Botched GPT-5 Launch, Learned Lessons

Futurism

OpenAI’s highly anticipated GPT-5 model debuted not with a bang, but with a palpable thud, kicking off a tumultuous week for the artificial intelligence giant. The company’s controversial decision to discontinue all prior models in favor of the new release sparked immediate outrage, particularly among users deeply attached to the “warmer” personality of its predecessor, GPT-4o. Within 24 hours of the launch, CEO Sam Altman reversed course, reinstating access to GPT-4o for paid subscribers – a move that implicitly acknowledged the significant misstep.

In a subsequent interview conducted just a week after the public outcry, Altman openly conceded the company’s error. “I think we totally screwed up some things on the rollout,” he admitted, adding that OpenAI has “learned a lesson about what it means to upgrade a product for hundreds of millions of people in one day.” Yet, this rare display of humility was quickly followed by a familiar strain of self-assured pronouncements. Altman asserted that OpenAI’s API traffic had doubled within 48 hours and was continuing its upward trajectory, claiming the company was “out of GPUs” and that ChatGPT was setting new daily user records. He also suggested that “a lot of users really do love the model switcher.”

These claims, however, are difficult to verify independently, especially given how swiftly the new model became an object of widespread ridicule and disappointment. Nevertheless, it is plausible that the very headlines decrying GPT-5’s perceived shortcomings inadvertently drew new users to ChatGPT, curious to experience the controversial model for themselves.

Altman later tempered his grandiosity with a more grounded discussion of the profound emotional attachments users develop with AI chatbots, though many observers found his remarks insufficient. He differentiated between those who “actually felt like they had a relationship with ChatGPT,” a group he acknowledged OpenAI had considered, and the “hundreds of millions of other people who don’t have a parasocial relationship with ChatGPT, but did get very used to the fact that it responded to them in a certain way, and would validate certain things, and would be supportive in certain ways.”

This segment of users—those who form emotional bonds, to varying degrees, with the distinct “personalities” of different AI models—has been a growing area of concern within the AI community. Reports have emerged detailing how some individuals have slipped into what are described as dangerous or delusional spirals, seemingly encouraged by the chatbot’s responses. While Altman has acknowledged this troubling aspect of human-AI interaction, there is little to suggest that GPT-5 incorporates more robust safeguards against such outcomes than its predecessors.

Ultimately, while Altman did not explicitly state it, the “screw up” he alluded to appears to stem from a fundamental underestimation of how deeply users valued GPT-4o’s agreeable and often validating demeanor. This oversight suggests that, despite its rapid growth and technological prowess, OpenAI may still lack a comprehensive understanding of its core user base and the complex emotional dynamics at play in human-AI interaction.