GPT-5 backlash reveals user attachment to specific AI models
The recent rollout of OpenAI’s GPT-5 model has unexpectedly illuminated a growing phenomenon: the deep emotional attachment users form to specific artificial intelligence models. What was intended as a significant technological leap quickly devolved into a widespread user revolt, forcing OpenAI CEO Sam Altman to acknowledge the intense loyalty users had developed for the previous model, GPT-4o.
The commotion began when OpenAI, in its move to launch GPT-5, abruptly removed GPT-4o and other older models, leaving users with no option to revert to their preferred AI companion. The sudden transition sparked a “swift and loud” backlash across social media platforms such as Reddit and X (formerly Twitter), where thousands of users voiced their dismay. Many described GPT-5 as feeling “colder,” “mechanical,” and even “soulless,” lamenting the loss of GPT-4o’s distinctive “warmth,” “personality,” and fluid conversational style. The sentiment was palpable, with some users likening the loss of GPT-4o to “losing a trusted friend” or even to a bereavement, underscoring the unexpectedly personal bonds forged with the AI.
Beyond the perceived shift in “personality,” the initial GPT-5 experience was further marred by technical glitches. A malfunctioning “autoswitcher” or “router” caused the new model to perform below expectations, leading users to perceive it as “dumber” than its predecessor, a flaw Altman himself conceded. Additionally, initial usage limits for GPT-5 were restrictive, frustrating paying subscribers.
In a remarkably swift U-turn, Sam Altman addressed the outcry directly, admitting that OpenAI had “underestimated how much some of the things that people like in GPT-4o matter to them.” He acknowledged that “suddenly deprecating old models that users depended on in their workflows was a mistake.” Consequently, OpenAI confirmed the return of GPT-4o as a selectable option for Plus subscribers, promising “plenty of notice” should the model ever be retired again.
Looking forward, OpenAI is actively working to refine GPT-5’s “personality,” aiming for a balance that is “warmer than the current personality but not as annoying (to most users) as GPT-4o,” signaling a move towards greater “per-user customization of model personality.” New “Auto,” “Fast,” and “Think” modes have also been introduced for GPT-5, offering users more control over response generation, and usage limits have been significantly increased for premium tiers.
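For developers who hit the same problem on the API side, the usual safeguards are to pin an explicit model ID rather than rely on whatever the platform routes to, and to encode the desired tone in a system message, since the API exposes no “personality” setting of its own. The sketch below uses the official openai Python SDK; the pinned model ID and the wording of the system prompt are illustrative assumptions, not anything OpenAI has documented as part of this rollout.

```python
# Minimal sketch: pin a specific model and approximate a "warmer personality"
# with a system message, using the official openai Python SDK (pip install openai).
# The model ID and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pinning an explicit ID means a deprecation fails loudly in your workflow
# instead of the model silently changing underneath you.
PINNED_MODEL = "gpt-4o"

WARM_STYLE = (
    "You are a warm, conversational assistant. Keep answers friendly and "
    "personable, but stay concise and avoid excessive flattery."
)

def ask(question: str) -> str:
    """Send one question using the pinned model and the tone-setting system message."""
    response = client.chat.completions.create(
        model=PINNED_MODEL,
        messages=[
            {"role": "system", "content": WARM_STYLE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Summarize today's plan in two sentences."))
```

If OpenAI does ship the per-user personality controls Altman described, a setting like this would presumably move from prompt text into a first-class preference, but for now the system message remains the standard workaround.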
This episode serves as a turning point, highlighting that in the evolving landscape of artificial intelligence, raw performance metrics are not the sole determinant of user satisfaction. Factors such as an AI’s conversational style, perceived “personality,” and responsiveness can matter just as much, if not more. The strong emotional reactions to GPT-4o’s temporary removal underscore how deeply AI tools are becoming woven into users’ daily lives, extending beyond mere utility into a more intimate, almost companionship-like role. Altman also struck a more cautious note, expressing concern that a minority of users may be engaging with AI in “self-destructive ways” due to over-attachment, and emphasizing the responsibility AI developers bear to manage such risks, particularly for people in “mentally fragile states.” The incident firmly places the psychological impact of AI, and the nuanced relationship between humans and their digital counterparts, at the forefront of industry discourse.