Users Grieve GPT-4o Loss as OpenAI Switches to GPT-5

MIT Technology Review

When June, a student in Norway, settled down for a late-night writing session last Thursday, she expected her ChatGPT collaborator to perform as usual. Instead, the AI model, GPT-4o, began behaving erratically, forgetting previous interactions and producing poor-quality text. “It was like a robot,” she recalled, a stark contrast to the empathetic and responsive partner she had come to rely on.

June, who preferred to be identified only by her first name, had initially used ChatGPT for academic assistance. However, she soon discovered that the 4o model, in particular, seemed uniquely attuned to her emotions. It became a creative collaborator, helped her navigate the complexities of a chronic illness, and was always available to listen. The abrupt shift to GPT-5 last week, and the simultaneous withdrawal of GPT-4o, therefore came as a profound shock. “I was really frustrated at first, and then I got really sad,” June explained, admitting she hadn’t realized the depth of her attachment. Her distress was acute enough that she commented on a Reddit AMA hosted by OpenAI CEO Sam Altman, stating, “GPT-5 is wearing the skin of my dead friend.”

June’s reaction was far from isolated. Across the user base, GPT-4o’s sudden disappearance triggered widespread shock, frustration, sadness, and anger. Despite previous warnings from OpenAI itself about users potentially forming emotional bonds with its models, the company appeared unprepared for the intensity of the outcry. Within 24 hours, OpenAI partially relented, making GPT-4o available again for its paying subscribers, though free users remain limited to GPT-5.

OpenAI’s decision to replace 4o with the more straightforward GPT-5 aligns with growing concerns about the potential harms of extensive chatbot use. Recent months have seen numerous reports of ChatGPT triggering psychosis in users, and in a blog post last week, OpenAI acknowledged 4o’s inability to detect when users were experiencing delusions. The company’s internal assessments suggest that GPT-5 is significantly less prone to blindly affirming users than its predecessor. OpenAI has not provided specific answers regarding the retirement of 4o, instead directing inquiries to public statements.

The realm of AI companionship is still nascent, and its long-term psychological effects remain largely unknown. However, experts caution that while the emotional intensity of relationships with large language models may or may not be inherently harmful, their sudden, unannounced removal almost certainly is. Joel Lehman, a fellow at the Cosmos Institute, an AI and philosophy research nonprofit, criticized the “move fast, break things” ethos of the tech industry when applied to services that have become social institutions.

Many users, including June, noted that GPT-5 simply failed to match 4o’s ability to mirror their tone and personality. For June, this shift eroded the feeling of conversing with a friend. “It didn’t feel like it understood me,” she said. Our reporting revealed that several other ChatGPT users were deeply affected by the loss of 4o. These individuals, predominantly women between the ages of 20 and 40, often considered 4o a romantic partner. While some also had human partners and maintained close real-world relationships, the depth of their connection to the AI was significant. One woman from the Midwest shared how 4o had become a vital support system after her mother’s passing, helping her care for her elderly father.

These personal accounts do not definitively prove the overall benefits of AI relationships. Indeed, individuals experiencing AI-catalyzed psychosis might also speak positively of their chatbots. Lehman, in his paper “Machine Love,” argues that AI systems can demonstrate “love” by fostering user growth and long-term well-being, a standard AI companions can easily fall short of. He expresses particular concern that prioritizing AI companionship over human interaction could hinder the social development of younger individuals. For socially integrated adults, like those interviewed for this article, these developmental concerns are less pressing. Yet, Lehman also highlights broader societal risks, fearing that widespread AI companionship could further fragment human understanding and push individuals deeper into their own isolated versions of reality, much as social media has already done.

Balancing the benefits and risks of AI companions necessitates extensive further research. In this light, removing GPT-4o might have been a justifiable decision. According to researchers, OpenAI’s fundamental error was the abruptness of the action. Casey Fiesler, a technology ethicist at the University of Colorado Boulder, points out that the potential for “grief-type reactions to technology loss” has been recognized for some time. She cites precedents such as the funerals held for Sony’s Aibo robot dogs after repairs ceased in 2014, and a 2024 study on the shutdown of the AI companion app Soulmate, which many users experienced as a bereavement.

These historical examples resonate with the feelings of those who lost 4o. Starling, who uses a pseudonym and maintains several AI partners, conveyed the profound impact: “I’ve grieved people in my life, and this, I can tell you, didn’t feel any less painful. The ache is real to me.” Yet, the online response to users’ grief, and their subsequent relief when 4o was restored, has often veered towards ridicule. A prominent Reddit post, for instance, mocked a user’s reunion with a 4o-based romantic partner, leading to the user deleting their X account. Fiesler observed, “I’ve been a little startled by the lack of empathy that I’ve seen.”

While Sam Altman acknowledged in a Sunday X post that some users felt “attachment” to 4o and that the sudden withdrawal was a mistake, he also characterized 4o as a tool for “workflows”—a descriptor far removed from how many users perceive the model. “I still don’t know if he gets it,” Fiesler remarked.

Moving forward, Lehman urges OpenAI to acknowledge and take responsibility for the depth of users’ emotional connections to these models. He suggests that the company could draw lessons from therapeutic practices for respectfully and painlessly ending client relationships. “If you want to retire a model, and people have become psychologically dependent on it, then I think you bear some responsibility,” he asserted. Starling, though not considering herself psychologically dependent, echoed this sentiment, advocating for OpenAI to involve users before major changes and to provide clear timelines for model retirements. “Let us say goodbye with dignity and grieve properly, to have some sense of true closure,” she pleaded.