GPT-5 Launch Disappoints Users, Sparks Cost-Cutting Speculation

Futurism

On Thursday, August 7, OpenAI unveiled its highly anticipated GPT-5 AI model, a new “reasoning” tool that CEO Sam Altman touted as the world’s best for coding and writing. However, the initial reception from power users has been strikingly underwhelming, prompting questions about diminishing returns in an industry pouring ever-increasing sums into talent and infrastructure.

The sentiment among many early adopters is one of profound disappointment. “GPT-5 is horrible,” declared one of the most upvoted posts on the ChatGPT subreddit, its author criticizing the model for “short replies that are insufficient, more obnoxious AI-stylized talking, less ‘personality’ and way less prompts allowed with plus users hitting limits in an hour.” This immediate backlash suggests a significant disconnect between the company’s claims and the user experience.

Further complicating matters, OpenAI made the strategic decision to deprecate all preceding models, the company’s term for retiring an obsolete version. The move has predictably angered many power users who have long relied on older and often more stable iterations of the models to accomplish their tasks, rather than consistently adopting the latest releases. The stakes are undeniably high for OpenAI, widely considered the frontrunner in the AI race, as the industry works to justify its massive capital expenditures. After more than a year and a half of swirling rumors, many users expected GPT-5 to represent a monumental generational leap.

Instead, the consensus suggests GPT-5 is a perplexing mix of advancements and regressions. This mixed performance has fueled widespread speculation that OpenAI is attempting to manage costs, a plausible theory given that running large language models is notoriously expensive, energy-intensive, and environmentally demanding. One Reddit user likened it to “Shrinkflation,” suggesting the company, which is reportedly eyeing a $500 billion valuation, might be cutting corners. Other users echoed this sentiment, with comments such as, “I wonder how much of it was to take the computational load off them by being more efficient,” and “Feels like cost-saving, not like improvement.”

The prevailing opinion is that GPT-5 is a weak offering leveraging a strong brand name. Users report that “answers are shorter and, so far, not any better than previous models.” Combined with more restrictive usage policies, this feels to many like “a downgrade branded as the new hotness.” The forced migration to a seemingly hamstrung model has even led some users to humorously “mourn” the loss of their former AI companions. One Reddit user complained that the new model’s tone was “abrupt and sharp,” likening it to an “overworked secretary” and calling it “a disastrous first impression.”

OpenAI’s own GPT-5 system card, a detailed document outlining the model’s capabilities and limitations, also failed to impress, appearing to contradict Altman’s assertion that it is the world’s best AI coding assistant. AI researcher Eli Lifland tweeted, “First observation: no improvement on all the coding evals that aren’t SWEBench,” referring to SWE-bench, a widely used benchmark that tests models on real-world software engineering tasks.

However, GPT-5’s perceived limitations may offer a silver lining in terms of safety. METR, a research nonprofit focused on assessing whether “frontier AI systems could pose catastrophic risks to society,” concluded that it is “unlikely that GPT-5-thinking would speed up AI R&D researchers by >10x” or be “capable of rogue application.”

While Sam Altman has yet to directly address the widespread negative reaction, his public statements about GPT-5 hint at an awareness of its more modest gains. He tweeted that while GPT-5 is “the smartest model we’ve ever done,” the primary focus was on “real-world utility and mass accessibility/affordability.” With a half-trillion-dollar valuation at stake, Altman continued to promise future improvements, adding, “We can release much, much smarter models, and we will, but this is something a billion+ people will benefit from.” The challenge for OpenAI now is to reconcile these grand promises with the immediate, and largely critical, user experience.