AI Progress Stalls: Experts Warn of 'Peak AI' After GPT-5 Disappointment

Futurism

The long-anticipated release of OpenAI’s GPT-5 has landed with a notable lack of fanfare, failing to ignite the industry with the revolutionary spark many had hoped for. Despite the private sector continuing to pour billions into artificial intelligence development, driven by the elusive promise of exponential gains, a growing chorus within the research community is voicing profound skepticism. Gary Marcus, a cognitive scientist and long-standing critic of OpenAI, articulated a sentiment increasingly shared across the field: after years of development and staggering investment, AI’s capabilities appear to be stagnating.

While GPT-5 technically registers improved scores on industry benchmarks, metrics that experts have increasingly questioned as reliable indicators of true progress, Marcus contends that its practical utility beyond that of a sophisticated chatbot remains limited. More concerning still, the rate at which new models improve against even these contested benchmarks appears to be decelerating. As Marcus observed to The New Yorker, “I don’t hear a lot of companies using AI saying that 2025 models are a lot more useful to them than 2024 models, even though the 2025 models perform better on benchmarks.” That points to a disconnect between benchmark performance and tangible real-world value.

Since at least 2020, Marcus has advocated for a more pragmatic approach to AI development, one that prioritizes narrower, more focused applications over the current broad “general consumer” strategy. In the United States, major tech firms like OpenAI and Anthropic have predominantly pursued “scalable AI,” a capital-intensive development paradigm that prioritizes rapid growth over the creation of genuinely useful technology. In practice, this has translated into a relentless drive to deploy ever more graphics processing units, demanding vast data centers, immense amounts of energy, and colossal capital outlays. OpenAI CEO Sam Altman argued in 2021 that this investment model should unlock near-exponential improvements in AI capabilities, potentially leading to artificial general intelligence (AGI), the point at which AI matches human-level cognitive abilities.

However, a significant wrinkle has emerged: the technology isn’t advancing at the promised rate. Once a solitary voice in an otherwise ebullient AI community, Marcus is no longer alone in his critique of scalable AI. Just recently, Michael Rovatsos, an AI researcher at the University of Edinburgh, suggested that the release of GPT-5 might mark a pivotal shift in AI’s evolution, heralding “the end of creating ever more complicated models whose thought processes are impossible for anyone to understand.” This follows a March survey of 475 AI researchers, which concluded that AGI was a “very unlikely” outcome of the prevailing development approach. As far back as 2023, Microsoft co-founder Bill Gates told the German publication Handelsblatt that scalable AI had “reached a plateau,” a prescient observation made before the debut of GPT-4o, let alone GPT-5.

Several years on, even the most steadfast financial backers of AI are beginning to confront this sobering reality. Despite a better-than-expected second quarter for CoreWeave, OpenAI’s datacenter partner, Wall Street is increasingly questioning big tech’s capacity to deliver on its ambitious AGI promises. As a tangible indicator of this waning confidence, CoreWeave’s stock recently plummeted by 16 percent. That sharp decline may well be an early sign that the massive, investment-fueled AI boom is beginning to crack.