GPT-5 backlash & vibe coding: Vergecast's AI reality check

The Verge

The recent launch of OpenAI’s GPT-5, the latest iteration of its large language model, arrived amid high expectations and has since drawn considerable backlash. While the model promised a leap forward in artificial intelligence capabilities, particularly in coding, its rollout has proven bumpy, prompting a closer look at its practical applications, the broader landscape of corporate tech maneuvers, and AI’s evolving role in daily life.

One of GPT-5’s most touted features was its enhanced coding prowess, marketed under the concept of “vibe coding.” OpenAI suggested that this improved ability would let even average users prompt their way to helpful, interactive experiences. A recent experiment put that promise to the test, however, and found the tool isn’t ready for those without a foundational understanding of coding. Participants, none of whom had prior “vibe coding” experience, attempted to build their own projects. Despite OpenAI’s emphasis on user-friendly, AI-driven development, the endeavor quickly devolved into a series of misadventures, underscoring that the gap between AI’s potential and its accessibility to the truly uninitiated remains significant.

Beyond the challenges of “vibe coding,” GPT-5’s launch has been marred by a broader user backlash. Users voiced dissatisfaction with the new model, leading OpenAI to take corrective measures. The company committed to not removing older models without warning, specifically bringing back the popular 4o option, which many users missed. Furthermore, OpenAI announced plans to update GPT-5’s “personality” in response to user feedback, and CEO Sam Altman publicly addressed what went wrong with the model’s initial performance graphs, acknowledging the issues. Amidst these public relations challenges, OpenAI also reportedly awarded some employees “special” multimillion-dollar bonuses, a move that drew mixed reactions.

Meanwhile, the tech industry has been buzzing with corporate drama, a mix of ambitious strategic plays and possible publicity stunts. Perplexity, an AI-powered search engine, made headlines with an audacious $34.5 billion offer to acquire Google Chrome, a move that, if successful, would reshape the browser landscape. Apple, a perennial fixture in legal news, found itself embroiled in multiple disputes: suing a chain of independent theaters called Apple Cinemas for trademark infringement, continuing its refusal to settle a long-standing patent dispute with medical tech company Masimo over blood oxygen monitoring in its Apple Watches (a feature Apple later reinstated), and facing a lawsuit threat from Elon Musk, who accused the tech giant of rigging App Store rankings. These incidents collectively paint a picture of an industry where competition is fierce and legal battles are as common as product launches.

The discussion also turned to the practicalities and pitfalls of emerging technologies, particularly smartwatches and the broader implications of artificial intelligence. Can a smartwatch, even an LTE-enabled one, truly replace a smartphone? One participant described the experiment as “humbling”: smartwatches offer convenience, but they still fall short as a complete phone substitute. The conversation then shifted to a deeper concern about AI, namely its trustworthiness. Instances where medical AI tools led doctors to misinterpret results, or where Google’s healthcare AI fabricated a body part, underscored the critical need for human oversight and skepticism. The inherent opacity of large language models, with chatbots not “telling their secrets,” raised further questions about their reliability and potential for misinformation. As AI becomes more integrated into critical fields, understanding its limitations and capacity for error becomes paramount.

The current tech landscape, therefore, is a dynamic interplay of groundbreaking AI promises, the challenging realities of their implementation, intense corporate rivalries, and the ongoing evolution of personal devices, all set against a backdrop of increasing scrutiny over AI’s ethical and practical implications.