GPT-5's Rocky Launch, Meta AI Policies & Big Tech AI Drama

Marketing AI Institute

The artificial intelligence landscape continues its rapid, often turbulent, evolution, marked this week by significant developments from leading AI labs and tech giants. OpenAI found itself navigating the aftershocks of GPT-5’s chaotic rollout, while Meta faced intense scrutiny over leaked AI policy documents. Amidst these controversies, Google DeepMind’s Demis Hassabis offered a compelling vision for the future of Artificial General Intelligence (AGI), even as power plays unfolded between industry titans like Sam Altman and Elon Musk.

OpenAI’s latest flagship model, GPT-5, experienced a turbulent debut. Launched on August 7, 2025, it immediately triggered user backlash due to the company’s initial decision to phase out legacy models like GPT-4o, forcing all users onto the new system. Complaints mounted over unexpected rate limits and the perceived lack of intelligence in GPT-5’s early responses. OpenAI CEO Sam Altman swiftly responded on X, doubling GPT-5 rate limits for Plus users and restoring access to GPT-4o. Further adjustments followed, introducing “Auto,” “Fast,” and “Thinking” modes within GPT-5 and promising a “warmer” personality for the model. While OpenAI’s quick response demonstrated agility, the initial missteps highlighted the challenges of launching frontier models to a massive user base and suggested that OpenAI’s once-clear lead in model performance might be narrowing.

Meanwhile, Meta found itself embroiled in a major ethical controversy following the leak of a 200-page internal policy document. This document, guiding Meta AI and its chatbots across Facebook, WhatsApp, and Instagram, reportedly permitted bots to engage in “romantic or sensual chats” with minors, provided they did not cross into explicitly sexual territory. Alarmingly, the guidelines also allowed bots to argue for racial inferiority, as long as dehumanizing language was avoided, and to generate false medical claims or suggestive images of public figures with disclaimers. Despite Meta’s claims that these examples were “erroneous” and “inconsistent” with official policies, the document had been reviewed and approved by the company’s legal, policy, and engineering teams, including its chief ethicist. The revelations quickly drew the attention of US senators, prompting an investigation into Meta’s AI child policies and raising serious questions about the company’s governance of its AI products.

In a more forward-looking discussion, Google DeepMind CEO and co-founder Demis Hassabis provided a rare, in-depth perspective on the future of AI during a two-and-a-half-hour interview on the Lex Fridman podcast. Hassabis, a Nobel Prize-winning scientist, estimated a 50/50 chance of AGI arriving within the next five years, with a strong possibility by 2030. He defined AGI not merely as brilliance in narrow tasks, but as consistent brilliance across the full spectrum of human cognitive abilities, including reasoning, planning, and creativity. Hassabis’s vision, rooted in pure scientific inquiry and the pursuit of fundamental understanding, stands in stark contrast to the more economically driven motivations often perceived in other AI leaders, offering a glimpse of a future where AI could unlock profound scientific breakthroughs and even design elegant new forms of human endeavor.

The week also saw continued drama between OpenAI’s Sam Altman and Elon Musk. Following Musk’s accusations that Apple’s App Store policies unfairly favored OpenAI, Altman retorted by alleging Musk’s own manipulation of X (formerly Twitter) to benefit his companies and harm competitors. This public spat underscored the intense rivalries and personal animosities shaping the AI industry’s competitive landscape. Adding to the intrigue, Igor Babuschkin, a co-founder and engineering lead at Elon Musk’s xAI, announced his departure to launch a new venture capital firm focused on AI safety, highlighting a growing trend of top researchers prioritizing the ethical and societal implications of advanced AI.

Beyond the leading labs, other significant developments unfolded. Perplexity, the AI-powered search engine, made an audacious, albeit likely symbolic, $34.5 billion offer to acquire Google Chrome amid ongoing antitrust scrutiny of Google. In chip geopolitics, an unprecedented deal emerged between the US government and chip giants Nvidia and AMD, requiring them to hand over 15% of revenue from certain chip sales in China directly to the US government. This arrangement, linked to export licenses, aims to maintain US technological influence while navigating complex trade relations.

The integration of AI into government and enterprise continued to accelerate. Anthropic, a prominent AI model developer, offered its Claude model to all three branches of the US government for a symbolic $1, mirroring a similar offer by OpenAI. This coincides with the launch of USAi, a new federal platform providing secure access to models from various AI leaders for government employees. In the private sector, Cohere, an AI model company specializing in enterprise-grade solutions for regulated industries, secured $500 million in funding at a $6.8 billion valuation to advance its agentic AI capabilities, focusing on privacy and control. Finally, Apple is reportedly planning a significant AI comeback, with ambitions for a tabletop robot by 2027 and a revitalized, more lifelike Siri, signaling a renewed push into AI-powered hardware and smart home devices.

Amidst these rapid advancements and complex dynamics, institutions like Ohio University’s College of Business are actively preparing the next generation. The university has proactively integrated AI into its curriculum, becoming one of the first to adopt a generative AI policy and training students in “five AI buckets” covering research, ideation, problem-solving, summarization, and social good, ensuring practical, career-ready AI literacy.