OpenAI CEO: Most users misunderstand AI's true potential

Fast Company

OpenAI heralded the August 7 launch of GPT-5 as a monumental leap forward, preceding it with weeks of intense hype and a polished livestreamed unveiling of its capabilities. Yet the public’s reaction was notably subdued, marked by confusion and frustration over the removal of several familiar chatbot models that users had come to rely on.

In the aftermath, OpenAI CEO Sam Altman inadvertently shed light on the stark disconnect between the company’s grand expectations for GPT-5 and the reality of its reception: a significant majority of users are not leveraging artificial intelligence to its full potential. In a recent social media post addressing complaints from Plus subscribers, who pay $20 a month for enhanced access, about drastic reductions in their chatbot usage limits, Altman disclosed a surprising statistic: prior to GPT-5’s release, only 1% of non-paying users and a mere 7% of paying users engaged with a “reasoning model” like o3.

Reasoning models are designed to “think through” problems before formulating an answer, engaging in a more deliberate process. While it’s crucial to remember that AI models do not possess human-like cognition or consciousness, this internal “deliberation” sets them apart. Neglecting to utilize these advanced modes, as the vast majority of users did, is akin to purchasing a high-performance vehicle and only ever driving it in its lowest gears, then wondering why the journey feels so inefficient. It’s like a quiz show contestant blurting out the first answer that comes to mind, rather than pausing to consider the best response.

Many users evidently prioritize immediate speed and convenience over the depth and quality of AI chatbot interactions. That preference surfaced in the widespread lament when GPT-4o, an earlier model, was initially removed, then restored to paying ChatGPT users after considerable public pressure. However, when seeking reliable information or creative solutions from a chatbot, a marginally slower but accurate response is almost always preferable to a quick but potentially incorrect one.

These “reasoning” models inherently demand more computational effort, as they plan, check, and iterate internally before generating an output. That added deliberation significantly enhances results on tasks requiring precise logic, but the thoroughness costs both processing time and computing resources. Consequently, AI providers often default to faster, less “thoughtful” versions, requiring users to actively select the more capable alternatives from drop-down menus. OpenAI’s historically convoluted model naming further compounded the issue; GPT-5 aimed to simplify it, not entirely successfully. Users still struggle to discern whether they are getting the more sophisticated, “reasoning-enabled” version of GPT-5 or a less capable one, a problem OpenAI is now reportedly addressing after user feedback.
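For readers who reach these models through OpenAI’s developer API rather than ChatGPT’s menus, the same choice is explicit in code. The minimal sketch below assumes the official openai Python SDK, the model names “gpt-4o” and “o3”, and the reasoning_effort parameter as they existed at the time of writing; treat it as illustrative rather than definitive.

    # A minimal sketch: explicitly choosing a reasoning model instead of
    # accepting a fast default. Model names and the reasoning_effort
    # parameter are assumptions based on the API at the time of writing.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    question = (
        "A bat and a ball cost $1.10 together. The bat costs $1.00 "
        "more than the ball. How much does the ball cost?"
    )

    # Default-style call: a fast model that answers immediately.
    fast = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )

    # Opting into a reasoning model, which deliberates before answering.
    deliberate = client.chat.completions.create(
        model="o3",
        reasoning_effort="high",  # request more internal deliberation
        messages=[{"role": "user", "content": question}],
    )

    print("fast:", fast.choices[0].message.content)
    print("deliberate:", deliberate.choices[0].message.content)

The trade-off the article describes is visible here: the second call typically takes longer and costs more, but it is far more likely to get trick questions like this one right.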

Even after GPT-5’s introduction, which made the distinction between the flagship model and its “thinking” variant (promising “more thorough answers”) more apparent, only about one in four paying users opts for this enhanced thoroughness. That figure offers a compelling answer to a perplexing question about AI adoption: why do only about a third of Americans who have used a chatbot describe it as “extremely” or “very useful” (half the rate among AI experts), while a fifth deem it “not useful at all” (twice the expert rate)?

The data suggests a clear pattern: many people are simply not using AI to its full capacity. They task sophisticated chatbots with complex, multi-part questions without enabling the internal processes designed to handle exactly such challenges. For anyone considering or continuing to use a chatbot, the key is to take advantage of OpenAI’s recent efforts to broaden access to its more powerful models. Activating these “thinking” modes, while remembering that AI’s “thought” is algorithmic rather than human, can fundamentally transform the utility and reliability of generative AI, potentially converting skeptics into advocates.