OpenAI CEO: Most users aren't using AI to its full potential
The August 7 release of GPT-5, the latest model powering OpenAI's ChatGPT, was met with starkly contrasting reactions. OpenAI itself had meticulously built weeks of anticipation, culminating in a glitzy livestreamed unveiling that heralded the model as a world-changing advance. Among social media users, though, the response was notably muted, characterized more by confusion and frustration over the removal of several key models that many had come to rely upon.
This significant disconnect between OpenAI’s high expectations and the public’s lukewarm reception was inadvertently illuminated by CEO Sam Altman in the aftermath. His explanation revealed a fundamental truth: a vast number of users are not engaging with artificial intelligence at anywhere near its full capability. In a post explaining why OpenAI appeared to be shortchanging its Plus subscribers, who pay $20 a month for access to a higher tier of the model, Altman disclosed that a mere 1% of free users had queried a “reasoning model” such as o3 before GPT-5’s launch. Among paying subscribers, the figure was still only 7%.
Reasoning models are designed to “think” through problems before formulating an answer. It is crucial to remember, however, that AI models are not human and do not possess human-like cognition, despite the helpful analogy of “thinking.” The overwhelming majority of users, both free and paid, were bypassing these capabilities. It is akin to buying a high-performance car and never shifting out of second gear, then wondering why the drive feels constrained, or to appearing on a quiz show and blurting out the first thought that comes to mind for every question, regardless of accuracy.
Many users, it seems, prioritize immediate speed and convenience over the quality and depth of responses from AI chatbots. This preference was evident in the widespread lament over the temporary loss of GPT-4o, a legacy model that was later restored to paying ChatGPT users following a concerted user campaign. However, when seeking answers from a sophisticated chatbot, accuracy and thoroughness are paramount. A slightly slower, more deliberate response that is correct is invariably more valuable than a rapid but erroneous one.
Reasoning models are engineered to expend greater computational effort on planning, checking, and iterating before delivering their final output. This extended deliberation significantly improves results on tasks where logical precision is critical. Naturally, the process is both slower and more computationally intensive, making it costlier for providers. Consequently, AI developers often serve the more basic, “non-thinking” versions by default, requiring users to actively opt in to the more capable alternatives via a dropdown menu. OpenAI’s previously opaque model-naming conventions added to this complexity, a problem GPT-5 attempted to address, albeit with limited success. Users still struggle to discern whether they are accessing the advanced “thinking” version of GPT-5 or a less capable variant. Acknowledging user feedback, the company is reportedly working on refining this aspect.
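The same default-versus-opt-in split applies to developers who call these models through OpenAI's API rather than the chat interface: the deliberate variant has to be requested explicitly. The Python sketch below illustrates the trade-off; the specific model names and the reasoning_effort setting are assumptions based on OpenAI's published SDK conventions and should be checked against the current API documentation.

```python
# Minimal sketch: a quick default call versus an explicit reasoning pass,
# using the OpenAI Python SDK. Model names and the reasoning_effort value
# are illustrative assumptions; verify them against current API docs.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A bat and a ball cost $1.10 together. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Fast, non-reasoning call: returns almost immediately, but answers with
# whatever the model produces first.
t0 = time.time()
fast = client.chat.completions.create(
    model="gpt-5-chat-latest",  # assumed non-reasoning variant
    messages=[{"role": "user", "content": question}],
)
print(f"fast ({time.time() - t0:.1f}s):", fast.choices[0].message.content)

# Reasoning call: the model spends extra compute planning and checking
# before answering, so it is slower but more reliable on logic-heavy tasks.
t0 = time.time()
slow = client.chat.completions.create(
    model="gpt-5",              # assumed reasoning-capable variant
    reasoning_effort="high",    # request a more thorough deliberation pass
    messages=[{"role": "user", "content": question}],
)
print(f"slow ({time.time() - t0:.1f}s):", slow.choices[0].message.content)
```

The puzzle is chosen deliberately: the intuitive answer is $0.10, but the correct one is $0.05, exactly the kind of slip a slower, deliberate pass is designed to catch.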
For some, waiting a minute rather than a second for a comprehensive AI response might seem a negligible cost; one can simply submit the query and attend to other tasks. Yet for many, even that brief pause proves too long. Even after GPT-5’s release, with the distinction between the “flagship model” GPT-5 and “GPT-5 thinking” (which explicitly promises “more thorough answers”) now more apparent, only one in four paying users is choosing the deeper, more comprehensive option.
This revealing data offers a crucial insight into a broader question about AI adoption: Why do only about a third of Americans who have ever used a chatbot consider it extremely or very useful (half the rate among AI experts), while one in five deem it not useful at all (twice the rate among experts)? The answer now appears clearer: most people are fundamentally misusing the technology, posing complex, multi-part questions to advanced chatbots without leveraging the models’ capacity for more deliberate processing.
To truly harness the power of generative AI, then, users should take advantage of OpenAI’s efforts to make model selection clearer. By consciously switching the AI into its “thinking” modes, while remembering that this is a computational process rather than human thought, they are likely to find the experience far more valuable and compelling. That is the most effective way to engage with modern AI.