OpenAI CEO Reveals Why Users Aren't Tapping ChatGPT's Full Potential

Fast Company

The unveiling of GPT-5, the latest model powering OpenAI's ChatGPT, on August 7 drew starkly divided reactions. The company heralded the release as a transformative moment, following weeks of fervent anticipation and a meticulously staged livestreamed demonstration. Social media users, however, responded with confusion and frustration, largely because several familiar and widely used models were removed without warning.

In the aftermath, OpenAI CEO Sam Altman inadvertently shed light on the gulf between the company's expectations for GPT-5 and the public's actual experience. Responding to complaints about sharply reduced rate limits for Plus subscribers, who pay $20 a month for expanded access, Altman revealed a telling statistic: the vast majority of users were not engaging with the AI's full capabilities. Before GPT-5's release, a mere 1% of non-paying users and only 7% of paying subscribers used a "reasoning model" like o3.

Reasoning models are designed to work through problems methodically, deliberating before formulating an answer. It is worth remembering that these models do not possess human-like consciousness or thought processes; the "thinking" is extra computation, not cognition. Still, the comparison is instructive: leaving these capabilities switched off, as the overwhelming majority of users did, is like buying a high-performance car and never shifting out of second gear, then wondering why the drive feels sluggish. Or, to put it another way, it is like appearing on a quiz show and blurting out the first answer that comes to mind rather than pausing to consider the question.

This widespread preference for speed and immediate gratification over depth and quality in AI chatbot interactions explains why many users lamented the initial removal of GPT-4o, a previous model that was later reinstated for paying ChatGPT users following considerable public outcry. While quick responses might seem convenient, the true value of a chatbot lies in the accuracy and insightfulness of its answers. A slightly slower, more deliberate response that is correct almost invariably outweighs a rapid but erroneous one.

Reasoning models inherently require more computational effort, dedicating resources to planning, verifying, and refining their outputs before delivery. This enhanced deliberation significantly improves the quality of results, particularly for tasks where logical precision is paramount. However, this thoroughness comes at a cost, both in terms of processing time and operational expense. Consequently, AI providers typically offer faster, less “thoughtful” versions as the default, requiring users to actively select more capable alternatives via dropdown menus. OpenAI’s past, often opaque, model naming conventions further compounded this issue, making it difficult for users to discern which version offered superior reasoning capabilities. While GPT-5 aimed to simplify this, user feedback indicates that clarity remains a challenge, prompting the company to further refine its interface.
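For readers who reach these models through OpenAI's API rather than the ChatGPT interface, the same trade-off shows up as an explicit choice of model. The sketch below assumes the official openai Python SDK, an API key in the environment, and account access to the gpt-4o and o3 models with the reasoning_effort setting; it illustrates the difference between a default-style call and a deliberately "thinking" one, rather than prescribing a particular setup.

```python
# Minimal sketch: asking the same question of a fast default model and a reasoning model.
# Assumes the `openai` Python package is installed, OPENAI_API_KEY is set, and the account
# has access to gpt-4o and o3 (model names and the reasoning_effort parameter may vary
# by account and SDK version).
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
)

# Default-style call: a fast, non-reasoning model answers almost immediately.
fast = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

# Reasoning-style call: an o-series model spends extra compute deliberating first.
# reasoning_effort ("low", "medium", "high") controls how much it deliberates.
deliberate = client.chat.completions.create(
    model="o3",
    messages=[{"role": "user", "content": question}],
    reasoning_effort="high",
)

print("Fast answer:", fast.choices[0].message.content)
print("Deliberate answer:", deliberate.choices[0].message.content)
```

The deliberate call typically takes longer and costs more per request, which is precisely the trade-off that leads providers to make the faster model the default.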

For many, waiting a minute for a comprehensive AI-generated response, rather than just a second, is a minor inconvenience that can be easily managed by multitasking. Yet, this brief pause appears to be a significant deterrent for others. Even after GPT-5’s launch, which made the distinction between the “flagship” GPT-5 and its more thorough, “thinking” variant more explicit, only one in four paying users opted for the in-depth answers.

This data helps explain a broader trend in AI adoption: only about a third of Americans who have ever used a chatbot consider it extremely or very useful, roughly half the rate reported by AI experts, while one in five find it not useful at all, double the rate among experts. The likely reason is now clearer: a substantial portion of users are approaching AI the wrong way, handing chatbots complex, multi-faceted questions without ever engaging their more deliberative capabilities, much like offering a quick, unverified guess on a demanding game show.

To get the most out of modern chatbots, users should embrace the more advanced reasoning modes. With OpenAI and other providers making these options easier to reach, now is an opportune moment to experiment. By telling these models to "think" (while remembering they are not truly conscious), users can unlock a far richer generative AI experience, and perhaps change their minds about how useful the technology can be.