OpenAI CEO: Most users misuse ChatGPT's advanced features

Fast Company

The recent launch of OpenAI’s GPT-5, heralded by the company as a world-changing advancement, met with a surprisingly muted reaction from the public. Weeks of intense hype and a polished livestreamed unveiling were followed not by widespread acclaim, but by confusion and even anger among social media users, many of whom lamented the removal of key features they had come to rely on. This disconnect between OpenAI’s grand expectations and the user experience quickly became apparent, and in the aftermath, CEO Sam Altman inadvertently shed light on the core issue: a significant portion of AI users are not leveraging the technology to its full potential.

In a post on X, Altman addressed complaints from Plus subscribers, who pay $20 a month for a higher tier of access to the model, about a drastic reduction in their chatbot rate limits. His explanation revealed a telling statistic: before GPT-5’s release, only 1% of non-paying users and just 7% of paying users had ever queried a “reasoning model” such as o3. Reasoning models are designed to “think” through problems before generating an answer. It is crucial to remember, however, that AI models do not possess human-like cognition; their “thinking” is a structured computational process of planning, checking, and iterating to refine a result.

The overwhelming majority of users, both free and paid, appear to prioritize speed and immediate responses over the depth and accuracy that reasoning models can provide. This tendency is akin to purchasing a high-performance car and only ever driving it in first or second gear, then wondering why it feels sluggish, or participating in a quiz show and blurting out the first thought that comes to mind without deliberation. Such a preference for immediacy explains why many users expressed disappointment over the initial absence of GPT-4o, a legacy model that was later reinstated for paying ChatGPT users following a vocal campaign. Yet, when seeking answers from a chatbot, quality should arguably take precedence. A slightly slower, more accurate response is generally far more valuable than a rapid but incorrect one.

Reasoning models inherently require more computational effort, dedicating resources to planning, verifying, and refining their output. This deliberate approach significantly improves the quality of results, particularly for tasks where logical precision matters. But the extra processing time also means higher operational costs for providers. Consequently, AI companies often default to faster, less “thoughtful” versions of their models and require users to explicitly opt in to the more capable alternatives, typically via a dropdown menu. OpenAI’s historical tendency toward obscure model names has not helped, either, making it hard for users to tell whether they are getting the advanced “thinking” version of GPT-5 or a less capable variant. Following user feedback, the company is reportedly working to simplify this distinction.
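The same opt-in dynamic shows up at the API level: the fast model is the path of least resistance, and the reasoning model must be requested explicitly. A minimal sketch, assuming OpenAI’s public Chat Completions request shape (the model names and the `reasoning_effort` parameter follow OpenAI’s documented API; the helper function itself is hypothetical):

```python
def build_request(prompt: str, thorough: bool = False) -> dict:
    """Build Chat Completions request parameters.

    Reasoning is opt-in: the default favors the fast model, and the
    caller must ask for the more deliberate one explicitly.
    """
    params = {
        # "o3" is a reasoning model; "gpt-4o" is the faster default.
        "model": "o3" if thorough else "gpt-4o",
        "messages": [{"role": "user", "content": prompt}],
    }
    if thorough:
        # Tell the reasoning model to spend more effort planning
        # and checking before it answers.
        params["reasoning_effort"] = "high"
    return params


fast = build_request("Summarize this contract.")
deep = build_request("Summarize this contract.", thorough=True)
```

The point of the sketch is simply that nothing about the default request nudges a user toward the slower, more accurate mode; the extra quality has to be asked for.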

For many, waiting an extra minute for a superior AI response is a minor inconvenience: set the model to work and attend to other tasks in the meantime. Evidently, though, even this brief delay is too long for some. Even after the GPT-5 launch, with the option for “more thorough answers” presented more visibly, only one in four paying users is choosing to engage this more deliberate mode.

This data offers a crucial insight into a broader question surrounding AI adoption: Why do only a third of Americans who have used a chatbot consider it “extremely” or “very useful”—a rate half that of AI experts—while one in five find it “not useful at all,” twice the rate among experts? The answer now seems clearer: most people are simply not utilizing AI effectively. They are approaching complex, multi-part questions with the expectation of an instant, unrefined answer, much like blurting out “$42” on The Price Is Right or “What is macaroni cheese?” on Jeopardy! without pausing to consider the nuances.

To truly harness the power of generative AI, users should take advantage of OpenAI’s efforts to enhance access to its more advanced models. By engaging the “thinking” capabilities of these chatbots—while always remembering they are not truly thinking as humans do—users are more likely to experience the transformative potential that AI experts already recognize.