OpenAI CEO: Most users miss ChatGPT's advanced reasoning features
The highly anticipated launch of OpenAI’s GPT-5 on August 7 met with starkly divided reactions. After weeks of intense hype and a polished livestreamed unveiling, the company heralded its latest ChatGPT iteration as a world-changing advancement; social media users responded instead with confusion and frustration, largely because several popular legacy models were unexpectedly removed.
In the aftermath, OpenAI CEO Sam Altman inadvertently shed light on the gulf between the company’s expectations for GPT-5’s reception and the reality: a vast majority of users are not engaging with artificial intelligence to its full potential. In a post explaining adjustments to rate limits for Plus subscribers, who pay $20 a month for a higher tier of access, Altman revealed a striking statistic: before GPT-5’s release, only 1% of non-paying users and a mere 7% of paying users actively queried a “reasoning model.”
Reasoning models are designed to “think through” problems before formulating an answer, engaging in a more deliberate computational process: planning, checking, and iterating to refine a result, particularly for tasks where logical accuracy is paramount. It is crucial to remember, though, that despite the term “thinking,” AI models do not operate with human cognition or consciousness. That the overwhelming majority of users, free and paid alike, eschew these more capable models is akin to buying a high-performance car, driving it only in first or second gear, and then wondering why it seems to underperform. Settling for the fast default is like a quiz-show contestant blurting out the first answer that comes to mind, regardless of accuracy.
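To make the idea concrete, here is a minimal sketch of that plan-check-iterate loop, built on top of an ordinary (non-reasoning) model using OpenAI’s Python SDK. It only imitates the outer shape of the process; real reasoning models perform this deliberation internally, via hidden chains of thought, and the model name and prompts below are assumptions for illustration, not OpenAI’s actual method.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def deliberate_answer(question: str, rounds: int = 2, model: str = "gpt-4o-mini") -> str:
    """Draft an answer, then repeatedly critique and revise it (illustrative only)."""
    # Plan/draft: produce a first attempt.
    draft = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Answer step by step: {question}"}],
    ).choices[0].message.content

    for _ in range(rounds):
        # Check: ask the model to look for flaws in its own draft.
        critique = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": f"Question: {question}\nDraft answer: {draft}\n"
                           "List any logical or factual errors in the draft.",
            }],
        ).choices[0].message.content

        # Iterate: revise the draft in light of the critique.
        draft = client.chat.completions.create(
            model=model,
            messages=[{
                "role": "user",
                "content": f"Question: {question}\nDraft: {draft}\n"
                           f"Critique: {critique}\nWrite a corrected final answer.",
            }],
        ).choices[0].message.content

    return draft

if __name__ == "__main__":
    print(deliberate_answer(
        "A bat and a ball cost $1.10 together; the bat costs $1.00 more than the ball. "
        "How much does the ball cost?"
    ))
```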
Many users, it seems, prioritize immediate speed and convenience over the quality and depth of AI chatbot interactions. That preference was evident in the widespread lament over the temporary removal of GPT-4o, a legacy model later reinstated for paying ChatGPT users after a concerted user campaign. Yet when people ask a sophisticated AI for answers, accuracy and thoroughness are often what matter most: a slightly slower, more deliberate response is frequently better than a quick but incorrect one.
The inherent trade-off with reasoning models is that their extra deliberation demands more computational effort, making them slower and more costly to operate. Consequently, AI providers typically serve faster, less computationally intensive versions by default and require users to actively opt into the more thorough alternatives, often via a dropdown menu. OpenAI’s historically opaque model naming conventions further complicated this choice, making it hard for users to tell whether they were using the more capable, “thinking” version. The company has since begun tweaking its naming in response to user feedback.
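The same opt-in exists for developers calling the models directly. The sketch below, which assumes the OpenAI Python SDK and a reasoning-capable model that accepts a reasoning_effort setting (the model names are illustrative), contrasts accepting the fast default with explicitly asking for more deliberation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = "Our team of 5 must cover 3 shifts a day for 7 days, max 8 shifts each. Draft a fair rota."

# The fast, cheaper default: no extra deliberation requested.
quick = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": question}],
)

# Opting in: a reasoning model asked to spend more effort before answering,
# trading latency and cost for a more carefully checked result.
thorough = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",
    messages=[{"role": "user", "content": question}],
)

print("Quick:", quick.choices[0].message.content)
print("Thorough:", thorough.choices[0].message.content)
```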
Even with GPT-5’s release, which makes the distinction between the flagship model and its “thinking” mode (offering “more thorough answers”) more apparent, only about one in four paying users currently chooses to prioritize thoroughness. For many, waiting a minute rather than a second for an AI response is evidently too long, even though they could simply do something else while the model works.
This user behavior provides a compelling answer to a significant question regarding AI adoption: why do only about a third of Americans who have used a chatbot consider it “extremely” or “very” useful (a rate half that of AI experts), while one in five find it “not useful at all” (twice the rate among experts)? The underlying issue is clear: a large segment of the public is fundamentally misusing AI. They are attempting to tackle complex, multi-part questions without allowing the AI to engage in the necessary computational “thought” process, effectively treating a sophisticated tool as a simple, instant answer machine.
To truly leverage generative AI, users should take advantage of providers’ efforts to make more capable models accessible. Engaging the AI in its “thinking” modes, while remembering that it’s not human thought, can unlock significantly more valuable and accurate results, fostering a more productive relationship with this evolving technology.