OpenAI CEO: Most Users Misunderstand AI Capabilities
The August 7 release of GPT-5, OpenAI’s latest generative AI model, met with sharply contrasting reactions. The company, after weeks of intense hype and a polished livestreamed unveiling, heralded it as a transformative leap; social media users responded with confusion and frustration, particularly over the removal of several older models they had come to rely on.
In the aftermath, OpenAI CEO Sam Altman inadvertently revealed how wide the gap was between the company’s expectations for GPT-5’s reception and the reality: a vast majority of users are not using AI to anything near its full potential. Altman disclosed that before GPT-5’s debut, a mere 1% of non-paying users, and only 7% of subscribers to the $20-per-month Plus tier, had ever queried a “reasoning model” such as o3. The disclosure came amid his explanation of drastically reduced access limits for paying users, which had prompted questions about the value they were receiving.
Reasoning models are designed to work through a problem step by step before formulating an answer. These models do not possess human-like cognition, but they do simulate a deeper analytical approach. That the overwhelming majority of users, paying and non-paying alike, opt out of them is akin to buying a high-performance car, never shifting out of second gear, and then wondering why it underperforms.
Many users, it seems, prioritize immediate gratification and convenience over the quality and depth of a chatbot’s answers. That preference was evident in the widespread lament over the temporary loss of GPT-4o, an earlier model that was eventually reinstated for paying users after public outcry. Yet when seeking answers from a chatbot, accuracy and thoroughness are paramount: a slightly slower but correct response is almost always more valuable than a quick but erroneous one.
Reasoning models are engineered to expend greater computational effort on planning, cross-referencing, and refining their responses. That deliberate process markedly improves the quality of results, especially on tasks where logical precision is critical; the trade-off is longer processing time and higher operational cost. Consequently, AI providers often default to faster, less analytical versions and require users to actively select the more capable alternatives through interface options. OpenAI’s history of complex and often obscure model naming conventions compounded the problem, making it difficult for users to discern what different versions could do. GPT-5 attempted to address this, with mixed success, prompting further user complaints and ongoing adjustments from the company.
For many, waiting an extra minute for a superior AI response is a minor inconvenience, easily absorbed by multitasking. For some, though, even that brief delay appears to be a significant barrier. Even after the GPT-5 launch, where the distinction between the default “flagship model” and the more thorough “GPT-5 thinking” option is more apparent, only one in four paying users actively requests comprehensive answers.
This overlooked data point offers a compelling answer to a broader question about AI adoption: why do only about a third of Americans who have used a chatbot consider it extremely or very useful (a rate half that of AI experts), while one in five find it not useful at all (double the expert rate)? The explanation now seems clear: a large segment of the public is underutilizing AI, posing complex, multi-part questions to chatbots without invoking the deeper processing that yields genuinely insightful responses.
To truly harness the power of generative AI, users should take advantage of the broader access to advanced models that providers like OpenAI now offer. By opting for the more analytical “thinking” modes, while remembering that AI does not truly think as humans do, users can unlock a more valuable and reliable experience, one that may transform their perception of AI’s utility.