ChatGPT adds Auto, Fast, Thinking modes for GPT-5 control
OpenAI is giving ChatGPT users direct manual control over how the GPT-5 models handle their requests, a significant departure from the automatic routing system that accompanied the initial rollout of its latest AI. Users can now toggle between “Auto,” “Fast,” and “Thinking” modes, and paying subscribers regain access to the popular GPT-4o.
This shift comes in direct response to a wave of criticism following GPT-5’s debut. Many users found the previous automatic model selection opaque and unreliable, particularly when established models like GPT-4o seemingly vanished without warning. Concerns also mounted that the routing system disproportionately steered resource-intensive queries towards less capable or cheaper models, limiting access to the full “reasoning” variant of GPT-5. Indeed, some paying customers initially reported sharp reductions in their quota for these advanced reasoning requests.
OpenAI CEO Sam Altman acknowledged these concerns, stating that while “most users will want Auto,” the manual options provide essential flexibility “for some people.” In a user interface tweak designed to enhance transparency, hovering over the “Regenerate” button in ChatGPT now reveals precisely which model generated the current response. In a further response to user feedback, OpenAI has temporarily raised the message limit for “GPT-5 Thinking” mode to 3,000 messages per week; once that limit is reached, users are automatically switched to the smaller “GPT-5 Thinking mini” model. The company stresses that this higher cap is a temporary adjustment.
For paying users, GPT-4o is now accessible once more via the model picker, and OpenAI has committed to providing notice should it decide to phase out the model again. A new “Show additional models” button in ChatGPT’s web settings also allows paid accounts to select models such as o3 and GPT-4.1, as well as GPT-5 Thinking mini. Notably, the highly capable GPT-4.5 remains exclusive to Pro users, a restriction Altman attributes to the significant computational resources the model requires.
Despite these updates aimed at user control and transparency, core criticisms of GPT-5 persist within the AI community. Prominent large language model skeptic Gary Marcus recently characterized GPT-5 as “overdue, overhyped and underwhelming,” asserting that it represents a rushed incremental update rather than a genuine technological breakthrough, and pointing to observed errors in physics, chess, and image analysis. His critique is echoed by a recent study from Arizona State University, which found that “chain of thought” reasoning—a capability often highlighted as a strength of large language models—proves fragile and unreliable when applied outside its training data, a flaw Marcus also identifies in other models such as Grok and Gemini. For Marcus, the launch of GPT-5 does not signify a milestone on the path to artificial general intelligence (AGI), but rather a moment that could sow doubt among even tech insiders about the industry’s aggressive scaling strategies.
Yet, even as these debates unfold, leading AI laboratories like OpenAI and Google DeepMind are deploying similar foundational technologies to tackle complex mathematical and logical problems that seemed insurmountable just months ago. While specific details remain under wraps, these developments suggest that despite their inherent flaws, today’s language models are capable of surprising feats, indicating vast unexplored territories within the technology.