GPT-5 Smarter After OpenAI Fixes Model Switcher Issues
OpenAI is actively addressing initial deployment challenges with its latest large language model, GPT-5, aiming to deliver immediate improvements in the model’s capacity, quality, and user interface. The company’s CEO, Sam Altman, recently confirmed via X that ChatGPT Plus subscribers will soon see their GPT-5 message limits doubled once the ongoing rollout is complete. This move comes as part of a broader effort to refine the user experience, while also allowing Plus users to continue accessing the previous GPT-4o model, with OpenAI monitoring its usage to inform future availability decisions.
Currently, ChatGPT Plus users are capped at 100 messages every three hours, with the system automatically defaulting to a smaller model once this limit is reached. The specialized GPT-5-Thinking variant, designed for more complex tasks, has its own weekly cap of 200 messages, though switching between the standard GPT-5 and GPT-5-Thinking does not count toward this weekly total. Altman’s announcement suggests users should notice an immediate improvement in performance, with GPT-5 expected to “seem smarter from today.” He attributed earlier inconsistencies, which led the model to occasionally appear “way dumber,” to initial problems with the core automatic model switcher—the system responsible for selecting the appropriate GPT-5 variant for each user prompt.
The comprehensive rollout for all users has proven more intricate and time-consuming than OpenAI initially anticipated. Altman characterized it as a “massive change at big scale,” highlighting that API traffic had approximately doubled within the preceding 24 hours. While some operational friction was expected given the simultaneous implementation of numerous changes, Altman conceded that the launch phase was “a little more bumpy” than the company had hoped.
Beyond immediate fixes, OpenAI is also refining its underlying model routing strategy, which determines how prompts are assigned to specific models. The goal is to ensure a more reliable match between the user’s request and the model best suited to handle it. Altman also indicated future plans to enhance transparency, making it clearer to users which model is currently active. Furthermore, users may eventually gain the ability to manually activate “Thinking” mode directly through the interface, though a timeline for these features was not provided.
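In rough terms, a routing layer of the kind described above assigns each prompt to the variant judged best suited for it. The sketch below is a toy heuristic only: the article gives no detail on how OpenAI's router actually decides, so the keyword and length checks and the model names here are assumptions made for illustration.

```python
def route_prompt(prompt: str) -> str:
    """Toy heuristic router mapping a prompt to a model variant.

    Hypothetical logic: keyword and length heuristics stand in for
    whatever signals a real router would use.
    """
    reasoning_markers = ("prove", "step by step", "derive", "debug", "analyze")
    text = prompt.lower()
    # Route apparently complex, multi-step requests to the reasoning variant.
    if any(marker in text for marker in reasoning_markers) or len(text.split()) > 200:
        return "gpt-5-thinking"
    # Routine requests go to the standard model.
    return "gpt-5"
```

The reliability problem the article describes follows directly from this design: if the classification step misjudges a prompt, the wrong variant answers it, which is why OpenAI is also considering letting users trigger “Thinking” mode manually.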
These strategic adjustments signal a potential shift away from OpenAI’s earlier emphasis on fully automatic model routing. While automated routing simplifies the experience for new users who prefer not to manually select models, it can inadvertently lead to less reliable responses or the deployment of an ill-suited model for a given task, adding another layer of unpredictability to the already complex outputs of large language models. For users who prefer greater control, OpenAI has maintained an option to enable access to “legacy” models through ChatGPT’s settings, which then appear under “Other models.” However, the more advanced GPT-5 “Thinking” and “Pro” variants remain exclusively accessible through a Pro account.