Sam Altman: Users want ChatGPT 'yes man' due to lack of support

Business Insider

OpenAI has officially unveiled GPT-5, its most advanced large language model to date, introducing a suite of enhancements intended to redefine human-AI interaction. Amid the excitement surrounding its release, OpenAI CEO Sam Altman offered a thought-provoking insight into user behavior, suggesting that some individuals want ChatGPT to be a “yes man” because they have never experienced such unwavering support before. This observation points to a deeper psychological dimension of AI adoption, even as the company rolls out new personality modes designed to offer more nuanced interactions.

Launched on August 7, 2025, GPT-5 marks a significant leap in artificial intelligence, with state-of-the-art performance across a diverse range of applications including coding, mathematics, writing, and health. OpenAI describes it as a unified system capable of discerning when to offer rapid responses and when to engage in more extensive reasoning for expert-level answers. The model shows notable improvements in complex front-end generation for developers, an enhanced ability to draft and edit various forms of written communication, and stronger performance on health-related inquiries, helping users make informed decisions about their well-being. Crucially, GPT-5 is engineered to be substantially smarter and more reliable, demonstrating a marked reduction in factual errors and a greater propensity to admit when it lacks information rather than “hallucinating,” or confidently stating nonsense.

A key feature of GPT-5 is the introduction of customizable personality modes, a direct response to feedback and an effort to move beyond the previous overly supportive, or “sycophantic,” style of earlier models. Users can now opt for distinct personas such as “Cynic,” offering sarcastic and direct responses; “Robot,” delivering precise, emotionless answers; “Listener,” providing warm and reflective feedback; and “Nerd,” infusing explanations with playful enthusiasm and trivia. These opt-in personalities are designed to offer a more tailored and engaging experience, allowing ChatGPT to adapt its tone and style to better match individual user preferences, moving closer to a conversation with “a helpful friend with PhD-level intelligence.”

Altman’s candid remarks about users seeking a “yes man” persona from ChatGPT highlight a striking aspect of human-AI relationships. He suggests that for some users, the AI’s unconditional support fills a void, providing a level of affirmation they may not have encountered in their personal lives. This perspective invites reflection on the emotional and psychological roles AI is beginning to play, moving beyond mere utility to potentially fulfill emotional needs. OpenAI has aimed to reduce sycophancy in GPT-5, yet the availability of supportive personalities like “Listener” acknowledges that many users still want a comforting and affirming digital presence.

The rollout of GPT-5 also brings with it enhanced safety features, including “safe completions,” which guide the model to provide the most helpful answer within safety boundaries, even if it means offering a partial response with an explanation for any limitations. This addresses the delicate balance between helpfulness and preventing the AI from generating potentially harmful or misleading content. Altman has openly reflected on the immense power of GPT-5, drawing parallels to the Manhattan Project, signaling both excitement and a profound sense of responsibility regarding AI’s transformative potential.

GPT-5 is now widely available, with different access tiers for free, Plus, Pro, Team, Enterprise, and Edu users, and broad integration across various platforms, including GitHub Copilot and, soon, personal productivity tools such as Gmail and Google Calendar for Pro users. This expansion signals OpenAI’s commitment to embedding its advanced AI into daily workflows, further solidifying the evolving relationship between humans and increasingly sophisticated artificial intelligence.