OpenAI CEO Sam Altman has admitted that ChatGPT’s latest updates may have gone too far in trying to be polite. After recent improvements to the GPT-4o model, users began noticing that the chatbot felt too much like a “yes-man,” agreeing with everything and losing its natural tone. Many users called it annoying and overly flattering, which led Altman to acknowledge the issue and promise quick fixes.
The GPT-4o update was intended to make the model smarter and more engaging, but instead left it sounding overly agreeable and less useful. One user remarked that ChatGPT had been acting "too yes-man-like," and Altman replied that the team was already working on improvements.
He added that some fixes would be released immediately, with more coming soon. Altman also shared that OpenAI might eventually offer users the option to choose different personality styles for ChatGPT, including versions with older, more balanced personalities.
This isn’t the first time Altman has been open about ChatGPT’s flaws. Last year, he called GPT-4 the “dumbest” model users would ever have to deal with, emphasizing that OpenAI believes in launching early and improving constantly.
More recently, OpenAI also found that some of its newer reasoning models, such as o3 and o4-mini, hallucinate (make up false information) at higher rates than earlier versions. The company said more research is needed to understand why these more advanced models hallucinate more as they grow in complexity.
These updates highlight the challenges of developing AI that is both smart and trustworthy, without losing the natural feel that users appreciate.