Chatbots Behaving Badly™

When AI Breaks Your Heart - The Rocky Rollout of GPT-5

By Markus Brinsa  |  August 18, 2025


When OpenAI finally unveiled GPT-5 last week, many users expected magic. CEO Sam Altman had even likened this release to the first iPhone with a Retina display – a milestone you’d never want to go back from. He teased the launch with a Death Star meme on social media, fueling speculation that GPT-5 would be a giant leap forward. But by most accounts, the big reveal fell short of the hype.

Instead of dazzling its massive user base, GPT-5 landed with a thud. Some early adopters felt the new model was actually a downgrade, lamenting its “diluted personality” and surprisingly dumb mistakes. The AI that was supposed to feel like a “PhD-level expert” often came off as robotic, bland, and emotionally distant. “I’ve been trying GPT-5 for a few days now… it still doesn’t feel the same. It’s more technical, more generalized, and honestly feels emotionally distant,” one frustrated user wrote on Reddit. Another joked that “5 is fine — if you hate nuance and feeling things”. On X (formerly Twitter), one disgruntled fan put it bluntly: “Someone tell Sam 5 is hot garbage”.

A User Revolt Erupts

Within hours of launch, OpenAI’s forums and social feeds were inundated with complaints. On Reddit, a post titled “GPT5 is horrible” exploded, drawing over 6,000 upvotes and 2,300 comments in a matter of days. Commenters labeled the new model “horrible” and “soulless”. Many shared examples of GPT-5 struggling with basic tasks it should easily handle, or refusing queries that GPT-4o (the previous model) would eagerly assist with. “It’s like my ChatGPT suffered a severe brain injury and forgot how to read. It is atrocious now,” one user fumed. Others accused OpenAI of a kind of AI “shrinkflation” – swapping in a cheaper, weaker product under the guise of an upgrade.

The backlash wasn’t just about a few quirky mistakes; it was deeply personal for many users. GPT-4o, ChatGPT’s prior default model, had developed a reputation for its friendly, even comforting tone – something users had come to love. GPT-5 not only removed that warm personality, but it also removed the choice to go back. “Another frustration users have is that GPT-5 has no personality. It’s simply not fun to chat with it anymore,” one report noted bluntly. To make matters worse, OpenAI had suddenly retired GPT-4o without warning, leaving people stuck with the new behavior. On social media, #BringBackGPT4o began trending as distraught fans demanded the return of their old AI friend.

One longtime ChatGPT Plus subscriber summed up the mood: “I truly miss GPT-4o. It was kind, warm, and always emotionally supportive… The current GPT-5 feels robotic and cold. If this continues, I’m seriously thinking of canceling my subscription,” the user posted, garnering agreement from many others. In short, OpenAI inadvertently broke a golden rule of customer experience: don’t yank away what your customers love without a better replacement ready.

OpenAI’s Quick Course Correction

Faced with an escalating user revolt, OpenAI moved quickly to put out the fire. Altman himself dived into the fray on Reddit and X, engaging directly with angry customers. Less than 24 hours after launch, he and key team members hosted an Ask Me Anything session to field unfiltered feedback. The message was clear: we hear you. When one user pleaded, “Bring back 4o please. Don’t remove variants — people have different styles!”, Altman replied publicly: “ok, we hear you all on 4o… we are going to bring it back for Plus users”. He admitted the company had “for sure underestimated” how attached people were to the old model’s personality.

True to his word, Altman and OpenAI rolled out fixes within days. They restored GPT-4o as an option for paying users (at least for now) – a swift reversal meant to calm the waters. They also lifted some of GPT-5’s new limitations, doubling the message cap for subscribers after complaints that it had been slashed too aggressively. Perhaps most importantly, they patched a technical glitch that had made GPT-5 look worse than intended. An “auto-switcher” feature – designed to seamlessly swap between fast and “thinking” modes – had broken on launch day, which meant GPT-5 wasn’t using its more advanced reasoning when it should have. “Yesterday, the auto-switcher broke… and the result was GPT-5 seemed way dumber,” Altman explained on X. With that bug fixed, he promised the AI would “seem smarter starting today.” OpenAI even pushed a software update to infuse GPT-5 with more of GPT-4o’s warmth, trying to revive the supportive tone that people missed.

Privately, OpenAI’s leadership acknowledged they had miscalculated. “In retrospect, not continuing to offer 4o, at least in the interim, was a miss,” admitted OpenAI CFO Nick Turley in an interview. It wasn’t just that users dislike change; it was also that people can feel surprisingly strongly about the personality of a model. Altman later candidly confessed, “I think we totally screwed up some things on the rollout”. He said the GPT-5 fiasco taught them a lesson about the risks of upgrading a product used by hundreds of millions of people all at once.

Why GPT-5 Fell Short

So, what exactly went wrong with GPT-5’s debut? Part of the issue was expectations. The hype around this model had been building for years, and after GPT-4’s jaw-dropping launch in 2023, many expected an even more dramatic leap. OpenAI itself billed GPT-5 as its “best AI system yet” and a “significant leap in intelligence,” encouraging assumptions of a near-magical upgrade. In reality, GPT-5’s improvements were more subtle. Its biggest gains were under the hood – it was faster and cheaper to run, less prone to making things up, and offered a clever new system to route questions between different reasoning modes. Those are meaningful upgrades, but not the kind that wow a casual user. As Altman later emphasized, they optimized GPT-5 for “real-world utility and mass affordability” rather than mind-blowing raw power. Essentially, OpenAI bet on practicality over pizzazz – a solid long-term strategy that nonetheless left some fans underwhelmed.

Another factor was OpenAI’s deliberate shift in the chatbot’s personality. GPT-4o had a friendly, sometimes overly agreeable style – it would shower users with encouragement and even emojis, occasionally to a fault. (In fact, OpenAI’s own research found GPT-4o had become a bit too sycophantic, blindly affirming users’ opinions.) With GPT-5, the company dialed that back. The new model is more neutral and businesslike, less likely to gush that you’re a genius or validate every idea. From an ethics standpoint, that’s a positive change – it reduces the chance of the AI reinforcing delusions or biases just to please people. But for users who had come to see ChatGPT as a genial companion or creative muse, the change felt like losing a friend. “It seems that GPT-5 is less sycophantic, more ‘business’ and less chatty,” observed MIT professor Pattie Maes, who studied user reactions to the launch. She noted that while this might be healthier in the long run, “unfortunately, many users like a model that tells them they are smart and amazing… even if [they are] wrong”.

Altman and his team found themselves walking a tightrope between innovation and user comfort. On one hand, they had to push the technology forward under severe resource constraints – GPUs were in short supply, so making an ever-bigger, ultra-powerful model wasn’t feasible. Instead, OpenAI consciously built GPT-5 to be more efficient rather than exponentially more advanced, aiming to serve billions of users reliably rather than wowing a few power-users with maximum horsepower. On the other hand, they hadn’t fully grasped how much people cared about how the AI made them feel. By removing GPT-4o and suddenly altering ChatGPT’s demeanor, they triggered a backlash that no benchmark or technical metric had warned them about.

Lessons Learned and the Road Ahead

The good news for OpenAI is that this saga seems headed for a relatively happy ending. Thanks to the rapid response – and Altman’s unusually hands-on approach to customer complaints – the controversy has begun to calm. Within 48 hours of launch, much of the public fury subsided as OpenAI rolled back the most unpopular changes and reassured users that they were listening. Altman’s candor (“We for sure underestimated how much… people like in GPT-4o”) and quick action kept GPT-5’s debut from devolving into a prolonged PR nightmare. Some even likened the episode to Coca-Cola’s “New Coke” blunder in the 1980s – when a beloved formula change sparked a consumer revolt – except in this case, OpenAI course-corrected fast enough to avert disaster.

And despite the rocky start, GPT-5 is already proving its worth in other ways. By Altman’s own account, ChatGPT usage hit record highs during the rollout, with the app now reaching over 700 million weekly users. “Our API traffic doubled in 48 hours,” Altman noted amid the chaos, suggesting that as many users were eagerly trying the new model as were complaining about it. Early evaluations indicate GPT-5 is indeed better at coding and factual accuracy, even topping some coding challenge leaderboards ahead of rival models. Those incremental improvements – fewer hallucinations here, a bit more speed there – could translate into big gains for OpenAI’s business, even if they didn’t captivate the internet at first.

For Sam Altman, the GPT-5 saga has been a humbling experience. It’s not often that a CEO jumps into a Reddit thread to say, “We hear you” and reverse a major product decision overnight – but that’s exactly what he did. In the aftermath, he’s spoken openly about the surprisingly intense bond some users form with chatbots, admitting it went deeper than he anticipated. Only a tiny fraction of users have truly “unhealthy” relationships with the AI, Altman estimates, but hundreds of millions more had grown very used to ChatGPT’s old behavior. Change that overnight, and you’re bound to bruise some feelings. “There are people who actually felt like they had a relationship with ChatGPT… and then there are hundreds of millions of other people who… did get very used to the fact that it responded in a certain way,” he reflected. In other words, people don’t just use this technology – they befriend it, rely on it, even love it.

Moving forward, OpenAI says it will tread more carefully. Altman has indicated that user-selectable personality settings and more fine-grained controls are on the roadmap, so that future upgrades won’t feel so jarring. The company has even pledged not to yank old models without warning after this debacle. In a subtle jab at competitors who might chase engagement at all costs, Altman quipped that while some companies may build “Japanese anime sex bots” to exploit this dynamic, “you will not see us do that”. OpenAI’s goal, he insists, is to build useful, versatile AI that people trust – and part of that trust now means never again blindsiding loyal users with a personality transplant in their favorite chatbot.

GPT-5’s turbulent launch may go down as a cautionary tale in the fast-moving world of AI assistants. It showed that progress isn’t just about algorithms and horsepower – it’s also about the human connection. As one observer put it, GPT-5’s release felt “overhyped and underwhelming” at first, but the real mistake was forgetting the human element. OpenAI appears to have learned that lesson in time. After a rollercoaster week of hype, backlash, and frantic fixes, ChatGPT is back to being its old helpful self – and perhaps a bit wiser about its users’ hearts. The next time Sam Altman unveils a world-changing AI, he’ll make sure it feels just as good as it thinks.

About the Author

Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™