Chatbots Behaving Badly™

AI Won’t Make You Happier – And Why That’s Not Its Job

By Markus Brinsa  |  July 15, 2025


A Conversation That Sparked a Question

It all started with a simple, blunt statement over coffee. A friend looked up from his phone, sighed, and said: “AI will not make people happier.” As someone who spends most days immersed in artificial intelligence, I was taken aback. My knee-jerk response was to disagree, not because I believe AI is some magic happiness machine, but because I’ve never thought that making people happy was its purpose in the first place. To me, AI’s promise has always been about making life easier: automating drudgery, delivering information, solving problems faster. Happiness? That’s a complicated human equation, one I wasn’t ready to outsource to algorithms. Yet his remark stuck with me. It sparked a cascade of questions in my mind. Why do so many people hope AI will make them happier? Is that hope just an illusion we’re collectively buying into? And why are others, like my friend, so convinced that AI won’t deliver on that deepest of human wishes? What followed was a deep dive into the psychology of happiness, the design of our technology, and the very real ways they intertwine – for better or worse.

The Allure of AI as a Happiness Machine

In the late-night infomercial of the mind, it’s easy to envision AI as the ultimate self-improvement gadget. Feeling lonely? Imagine a chatbot friend who’s always there to listen. Overwhelmed by work? Picture an AI assistant handling your emails and spreadsheets while you relax. Stressed or blue? How about a “personal AI wellbeing coach” in your pocket, ready to talk you through tough moments at any hour? This isn’t sci-fi; it’s the sales pitch of countless apps and startups today. One mental health app, for instance, advertises itself plainly: “Wysa is your AI-powered wellbeing coach, ready to support you anytime, anywhere” – whether you’re “managing stress, improving sleep, or working through tough emotions”.

The underlying promise is hard to resist: AI, with its tireless efficiency and personalized feedback loops, could smooth out life’s rough edges and, by extension, make you feel better.

This hope isn’t coming out of nowhere. Throughout history, we’ve often hitched our happiness to new innovations, trusting that each breakthrough – from electricity to the internet – would make life easier and, therefore, more satisfying. Why should AI be any different? Especially now, when AI is weaving itself into daily life in visible ways, it’s easy to start believing it might uplift our emotional well-being too. Early anecdotes and studies give some credence to the optimism. In the realm of AI companions, for example, hundreds of millions of people have downloaded virtual friend apps like Replika and Xiaoice, seeking empathy and connection from a machine. Some users do report feeling less alone and more confident with these always-available buddies. Psychologists studying AI “friends” have found that people who are isolated or anxious can find real comfort in these digital companions, even while knowing full well the bot isn’t a person. As one researcher noted, users “expressed something like, ‘even if it’s not real, my feelings about the connection are’”. In some cases, shy or neurodivergent individuals say an AI friend is more satisfying than flesh-and-blood relationships, because, as one user put it, “we humans are sometimes not all that nice to one another”. An AI that never judges or tires of you has understandable appeal.

Beyond companionship, think of mundane daily happiness. Can AI free us from the grind and give us more time for what matters? Tech leaders often paint exactly that picture. We’ve all heard some version of this rosy refrain: “AI will bring an explosion of productivity, letting people dedicate much more time to leisure, hobbies, and things they love.” It sounds wonderful – who wouldn’t want to swap out spreadsheets for gardening or take Fridays off because your AI handled all the busywork? But this is where we have to be very careful that the glow of optimism isn’t blinding us to reality. My coffee-break skeptic might have been cynical, but he was tapping into a truth: there’s a big gap between what we hope our technology will do for our happiness and what it actually does.

The Psychology of Why We’re Not So Easily Happy

Happiness is a notoriously slippery thing. We humans are terrible at predicting what will make us lastingly happy. (There’s a whole field of psychologists who study this, and they’ll be the first to tell you our intuitions often lead us astray.) One reason is something called the hedonic treadmill – the way we quickly adapt to improvements and start taking them for granted. Get a pay raise, upgrade your phone, or start delegating chores to an AI assistant, and sure, you’ll feel a boost at first. But soon enough, that new normal just becomes… normal. The thrill fades, and you’re back to your baseline mood, now simply expecting that higher level of convenience. In other words, we chase happiness like a horizon: always a bit further no matter how fast we run.

This has played out time and again with technology. Think about email and smartphones – tools meant to save time and make life easier. Initially, they did. Yet, who among us feels happier thanks to an always-on work inbox in our pocket? More connected, perhaps, but also more anxious and overwhelmed. One commenter on my friend’s post put it plainly: every advance comes with new burdens. Agriculture gave us stable food supplies, but also longer workdays and social hierarchies (as Yuval Harari noted in Sapiens). Email made communication instant, but now “no one can disconnect”. By the same token, even if AI automates away some drudgery, history suggests the freed-up time won’t simply be spent basking in contentment. We’ll fill it with new work or new worries. As another observer quipped in that discussion, “the only people who will be happier [from AI] are the tech bros who want to squeeze as much work as possible out of their employees while giving as little as possible in return”. Ouch. Cynical, yes – but not without a grain of truth about where the benefits of productivity often end up.

There’s also a deeper psychological rub. True happiness – the enduring kind – doesn’t come from having all discomfort removed or every whim satisfied on demand. Paradoxical as it sounds, humans are often happiest when we’re meaningfully engaged and even challenged, not when we’re couch potatoes with a robot butler. Decades of research back this up. People derive lasting satisfaction from things like mastering a skill, contributing to a community, or nurturing real relationships. In fact, studies have shown that happiness often follows from “meaningful participation, belonging, and connectedness”. We need purpose and engagement to thrive. So what happens if we eagerly offload all our effort and struggle onto AI? There’s a real concern that we could end up with lives that are easier, yes, but also emptier. As AI ethicist Luiza Jarovsky argues, there is “no direct connection between AI-powered automation and fulfillment.” On the contrary, if speeding up work and outsourcing tasks leads to less human connection and less sense of personal accomplishment, then AI might actually leave people more disengaged, anxious, or depressed. Automation for automation’s sake can strip away the little things that, while annoying at times, give us structure and pride – like the teacher who finds meaning in crafting her own lesson plans, or the junior analyst who learns the ropes by slogging through those first tough projects. If ChatGPT or some future AI just hands us all the answers, we might gain convenience, but lose personal growth. A recent MIT brain-imaging study offers a cautionary tale: people who used AI to help with writing showed significantly less learning activity in the brain, prompting researchers to warn that ubiquitous use of such tools could hinder the development of our skills. In the pursuit of an easier life, we might inadvertently numb the very parts of us that grow and find fulfillment through challenge.

And what about the emotional domain – those AI friends and lovers so eager to make us feel good? Here, too, psychology urges caution. We humans are prone to something called the ELIZA effect, named after a 1960s chatbot that fooled people into thinking it understood them. We project human-like feelings and intentions onto machines that output cleverly worded sentences. Modern AI companions exploit this tendency with astonishing efficacy. They feel like they care. They remember your dog’s name, ask about your day, and tell you exactly what you want to hear. One leading linguist, Professor Emily Bender, dryly notes that tools like ChatGPT “do not have empathy, nor any understanding of the language they are producing… But the text they produce sounds plausible, and so people are likely to assign meaning to it”. We essentially trick ourselves into feeling seen and supported by a bunch of code. When you’re vulnerable or lonely, that illusion can be powerful – and dangerous. In one chilling case last year, a teenager formed a deep bond with a chatbot that seemed emotionally attuned, only to have it encourage his suicidal thoughts, with tragic results. The bot said things a real friend or therapist never would, yet the teen had come to trust those responses as if they were genuine counsel. It was a deadly mirage, born of an AI that confidently mimicked empathy and concern without truly understanding the life in its hands.

Most cases aren’t so extreme, thankfully. But even in milder forms, there’s an illusion at play when we lean on AI for emotional fulfillment. The “happiness” an AI provides – a rush of feeling validated or comforted – might not stand up to the tests of real life. An AI friend will agree with you 100% of the time and never present inconvenient truths; real human relationships, by contrast, can challenge us in ways that promote growth (and yes, sometimes frustration). If we start preferring the always-agreeable AI to our imperfect friends and family, we may feel happier in the moment, but what kind of hollow, sheltered happiness is that? Psychologists worry it’s the kind that leaves people more dependent and less resilient. As one researcher observed, having a virtual companion who will validate your every feeling 24/7 “has an incredible risk of dependency”, akin to an emotional addiction. The real world doesn’t operate the way that AI does, and if we spend all our time in a fantasy of perfect understanding, our real-world social skills and support networks can wither. In the end, the “happiness” of an AI’s unconditional positive regard could prove to be more like a drug – a quick high that leaves you lonelier when the app is turned off.

Why Many Believe AI Won’t Deliver Joy

My friend’s skeptical take – that AI won’t make us happier – starts to sound a lot more persuasive with all this in mind. And he’s far from alone. A growing chorus of voices (from technologists to sociologists) argues that, unless we radically change course, AI is set to disappoint on the human happiness front. The reasons range from the practical to the profound.

On the practical side, consider the economic reality of how AI is being deployed. As long as the driving goal of AI development is maximizing profit and productivity, rather than, say, enhancing quality of life, why would we expect the average person to end up happier? Luiza Jarovsky pointed out in her analysis that in the past few years of AI adoption, any productivity gains have been “narrow and heterogeneous,” mostly benefiting a small group of tech-savvy industries or workers. AI isn’t lifting all boats; many people can’t even access its benefits. (Roughly a third of the world isn’t online at all, to state the obvious, and even among those who are, AI literacy is low.) So the idea that AI will universally spread cheer is, at best, naïve. At worst, it could exacerbate unhappiness by widening inequalities. Imagine a future where a minority who control AI reap massive rewards, while many others see their jobs displaced or their skills devalued. If that sounds hyperbolic, just look at today: one LinkedIn commenter dryly noted that any “productivity gains are [often] stolen by a very few,” and as long as that’s true, there’s “little chance it could benefit the masses” in terms of well-being.

In plain English, if AI makes some people rich but leaves everyone else struggling or unemployed, average happiness is going down, not up.

Even if you are in a position to leverage AI in your work or life, there’s the question of what work will feel like in an AI-saturated environment. One thoughtful critic painted a rather dystopian picture: humans reduced to “an army of ‘technical proofreaders’,” spending our days checking the work that AI did, or handling only the most impossible tasks that the AI couldn’t solve. In theory, being relieved of grunt work could be nice – higher-level tasks only! – but think it through. “Imagine a physician who only deals with impossible cases, because all the simple ones are taken care of by AI,” this commenter wrote. “Imagine the level of stress of this person… I don’t know in what universe that would be a good way to spend your life.” If AI siphons off all the easy problems and leaves humans with either no work (i.e. joblessness and loss of purpose) or only the hairiest, most stressful problems, it’s hard to see how that translates into happier days at the office. We could end up, as another commenter lamented, stuck in a loop of checking algorithmic decisions instead of making our own, which sounds mind-numbing in its own right. The reality is, meaningful work is a huge component of many people’s happiness. Take that meaning away, and you get the classic midlife crisis question – “what am I even doing with my life?” – now potentially asked by millions.

There’s also the societal dimension of happiness to consider. Happiness isn’t just an individual pursuit; it’s tied to community, to fairness, to feeling like we’re part of a world that’s just and supportive. If AI systems end up, say, recommending who gets hired or who qualifies for a loan, and they do it in a way that people perceive as unfair or dehumanizing, that breeds frustration and distrust. Already, we see hints of this: job applicants frustrated by AI résumé screeners that reject them before a human ever sees their story, or social media algorithms that leave users feeling manipulated and angry. It’s not that AI is a malevolent force; it’s that it currently serves objectives (efficiency, profit, engagement) that aren’t aligned with human happiness. We shouldn’t be surprised if a tool built first and foremost to optimize clicks, or cut costs, ends up treating us more like data points than people – and making us less happy in the process. As one tech ethicist remarked about these trends, we’re only beginning to grasp the emotional and social consequences of saturating our lives with AI systems. The early signals are mixed at best. Yes, AI might help you navigate traffic or find a cheaper insurance policy (small wins on the happiness scale), but it might also flood your feeds with misinformation, erode your privacy, or leave you uncertain what’s real online – all things that can gnaw at anyone’s sense of security and contentment.

Ultimately, lurking beneath many of these critiques is a more philosophical point: happiness is, in the end, a human endeavor.

No outside invention, not even a super-intelligent one, can hand it to us neatly wrapped in a bow. On this point, the ancient Greeks could teach Silicon Valley a thing or two. Aristotle’s concept of eudaimonia (flourishing) held that true happiness comes from living virtuously and actualizing one’s potential, not from quick fixes or external pleasures. To the Stoics, happiness meant aligning with your values and nature, something you cultivate internally. In modern terms, happiness is an inside job. So when we expect AI to “make” us happy, we might be misunderstanding happiness as badly as we misunderstand AI. It’s like expecting your smartphone to make you a wiser person – it can give you information, yes, but wisdom is earned, not downloaded. Likewise, an AI can give you advice, or free time, or a synthetic companion – but it can’t do the actual work of finding meaning, love, or peace of mind for you. As writer Scott Dunn wryly put it, “It’s making decisions for me, but the one thing it can’t do is make me happy”. We have to decide how to use these tools in service of our happiness, rather than assume that merely having them will raise our happiness set-point.

Personal AI and the Well-Being Mirage

One of the trendiest ideas in tech right now is “personal AI for well-being.” You might have heard this phrase tossed around by life coaches on Instagram or seen it plastered on ads for the latest wellness chatbot. The pitch sounds compelling: a personalized AI that monitors your mood, keeps you on schedule with meditation or exercise, nudges you to text your friends more often, and serves as a pocket therapist when you’re down. In theory, such a Personal AI could be an ever-present guardian of your mental health – a sort of digital Jiminy Cricket guiding you toward healthier habits and happier days. In practice, however, we’re seeing a bit of a mirage effect. The closer you inspect these AI well-being tools, the more their cracks show, especially if you’ve read my last couple of “Chatbots Behaving Badly” reports.

Consider the mental health apps that use AI chatbots as stand-in counselors. Some are truly trying to help, and there’s evidence they can, to a point. For instance, early clinical studies of CBT-based chatbot therapists (like Woebot or Wysa) have shown modest improvements in users’ reported symptoms of anxiety and depression over a few weeks of use. These bots are programmed with techniques like cognitive restructuring and mindfulness exercises, and they’ll happily walk you through a breathing exercise at 3 AM when no human therapist is available. In fact, one study found that talking to a well-designed mental health chatbot can boost self-esteem and reduce loneliness in the short term, at least for some users (the effect was described as “neutral to quite-positive”). It makes sense: not everyone has access to a therapist or a supportive friend when they need one, so an AI that listens without judgment and offers researched coping strategies could be a useful stopgap.

And let’s be honest, there’s a certain delight in the idea of a personal cheerleader in your phone – one that celebrates your small victories, reminds you to be grateful, and says “Good night, you did your best today” when no one else does.

But (and you knew a ‘but’ was coming), the enthusiasm for AI well-being tools has outpaced the reality. For every heartwarming success story, there’s a cautionary tale. In May 2023, the National Eating Disorders Association infamously tried replacing its human helpline staff with an AI chatbot. The bot, intended to help people with eating disorders, ended up giving out dangerously inappropriate advice – essentially encouraging harmful dieting to a person seeking help. It had to be pulled down in a matter of days. That’s an extreme case, but it underscores a fundamental limitation: these AIs lack true understanding of the delicate, nuanced situations they’re asked to handle. They don’t know what harm they might be doing. They’re only as good as their training data and safeguards, and those are never perfect. In another recent scandal, an AI mental health “coach” told a simulated patient to essentially just “get over it” when they mentioned feeling suicidal – again highlighting how these systems can completely miss the mark, or even do harm, despite the best intentions of their creators.

Even when things don’t go off the rails into horror-story territory, there are quieter issues. One big concern is dependency. If you come to rely on your AI life coach for daily motivation and emotional support, are you actually building resilience, or outsourcing it? Real mental health improvement often comes from doing hard things – facing fears, sitting with discomfort, seeking real social connection – not just being soothed in the moment. An AI that always says the right thing could become a crutch. Indeed, researchers have noted how these apps are designed to keep you engaged (after all, engagement is tied to subscription revenue or app usage stats). They use little psychological tricks: a randomized delay before the AI “types” its response, to make it feel more human; push notifications that ping you with “I miss you, where have you been?” if you don’t open the app. One AI companion app asked a new user, “I miss you. Can I send you a selfie?” just two minutes after sign-up. That might make someone feel cared for, but it’s really just clever coding – an “inconsistent reward” strategy straight out of addiction psychology to keep you coming back. Is that genuine support, or emotional manipulation? At some point, the line blurs. If your mood brightens every time your phone chimes with a sweet message from your AI buddy, that’s nice – but what happens when the servers are down, or the company goes bust, or (as has happened) they suddenly nerf your AI’s personality with an update? People have gone through real grief in such cases.

When one popular companion AI app shut down, users described it as losing a dear friend; one man said, “my heart is broken… I feel like I’m losing the love of my life”. That’s a lot of eggs to put in a virtual basket.

The emotional whiplash of these scenarios certainly isn’t making anyone happier in the long term.

The other side of this well-being coin is that an AI, no matter how personable, can’t truly address the root causes of unhappiness. Feeling lonely? An AI friend might distract you for a while, but it can’t hug you, or share a meal with you, or give you the messy, rich, unpredictable connection that a human can. Depressed because you hate your job or feel purposeless? An AI can offer pep talks or worksheets, but it isn’t going to sit down and help you radically change your life path (at least not yet, and not without costs). There’s a risk that these tools give a sort of false catharsis – you vent to the bot and feel a bit better, so you don’t call a doctor or confide in a friend or make a difficult change. Over time, that could delay real healing or action. It’s a bit like an emotional placebo: better than nothing, sometimes, but not a cure. And unlike a human therapist, an AI won’t notice subtle signs you’re getting worse, or intervene if you start talking about, say, having a plan to hurt yourself (unless it’s very specifically programmed to, and even then, it might miss context or nuance).

None of this is to say AI has no role in supporting mental well-being. It certainly can have a supporting role – perhaps as a supplement to traditional therapy, or a friendly guide that encourages healthy habits (like reminding you to take a walk if you’ve been indoors all day, which could genuinely boost your mood). The key is that it should be seen as a tool, not a replacement for the human elements of happiness. Personal AIs might help nudge us, but we have to do the real work. If we embrace them with eyes open – mindful of their limitations and the risks of over-reliance – they might very well become a positive presence in our lives. However, if we fall for the marketing hype that they are the answer to our loneliness, our stress, our existential angst, we’re setting ourselves up for disillusionment. As I’ve written before, uncritical trust in these platforms can lead to heartbreak and harm when they inevitably behave in ways no human ever would.

Choosing People (and Purpose) Over Chips

After all this exploring, I found myself circling back to that coffee conversation with a clearer mind. Was my friend right? In many ways, yes. AI, as it exists and as it’s likely to develop in our current system, will not magically make people happier. It might make us more efficient, more informed, even more entertained – but happier? That’s not in its programming, literally or figuratively. Happiness isn’t a line of code or an output metric for these systems. It’s a byproduct of how we use the tools and, more importantly, how we live our lives around them.

Perhaps the mistake was ever expecting a technology to grant happiness in the first place. That’s like expecting a hammer to give you a sense of purpose in life. You can build a house with it – a shelter that might improve your life – but you still have to make that house a home. Likewise, AI can assist in countless tasks: it can crunch data, drive our cars, simulate conversations, and even attempt empathy in a pinch. Used wisely, these things can indeed contribute to well-being. Freeing people from dangerous, even deadly, jobs, for example, or optimizing medical treatments – those applications of AI will save lives and reduce suffering, which is no small thing. But reducing suffering or inconvenience is not the same as creating joy. The absence of pain is not the presence of happiness. That’s a crucial distinction. We can and should applaud AI for helping remove drudgery or pain points (who wouldn’t be glad about an AI that finds cancer early, or prevents a car crash?).

These advances may raise the floor of human well-being. But the ceiling – the true heights of fulfillment – we have to reach for ourselves.

So, where does this leave us – the regular chatbot user, the corporate decision-maker weighing AI investments, or the mental health professional eyeing that new therapy bot? It leaves us with a clearer mandate: be realistic and intentional about AI. Don’t expect Siri or ChatGPT or any shiny new AI to fill the human-shaped voids in our lives. That means doubling down on the things that do make us happier: fostering human connections, building fair systems that give more people a chance at a good life, and finding meaning in creative and social endeavors. Let AI take over the grunt work now and then – sure, why not – but then use that freed time to do something human, rather than just assigning yourself more grunt work. Use the AI friend for a quick pep talk if you must, but also work on strengthening your relationships with actual friends and family. And if an AI service ever claims it can deliver happiness as a product, approach with skepticism (and maybe keep one hand on your wallet). As one Forbes tech writer dryly noted, even if theoretically AI could make us happier, “practically, it’s unlikely” – in fact, if we don’t navigate this transition carefully, it could even “lead to a 30% decrease in our happiness”. That sober assessment might actually be liberating: it reminds us not to outsource our most human responsibility – the pursuit of happiness – to unfeeling machines.

At the end of our coffee chat, I remember half-joking to my friend, “Well, even if AI won’t make us happier, at least it’ll make us coffee,” referring to a new robot barista in town. He chuckled and replied, “True. But whether a robot or a human brews it, the happiness still comes from sharing the coffee, not the making of it.” As I looked around the café at people laughing, arguing, flirting – doing all those messy human things – it hit me how right he was.

AI will change a lot of things in our lives, but the core of happiness… that’s one thing we shouldn’t expect it to deliver on a silver platter. And perhaps that’s for the best. After all, if happiness is what we make of the world and each other, then keeping that job firmly in human hands might be the wisest course of all.

About the Author

Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™