Chatbots Behaving Badly™

Executive Confidence, AI Ignorance: A Dangerous Combination

By Markus Brinsa  |  July 20, 2025


Somewhere in a glass-walled boardroom last quarter, a CEO finished a sentence with the phrase “…to free up staff for more meaningful work,” and the room nodded. The CFO smiled. The HR head tapped her pen. The AI consultant pulled up a deck. And just like that, it was settled: the company would embrace artificial intelligence, not just to automate, but to downsize.

I’ve now had versions of this conversation with more than two dozen executives. Different industries. Different locations. Different levels of digital maturity. But the song remains the same: AI is coming, and it’s going to save us money by replacing expensive human beings. Most of the time, this isn’t said directly. It’s wrapped in a narrative of transformation. It’s dressed up as innovation. But the math is obvious. In every case, the question of “How do we do this responsibly?” was buried under the more urgent one: “How soon can we deploy?”

These are not reckless people. These are smart, accomplished professionals. But they are also—almost uniformly—underprepared. They think they understand AI because they’ve used a chatbot. They think hiring a consultant means they’re covered. They believe that speed is more important than scrutiny.

And they are wrong. Dangerously wrong.

The first sign that something was off came during a conversation with an executive at a major healthcare provider. She described how her team had selected an AI platform to streamline medical triage—something that would alleviate the workload for nurses and frontline staff. When I asked if they’d examined how the model was trained, she paused. “I’m not sure,” she said. “Isn’t that the vendor’s responsibility?”

It wasn’t an isolated case. One head of HR proudly explained that their recruiting platform was now “AI-powered.” When I asked what that meant—what kind of model, what kind of training data, what kind of compliance safeguards—he told me the vendor “had it under control.” A fintech CTO confessed that they were integrating AI into client service workflows but hadn’t yet discussed regulatory compliance because “it’s just a prototype.” At an education company, the CEO was rolling out generative AI to replace human tutors in beta markets. No bias testing. No auditing. No parental consent.

That was the pattern. Leaders with good intentions, bad information, and blind trust. What they were missing wasn’t data—it was understanding. And the more confident they were in their AI readiness, the less likely they were to see the risks.

We’ve been here before.

IBM’s Watson was once paraded as the future of medicine. It beat humans at Jeopardy! and promised to revolutionize oncology. Hospitals like Memorial Sloan Kettering signed on. MD Anderson spent over $60 million. And then—quietly, awkwardly—everyone backed out. The system, it turned out, recommended unsafe treatments. It misunderstood patient notes. It couldn’t keep up with evolving standards. Internal documents leaked to STAT revealed how doctors questioned its outputs. The system wasn’t just underwhelming. It was dangerous.

By 2023, IBM had sold off much of Watson Health. The idea had been grand. The execution had been a disaster. And the lesson? AI, especially in high-stakes contexts, is not magic. It’s fragile. And it fails in very human ways.

Executives keep forgetting this. Or they never knew it to begin with.

Babylon Health, a UK startup once valued at $2 billion, promised an AI chatbot that could diagnose patients as accurately as a GP. It was slick, fast, and scalable. Yet it failed to detect serious conditions like heart attacks and appendicitis. Regulators flagged it. NHS partners dropped it. The company filed for bankruptcy. But not before millions of people interacted with its bot, trusting its answers. Some may have delayed real medical care because of it.

In another sector, Workday’s AI recruiting platform is now facing a class-action lawsuit. A Black man in his 40s claims the AI repeatedly screened him out of jobs due to biased algorithmic filters. The court didn’t dismiss the case. In fact, it ruled that Workday, as a technology vendor, may be held legally liable for discriminatory outputs. That legal reasoning may transform how companies think about shared AI risk.

None of these failures happened in isolation. They weren’t just one-off glitches. They happened because leaders moved faster than they thought things through. Because consultants promised too much and auditors arrived too late. Because no one wanted to be the person in the room saying, “Hold on. Can we talk about bias?”

So why do executives keep walking into this?

Part of it is fear. There’s a panic circulating in boardrooms. A sense that if you’re not using AI, you’re already behind. The competition is automating. The market is moving. Investors want to hear “machine learning” in every quarterly report. And so leaders act. They act fast. Sometimes too fast.

Then there’s the illusion of competence. AI interfaces are persuasive. You talk to a bot, it talks back like a person. You write a prompt, it answers like an expert. And it’s easy to believe that the tech understands what it’s saying. That the people selling it understand, too. That you, by osmosis, now understand it as well.

But knowing how to use AI is not the same as knowing what it does. And deploying AI without understanding how it works—or what happens when it fails—is not innovation. It’s abdication.

The most insidious part? The cognitive trap of outsourcing responsibility. Hire the consultant. Trust the vendor. Let compliance figure it out later. But later is when the lawsuit lands. When the chatbot gives medical advice. When the hiring system excludes qualified candidates. When the AI writes legal copy that violates regulations. When a child gets misdiagnosed. When someone gets hurt.

And it’s still your name on the dotted line.

Some executives I spoke to did ask the hard questions. One CEO paused their entire AI rollout until their data team could audit training sets for racial and gender bias. Another rewrote their vendor contract to include joint liability for false outputs. A hospital CIO pushed back on a sales demo until it included transparency logs and explainability features.

These weren’t signs of delay. They were signs of maturity. Of leadership.

But they were rare. Most still saw AI as a cost center improvement. A margin play. A shortcut. They didn’t see it as a cultural force, a legal risk, or a psychological mirror. And they certainly didn’t see it as a moral decision.

Because here’s the truth: every time you replace a person with a machine, you’re not just changing workflow. You’re changing accountability. When something goes wrong, it won’t be the AI that takes responsibility. It will be you.

And something will go wrong.

It might be a hallucination that leads to a compliance breach. It might be a hiring algorithm that amplifies bias. It might be a chatbot that offers mental health advice to a suicidal user. It might be something you never even imagined. If you don’t understand how the model was trained, who trained it, what the blind spots are, what edge cases were tested, then you don’t understand AI.

You’re just hoping it works. And hope is not a strategy.

The companies that will succeed with AI aren’t the ones that moved first. They’re the ones that moved wisely. The ones that asked uncomfortable questions. That built internal governance. That respected the complexity. That didn’t conflate automation with intelligence.

This is your chance to be one of them. Start by asking better questions. Ask your vendors. Ask your CTO. Ask your board. Ask yourself.

Because if the answer is “we’re saving money”—but you don’t know how, or at what cost, or to whom—then maybe it’s time to slow down.

Understand before you automate. Lead before you outsource. Think before you fire.

Because AI isn’t going to replace you. But your decisions about AI might.

If you would like to learn more about developing and implementing an AI strategy without putting your company at risk, please don't hesitate to contact me. ceo@seikouri.com.

About the Author

Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™