Chatbots Behaving Badly™

AI Strategy Isn’t About the Model. It’s About the Mess Behind It.

By Markus Brinsa  |  August 1, 2025


Why most enterprise AI plans collapse under their own weight — and how to build one that doesn’t.

Your company probably has a slide deck called “AI Strategy” somewhere on the server. It’s probably been presented at a leadership offsite, maybe discussed in the boardroom, and referenced by at least one innovation team trying to justify a chatbot experiment. But here’s the truth: most AI strategies in enterprise settings are vague, bloated, or completely disconnected from business reality. They don’t fail because the technology doesn’t work — they fail because the organization never figured out why it needed AI in the first place.

The companies getting this right aren’t smarter. They’re just more honest. They treat AI as a means to an end — not a brand asset, not a buzzword, and definitely not a replacement for real strategy. They ask better questions. And most importantly, they stop pretending AI success is about tools. It’s about decisions. About infrastructure. About ethics, governance, compliance, and yes — sometimes about saying no.

So let’s get into what an actual AI strategy looks like — and what it doesn’t.

Begin With the Problems You Actually Have

An AI strategy that starts with “We want to use AI” is already on the wrong track. The goal isn’t to use AI — it’s to solve problems, make better decisions, reduce inefficiencies, or discover opportunities that would otherwise remain hidden. That means beginning with a clear-eyed assessment of what’s not working today.

Executives need to lead with diagnosis. That doesn’t mean tasking a product manager to list every process that could theoretically be automated. It means taking a hard look at where your business is leaking time, money, or trust. It means looking at forecasting errors that cost millions in inventory misalignment, customer churn that could have been prevented with better pattern recognition, or regulatory fines triggered by human error in compliance reporting. These are the problems that make AI valuable — not because it’s futuristic, but because it might be better than your current system at solving them.

It also means understanding your data situation. Most AI projects don’t fail because the model didn’t work. They fail because the data was a mess. Siloed, incomplete, mislabeled, contradictory — or just not aligned with the problem the AI is supposed to solve. Before you get excited about tooling, ask yourself whether you even have the inputs needed to train or fuel an intelligent system. This isn’t just an IT audit. It’s a reality check for leadership: do we have our house in order, or are we hoping AI will clean it for us?
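To make that reality check concrete: even a crude audit of completeness, duplication, and label quality tells you more than a vendor demo will. The sketch below is purely illustrative; the file name, the “churned” column, and the churn framing are hypothetical stand-ins, not a prescription.

```python
# Illustrative data readiness check: not a substitute for a real audit.
# The file and the "churned" label column are hypothetical examples.
import pandas as pd

df = pd.read_csv("customer_churn.csv")

# How much of each column is actually populated?
missing = df.isna().mean().sort_values(ascending=False)
print("Share of missing values per column:")
print(missing.head(10))

# Duplicate records quietly skew anything trained on this data.
print(f"Duplicate rows: {df.duplicated().sum()} of {len(df)}")

# Is the label usable, or sparse and wildly imbalanced?
print("Label distribution:")
print(df["churned"].value_counts(normalize=True))
```

If numbers like these surprise your own team, the strategy conversation should pause right there.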

Evaluate Tools Without Falling in Love

Once a business problem is clearly identified and the data is usable, only then does it make sense to look at solutions. But even here, the process is often hijacked by showmanship. Vendors promise immediate transformation. Internal teams pitch pilots that over-promise and under-deliver. Before long, a platform is in place — and nobody can remember why it was chosen.

Smart organizations approach this differently. They start by asking whether the proposed AI solution is actually suited to the problem. They resist the urge to assume every challenge requires machine learning, or that generative AI is the answer to customer engagement just because it can write sentences.

They ask whether the solution integrates with their current systems or whether it would require a full rebuild of core infrastructure. They consider whether the people who’ll use the tool actually want it, and whether it would streamline their work or create more friction.

They test in controlled environments. Not to check a box, but to learn something — like how well the system handles edge cases, how often it makes mistakes, and whether those mistakes are recoverable or catastrophic. This is also the point where many companies quietly realize that the tool they were sold isn’t as smart as the marketing suggested. That’s a good outcome. It means you’re still early enough to walk away.

The evaluation phase isn’t just about functionality. It’s about organizational fit. A tool that requires constant tuning by data scientists may work for a company with a mature analytics team — and completely flop inside a mid-sized enterprise with one overworked analyst. The question isn’t whether the AI works in a vacuum. It’s whether it works here.

Make Decisions Like You’re Accountable — Because You Are

Decision-making in AI projects is rarely straightforward. That’s why the best AI strategies don’t just identify problems and pick tools — they define how decisions get made, who is responsible, and what happens when things go wrong.

This starts with clarity. Someone must be responsible for outcomes, not just implementation. If the AI model fails, if it violates a regulation, if it produces harm — there must be a known owner, not a diffuse group of contributors pointing fingers. In successful companies, AI doesn’t live in a lab. It lives under executive oversight.

Good strategy also defines how decisions are made in the face of uncertainty. Not every AI tool works right away. Some require iteration. Others may work well in one department but not another. Without clear checkpoints, you risk dragging half-working systems into production simply because no one wanted to admit they weren’t ready.

That’s how companies end up with AI tools that users quietly ignore, or that generate risks no one is monitoring.

Accountability also extends to ethical review. AI can automate not just efficiency, but judgment. Who decides when a model is “good enough”? Who reviews its outputs for bias? Who ensures the system aligns with organizational values, not just performance metrics? A sound AI strategy makes these responsibilities explicit — and ensures they don’t sit with someone three layers down the org chart.

Risk Without Guardrails Is Just Negligence

Every powerful technology invites misuse. AI is no different. What makes it dangerous is the speed and scale of the harm it can cause when deployed without safeguards.

Some of the worst failures in AI have come not from malicious design, but from lazy deployment. Recommendation engines that push dangerous content. Hiring tools that silently discriminate. Predictive models that reinforce historical bias because no one thought to test them properly. These aren’t edge cases anymore — they’re becoming predictable consequences of ignoring guardrails.
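Testing “properly” doesn’t have to start with anything exotic. A basic disparate-impact check, comparing selection rates across groups before a tool goes live, catches more than most teams expect. The sketch below uses made-up screening outcomes and the four-fifths rule of thumb; it is an illustration, not a compliant audit.

```python
# Minimal disparate-impact check on hypothetical screening outcomes.
# A real audit needs larger samples, statistical tests, and legal review.
from collections import defaultdict

# (group, passed_screening) pairs: illustrative data only.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, passed = defaultdict(int), defaultdict(int)
for group, ok in outcomes:
    totals[group] += 1
    passed[group] += int(ok)

rates = {group: passed[group] / totals[group] for group in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```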

A responsible AI strategy builds in those guardrails from the start. Not as PR cover, but as fundamental infrastructure. That means putting limits on where and how AI can be used. It means testing models for unintended consequences before they’re live. It means monitoring systems in production, not just assuming accuracy based on a training dataset. It means building interfaces that allow for human override, not hiding model logic behind black-box automation.

And yes, it means training employees to understand what AI can and can’t do — so they don’t over-trust systems that are still probabilistic at their core. That chatbot on your website might give great answers 80% of the time. But the other 20% might be risky, biased, or legally questionable. Someone needs to be watching.
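Part of that watching can be engineered into the system itself. One common pattern is to log every answer and route low-confidence ones to a person instead of the customer. The sketch below assumes a hypothetical answer_question() call that returns a confidence score; the function, the threshold, and the fallback message are placeholders, not a reference implementation.

```python
# Illustrative guardrail: log every answer, escalate the shaky ones.
# answer_question() and the 0.8 threshold are hypothetical placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot_guardrail")

CONFIDENCE_THRESHOLD = 0.8  # tune per use case and risk appetite

def answer_question(question: str) -> tuple[str, float]:
    """Stand-in for a real model call; returns (answer, confidence)."""
    return "Our return window is 30 days.", 0.62

def guarded_reply(question: str) -> str:
    answer, confidence = answer_question(question)
    log.info("q=%r confidence=%.2f", question, confidence)  # audit trail
    if confidence < CONFIDENCE_THRESHOLD:
        # Human override path: a shaky answer never reaches the customer.
        log.warning("Escalating to human review: %r", question)
        return "Let me connect you with a colleague who can confirm that."
    return answer

print(guarded_reply("Can I return a customized order?"))
```

The threshold itself matters less than the fact that an escalation path and an audit trail exist at all.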

The point of guardrails isn’t to slow down innovation. It’s to make sure you can survive it.

The Legal Landscape Isn’t Coming — It’s Already Here

Too many executives still operate under the assumption that AI is in a regulatory gray zone. That’s wishful thinking. You don’t need a new AI law to be in violation of existing ones.

If your AI tool touches hiring, lending, pricing, customer interaction, or any form of personal data, you’re already on the radar. The FTC has warned companies that they will be held accountable for deceptive or discriminatory AI practices. The Equal Employment Opportunity Commission has begun cracking down on algorithmic hiring tools that perpetuate bias. Cities like New York now require bias audits for certain AI applications in employment contexts. And privacy laws like California’s CCPA directly affect how data is used in training and inference.

Even beyond U.S. borders, global laws matter. The European Union’s AI Act is already in force, with its obligations phasing in through 2027, and companies serving EU customers will need to comply. That includes explainability, risk classification, and oversight requirements that many current systems can’t meet.

Legal exposure doesn’t stop at your own code. If your vendor built their model using scraped data, copyrighted material, or personal information obtained without consent, your use of that model could trigger liability. You can’t outsource accountability — not in the eyes of regulators, and not in the eyes of your customers.

Ethical Data Sourcing Means Actually Knowing Where It Came From

“Ethically sourced data” has become the AI industry’s new favorite phrase. Unfortunately, it’s also the most abused. Ask a vendor where their data comes from, and you’ll hear words like proprietary, anonymized, or open-source. None of these terms guarantee anything — especially when lawsuits start flying.

Real ethical sourcing begins with transparency. You should know what kind of data was used, how it was collected, and whether the people behind that data gave informed consent. It’s not enough to claim it was public — so is your personal Facebook profile. It’s not enough to say it was anonymized — not if it can be reverse-engineered. And it’s not ethical just because a competitor used the same dataset.

Enterprises need to start acting like data origin matters.

That means reviewing documentation, verifying licensing claims, and sometimes saying no to tools that won’t show their work. It means recognizing that the data pipeline is as critical as the model architecture. And it means asking hard questions about whose data is being monetized — and whether they ever agreed to it.

Ethical sourcing isn’t just a reputational issue. It’s a foundational one. If your data is compromised, your AI will be too — and the consequences may not show up until it’s too late to fix them quietly.

Vet the Vendor Like You’re Marrying Them

Unless you’re building everything in-house, your AI strategy is only as strong as the vendors you rely on. That means you don’t just evaluate their product — you evaluate their integrity.

Do they provide documentation about training data? Are they upfront about known limitations? Will they help you comply with regulatory requirements, or dump the responsibility on you after the contract is signed? Are they building products that will still be supported in three years, or are they a feature factory hoping to be acquired?

Choosing an AI vendor is not like buying software licenses. It’s closer to choosing a strategic partner — someone whose practices, biases, and failures will become part of your company’s reputation. If they cut corners, you inherit the risk. If they go under, you inherit the support gap. If they mislead, you inherit the liability.

Your due diligence should be as rigorous as if you were acquiring the company outright. Because in some ways, you are — at least the part that affects your AI future.

Do You Need a Consultant? Maybe. Probably. But Not Forever.

Many companies start their AI journey by hiring external consultants. That’s not a bad thing — assuming you’re not outsourcing your thinking.

A good consultant can help you assess readiness, define use cases, build governance, and avoid early missteps. They can tell you what’s realistic and what’s hype. They can speed up the first year of execution by bringing in patterns and frameworks that would otherwise take months to learn.

But if your AI strategy lives entirely in a third-party PowerPoint deck, you don’t have a strategy. You have a temporary brain. The goal should always be to internalize capability. Consultants are there to accelerate, not to replace leadership. When they leave, your strategy should remain coherent, operational, and evolving.

Some companies may never need help — usually because they’ve already built strong internal teams. Others will benefit from external guidance, especially when navigating regulatory complexity or vendor ecosystems. The trick is knowing where your gaps are — and not being too proud to fill them.

In the End, It’s Still About People

AI strategy sounds technical. But at its core, it’s about people. The customers affected by AI-driven decisions. The employees asked to trust its outputs. The leadership teams who must balance innovation with responsibility. And the society that will ultimately judge whether your company used its power wisely.

That’s why a great AI strategy doesn’t just ask “What can we do?” It asks, “What should we do?” That’s a leadership question. And in this moment — with AI reshaping everything from compliance to creativity — it’s the one that matters most.

And if you’d like to work with people who actually understand what AI strategy means and where AI fails, drop me a note at ceo@seikouri.com or swing by seikouri.com.

About the Author

Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™