I was in a meeting with a new client to discuss their AI strategy. About 15 minutes in—after I’d asked a few pointed questions about their current tools, their data readiness, and whether anyone on the team actually understood how the system made decisions—the CEO leaned back in his chair, narrowed his eyes, and asked, “Are you for or against AI?”
It wasn’t hostile, but it wasn’t neutral either. It had the unmistakable tone of “choose your camp.” And in that moment, I didn’t answer directly. Instead, I asked, “What’s your definition of AI?”
He blinked. Paused. “You know… artificial intelligence.”
Exactly.
So I gave him mine: AI is like a super-speed intern who never sleeps. It doesn’t know anything the way humans do—but it’s been trained on mountains of data to predict what comes next, whether that’s the next word, next move, or next big decision. It’s not magic, it’s math—with confidence.
That got a laugh. And more importantly, it broke the spell.
Because the truth is, asking whether someone is “for or against AI” is a little like asking if they’re for or against electricity. Are we lighting up an operating room or powering an electric fence in a shark tank? Are we using it to make life better—or just to make life faster, cheaper, and slightly more confusing?
We crave simplicity. Certainty. Clear answers. And nothing scratches that itch like a good binary: us vs. them, this or that, friend or enemy. Our brains weren’t designed to elegantly weigh the trade-offs of probabilistic models, vector embeddings, and GPU scaling. They were designed to decide quickly whether that rustle in the grass was the wind… or a tiger. And then—crucially—to remember what team the tiger was on.
Which is why something as complex, multifaceted, and rapidly evolving as AI feels so uncomfortable. It refuses to sit still long enough for us to decide what label to slap on it. One day it’s helping radiologists spot tumors. The next it’s making up fake case law for a junior attorney who didn’t fact-check his chatbot. It writes poems and phishing emails, automates workflows and generates nightmares. It’s simultaneously awe-inspiring and deeply unsettling. And that’s exhausting.
So we fall back on what we know. Are you for it or against it?
That way, we can sort everyone into neat little groups. The optimists. The fearmongers. The innovators. The Luddites. And conveniently forget that some of the most vocal AI critics work in AI research, while some of the loudest cheerleaders couldn’t program a toaster.
This instinct—to flatten complexity into sides—is deeply psychological. Psychologists call it binary bias, and it’s the mental shortcut that tells us the world is made up of either/or decisions. Left or right. On or off. Yes or no. In the context of AI, it means we’re often more interested in declaring allegiance than in understanding the technology.
The media doesn’t help. Headlines scream about machines taking over jobs, generating misinformation, or achieving superintelligence in six months. What we don’t see as often are the nuances: how narrow most AI really is, how heavily human-in-the-loop most systems remain, or how implementation is 80% organizational pain and 20% algorithmic wizardry.
And corporations? They love a bandwagon—until something goes wrong. Suddenly, it’s not “our AI strategy,” it’s “that rogue tool.” One moment it’s innovation; the next it’s plausible deniability wrapped in marketing jargon.
So let’s go back to that meeting room.
After I explained my intern analogy and we shared a few laughs, I followed up with a better question: “What do you want AI to do for your business—and what are you afraid it might do by accident?”
That cracked open a real conversation. Not about sides, but about stakes. Not about slogans, but about systems, processes, people. The things that actually matter when you bring AI into an organization.
This is binary bias at work: the brain’s relentless urge to collapse complexity into just two opposing camps. It’s a cognitive shortcut rooted in our evolutionary wiring. Faced with ambiguity, the human brain prefers a tidy mental box. Friend or foe. Safe or dangerous. True or false. Pro or con. It’s faster, easier, and feels safer—especially when we’re under pressure to form an opinion or make a decision.
In evolutionary terms, it served a purpose. Early humans didn’t have the luxury of nuance when deciding whether to run from a shadow in the bushes. Ambiguity meant risk, and risk meant death. Better to assume the worst and survive, even if it meant overreacting. Over time, this split-second reflex baked itself into the human operating system. And in modern life, it’s everywhere—from politics to pop culture to product reviews.
The problem is, technology—especially AI—doesn’t care about our craving for clarity. It’s inherently probabilistic, not deterministic. It works in gradients and gray areas, trained on fuzzy data full of contradictions. When we try to stuff it into moral or ideological binaries—good or bad, safe or dangerous, for or against—we’re not just oversimplifying. We’re setting ourselves up to misunderstand how it actually works.
Even worse, binary bias makes us cling harder to our chosen “side” once we’ve picked it. Studies in cognitive psychology show that once we make a categorical judgment, we’re less likely to revise it, even in the face of conflicting evidence. This is part of what’s known as motivated reasoning—we unconsciously seek out information that supports our original stance and ignore anything that challenges it. So once someone decides they’re “against AI,” for instance, every news article about a chatbot fail becomes validation. Likewise, once someone’s “all in,” they’ll overlook fundamental flaws in favor of hype and hope.
And when business leaders or policymakers fall into this pattern, things get dangerous. They either ban AI outright without understanding what they’re banning, or they greenlight it everywhere without preparing for the unintended consequences. Both approaches are driven less by strategy and more by psychological comfort: it’s easier to take a firm position than to live with uncertainty.
Because here’s the real answer to the original question: I’m not for or against AI. I’m for humans using it wisely. And if that sounds like a cop-out, maybe take a breath and consider this: no one asks if you’re for or against language, or fire, or the internet. Those are tools. And just like AI, they’ve been used to uplift and to exploit, to build and to destroy. The difference is, we’ve had millennia to figure out how to use fire responsibly. With AI, we’ve had… maybe five years of real-world chaos?
We’re still figuring it out.
But that doesn’t mean we have to pick a side. It means we have to pick better questions. Like: What kind of world do we want AI to help build? Who gets to decide that? Who gets left out? What happens when we trust it too much? And what happens when we trust it not at all? Those questions don’t have simple answers. But at least they’re real.