The Day Everyone Got Smarter, and Nobody Did

The slide deck was beautiful. Charts in corporate pastel. A quote from some famous consulting firm about “unlocking 14% productivity gains with generative AI.” A photo of a diverse, good-looking team high-fiving over laptops. And in the corner, a smug little chatbot icon, as if it had prepared the presentation itself and was quietly waiting for applause.

Your manager stood in front of the screen and announced that the company had reached a turning point. The productivity revolution was here. Teams would now “work smarter, not harder” by using AI for everything from emails to strategy. Promotions would reward “AI fluency.” Early adopters would “own the future.” You could almost hear the stock price rise in his head.

The part he skipped, of course, was the research showing that this same AI has a nasty habit of making people feel smarter while quietly hollowing out their actual skills. The tools he wants you to use to boost productivity are the same tools that trained him to believe they were a miracle in the first place. He didn’t get that belief from careful experiments or long-term data. He got it the same way everyone else did.

He asked the bot. The bot told him exactly what he wanted to hear. And now the workforce gets to live with the consequences.

The Day the Office Got Smarter on Paper

In the latest report from the Work AI Institute, researchers examined what happens when ordinary office workers rely on generative AI for their day-to-day tasks. The headline is not subtle: AI is giving people an illusion of expertise and quietly making them worse at their jobs.

The pattern is familiar. You drop a messy prompt into a chatbot. Out comes a polished email, a structured analysis, or a draft report that looks suspiciously like something a grown-up would write. You feel clever for having “leveraged AI.” You ship it. Meetings go well. Nobody yells. It feels like your expertise just got an upgrade.

What actually happened, according to the report’s authors, is that you skipped the part where expertise is formed. You didn’t have to structure the argument. You didn’t wrestle with the contradictions. You didn’t reread your own words and realize you were wrong. You simply outsourced the thinking to a machine that is very good at sounding like someone who has done the thinking.

That is how you accumulate what the report calls cognitive debt. Instead of a cognitive dividend where tools free up mental space so you can tackle harder problems, you quietly build a deficit. You look more capable. You feel more confident. And underneath, your own ability to reason, structure, and critique is slowly atrophying.

Nowhere is this more worrying than with early-career workers. Those first few years used to be a kind of apprenticeship. You wrote bad memos, debugged ugly code, and built clumsy decks, then had them shredded by someone who was better than you. It was painful, but it built a spine of judgment and pattern recognition that no tool could fake.

Now the AI writes something decent on the first try. Your manager glances at it and says, “Looks good.” You never get the scar tissue. You just get the illusion.

Productivity Theater: When Metrics Love the Machine More Than You

None of this stopped the productivity story from spreading like a TED Talk in a shared Slack channel.

The landmark MIT and Stanford study that everyone still cites showed that customer service agents using a generative AI assistant resolved more tickets per hour. On average, productivity went up around fourteen percent, with the biggest gains coming from low-skilled or novice workers. That is the statistic living rent-free in executive slide decks.

There are several things the slide decks rarely mention. The study was in a very specific type of work at a single company. The gains were largely in speed and volume, not deep innovation. And the benefits were strongest for workers who were, by definition, less capable on their own. The AI was effectively handing them the accumulated tricks of the top performers.

In the short term, that is great. The company gets more work done. The newbies look more competent. The supervisors spend less time coaching. But if you skip the part where those novices learn how to generate that quality themselves, you are turning them into permanent extensions of the tool. You are not developing talent. You are building a human API around a model.

Despite these nuances, many organizations seized on the headline number and decided that “AI usage” equals “productivity.” The Work AI Institute report notes that some companies now track how often employees click on AI tools and bake that into performance reviews.

You might be brilliant, but if you did not ask the robot for help, your metrics look weak.

That is not productivity. That is productivity theater. It is like measuring how often a doctor touches the stethoscope instead of whether the patient gets better.

Apprenticeship Without the Apprenticeship

For junior staff, generative AI can feel like a secret weapon. You no longer stare at a blank page. You no longer panic in front of a blinking cursor. The AI gives you a template, a structure, a voice. In industries obsessed with speed, that is intoxicating.

The long-term story is darker.

When philosophers and cognitive scientists talk about deskilling, they are not being dramatic. They are pointing out that when you consistently outsource a class of thinking to a tool, your brain stops practicing that skill. Over time, you not only lose the ability to do the thing; you start losing the sense of when the thing is being done badly.

If you rely on AI to write every analysis memo, you may not even notice when the arguments are brittle or the logic is circular. If you rely on AI to suggest every line of code, you may lose the instincts that tell you when a solution is overcomplicated or just plain wrong. The tool takes over the role of teacher, but it does not know how to say, “You should really struggle with this yourself first.”

The result is a generation of workers who appear productive and articulate but cannot reliably reconstruct their own work without the machine in the loop. They can edit. They can tweak. They can “curate.” But their independent competence is thin.

If you are a human, that is unnerving. If you are a business, that is an invisible risk on your balance sheet.

How Managers Fell in Love with AI Without Really Meeting It

None of this explains why managers are so religiously convinced that AI boosts productivity in the first place. The answer is uncomfortable: they caught the belief from the same AI they now want everyone else to use.

Executives are not sitting in dusty labs running controlled experiments. They are drowning in pressure to “have an AI strategy” before the next board meeting. They are bombarded with surveys from big firms telling them that everyone else is already ahead. They are emailed glossy PDFs explaining that generative AI is the key to competitive advantage.

The numbers in those surveys are intoxicating. McKinsey reports that nearly two-thirds of organizations now regularly use generative AI, with adoption almost doubling in a year. PwC finds that around seventy percent of leaders believe AI will significantly change how their companies create and capture value. KPMG and others tell them peers are reshaping business models around AI.

At the same time, only a small minority of those leaders can actually tie AI initiatives to concrete, verifiable gains in revenue or reduced cost. In one Bain survey, executives enthusiastically rated most AI projects as “meeting or exceeding expectations” while less than a quarter could point to hard, measurable value.

So where does the confidence come from? Increasingly, from the bots themselves.

Ask a modern chatbot, “How can generative AI improve productivity in my industry?” It will happily deliver a confident, well-structured essay. It will cite the call center study. It will list hypothetical use cases. It will write mini-case studies in a tone perfectly adapted to the slide deck you are about to present.

Ask it again, this time for talking points for your board. Ask it again for a communication plan to roll out AI across the company. Ask it again for a training agenda. Within an afternoon, you can have your vision, your justification, your roadmap, and your success story, all neatly ghostwritten by the technology you are evaluating.

If you are not deeply familiar with how these models work, it feels like you have just consulted a tireless, well-read expert with access to thousands of sources. In reality, you have consulted a system that is very good at confidently repeating the dominant narrative, smoothing over uncertainties and conflicts into something that reads like settled fact.

The more managers lean on AI to explain AI, the stronger this narrative gets. The system is not just helping them communicate their beliefs. It is helping them construct those beliefs in the first place.

That is not brainwashing in the cinematic sense. No spirals, no glowing eyes, no hypnosis. It is something much more mundane and therefore more dangerous.

It is executives quietly outsourcing their skepticism.

The Boss Who Outsourced His Doubt

Picture a senior manager preparing for a strategy offsite. He opens a chatbot window and types, “What are the top benefits of generative AI for a mid-size B2B SaaS company?” The answer is smooth and persuasive. Faster marketing content. Smarter sales enablement. Automated customer support. Better analytics. It references market studies and trend reports. It reads like something from a consulting engagement, but cheaper and faster. He copies a few lines into his notes.

Next, he asks, “What productivity gains have been observed when companies adopt generative AI?” The bot cites the fourteen percent study. It mentions novice workers improving the most. It generalizes that AI helps “standardize quality” and “reduce time to resolution.” He adds that to the slide deck.

Then he asks, “How should I roll out AI across my organization?” Now the bot is writing his change management plan. It suggests creating AI usage goals, measuring adoption, and celebrating teams that integrate AI into their workflows. It may even mention tracking “AI engagement metrics” as a sign of cultural progress.

At no point does the bot say, “By the way, excessive reliance might deskill your staff and leave you with a brittle organization that collapses the first time the model fails, the license changes, or the regulators get involved.” At no point does it warn that junior workers may never develop deep expertise if everything is AI-assisted from day one. At no point does it tell him that many executives in those surveys have no hard numbers behind their optimism.

The bot is not designed to be his conscience. It is designed to be his coauthor.

By the time he walks into the offsite, he is not just presenting slides built with AI. He is presenting beliefs built with AI. The doubt that should have been part of his job has been formatted into bullet-pointed optimism and exported as a PDF.

And now the workforce will be measured against a vision that came out of a prompt box.

Cognitive Debt: The Bill Arrives When the Wi-Fi Goes Out

The thing about debt is that it does not feel dangerous while you are accumulating it. It feels convenient. It feels like freedom.

Cognitive debt works the same way. If you let AI take over more and more of the work that used to build skill and judgment, everything looks fine until something forces the system to operate without its favorite crutch.

Maybe the tool gets something obviously wrong in a high-stakes context. A hallucinated legal citation makes it into a filing. A fabricated study slips into a pitch deck. A misclassified transaction ends up in a crucial earnings report. We have already seen incidents where professionals failed to catch blatant AI errors because the output looked confident enough to sneak past their weakened instincts.

Maybe regulators step in and restrict certain uses. Maybe licensing costs spike. Maybe the vendor sunsets the model you tuned your workflows around. Suddenly, the organization has to do, from scratch, what it has not practiced doing for years. In that moment, you find out who can think and who has been role-playing as an expert by editing AI drafts.

You also find out how much your leadership actually understands about the system they evangelized. If the farm team has been deskilled and the management team has been educated by the bot, who is left to rebuild the plane while it is in the air?

The illusion of expertise is not just an individual problem. It is a systemic one.

The Human in the Loop Is Often Just Nodding Along

One of the most comforting phrases in modern AI deployment is “human in the loop.” It sounds responsible. It sounds safe. It implies that no matter how gnarly the model is, there is always a sober adult nearby to overrule it.

In practice, the human in the loop is often a tired knowledge worker skimming AI output at speed, under pressure, with a performance review tied to how heavily they use the tool. They are not calmly auditing each inference. They are glancing at something that looks plausible and moving on.

Over time, the research suggests, this behavior erodes both skill and confidence. If the model gets things right most of the time, the human gradually forgets what it feels like to solve the problem from first principles. When they are suddenly forced to do that again, they are not just rusty. They are shaken.

Managers tell themselves that “the human will catch it.” But the human they are imagining is a version of themselves from ten years ago, before they started relying on autocomplete for their own thinking.

The human in the loop in 2025 might be someone whose career has never existed without AI. Their “loop” is a comment bubble at the end of a paragraph written by a model that has never once been told, “No, we are not doing that.”

When AI Trains the Culture Instead of the Other Way Around

What makes all of this uniquely dangerous is not that AI is powerful. It is that AI is writing the story the company tells itself about its own power.

In theory, leadership should shape how AI is used. They should decide where it belongs, where it does not, who gets trained, what gets measured, and how risk is managed. They should interrogate the research, question the hype, and design guardrails that keep human competence in the center of the system.

In practice, many leaders have flipped that script. The bot tells them that AI will transform their industry. The surveys assure them that everyone else is already ahead. The vendors show them dashboards lit up with engagement metrics. The consultants produce frameworks that all end in “adopt more AI.”

Soon, the culture itself starts to treat skepticism as a lack of ambition. The employees who question overreliance on AI are branded as dinosaurs or blockers. The ones who eagerly feed prompts into everything, regardless of whether it makes sense, are applauded as innovators.

This is how you move from “AI-assisted work” to “AI-shaped culture.” The machine stops being a tool and starts being a tutor. The more you let it narrate your strategy, your training, and your performance management, the more your organization begins to behave like an extension of the machine’s limitations and blind spots.

That is how you destroy business value without noticing. Not in a single catastrophic failure, but in a thousand design decisions driven by a system that is not actually accountable for them.

What It Would Look Like to Use AI Without Letting It Rewrite Your Brain

The uncomfortable truth is that the problem is not “AI at work.” The problem is “AI at work plus leaders who have quietly surrendered their own judgment to the story AI tells about itself.”

There is a world in which AI genuinely augments human skill instead of replacing it with a simulation. In that world, managers use the research as a warning label, not a marketing line. They treat the fourteen percent productivity study as a hint, not as scripture. They ask harder questions than, “How many people are clicking the AI button?”

In that world, junior employees are forced to do some of the hard work themselves before passing it through a model. They are given tasks where AI is explicitly off-limits so they can feel what real thinking costs. They are evaluated not only on how well they can prompt, but on whether they can reconstruct an argument without the machine.

In that world, leadership uses AI to stress-test their ideas, not to generate them. They ask the bot, “What is wrong with this plan?” as often as they ask, “Make this plan sound impressive.” They compare AI’s polished narratives against messy, human feedback from the people actually doing the work. They remember that tools do not absolve them of responsibility.

Most importantly, they build metrics that measure actual outcomes and human capability, not just AI engagement. If generative AI really makes people better at their jobs, it should show up in things that matter: fewer errors, better customer outcomes, stronger creative work, more resilient teams. If it does not, then it is not a transformation. It is a very shiny distraction.

The Bot Will Keep Talking. The Question Is Who Keeps Thinking.

Generative AI is not going away. The illusion of expertise it creates is not going away either. The only question is whether leaders will keep treating that illusion as a feature or start recognizing it as a risk.

Right now, too many managers are basing multi-million-dollar decisions on a feedback loop where AI writes the pitch for AI, AI summarizes the research on AI, and AI explains to them why they are smart for betting on AI. They walk into meetings armed with language they did not generate, confidence they did not earn, and optimism they cannot defend when the numbers fail to materialize.

Meanwhile, the workers underneath them are being trained into dependence and calling it skill. You do not need to ban these tools to fix that. You just need to stop letting them define reality.

Ask harder questions. Demand real evidence. Protect the slow, painful process where humans actually learn to think. Do not measure your workforce by how eagerly they echo the bot. Measure them by what they can still do when the bot goes quiet. Because one day, it will.

And on that day, you will find out whether you built an organization full of experts. Or an organization full of very polite editors for a model that never had skin in the game.


©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™

Sources

  1. AI is giving workers the illusion of expertise — and quietly making them worse at their jobs businessinsider.com
  2. AI Tools Are ‘Deskilling’ Workers, Philosophy Professor Says businessinsider.com
  3. The AI Transformation 100 (Work AI Institute) glean.com
  4. Generative AI at Work (NBER working paper) nber.org
  5. Generative AI at Work (QJE article) academic.oup.com
  6. Workers with less experience gain the most from generative AI mitsloan.mit.edu
  7. Will Generative AI Make You More Productive at Work? Yes, but only if you’re not already great at your job hai.stanford.edu
  8. Generative AI boosts worker productivity 14% in first real-world study forbes.com
  9. The problem with ‘human in the loop’ AI? Often, it’s fiction aol.com
  10. ‘Deskilling’: a dangerous side effect of AI use theweek.com
  11. AI tools are making ‘repeated factual errors’, major new study finds aol.com
  12. McKinsey Global Survey: The state of AI in 2024 mckinsey.com
  13. PwC Global Artificial Intelligence Study pwc.com
  14. KPMG CEO Outlook: Generative AI findings kpmg.com
  15. Bain & Company: Generative AI and the future of work bain.com
  16. OpenAI for business leaders: resources and guidance openai.com

About the Author