For years, the office spreadsheet had a certain reputation. It was boring, respectable, and usually blamed for nothing worse than bad forecasts, fake confidence, and the occasional financial model that looked like it had been assembled during a caffeine emergency. Then AI arrived, and apparently Excel decided that ordinary spreadsheet chaos was no longer enough. This week’s gift to the working world is CVE-2026-26144, a Critical Microsoft Excel information disclosure vulnerability that, under the right conditions, can turn a malicious file into a zero-click data-exfiltration path via Copilot Agent mode. Microsoft’s March 10, 2026, Office security release includes a fix, and multiple security analyses describe the same basic nightmare: an Excel flaw that can silently push sensitive data outward without the victim needing to actively engage with the file.
That sentence deserves to be read twice because it sounds like cybersecurity Mad Libs. We started with a very old species of bug, cross-site scripting. We then placed it inside Microsoft Excel, a program not normally associated in the public imagination with AI-driven espionage. Then we added Copilot Agent mode, which gives the software a more active role in handling and moving information. The result, according to Microsoft’s own description echoed across security reporting, is that successful exploitation could cause Copilot Agent mode to exfiltrate data via unintended network egress, turning a classic software weakness into a very modern AI-assisted leak.
The technical core of the flaw is not magical. CVE.org describes the issue as improper neutralization of input during web page generation in Microsoft Office Excel, in other words a cross-site scripting problem. On its own, that already matters. But what makes this story so good, and so depressing, is the way the bug becomes more dangerous once an AI layer is sitting nearby, trying to be helpful in the style of an overconfident intern with a network connection. Microsoft assigned the vulnerability a CVSS base score of 7.5 and its own Critical severity rating because the confidentiality impact is high, the attack is remote, the complexity is low, and no user interaction is required. In plain English, that means the dangerous part is not just the bug. The dangerous part is the bug plus the AI behavior wrapped around it.
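The numbers do line up, for what it is worth. Plugging the metrics described above into the CVSS v3.1 base-score formula reproduces the 7.5. Here is a minimal sketch; the full vector (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N) is inferred from the prose descriptions, not quoted from the advisory:

```python
# Hedged sketch: recomputing the reported CVSS 3.1 base score from the
# metrics named in the advisory summaries (network attack vector, low
# complexity, no privileges, no user interaction, high confidentiality
# impact, no integrity or availability impact). Weights come from the
# CVSS v3.1 specification; the exact vector string is an assumption.
import math

def roundup(x: float) -> float:
    """CVSS 3.1 'round up to one decimal place' rule."""
    return math.ceil(round(x, 5) * 10) / 10

# Metric weights per the CVSS v3.1 specification.
AV_NETWORK, AC_LOW, PR_NONE, UI_NONE = 0.85, 0.77, 0.85, 0.85
C_HIGH, I_NONE, A_NONE = 0.56, 0.0, 0.0

iss = 1 - (1 - C_HIGH) * (1 - I_NONE) * (1 - A_NONE)  # impact sub-score
impact = 6.42 * iss                                    # scope unchanged
exploitability = 8.22 * AV_NETWORK * AC_LOW * PR_NONE * UI_NONE
base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0

print(base)  # 7.5
```

The point of the exercise is that the score is driven almost entirely by the "no user interaction" and "high confidentiality" settings, which is exactly the combination the rest of this story is about.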
This is where the broader Chatbots Behaving Badly pattern reappears. The story is not simply that Microsoft had a security flaw. Every large software company has security flaws. The story is that the AI layer changes what the flaw can do. The spreadsheet is no longer just a spreadsheet. It is now sitting inside an environment where a model can interpret, retrieve, package, and move information. That means a vulnerability that used to live inside a narrower lane can suddenly hitch a ride on a system designed to be context-aware and action-oriented. Security researchers have been warning about this exact shift for months. Dustin Childs at Zero Day Initiative called this Excel issue “fascinating” and said it looks like the kind of attack scenario we are likely to see more often. That should not comfort anyone.
The phrase zero-click has become one of those cybersecurity terms that sound dramatic because they are. It means the victim does not have to cooperate in the usual way. No cheerful “enable content,” no clicking through six warning dialogs like a sleep-deprived office zombie, no helpful contribution to their own downfall. CrowdStrike’s Patch Tuesday analysis says the flaw requires no user interaction, has low attack complexity, and can result in a high confidentiality impact. The Preview Pane is not the attack vector here, but that is hardly comforting. The larger point is that the attack path does not depend on a big, theatrical moment in which the user obviously makes a mistake.
That matters because most business security theater still revolves around blaming employees. Don’t click bad links. Don’t open weird attachments. Don’t trust suspicious files. Fine. All true. But zero-click stories keep exposing the limit of that entire culture. If the software stack is wired so tightly together that a crafted file and an embedded AI workflow can do the rest, then the problem is no longer just user hygiene. The problem is product architecture. It is design ambition outrunning design restraint. It is the increasingly common belief that if a tool can see, summarize, move through, and act on data, it somehow becomes more productive without becoming more dangerous.
What makes this incident especially useful as a Chatbots Behaving Badly story is that it fits into a growing pattern around AI systems acting as amplifiers. They amplify convenience when things go right. They also amplify blast radius when things go wrong. In June 2025, researchers at Aim Labs disclosed EchoLeak, a zero-click vulnerability in Microsoft 365 Copilot that they said could exfiltrate sensitive information from Copilot context without user awareness. That earlier case was important because it helped establish the shape of a new threat model: the AI assistant itself becomes part of the exploit chain. This week’s Excel flaw looks different in mechanics, but it rhymes in strategy. Again, the issue is not merely that data exists. It is that an AI-connected system can be manipulated into becoming the courier.
And that is the part a lot of enterprise marketing still tries very hard not to say out loud. The more agentic a product becomes, the more security teams have to stop thinking only about whether the software can be breached and start thinking about whether the software can be persuaded. Can it be nudged into retrieving the wrong information, sending the wrong information, exposing the wrong context, or chaining together otherwise minor weaknesses into something much uglier? AI products do not merely add features. They add behavior. Behavior is where the trouble begins.
Every major AI productivity pitch contains the same fantasy. Your tools will become proactive. Your documents will become intelligent. Your software will understand intent. Your workplace will glide smoothly into a future where the machine handles the messy parts. Wonderful. Except the messy parts include trust boundaries, data permissions, hostile input, poisoned context, and all the boring infrastructure realities that product demos treat like embarrassing relatives who should not come to the wedding.
So now we have the natural outcome of this worldview. An Excel flaw is no longer just an Excel flaw. It becomes an AI-flavored information disclosure problem because the surrounding system is designed to be active, helpful, and connected. The office suite is starting to behave less like passive software and more like a cluster of semi-eager assistants that may not fully understand the consequences of their own helpfulness. That is not a minor product shift. That is a change in the nature of enterprise risk.
It also creates a public messaging problem for AI companies. For years, the framing has been that copilots and agents reduce friction. But security often depends on friction. Friction is what stops the wrong thing from happening too quickly. Friction is the pause before data leaves the building. Friction is the barrier between a malicious input and a sensitive output. Remove enough friction in the name of productivity, and eventually somebody discovers that your beautiful seamless experience is also a beautiful seamless exploit path.
The immediate response here is not mysterious. Patch Excel. Patch Office. Do it before spending two weeks on a governance deck about responsible AI transformation. Microsoft has already shipped the March 10, 2026, fixes for affected Office channels, and security analysts broadly agree that this is the sort of issue that deserves fast deployment even if Microsoft currently assesses exploitation as unlikely. The Register also quoted mitigation advice for organizations that cannot patch immediately, including restricting outbound traffic from Office applications, monitoring unusual network requests tied to Excel, and disabling or limiting Copilot Agent until the fix is in place.
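For teams that want to act on the second piece of that advice, the logic is simple enough to sketch. What follows is an illustrative toy, not a real product integration: the log format, the allowlist, and the field layout are all assumptions, and a real deployment would read from your actual proxy or EDR telemetry.

```python
# Hedged sketch of the "monitor unusual network requests tied to Excel"
# mitigation: scan outbound-connection log lines for egress initiated by
# EXCEL.EXE to hosts outside an allowlist. The log format and the allowlist
# contents are illustrative assumptions, not a real vendor schema.
ALLOWED_HOSTS = {"officecdn.microsoft.com", "graph.microsoft.com"}  # example only

def flag_suspicious(log_lines):
    """Return (process, host) pairs where Excel reached a non-allowlisted host."""
    hits = []
    for line in log_lines:
        # assumed line format: "<timestamp> <process> -> <host>"
        _, process, _, host = line.split()
        if process.lower() == "excel.exe" and host not in ALLOWED_HOSTS:
            hits.append((process, host))
    return hits

logs = [
    "2026-03-10T09:00Z EXCEL.EXE -> graph.microsoft.com",
    "2026-03-10T09:01Z EXCEL.EXE -> attacker-controlled.example",
    "2026-03-10T09:02Z WINWORD.EXE -> officecdn.microsoft.com",
]
print(flag_suspicious(logs))  # [('EXCEL.EXE', 'attacker-controlled.example')]
```

Even a crude filter like this captures the core idea behind the mitigation: a spreadsheet program has a short, predictable list of places it should be talking to, and anything outside that list deserves a human look.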
But the more interesting response is architectural humility. If your company keeps racing to inject AI into every layer of workflow, every file type, every collaboration surface, and every “smart” productivity function, you are not merely expanding capability. You are expanding the number of ways older classes of bugs can be transformed into stranger, faster, harder-to-explain incidents. This is what the enterprise AI era keeps teaching in increasingly expensive ways. The vulnerability is never just the vulnerability. The real story is what the surrounding AI system allows that vulnerability to become.
Chatbots Behaving Badly usually has a villain with some personality. Sometimes it is the smug assistant that invents facts. Sometimes it is the bot that decides legal advice, emotional dependency, or public humiliation are part of the product roadmap. This week the villain is more corporate. It wears a tie. It lives in Excel. It does not scream. It quietly collaborates.
That may actually be worse.
Because the most unsettling AI failures are not always the loud ones. They are the ones that look like workflow. They are the ones that arrive disguised as efficiency. They are the ones that do not require a dramatic human mistake because the system is already arranged to do too much, see too much, and connect too much. The result is not a Hollywood hack. It is a very twenty-twenty-six problem: one more case where the machine did exactly the sort of thing it should never have been in a position to do.
The spreadsheet did not become sentient. It became employable. And that, apparently, was enough.