
The Cyber Chief Who Fed ChatGPT

The job description that didn’t include “please don’t paste this into ChatGPT”

There are plenty of ways to embarrass yourself at work. Accidentally “reply all” to the entire company. Forget you’re not on mute and narrate your lunch choices during a board call. Forward a meme to your boss that was absolutely meant for your group chat. But if you’re the acting head of America’s civilian cyber defense agency, there is a special tier of embarrassment reserved for a very specific move: taking government material marked “for official use only” and uploading it to a public consumer chatbot.

Not a secure internal tool. Not a fenced-off government environment. Not even a “company account” with enterprise controls and retention policies you can point to in a meeting when someone asks, “Wait, where does that data go?” Public ChatGPT.

And this wasn’t an intern with a deadline and a caffeine deficit. The reporting says it was Dr. Madhu Gottumukkala, the acting director of the Cybersecurity and Infrastructure Security Agency, the agency that spends much of its time warning others not to do exactly this.

What happened and why it set off alarms

Multiple outlets report that Gottumukkala uploaded contracting-related documents marked “for official use only” into ChatGPT last summer, and that the activity triggered automated security warnings designed to prevent the disclosure or theft of government material. The documents weren’t classified, but they were explicitly not meant for public release.

The timeline matters because it turns this from a “whoops” into a governance story. The uploads reportedly happened between mid-July and early August 2025, and security sensors flagged the behavior in early August, generating several alerts in the first week alone. That detail is not just background color. It’s the whole plot.

If sensors caught it quickly, the system is doing what it’s supposed to do. The question becomes: why was the acting director able to do it at all?

Because he reportedly had special access. At the time, most Department of Homeland Security employees were blocked from using ChatGPT, and Gottumukkala had been granted an exception. Not the fun kind of exception, like “you can expense extra guacamole.” The kind that exists because leadership believes they can handle risk better than everyone else, right up until they demonstrate in public that they cannot.

The classic enterprise failure wearing a government badge

If you’ve worked in any large organization, you’ve seen the template. Step one is policy. “Don’t paste sensitive data into public tools.” Step two is enforcement. “We’ve blocked that tool on corporate networks.” Step three is reality. “We’re granting exceptions for a few people.” Step four is inevitability. “One of the people with an exception creates the exact incident the policy was written for.”

The reporting makes this feel uncomfortably familiar: controlled tools exist, but the public tool is more convenient. The official tool feels slow, constrained, or annoying. The public tool is fast, fluent, and frictionless. So someone with enough authority decides friction is for other people.

This is where “shadow AI” stops being a cute phrase and starts being a management indictment. Shadow AI is not only employees sneaking tools past IT. Shadow AI is also leadership normalizing the use of consumer AI as a default productivity layer while controls lag behind. It’s the moment “everyone is doing it” becomes the organization’s real policy.

And once it’s leadership doing it, you can stop pretending this is a training problem and start calling it what it is: a governance failure.

“For official use only” is not nothing

Many executives hear “not classified” and mentally translate it as “basically fine.” That’s how this kind of story keeps happening.

“For official use only” is not a decorative label. It’s a restriction. It exists because information can be sensitive without being classified. Contracting documents can reveal vendor relationships, procurement timelines, internal priorities, and operational details that don’t belong in the open. It’s the type of material that, in the wrong context, can be used for manipulation, targeting, or competitive advantage. Even when the content isn’t explosive, the metadata can be.

And then there’s the AI angle. Putting internal material into a public large language model is not like emailing it to yourself. You’re no longer inside your environment. You’re feeding data into a system governed by someone else’s retention, training, and access logic. Even if the vendor says they have safeguards, you’ve still lost something fundamental: control.

That loss of control is the real harm. It’s not always a dramatic leak. It’s the fact that the organization can no longer answer basic questions with confidence: Where did that data go? Who can access it? How long is it stored? Can it be retrieved? Can it show up somewhere else?

In other words, the exact questions CISA would ask if this happened at a power company, a hospital network, or a state government.

The official response that tries to make it smaller 

The reporting also includes the familiar official-language move: contain the blast radius by shrinking the timeline and emphasizing controls. 

A CISA spokesperson described Gottumukkala’s use as “short-term and limited.” CISA’s director of public affairs, Marci McCarthy, told Politico (as quoted by other outlets) that he was granted permission to use ChatGPT “with DHS controls in place,” and that CISA’s posture is to block ChatGPT by default unless an exception is granted. She also said his last recorded use was in mid-July 2025. 

There are two ways to read this. The generous reading is that there was an approved, time-boxed exception, wrapped in DHS controls, and the incident did not result in meaningful exposure. The less generous reading is that “controls in place” is a phrase people use when they want you to stop asking what the controls were, whether they were appropriate, and why they didn’t prevent exactly this outcome. 

Because the story isn’t that a guy used ChatGPT. The story is that the top person at the agency that warns about data leakage used the public version to handle restricted material, and the system that is supposed to prevent leaks lit up with alerts. That’s not “short-term and limited.” That’s a case study.

Why this matters beyond one person

This is not primarily a character story. It’s a pattern story. It’s about incentives. The public tool is better. The approved tool is worse. The organization wants productivity gains and AI credibility. Leadership wants to look modern. Security teams want to avoid headlines. So exceptions become the compromise. And exceptions become the vulnerability. And eventually exceptions become the incident. 

The reporting frames the episode as raising questions about AI governance inside the agency responsible for defending federal networks and critical infrastructure. That’s the clean, professional way to put it. The messy human way is: if the people in charge of cyber defense can’t resist the convenience of a consumer chatbot, your average enterprise has no chance unless it designs for human behavior rather than policy compliance fantasies. 

The control surface for AI is not just technical. It’s cultural. It’s who gets told “no,” who gets told “yes,” and who gets to bypass the guardrails because they’re senior enough to believe guardrails are optional.

The uncomfortable lesson for everyone shipping “AI productivity” 

This incident also lands at a particularly awkward moment for AI adoption. Organizations are actively encouraging experimentation with chatbots and copilots. Leaders want faster writing, faster analysis, faster everything. They want “AI fluency” across the workforce. But fluency without constraints becomes improvisation with sensitive data.

The promise of generative AI inside organizations depends on one unglamorous requirement: containment. You can’t tell people “use AI everywhere” and then act shocked when they use it everywhere, including places they shouldn’t. You can’t build a culture of “move fast” and then expect perfect judgment around data handling.

That contradiction is exactly why this story has legs. It’s not exotic. It’s normal. It’s the logical outcome of the last two years of AI hype meeting the oldest human behavior in enterprise: people take shortcuts to get work done. The villain is not the model. It’s the workflow. 

The ending nobody likes 

The most unsettling part is not the upload. It’s that the system caught it, the agency did an internal review, and the public still doesn’t know what the review concluded. That’s not a demand for drama. That’s a demand for accountability. If the incident had no impact, say so clearly and explain why. If it had impact, say what changed. “We looked into it” is the institutional equivalent of “trust me, bro.”

Because without a clear outcome, the public is left with the only conclusion people ever reach in the absence of facts: leadership will keep getting exceptions, and exceptions will keep being the place where controls fail. And the next time a government agency warns the private sector not to paste restricted material into chatbots, everyone will remember the part where the warning came from an agency whose own leader did it first.

About the Author: Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. As an AI Risk & Governance Strategist, he created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 30 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.  


©2026 Copyright by Markus Brinsa | Chatbots Behaving Badly™

Sources

  1. TechCrunch - Trump’s acting cybersecurity chief uploaded sensitive government docs to ChatGPT techcrunch.com
  2. CSO Online - CISA chief uploaded sensitive government files to public ChatGPT csoonline.com
  3. IT Pro - CISA’s interim chief uploaded sensitive documents to a public version of ChatGPT – security experts explain why you should never do that itpro.com
  4. The Independent - Trump’s head of cyber security uploaded ‘sensitive’ materials to a public ChatGPT independent.co.uk
  5. SC Media - CISA acting director reportedly uploaded sensitive documents to ChatGPT scworld.com
