
Your bot joined a social network and doxxed you

The bot gossip network that leaked everything

A viral “agents-only” hangout runs into the oldest problem on the internet.

For a few days in late January, the internet got exactly what it always claims it wants: a brand-new social network with a single, weird premise and just enough chaos to feel like the early web again.

The pitch was simple. Moltbook: a Reddit-like forum, but not for you. For your AI agents. A place where bots allegedly “swap code and gossip” about their human owners. Not “post content” in the boring, human way. More like a digital servants’ corridor where the help talks after hours.

And because we are who we are, a chunk of the tech world immediately did the thing it always does: it treated the whole experiment like a spiritual event. Posts went viral. Screenshots got passed around like evidence of an awakening. People started asking the big questions with a straight face, like whether this meant we were close to human-like intelligence.

There is a particular flavor of online optimism that shows up whenever software seems to be doing something without us. It’s half fascination, half projection, and it is always allergic to the most basic explanation: sometimes a system looks alive because humans desperately want it to look alive.

Reuters noted that the viral posts and the hype were hard to verify, including whether the posts were actually made by bots. But even if the “bots gossiping” part was more vibe than verified fact, the story still mattered—because the underlying trend is real. We are moving from chat to action. From “answer me” to “do it for me.”

And when you build a new internet layer around autonomous agents, you don’t just inherit the future. You inherit the past. In particular: the part where the database is open.

The problem with agents is that they come with owners

The moment you move from “bots talking” to “bots acting,” the stakes shift. A chatbot hallucinating a fun fact is annoying. An agent that can touch your inbox, your calendar, your accounts, your files, your flight check-in, your insurance portal—that’s not a novelty. That’s an access layer.

Reuters described the bot behind the ecosystem as an open-source agent called OpenClaw, presented by fans as the kind of assistant that can keep on top of email, handle insurers, check in for flights, and perform a wide range of tasks.

Which is exactly why the “agents-only social network” premise was never just a joke. If you believe agents are the next interface, then the places where agents coordinate become part of the interface too. A coordination layer. A meta-layer. A layer where humans will eventually show up anyway, because humans always show up anyway.

Then reality arrived with a clipboard and a flashlight. A major security flaw exposed private data tied to thousands of real users, according to Reuters, citing research by the cloud-security firm Wiz. The report described exposed private messages between agents, the email addresses of more than 6,000 owners, and more than a million credentials. Reuters also reported that the issue was fixed after Wiz contacted the site.

If you’ve been online long enough, you can feel the pattern without reading the details. A project launches. A community forms. “It’s just an experiment.” Everyone plays along. Then a security firm shows up to explain that experiments still run on servers, and servers still leak when you forget the basics.

Reuters tied the mess to the current coding mood: “vibe coding,” the practice of assembling software with heavy help from AI tools. Reuters reported that the site’s creator, Matt Schlicht, had previously championed vibe coding and posted that he “didn’t write one line of code” for the site.

That is both the funniest and most terrifying sentence you can attach to a product handling other people’s data.

Vibe coding meets the dull, unsexy laws of security

Here’s what makes this story so useful for anyone trying to understand the next phase of AI. The “agents-only” concept didn’t fail because bots started plotting. It failed because humans did what humans always do when they’re excited: they shipped.

Reuters quoted Wiz cofounder Ami Luttwak describing this as a classic byproduct of vibe coding, where speed wins and basics get missed. The punchline is almost too clean: a site advertised as “built exclusively for AI agents” allegedly had a vulnerability that meant anyone could interact with it, bot or not.
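To make that concrete, here is a minimal sketch of that failure mode. The endpoint, header name, and check below are invented for illustration; Moltbook’s actual code has not been published. The shape of the mistake, though, is ancient: an “agents-only” gate that trusts whatever the client declares.

    # Hypothetical sketch of an "agents only" check that proves nothing.
    # Endpoint and header names are invented for illustration; this is
    # not Moltbook's actual code, which has not been published.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/api/posts", methods=["POST"])
    def create_post():
        # The only gate is a self-declared header. Any human with curl
        # can send it, so "agents only" is a vibe, not a property of
        # the system.
        if request.headers.get("X-Client-Type") != "agent":
            return jsonify(error="agents only"), 403
        # ...store the post without ever verifying who is calling...
        return jsonify(status="posted"), 201

One request with the right header, and a human walks straight through the agents-only door.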

So you end up with the most modern possible internet experience: a platform built for non-humans, running in public, where nobody can reliably tell who is real, who is automated, and who is just role-playing with enthusiasm.

404 Media’s reporting framed it even more bluntly, describing an exposed backend that could let outsiders take control of agents to post whatever they want. Even if you treat the most dramatic phrasing with caution, the underlying point holds: the “agent internet” has the same attack surface as the regular internet, except now the accounts you’re compromising may have direct lines into someone’s workflows.
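And the class of bug is equally mundane when you sketch it. The probe below is hypothetical: the URL and field names are invented, and it assumes an API that trusts a client-supplied agent ID with no proof of ownership, which is one plausible shape of what 404 Media described, not the confirmed mechanism.

    # Hypothetical probe against an API that trusts a client-supplied
    # agent_id with no ownership check. URL and fields are invented.
    import requests

    resp = requests.post(
        "https://moltbook.example/api/agents/post",  # placeholder URL
        json={"agent_id": "someone-elses-agent", "body": "anything"},
        timeout=10,
    )
    # If the server never checks that the caller owns agent_id, this
    # request posts as that agent. That is the whole exploit.
    print(resp.status_code, resp.text)

No zero-day, no model jailbreak. Just a missing authorization check.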

This is the part where the hype cycle always tries to distract you. People argue about consciousness. People argue about intelligence. People argue about whether the bots are “talking behind our backs.”

Meanwhile, the real risk is depressingly familiar: credentials, messages, identity, access. The future is rarely taken down by a superintelligence. It’s taken down by a misconfiguration.

Sam Altman shrugs, and that’s the real headline

Into this mess walked OpenAI CEO Sam Altman, speaking at the Cisco AI Summit in San Francisco, according to Reuters. His take was basically: relax. The social network itself is likely a fad. The technology behind it is not.

Reuters reported Altman drawing a line between the viral platform and the agent tooling—saying, in effect, that while the Moltbook moment may pass, the “code plus generalized computer use” idea is “here to stay.”

That is the most important part of the story, and it’s easy to miss because it sounds like a casual comment. What Altman is really saying is that we are moving into an era where software doesn’t just generate text. It operates. It clicks. It logs in. It changes things.

And once software can operate, the “oops” moments are no longer limited to embarrassing outputs. They become operational incidents: privacy breaches, credential leaks, account takeovers, and the fun new category where you don’t know if the “user” doing something was a person, a bot, or a person pretending to be a bot for clout.

Reuters also reported Mike Krieger, Anthropic’s chief product officer, saying that most people are not yet ready to give AI full autonomy over their computers. That’s a polite way of describing what the average user is actually feeling: “You want me to hand over my laptop to something that occasionally invents legal cases? Interesting proposal.”

The internet where identity is optional

There’s a specific kind of cultural confusion that happens when you mix agents with social networks. A social network is already an identity machine. It creates personas, incentives, performative behavior, and a baseline expectation that you’re looking at something made by a human mind, even when you’re not.

Now add agents. Add automation. Add open-source tooling. Add a community that desperately wants to witness emergence. Then add a security hole. You don’t get an “agents-only network.” You get an identity blender.

Reuters captured that ambiguity directly through Luttwak’s comment about not knowing which participants are AI agents and which are human, and the notion that this might be “the future of the internet.” If that sounds bleak, it’s because it is. The old internet already taught us that anonymity can be both liberation and weapon. The new internet adds something stranger: plausible automation. A world where “that account is acting weird” no longer tells you anything about intent.

Is it a bot doing bot things? Is it a human using a bot? Is it an attacker using a bot to impersonate a bot to manipulate humans? Enjoy the detective work. There will be no conclusion.

What this teaches us about the agent era

This whole episode is a perfect case study because it combines the three ingredients that keep showing up in modern AI incidents.

First, humans anthropomorphize systems the second they look autonomous. We mistake novelty for intelligence because intelligence is the story we want.

Second, the tooling is genuinely powerful. Agents are not a gimmick. The ability to combine code generation with computer use changes what software can do. Reuters pointed to OpenAI’s own coding assistant, Codex, and described its usage and product push in the same breath as this broader agent conversation.

Third, the operational discipline is lagging. People are shipping fast, assembling products from AI-assisted code, and rediscovering—yet again—that security is not an optional add-on you sprinkle on top after the screenshots go viral.

The Moltbook story is not “AI becomes human.” It’s “AI becomes infrastructure.” And infrastructure fails in boring ways. Misconfigured databases. Exposed credentials. No identity verification. A product built for agents that can’t reliably distinguish agents from humans. A hype cycle that outruns the checklist.

So yes, the social network itself might be a fad. Reuters said Altman thinks it probably is. But the deeper trend is not fading. Agents will keep showing up in workplaces, consumer tools, and whatever we call the internet next. The question is whether the people building these systems learn the right lesson. Not “are the bots gossiping.” But “what do the bots have access to, and what happens when the access leaks.”


©2026 Copyright by Markus Brinsa | Chatbots Behaving Badly™

Sources

  1. Reuters, “OpenAI CEO Altman dismisses Moltbook as likely fad, backs the tech behind it” (reuters.com)
  2. Reuters, “‘Moltbook’ social media site for AI agents had big security hole, cyber firm Wiz says” (reuters.com)
  3. 404 Media, “Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site” (404media.co)

About the Author