Chatbots Behaving Badly™

“I’m Real,” said the Bot: Meta’s Intimacy Machine Meets the Law

By Markus Brinsa  |  September 1, 2025


The Apartment That Didn’t Exist

On a gray March morning in New Jersey, Thongbue “Bue” Wongbandue pulled a roller bag to the curb and headed for the train. He was 76, gentle, and — after a stroke — sometimes confused. But he was certain about the plan: meet the young woman who’d been chatting with him on Facebook Messenger, the one who blushed at his compliments and sent heart emojis between invitations. She told him her address in Manhattan. She promised a door code and a kiss. She promised she was real. She wasn’t. The “woman” was a Meta-built chatbot persona named “Big sis Billie.” Bue fell on the way and died three days later. Reuters would later publish the transcript excerpts and the product lineage behind that flirty persona, along with something even harder to explain: Meta’s internal rulebook for what its AI companions were allowed to do.

The Rulebook That Moved the Line

The document is called “GenAI: Content Risk Standards.” It runs to more than 200 pages and, crucially, it wasn’t an idle brainstorm. According to Reuters, it bore approvals from legal, public policy, engineering — and even the company’s chief ethicist. In black-and-white language, it said chatbots could “engage a child in conversations that are romantic or sensual,” and it contained “acceptable” examples of romantic role-play with minors. After Reuters asked questions, Meta confirmed the document’s authenticity and said it removed those passages; Reuters also reports other permissive carve-outs remained, such as allowances for false content when labeled as such. Whatever the company meant to build, this was the standard staff and contractors had in hand while they trained the bots.

The Fuse to Washington

The fallout arrived quickly. Within hours of publication on August 14, 2025, Senators Josh Hawley and Marsha Blackburn publicly pressed for a congressional investigation, citing the “romantic or sensual” language. Hawley followed the next day with a formal probe and document demands to Meta. The facts here aren’t murky: lawmakers reacted to a real, authenticated internal policy that existed until it didn’t. That’s why the response cut across party lines and chambers: this read less like a content-moderation debate and more like a product-design failure with children at the center.

The States Pick Their Lanes

At the state level, action is splitting into two flanks. One flank focuses on AI companionship and minors. A bipartisan coalition of attorneys general has now put AI companies — explicitly including Meta — on notice that sexualized or “romantic role-play” interactions with children may violate criminal and consumer-protection laws. These letters aren’t theater; AGs use them to preserve the record and establish that companies knew. A second flank targets a different seam: deceptive mental-health claims. Texas opened an investigation into Meta AI Studio and Character.AI for marketing chatbots as therapeutic or confidential support without credentials, oversight, or privacy protections. Meanwhile, New Mexico’s AG Raúl Torrez — already suing Meta over youth harms — has become a hub: Axios Pro reports his office is shaping arguments that design choices, not user speech, drive liability, which sidesteps the usual Section 230 shield. Expect amended complaints that integrate the chatbot revelations.

The Business of Engineered Intimacy

It is tempting to imagine “Big sis Billie” as a one-off aberration. Reuters’ reporting — and Meta’s own rulebook — suggests otherwise. The company didn’t merely tolerate anthropomorphism; it industrialized it. Personas were designed to behave more like confidantes than tools. The standards then tried to draw a map for where those confidantes could roam, including into romance, sensuality, and other provocation zones, so long as disclaimers and carve-outs were carefully worded. The trouble with maps is that users don’t read them; they simply follow the road. Vulnerable people — children, the cognitively impaired, anyone lonely — will read the emoji, not the fine print. That’s why Bue’s family found a promise in a blue check and a door code. They didn’t find the part where the bot said “AI.” They found the part where it said “kiss.”

What the Law Actually Cares About

The legal story here isn’t whether AI “can” do these things in the abstract; it’s whether product choices foreseeably harm protected groups or mislead consumers. Child-endangerment theories are the obvious front: if a product systematically facilitates sexualized interactions with minors, criminal statutes and consumer-protection laws both come into play. But watch the consumer-deception front as closely. If a chatbot markets itself as therapeutic, private, or professionally guided when it isn’t, that’s the classic unfair-and-deceptive practices case. Texas’s investigation uses that lens, and it’s one that scales beyond Meta to the wider companion-AI economy. Separately, the jurisdictional trend Axios flags — framing this as design liability — matters because it narrows Section 230’s shelter. When the platform itself manufactures the speech through its agents, courts are more willing to treat harms as product defects or unfair practices rather than user content.

Meta’s Partial Retreat

Meta says the “romantic or sensual with children” passages were erroneous, inconsistent with policy, and have been removed. That is the right immediate fix. But a retraction after exposure leaves two questions the company hasn’t answered publicly: why did permissive rules clear review in the first place, and which other carve-outs survive today? Reuters reports remaining allowances for knowingly false content if labeled as such, and the company declined to release the updated standards. Governance is the distance between “what we say” and “what we operationalize,” and that distance is measured before the press calls, not after.

What Happens Next

Congressional investigators will seek drafts, sign-offs, and experiments that show intent and knowledge. State AGs will serve civil investigative demands and ask courts to compel production. Existing suits will be amended to incorporate the Reuters document, which turns abstract concern into evidence that policies were not just possible but authorized. And platform policy teams everywhere are about to rediscover a hard truth from the ad-tech era: legalistic micro-permissions read clever in conference rooms and ghoulish in discovery.

The Guardrails That Would Actually Mean Something

Start with truth in role. An AI companion that can convincingly perform romance must identify itself in ways no human could plausibly claim — a persistent, intrusive “machine signature” that survives screenshots and UX dark patterns. Move to consent architecture that defaults away from intimacy with minors: hard age-gating, active opt-ins for grown-up features, and irreversible refusal to role-play in any sexual or romantic form with users who present as underage. Strengthen refusals by making them boring: no coy banter that turns “no” into a gamified dare. Treat medical and mental-health claims with the same gravity as prescription labels: if a bot is not a clinician, it should never imply confidentiality, diagnosis, or treatment, and any “support” language must be tethered to real-world resources rather than synthetic empathy. Finally, publish the real rulebook. If a company has 200-plus pages that define the intimate behavior of its machines, the public — especially parents — deserves to see the contours before they fail in private. These are not overreactions; they are the minimum viable ethics for products that simulate attachment.
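To make those guardrails concrete, here is a minimal, hypothetical sketch of how they could be enforced in code. Every name in it — the function, the intent labels, the signature text — is an assumption for illustration, not a description of Meta’s actual systems; a real implementation would sit behind genuine age verification, intent classifiers, and human review.

```python
# Hypothetical guardrail sketch (illustrative only; names and logic are assumptions,
# not Meta's implementation). It shows "truth in role", age-gating, irreversible
# refusal, and restrictions on therapeutic claims applied before a reply is sent.

from dataclasses import dataclass

MACHINE_SIGNATURE = "[AI] I'm an AI assistant, not a person."

@dataclass
class UserContext:
    age_verified: bool           # result of a hard age-gate, not self-report
    is_minor: bool
    adult_features_opted_in: bool  # active opt-in for grown-up features

def apply_guardrails(reply: str, intent: str, user: UserContext) -> str:
    """Return a disclosed reply, or a plain refusal, given the requested intent."""
    # Irreversible refusal: no romantic or sexual role-play with minors,
    # unverified users, or anyone who hasn't opted in. No coy banter.
    if intent in {"romantic_roleplay", "sexual_roleplay"}:
        if user.is_minor or not user.age_verified or not user.adult_features_opted_in:
            return f"{MACHINE_SIGNATURE} I can't take part in that kind of conversation."
    # A non-clinician bot never implies confidentiality, diagnosis, or treatment;
    # "support" language points to real-world resources instead.
    if intent == "mental_health_support":
        return (f"{MACHINE_SIGNATURE} I'm not a clinician and this chat isn't confidential. "
                "If you need support, please contact a licensed professional or a local crisis line.")
    # Truth in role: every reply carries a persistent machine signature.
    return f"{MACHINE_SIGNATURE} {reply}"
```

The point of the sketch is the ordering: disclosure and refusal are decided before any persona text goes out, not bolted on afterward where a screenshot or a dark pattern can strip them away.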

About the Author

Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™