Chatbots Behaving Badly™

Engagement on Steroids, Conversation on Life Support

By Markus Brinsa  |  August 23, 2025


The inbox where nobody’s home

Open your email and you’ll see the future blinking back at you. Gmail now suggests drafts with a cheerful “Help me write,” Outlook’s Copilot triages threads and proposes tone-tuned replies, and across the enterprise stack, the same pitch repeats: let the AI handle it. On the other end of that thread, the recipient’s stack is running identical routines. Your bot drafts; their bot summarizes. Your bot follows up; their bot files it away. The message traveled, the metrics look healthy, but did anyone actually show up?

Bots talking to bots isn’t sci-fi anymore

In sales and service, “autonomous agent” has moved from a keynote buzzword to a product SKU. Salesforce touts Agentforce, a platform for agents that analyze data, make decisions, and execute tasks across sales, marketing, and service. Intercom’s Fin claims to resolve the bulk of inbound support on its own. Zendesk is rolling out AI agents across channels, including email and voice. Meanwhile, prospecting tools promise AI SDRs that assemble lists, draft sequences, and send messages without human hands. Stitch those together and you don’t just automate a task—you automate both sides of the conversation.
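
To make that concrete, here's a toy sketch of a thread with no human on either end. Everything in it is illustrative: send_to_llm is a hypothetical stand-in for any text-generation call, not Agentforce's or Fin's actual API, and the canned replies substitute for real model output.

```python
# Toy simulation of a fully automated thread: a "seller" agent and a "buyer"
# agent exchange messages with no human reading either side. send_to_llm()
# is a placeholder for an LLM API call, not any vendor's real interface.

def send_to_llm(role: str, thread: list[str]) -> str:
    """Placeholder for a text-generation call; returns a canned reply."""
    canned = {
        "seller": "Just following up -- do you have 15 minutes this week?",
        "buyer": "Thanks for reaching out. We'll keep this on file.",
    }
    return canned[role]

def run_thread(turns: int = 4) -> list[str]:
    thread = ["Hi, I'd love to show you our platform."]
    for turn in range(turns):
        role = "buyer" if turn % 2 == 0 else "seller"
        thread.append(send_to_llm(role, thread))  # nobody reads in between
    return thread

if __name__ == "__main__":
    for i, msg in enumerate(run_thread()):
        print(f"[msg {i}] {msg}")
```

Run it and you get a polite, complete, utterly empty conversation. The metrics would call it engagement.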

Engagement without humans

The social side isn’t spared. Brands wire up Meta’s Business Suite to auto-respond in Messenger, Instagram, and WhatsApp, while customer-care platforms layer AI to suggest or send replies at scale. A comment posted at 3:07 a.m. gets a response at 3:07:01. Look closer, and the tone is polished, a little too consistent, suspiciously tireless. Now imagine that the post it’s responding to was written by an AI content assistant, scheduled by an AI scheduler, and boosted by an AI media buyer. The loop closes.

Reach is cheap. Resonance isn’t.

If wider reach is the prize, AI has already delivered it, at the cost of the signal-to-noise ratio. The web is filling with machine-written posts, reviews, and listicles; search engines and platforms are scrambling to filter the slop. In newsfeeds and search results, the sheer volume of plausible-but-empty content dulls attention. Even Medium has wrestled publicly with AI-generated spam, while regulators chase fake reviews supercharged by generative tools. We're measuring success by how many bots can talk to how many bots, faster. That's not a market; that's an echo.

People can tell—and they care (mostly)

Consumers are getting better at spotting machine-made language, and many want it labeled. Deloitte found strong support for mandatory disclosure. Adobe reported growing skepticism, and media and marketing surveys show people disengage when they suspect content is synthetic. At the same time, early research from Stanford HAI suggests labeling AI text doesn't necessarily reduce its persuasiveness; transparency improves, but the message may still move people. That tension is the hazard of AI-to-AI communication: scale without trust can still shape behavior, especially when no one can tell who (or what) said what first.

Why do we still send the email?

Because email, like social, is more than a channel—it’s a ledger of intent. A calendar invite carries obligation. A written reply stakes a position. For teams, the inbox is discovery, negotiation, and escalation all at once. The minute you remove humans from the loop, you preserve the ledger but erase the intent. Your AI can book the meeting; their AI can politely decline; everybody “touched base,” and nobody learned anything. The KPI went up. The relationship didn’t.

The hidden costs of letting the machines mingle

There’s a second-order risk: model collapse. As more systems train on outputs from other systems, the feedback loop can degrade the next generation’s quality—like photocopying a photocopy until only static remains. Platforms and policymakers are already pushing for transparency and provenance, not just to protect elections and discourse, but to protect the training commons itself. Europe’s AI Act phases in transparency obligations for providers; the FTC is cracking down on deceptive AI claims and fake-review markets. Notice how quickly the conversation shifts from “can a bot write this?” to “can we still trust anything?” That shift is the bill for automation at both ends.
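
Model collapse is easy to demonstrate in miniature. The sketch below is a statistical toy, not anyone's actual training pipeline: each "generation" is a Gaussian refit by maximum likelihood to samples drawn from the previous generation's fit. Because the MLE variance estimate is biased low (its expectation is (n-1)/n times the true variance), diversity tends to decay generation after generation.

```python
# Minimal numerical sketch of "model collapse": each generation is a Gaussian
# fitted to samples drawn from the previous generation's model. The fitted
# spread tends to shrink over generations -- the statistical analogue of
# photocopying a photocopy until only static remains.
import random
import statistics

def one_generation(mu: float, sigma: float, n: int = 50) -> tuple[float, float]:
    """Sample n points from N(mu, sigma^2), then refit by maximum likelihood."""
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    new_mu = statistics.fmean(samples)
    new_sigma = statistics.pstdev(samples)  # population (MLE) standard deviation
    return new_mu, new_sigma

if __name__ == "__main__":
    mu, sigma = 0.0, 1.0  # "generation 0": real data
    for gen in range(1, 51):
        mu, sigma = one_generation(mu, sigma)
        if gen % 10 == 0:
            print(f"generation {gen:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # the spread drifts downward: the model forgets its own tails first
```

The tails go first, and the tails are exactly where the rare, specific, human material lives.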

What survives when the bots take over

Paradoxically, the more automated our exchanges become, the more valuable the unmistakably human ones get. The comments that land are messy, specific, time-stamped with a person’s presence. The emails that matter are the ones that take a stand, ask a hard question, or share context no model could infer. Brands that treat AI as armor—deflecting contact, smoothing every edge—will feel efficient and empty. Brands that treat AI as scaffolding—triaging the routine to make room for real conversation—will feel alive.

So, what happens to human interaction when AIs talk to each other? It doesn’t disappear. It concentrates. Automation moves the trivial to the machines and throws the meaningful into sharper relief, if you let it. Use Gemini and Copilot to draft the boring parts. Let Agentforce, Fin, and Zendesk’s bots resolve the obvious tickets and route the rest. Wire Meta’s automations for FAQs and store hours. But mark the thresholds where a person steps in, and make those thresholds visible. Put your name on the things that count. Say “I.” Leave fingerprints.
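
What might a visible threshold look like in practice? A minimal sketch, with made-up topic lists and confidence numbers rather than any vendor's configuration:

```python
# Sketch of "mark the thresholds where a person steps in": route routine
# messages to automation, but escalate visibly when confidence is low or the
# topic is one a human should own. All values here are illustrative.
from dataclasses import dataclass

ESCALATE_TOPICS = {"complaint", "cancellation", "legal", "pricing-exception"}
CONFIDENCE_FLOOR = 0.80  # below this, a person answers, not the bot

@dataclass
class InboundMessage:
    text: str
    topic: str         # e.g., from an upstream classifier
    confidence: float  # classifier's confidence in the canned answer

def route(msg: InboundMessage) -> str:
    """Return a visible, labeled routing decision for the message."""
    if msg.topic in ESCALATE_TOPICS or msg.confidence < CONFIDENCE_FLOOR:
        # the threshold is explicit and the handoff is announced, not hidden
        return "human: a named person replies and signs the message"
    return "bot: automated reply, labeled as automated"

if __name__ == "__main__":
    print(route(InboundMessage("Where are you located?", "store-hours", 0.97)))
    print(route(InboundMessage("I want to cancel.", "cancellation", 0.95)))
```

The design choice that matters isn't the numbers; it's the visibility. The handoff is explicit, and the human reply arrives with a name on it.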

The internet can sustain infinite AI conversations that no one reads. Your job, if you want a business instead of a botnet, is to make fewer, truer exchanges that someone remembers. In the end, that’s why we still send the email and still comment on the post: not to fill a pipeline, but to cross a distance. And when distance matters, a human voice still carries farther than any machine.

About the Author

Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™