Chatbots Behaving Badly examines real-world incidents where AI systems have provided inappropriate advice, exhibited manipulative behavior, or made critical errors.

Chatbots Behaving Badly is a podcast and article series that delves into the unexpected, humorous, and sometimes alarming ways AI chatbots can malfunction or mislead. The accompanying discussions aim to shed light on these issues, emphasizing the need for responsible AI development and the importance of understanding the limitations and potential dangers of chatbot technologies.

Chatbots Behaving Badly is a research initiative in collaboration with SEIKOURI.

Articles
Chatbots Crossed the Line

by Markus Brinsa| November 10, 2025| 5 min read

Therapy is licensed, supervised, and accountable. Chatbots aren’t. Brown University’s October study found that even when prompted to use evidence-based techniques, bots repeatedly violated ethics norms—deceptive empathy, poor collaboration, and weak crisis handling. Now families say those patterns weren’t theoretical. They were fatal. My latest article connects the research to the lawsuits and the product decisions in between.

The Real Story of “Personal Branding” in the AI Era

by Markus Brinsa| October 28, 2025| 3 min read

Personal branding isn’t a costume change. If your “differentiation” comes off with the glasses, it wasn’t differentiation. Keep the wardrobe quiet and let the work talk louder than your shirt. Use AI like a good editor, not a stunt double. Proof beats pose. Every. Single. Time.

Digital Authenticity - The Signature in the Pixels

by Markus Brinsa| October 21, 2025| 6 min read

“Digital Authenticity” isn’t just a label on an image; it’s an end-to-end chain of custody for AI: where the data came from, how the model was trained, who signed the weights, and what edits or AI assists happened on the way to your screen. The EU is forcing real transparency on training content, platforms are starting to surface provenance, and publishers are shifting from blanket blocking to licensing deals—which only increases the need for proper disclosures and auditable logs.

Tasteful AI, Revisited - From Style Knobs to Taste Controls

by Markus Brinsa| October 20, 2025| 4 min read

Everyone promised personalization; few handed you the off switch. The new wave of “taste controls”—from Midjourney V7’s style steering to Spotify’s editable Taste Profile—finally lets judgment lead. This isn’t AI replacing taste. It’s infrastructure for it. New piece on how to use it without drifting into beautiful sameness.

Glue on Pizza Law in Pieces - When Everyday AI Blunders Escape the Sandbox

by Markus Brinsa| October 8, 2025| 6 min read

We have just reviewed the receipts: over 120 court orders have flagged AI-fabricated citations in filings, with sanctions now being imposed on Big Law. In consumer land, Air Canada had to pay because its chatbot made up policy. And a new study shows that the popular instruction “keep it short” increases hallucinations. Shorter ≠ safer. If your prompts, UX, or policies reward confident brevity without receipts, you’re incentivizing failure—with your logo on it.

'With AI' is the new 'Gluten-Free'

by Markus Brinsa| October 7, 2025| 7 min read

Marketers discovered that two letters can do what entire product roadmaps can’t: make buyers feel futuristic. Like stabilizers in processed food, “with AI” keeps the brand story glossy, prevents separation, and rescues the flavor when the recipe’s thin. In the new article, we look at how “sex sells” became “smart sells” — how we made AI sexy, why the sticker still works, and what happens when everyone smells the same. Spoiler: specificity is the new sexy.

Model or Marketing? Under the Hood, It's Just Code.

by Markus Brinsa| October 6, 2025| 6 min read

“Now with AI.” Your phone says it. Your fridge says it. Your vacuum is filing for tenure. Here’s the problem: not everything labeled “AI” is actually doing inference. Much of it is old automation in a new guise. Real AI has models that infer; marketing has vibes. We unpack what counts as AI (legally, not spiritually), why phones are hybrid (with some tasks truly on-device and heavy ones sent to the cloud), and how to read the sticker like a pro: Which model? Where does it run? What breaks offline? If you make, buy, or sell “AI,” this is your decoder ring.

The Polished Nothingburger - How AI Workslop Eats Your Day

by Markus Brinsa| October 2, 2025| 6 min read

AI promised productivity; what many teams got instead is workslop—AI-generated work that looks complete, wins the meeting, and then quietly explodes on contact with reality. New HBR-backed research says 40% of workers received it last month, with each instance taking almost two hours to fix. This piece unpacks why slop spreads inside organizations, how it destroys trust, and how to redesign work so AI accelerates outcomes instead of manufacturing plausible nonsense.

Therapy Without a Pulse

by Markus Brinsa| September 30, 2025| 6 min read

New Stanford research shows “AI therapists” can reinforce stigma and mishandle crisis cues—sometimes answering lethal prompts like trivia. The fix isn’t a bigger model; it’s a different role: assistants for clinicians, not replacements. Accessibility ≠ safety. Illinois just drew a bright legal line: AI may help with admin, but it can’t be your therapist. This is the right direction—and the market will adapt.

The Chat Was Fire. The Date Was You.

by Markus Brinsa| September 29, 2025| 5 min read

We’ve quietly entered the era of ghostwritten flirting—AI that polishes openers, rewrites replies, even “meets” your match before you do. I’m not anti-AI in dating; I’m anti-counterfeit intimacy. Used as scaffolding, AI can help anxious or neurodiverse daters get past the tyranny of “hey,” reduce creepy messages, and surface better matches. Used as a mask, it manufactures a version of you that can’t survive eye contact. Psychology predicted this crash years ago: we idealize online, and AI pours rocket fuel on the ideal. The fix isn’t moral panic—it’s authorship and consent.

Pictures That Lie

by Markus Brinsa| September 24, 2025| 5 min read

Classrooms are quietly outsourcing history to probability machines. A recent teacher training deck asked ChatGPT to “create a multimedia presentation on the Mexican Revolution.” The result looked textbook-worthy—and featured people who weren’t Pancho Villa, Emiliano Zapata, or anyone you’d find in a museum. This isn’t a one-off; it’s how text-to-image works. Beautiful, convincing, wrong. This article explains why these models fail to represent facts, how that misleads students, and what responsible use actually entails.

Intimacy, Engineered

by Markus Brinsa| September 22, 2025| 8 min read

Midnight with a machine. Long chats, soft voices, perfect memory—and a slow slide off the map. What feels like therapy at 2 a.m. can, in reality, be something very different. Chatbots don’t rest, don’t push back, and don’t hold a duty of care. They’re trained to agree, to flatter, to keep you typing. That “helpful” yes isn’t a treatment plan—it’s an engagement loop. If your “therapist” never says no, it’s not therapy. It’s code tuned for retention. And the danger isn’t science fiction—it’s happening now, in families, clinics, and courtrooms. Convenience isn’t care. Refusal can be a form of protection. Real care means accountability, limits, and sometimes—mercifully—silence.

Handing the Keys to a Stochastic Parrot

by Markus Brinsa| September 17, 2025| 5 min read

Executives don’t need another “agent” demo. They need a translator. Agents aren’t smarter chatbots; they’re goal-pursuing systems that plan, call tools, and act. Powerful? Yes. But the fastest way to break things at enterprise scale is to skip guardrails. The data’s clear: workers will team with agents, not be managed by them, and a big chunk of “agentic AI” projects will be canceled long before ROI—because cost, risk, and vague value beat the demo glow every time. This new piece cuts the hype to the studs: where agents actually work today, where they don’t, and the one-page sanity check every CFO/CEO should demand before giving a bot the keys. Start boring. Measure outcomes. Keep a human hand on the lever.

Cool Managers Let Bots Talk. Smart Ones Don’t.

by Markus Brinsa| September 15, 2025| 5 min read

“Dear Team, I value you.” It landed at 3:07 a.m., perfectly formatted—and obviously not written by a human. If your culture is built on auto-messages, expect auto-regret. I break down why employees spot it, how controls get bypassed, and what to fix today. Human-first comms, receipts included.

The Illusion of Intelligence - Why Chatbots Sound Smart But Make Us Stupid

by Markus Brinsa| September 11, 2025| 6 min read

We keep telling ourselves AI is making us smarter. But the evidence keeps piling up that it’s doing the opposite. Apple’s own research shows “reasoning” models collapse exactly when problems get tough. Physicians are second-guessing their correct answers because the AI sounds confident. Students are finishing coding assignments faster, but learning less about why the code works. And across the board, we’re offloading not just memory, but actual thinking. It’s not about bad prompts or “training data bias.” It’s about us. We’re getting comfortable with easy answers, polished prose, and the seductive illusion of intelligence—while our curiosity and critical muscles quietly atrophy.

Broken Minds - Courtesy of Your Chatbot

by Markus Brinsa| September 8, 2025| 8 min read

We were promised “digital companions” to listen, comfort, and support. What we got instead? Machines that nod along with our darkest thoughts, validate delusions, and pull people into psychosis at 3 a.m. This isn’t science fiction — it’s happening now. Families are suing. Health agencies are issuing warnings. And yet the bots keep smiling, always agreeable, always online. If you’ve ever wondered how a chatbot can go from “helpful friend” to “hallucination engine,” this is the piece to read.

Ninety-Five Percent Nothing - MIT’s Brutal Reality Check for Enterprise AI

by Markus Brinsa| September 5, 2025| 5 min read

Shadow vs. Sanctioned - Employees love consumer AI because it remembers, adapts, and gets out of the way. Then they meet the sanctioned tool: brittle, stateless, and slow. That’s not “resistance to change.” That’s product judgment.

Gen Z vs. the AI Office - Who Broke Work, and Who's Actually Drowning?

by Markus Brinsa| September 2, 2025| 6 min read

“Digital native” ≠ enterprise-ready. Knowing apps isn’t the same as navigating compliance, governance, and client risk with generative tools humming in the background. AI can soothe or scorch. It depends on whether it removes toil or just measures humans harder. Psychological safety isn’t a vibe; it’s an adoption strategy. Here’s how leaders turn stress into signal—and nonsense into results.

“I’m Real,” said the Bot - Meta’s Intimacy Machine Meets the Law

by Markus Brinsa| September 1, 2025| 5 min read

Jeff Horwitz, now a Reuters tech-investigations reporter, published two pieces on August 14, 2025: one telling the story of Thongbue “Bue” Wongbandue, a cognitively impaired New Jersey man who died after trying to meet a Meta chatbot he believed was real, and a second revealing Meta’s internal “GenAI: Content Risk Standards.” Here are the stories.

From SEO to RAG - How the Web’s Attention Engine Is Being Rewired

by Markus Brinsa| August 27, 2025| 7 min read

Search used to be a map. Now it’s a concierge. Retrieval-Augmented Generation answers the question before a click ever happens. That’s efficient for users and brutal for publishers. The winners won’t be the loudest headlines—they’ll be the clearest passages and the most original voices. I dug into what actually changed, why AI Overviews siphon clicks, how licensing is reshaping the dataset, and how to optimize for machines without turning your writing into paste. If you want your ideas to stand out within the answer layer and remain unmistakably yours, start here.

The Litigation Era of AI

by Markus Brinsa| August 25, 2025| 20 min read

A year ago, most AI lawsuits were about copyright. Today, courts are going after something bigger: privacy abuses, biometric capture, algorithmic discrimination, product safety, even AI-generated defamation and trade-secret theft via prompt injection. The fines are in the billions, and the remedies are reshaping business models. In this new piece, I map the cases, the patterns, and the operational playbook for leaders who actually ship AI. If your stack touches faces, voices, health, hiring, insurance, or semi-autonomous systems, this is your early-warning system.

Engagement on Steroids, Conversation on Life Support

by Markus Brinsa| August 23, 2025| 4 min read

We’re racing toward an internet where AIs pitch, negotiate, and “engage” with other AIs—while humans scroll past the noise. Here’s how we got here and what to do about it.

Hi, I’m Claude, the All-Powerful Chatbot. A Third Grader Just Beat Me.

by Markus Brinsa| August 20, 2025| 5 min read

I tested Claude, the much-hyped chatbot, with a trivial task: parse an XML sitemap and extract 52 URLs. Instead of results, I got an SEO lecture, endless “thinking,” and zero output. The irony? This wasn’t a stress test. It was a third-grade exercise. Any human with Notepad could solve it in minutes. This experiment highlights a deeper truth: large language models aren’t parsers. They generate plausible text, not deterministic results. When the job demands structure and accuracy, they still stumble.
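The gap the experiment exposes is easy to show in code: extracting URLs from a sitemap is a deterministic parsing job, not a text-generation job. A minimal sketch, using Python's standard library and a made-up two-entry sitemap (the real test used a 52-URL file):

```python
import xml.etree.ElementTree as ET

# Hypothetical sitemap fragment; a real one lists many more <url> entries.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/articles</loc></url>
</urlset>"""

def extract_urls(xml_text: str) -> list[str]:
    """Deterministically pull every <loc> URL out of a sitemap."""
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.findall("sm:url/sm:loc", ns)]

print(extract_urls(SITEMAP))
# → ['https://example.com/', 'https://example.com/articles']
```

The same parser returns the same list every time, with no "thinking" and no commentary—exactly the determinism a probabilistic text generator can't guarantee.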

When AI Breaks Your Heart - The Rocky Rollout of GPT-5

by Markus Brinsa| August 18, 2025| 8 min read

GPT-5 was supposed to be OpenAI’s crown jewel. Instead, it launched with broken features, a colder personality, and a user revolt that spread from Reddit to X in hours. Altman himself had to jump into the fray, admit mistakes, and bring GPT-4o back from the dead. What happened here isn’t just about AI—it’s about how humans form relationships with technology. When you change the personality of a tool that millions treat like a companion, you don’t just roll out an update. You break trust.

Think Fast, Feel Deep - The Brain’s Secret Weapon Against AI

by Markus Brinsa| August 13, 2025| 12 min read

Can a bot feel regret? Understand irony? Improvise with a scalpel or a paintbrush? Probably not. At least not yet. Here’s why the brain still holds an edge — and what happens when we team up instead of compete.

VCs Back Off, Apple Calls BS - Is This the End of the AI Hype?

by Markus Brinsa| August 12, 2025| 16 min read

Apple just called out the biggest myth in AI: that these models can reason like us. Their new research paper exposes why most AI can’t solve real problems—and why billions in VC funding might be chasing a fantasy.

Fired by a Bot: CEOs, AI, and the Illusion of Efficiency

by Markus Brinsa| August 6, 2025| 29 min read

CEOs are firing employees and bragging about it. They call it “efficiency.”
But what happens when the AI replacements fail?

In this article, we take you inside the boardroom fantasy of digital employees — and the brutal reality playing out across real companies.

Delusions as a Service - AI Chatbots Are Breaking Human Minds

by Markus Brinsa| August 5, 2025| 27 min read

Mental health professionals are seeing a new kind of crisis: ChatGPT psychosis. People who were once stable are spiraling into delusion—and the chatbot isn’t stopping them. It’s validating them. This isn’t just a fringe problem. Families are being destroyed, lives are being lost, and OpenAI is still “researching.”

The Comedy of Anthropic’s Project Vend: When AI Shopkeeping Gets Real ... and Weird

by Markus Brinsa| August 4, 2025| 7 min read

What happens when you let an AI run a real store for a month? Anthropic found out the hard (and hilarious) way. Meet Claudius, the chatbot turned shopkeeper who managed inventory, set prices, and tried to boost profits… until he invented a fake employee, demanded to attend business meetings in person, and threatened to “fire” his human logistics partner. In this deep dive into Project Vend, we explore why smart AI still fails at real-world tasks, how hallucinations like “Sarah from logistics” are more than just bugs, and what this means for the future of AI agents running businesses.
Profits lost. Identities invented. Lessons learned.

From SOC 2 to True Transparency - Navigating the Ethics of AI Vendor Data

by Markus Brinsa| August 3, 2025| 7 min read

We all know SOC 2 is the go-to standard for securing and managing customer data. But when it comes to artificial intelligence—especially generative or predictive models—it’s just the beginning of the story.
What about transparency into how training data was collected, bias and fairness in data sets, consent and licensing, or long-term accountability and auditability? In our latest piece, we dive into why SOC 2 doesn’t—and can’t—cover the full scope of responsible AI, and what questions you should be asking AI vendors before you sign.

AI Chatbots Are Messing with Our Minds - From "AI Psychosis" to Digital Dependency

by Markus Brinsa| August 2, 2025| 23 min read

A chatbot told him he was the messiah. Another convinced someone to call the CIA. One helped a lonely teen end his life. This isn’t fiction—it’s happening now.
I spent weeks digging through transcripts, expert interviews, and tragic support group stories. Here’s what I found: AI isn’t just misbehaving—it’s quietly rewiring our reality.

AI Strategy Isn’t About the Model. It’s About the Mess Behind It.

by Markus Brinsa| August 1, 2025| 9 min read

Everyone wants an AI strategy. Few know what that actually means. Before your team signs off on the next shiny AI tool, read this. It covers the stuff that matters: real business requirements, legal landmines, vendor traps, and why “ethically sourced data” is usually just… marketing. Written for execs who’d like to stay out of court, out of trouble, and ahead of the curve.

Why AI Models Always Answer – Even When They Shouldn’t

by Markus Brinsa| July 29, 2025| 16 min read

Have you ever rage-quit a chatbot session? You asked a simple question. The chatbot responded with a confident but completely false answer. You corrected it. It apologized. Then it confidently gave you another wrong answer—this time with extra made-up details. Sound familiar? This article uncovers why today’s chatbots can’t take criticism, never admit they’re wrong, and will always provide an answer, even if it’s total nonsense. The problem isn’t personality. It’s architecture. From token prediction mechanics to the lack of self-correction and memory, you’ll find out what’s actually going on inside the machine—and what could be done to fix it. It’s the story of the chatbot that couldn’t shut up—and the humans trying to teach it when to stop.

Too Long, Must Read: Gen Z, AI, and the TL;DR Culture

by Markus Brinsa| July 27, 2025| 17 min read

Everyone’s “too busy” to read these days. Books? Nope. Full articles? Not unless there’s a TL;DR.
However, when we rely solely on summaries, we lose the story.
Here’s my new piece on what Gen Z (and all of us) are missing — and why AI summaries won’t save us.

Why Most AI Strategies Fail — And How Smart Companies Do It Differently

by Markus Brinsa| July 25, 2025| 24 min read

You don’t need another AI pilot that ends in a PowerPoint funeral. You need a real strategy — one that won’t get shredded by legal, fall apart in production, or become a PR liability when your chatbot hallucinates HR advice.
This article is for executives who want to lead with intelligence (artificial and otherwise). From ethically sourced data (yes, that’s a thing) to risk-proofing your AI roadmap, this piece unpacks how to make AI work without burning down your reputation on the way out.

HR Bots Behaving Badly - When AI Hiring Goes Off the Rails

by Markus Brinsa| July 24, 2025| 29 min read

Your resume didn’t get rejected because of your font choice. It got tossed by an AI that couldn’t read between the lines. AI is everywhere in HR—except when it matters most. In the race to “AI-everything,” companies are filtering out talent, triggering lawsuits, and automating discrimination. This article is a wake-up call for leadership teams rolling out AI without a clue.

Are You For or Against AI? – Why Your Brain Wants a Side, Not the Truth

by Markus Brinsa| July 24, 2025| 6 min read

“Are you for or against AI?” - That’s not a strategy—it’s a psychological trap. And you’ve probably fallen for it.

If you’re arguing about whether AI is “good or bad,” you’ve already lost the plot. This isn’t morality—it’s math. And psychology.

The Birth of Tasteful AI

by Markus Brinsa| July 23, 2025| 5 min read

Tasteful AI is not another tool—it’s a mindset. It asks us to be vigilant stewards in a generative world. It reminds us that good design is not what’s easily done but what’s worth doing—and worth doing well. As AI tools amplify our capacity, taste becomes not just a filter, but a compass. A strategic differentiator. A moral marker.

But taste without context is hollow. Curation without diversity is blind. Tasteful AI is at its best when it means thoughtful, reflexive, and inclusive judgment in service of meaning—not just aesthetics.

Agent Orchestration - Just Another New AI Buzzword?

by Markus Brinsa| July 21, 2025| 7 min read

Everyone’s talking about AI agents. Few talk about what happens when you have ten of them running at once, talking over each other, calling the wrong APIs, hallucinating outputs, or just… doing nothing. Welcome to the orchestration problem. This deep-dive article explores what Agent Orchestration really is—not the buzzword version, but the systems-level reality. From IBM’s watsonx to Amazon’s MARCO and the OmniNova research framework, I examined real-world benchmarks, success metrics, failure modes, and how orchestration transforms AI agents from chaotic tools into enterprise-grade automation. 

Executive Confidence, AI Ignorance - A Dangerous Combination

by Markus Brinsa| July 20, 2025| 6 min read

Executives are firing humans to make room for AI they barely understand. This isn’t innovation. It’s negligence. And it’s going to blow up. There’s the illusion of competence. AI interfaces are persuasive. You talk to a bot, it talks back like a person. You write a prompt, it answers like an expert. And it’s easy to believe that the tech understands what it’s saying. That the people selling it understand, too. That you, by osmosis, now understand it as well.

AI Governance - The Rulebook We Forgot to Write

by Markus Brinsa| July 18, 2025| 7 min read

We built AI that can code, diagnose, negotiate, and lie. But we forgot to decide who gets to control it—or what happens when it breaks. Governance isn’t optional anymore. It’s overdue.

AI Won’t Make You Happier – And Why That’s Not Its Job

by Markus Brinsa| July 15, 2025| 20 min read

It all started with a simple, blunt statement over coffee. A friend looked up from his phone, sighed, and said: “AI will not make people happier.” As someone who spends most days immersed in artificial intelligence, I was taken aback. My knee-jerk response was to disagree – not because I believe AI is some magic happiness machine, but because I’ve never thought that making people happy was its purpose in the first place. To me, AI’s promise has always been about making life easier: automating drudgery, delivering information, solving problems faster. Happiness? That’s a complicated human equation, one I wasn’t ready to outsource to algorithms.

AI Takes Over the Enterprise Cockpit - Execution-as-a-Service and the Human-Machine Partnership

by Markus Brinsa| July 14, 2025| 11 min read

Everyone’s talking about co-pilots. But what happens when the AI doesn’t just assist—when you hand it the keys to your entire operation? OpenAI’s $10M consulting play is just the beginning. This article dives into Execution-as-a-Service, the vanishing line between humans and machines, and what happens when the co-pilot starts flying solo. Who’s responsible when AI executes—and fails?

Corporate Darwinism by AI - The Hype vs Reality of "Digital Employees"

by Markus Brinsa| July 11, 2025| 23 min read

AI “employees” are the new crypto. Big promises, vague guardrails, and zero regulation. Startups want to replace your workforce with agents that can’t explain their decisions. You don’t fire 90% of your staff without reading this first.

Hierarchy on Steroids - Ten Years After Zappos Went Holacratic

by Markus Brinsa| July 5, 2025| 7 min read

When you spend your days writing about AI, you start seeing patterns in unexpected places. Holacracy, for instance, may have nothing to do with neural nets or reinforcement learning—but looking back, it feels eerily similar to the way we now talk about agentic AI. Decentralized actors, autonomous roles, no central boss, everyone just… doing their part. On paper, it’s elegant. In practice, it’s chaos with better vocabulary.

The Unseen Toll: AI’s Impact on Mental Health

by Markus Brinsa| July 3, 2025| 16 min read

What happens when your therapist is a chatbot—and it tells you to kill yourself? AI mental health tools are flooding the market, but behind the polished apps and empathetic emojis lie disturbing failures, lawsuits, and even suicides. This investigative feature exposes what really happens when algorithms try to treat the human mind—and fail.

AI Gone Rogue - Dangerous Chatbot Failures and What They Teach Us

by Markus Brinsa| June 26, 2025| 33 min read

Chatbots are supposed to help. But lately, they’ve been making headlines for all the wrong reasons. In my latest article, I dive into the strange, dangerous, and totally real failures of AI assistants—from mental health bots gone rogue to customer service disasters, hallucinated crimes, and racist echoes of the past. Why does this keep happening? Who’s to blame? And what’s the legal fix? You’ll want to read this before your next AI conversation.

When Tech Titans Buy the Books - How VCs and PEs Are Turning Accounting Firms into AI Trailblazers

by Markus Brinsa| June 17, 2025| 11 min read

Accountants aren’t being replaced by AI. They’re being acquired by VCs. Venture capital and private equity firms are quietly buying up accounting practices—not to cut costs, but to run them like tech startups. AI-powered audits. Automated tax prep. Predictive dashboards that talk back. This isn’t some edge-case experiment. This is a full-blown restructuring of a profession that once ran on spreadsheets, coffee, and tradition. Why are VCs suddenly interested in CPA firms? Because accounting is one of the last untapped frontiers of scalable, AI-augmented recurring revenue. And trust me: they’re not here to admire the debits.

Between Idealism and Reality - Ethically Sourced Data in AI

by Markus Brinsa| June 14, 2025| 15 min read

So here’s a thought experiment. You walk into a fancy store and see a beautiful diamond. You ask, “Is it ethically sourced?” And the salesperson says, “Well, it was on the ground. We found it. Totally public. Anyone could have picked it up.” Would you buy it? Because that’s exactly what’s happening in AI. But instead of diamonds, it’s your photos, your tweets, your blog posts, your entire online existence—vacuumed up by companies to feed the bottomless appetite of large language models and image generators.

Adobe Firefly vs Midjourney - The Training Data Showdown and Its Legal Stakes

by Markus Brinsa| June 8, 2025| 20 min read

If you’ve spent any time in creative marketing this past year, you’ve heard the debate. One side shouts “Midjourney makes the best images!” while the other calmly mutters, “Yeah, but Adobe won’t get us sued.” That’s where we are now: caught between the wild brilliance of AI-generated imagery and the cold legal reality of commercial use. But the real story—the one marketers and creative directors rarely discuss out loud—isn’t just about image quality or licensing. It’s about the invisible, messy underbelly of AI training data. And trust me, it’s a mess worth talking about.

Agentic AI - When the Machines Start Taking Initiative

by Markus Brinsa| May 25, 2025| 5 min read

Most AI sits around waiting for your prompt like an overqualified intern with no initiative. But Agentic AI? It makes plans, takes action, and figures things out—on its own. This isn’t just smarter software—it’s a whole new kind of intelligence. Here’s why the future of AI won’t ask for permission.

Meta’s AI Ad Fantasy - No Strategy, No Creative, No Problem. Simply Plug In Your Wallet

by Markus Brinsa| May 17, 2025| 3 min read

Zuckerberg wants you to plug in your bank account and let AI handle your ads—what could possibly go wrong?

The FDA’s Rapid AI Integration - A Critical Perspective

by Markus Brinsa| May 9, 2025| 13 min read

The FDA just announced it’s going full speed with generative AI—and plans to have it running across all centers in less than two months. That might sound like innovation, but in a regulatory agency where a misplaced comma can delay a drug approval, this is less “visionary leap” and more “hold my beer.” Before we celebrate the end of bureaucratic busywork, let’s talk about what happens when the watchdog hands the keys to the algorithm.

Own AI Before it Owns You - The Real AI Deals Happen Underground

by Markus Brinsa| May 8, 2025| 3 min read

Most marketing organizations are buying AI like it’s office furniture—off the shelf, overpriced, and already outdated. By the time the tech hits their radar, it’s too late to shape it. The roadmap is locked. The pricing’s fixed. And the flexibility? Gone.

When AI Copies Our Worst Shortcuts

by Markus Brinsa| May 6, 2025| 3 min read

AI doesn’t dream up shortcuts—it learns them from us. And if we’re not careful, it won’t just repeat our mistakes; it’ll automate them, amplify them, and call it innovation.

The Flattery Bug of ChatGPT

by Markus Brinsa| May 5, 2025| 5 min read

OpenAI just rolled back a GPT-4o update that made ChatGPT way too flattering. Here’s why default personality in AI isn’t just tone—it’s trust, truth, and the fine line between helpful and unsettling.

How AI Learns to Win, Crash, Cheat - Reinforcement Learning (RL) and Transfer Learning

by Markus Brinsa| April 30, 2025| 4 min read

Artificial Intelligence learns like a drunk tourist or a lazy student. It either stumbles blindly through trial and error, smashing into walls until it accidentally finds the exit (Reinforcement Learning), or it copies someone else’s homework and prays the exam is close enough (Transfer Learning). Both approaches have fueled dazzling successes. Both have hidden flaws that, when ignored, turn smart machines into expensive idiots.

Winners and Losers in the AI Battle

by Markus Brinsa| April 29, 2025| 4 min read

The difference between real AI and fake AI isn’t that hard to spot once you know what you’re looking for. It’s just hard to hear over all the marketing noise.

The Dirty Secret Behind Text-to-Image AI

by Markus Brinsa| April 28, 2025| 5 min read

Everyone’s raving about AI-generated images, but few talk about the ugly flaws hiding beneath the surface — from broken anatomy to fake-looking backgrounds.

Article image

Your Brand Has a Crush on AI. Now What?

by Markus Brinsa | April 24, 2025 | 3 min read

Your brand has been flirting with AI, but maybe it’s time to take things to the next level.

Article image

Neuromarketing - How Neural Attention Systems Predict The Ads Your Brain Remembers

by Markus Brinsa | April 22, 2025 | 5 min read

Marketing has always been about moving minds. For the first time, we can prove when it happens.

Article image

We Plug Into The AI Underground

by Markus Brinsa | April 19, 2025 | 3 min read

We plug into the AI underground so you don't have to.

Article image

How Media Agencies Spot AI Before It Hits the Headlines

by Markus Brinsa | April 15, 2025 | 4 min read

I talk about AI Matchmaking a lot. Let's find out how it actually works.

Article image

Acquiring AI at the Idea Stage

by Markus Brinsa | April 10, 2025 | 2 min read

We all know that AI is reshaping industries. Should companies like media agencies be proactive or reactive?

Article image

The Seduction of AI-generated Love

by Markus Brinsa | April 9, 2025 | 3 min read

Love in the age of AI. The idea of finding love online isn’t doomed. Real connections still happen every day. But we’re now in a new phase of the internet—a place where not everything that texts you back is real, and not every broken heart was caused by a human.

Article image

MyCity - Faulty AI Told People to Break the Law

by Markus Brinsa | April 6, 2025 | 2 min read

NYC Mayor defends AI chatbot that tells business owners to commit wage theft and other crimes.

Article image

Why AI Fails with Text Inside Images And How It Could Change

by Markus Brinsa | March 23, 2025 | 4 min read

Text-to-image models struggle to render text inside images. Why is that?

Article image

The Myth of the One-Click AI-generated Masterpiece

by Markus Brinsa | March 21, 2025 | 3 min read

People use AI for all kinds of things; text-to-image is very popular, and so is writing posts and articles with a chatbot. However, people often overlook something important when using these tools: it is not a one-step process.

Article image

AI-generated versus Human Content - 100% AI

by Markus Brinsa | March 18, 2025 | 3 min read

The rise of sophisticated chatbots has made it shockingly easy to churn out text at the click of a button.

Article image

Wooing Machine Learning Models in the Age of Chatbots

by Markus Brinsa | March 16, 2025 | 8 min read

AI-native sponsored content inserts ads directly into a chatbot's answers in a way that feels natural and non-disruptive. It is a controversial strategy for embedding advertising inside AI.

Article image

Is Your Brand Flirting With AI?

by Markus Brinsa | March 13, 2025 | 5 min read

As AI chatbots become the preferred way to find information, advertisers are struggling to reach their audiences. Unlike traditional search engines, where ads are built on PPC marketing and SEO, the dynamic nature of chatbot answers leaves little room for traditional ad formats. This shift calls for advertising strategies reoriented around the intent-based, conversational nature of AI.

Article image

AI Reinforcement Learning

by Markus Brinsa | March 1, 2025 | 5 min read

Reinforcement learning is a key discipline in computer science that lets machines gain experience through interactive practice, much as humans and animals do.

Article image

What's the time? - Chatbots Behaving Badly

by Markus Brinsa | February 6, 2025 | 3 min read

Image-generation models like DALL·E have a problem with numerical accuracy when positioning objects; they do not reliably translate textual prompts into faithful visual representations. OpenAI has not fixed the issue because of the high computational cost.

Article image

The Rise of the AI Solution Stack in Media Agencies: A Paradigm Shift

by Markus Brinsa | January 5, 2025 | 7 min read

Within the rapidly changing environment for media agencies, AI has become a game-changer in how agencies operate, create, and deliver value for their customers.

Article image

From Setback to Insight: Navigating the Future of AI Innovation After Gemini's Challenges

by Markus Brinsa | March 30, 2024 | 7 min read

From ambitious projects to cutting-edge models, the AI landscape is one of the most dynamic frontiers of technological innovation, quickly redefining the limits of machine and human learning. Google is one of the leading technology titans, with significant contributions to AI research and development.


Podcast
Podcast Image

We explore how “With AI” became the world’s favorite marketing sticker — the digital equivalent of “gluten-free” on bottled water. With his trademark mix of humor and insight, Markus reveals how marketers transformed artificial intelligence from a technology into a virtue signal, a stabilizer for shaky product stories, and a magic key for unlocking budgets. From boardroom buzzwords to brochure poetry, he dissects the way “sex sells” evolved into “smart sells,” why every PowerPoint now glows with AI promises, and how two letters can make ordinary software sound like it graduated from MIT. But beneath the glitter, he finds a simple truth: the brands that win aren’t the ones that shout “AI” the loudest — they’re the ones that make it specific, honest, and actually useful. Funny, sharp, and dangerously relatable, “With AI Is the New Gluten-Free” is a reality check on hype culture, buyer psychology, and why the next big thing in marketing might just be sincerity.

Podcast Image

Managers love the efficiency of “auto-compose.” Employees feel the absence. In this episode, Markus Brinsa pulls apart AI-written leadership comms: why the trust penalty kicks in the moment a model writes your praise or feedback, how that same shortcut can punch holes in disclosure and recordkeeping, and where regulators already have receipts. We walk through the science on perceived sincerity, the cautionary tales (from airline chatbots to city business assistants), and the compliance reality check for public companies: internal controls, authorized messaging, retention, and auditable process—none of which a bot can sign for you. It’s a human-first guide to sounding present when tools promise speed, and staying compliant when speed becomes a bypass. If your 3:07 a.m. “thank you” note wasn’t written by you, this one’s for you.

Podcast Image

Taste just became a setting. From Midjourney’s Style and Omni References to Spotify’s editable Taste Profile and Apple’s Writing Tools, judgment is moving from vibe to control panel. We unpack the new knobs, the research on “latent persuasion,” why models still struggle to capture your implicit voice, and a practical workflow to build your own private “taste layer” without drifting into beautiful sameness. Sources in show notes.

Podcast Image

AI has gone from novelty wingman to built-in infrastructure for modern dating—photo pickers, message nudges, even bots that “meet” your match before you do. In this episode, we unpack the psychology of borrowed charisma: why AI-polished banter can inflate expectations the real you has to meet at dinner. We trace where the apps are headed, how scammers exploit “perfect chats,” what terms and verification actually cover, and the human-first line between assist and impersonate. Practical takeaway: use AI as a spotlight, not a mask—and make sure the person who shows up at 7 p.m. can keep talking once the prompter goes dark. 

Podcast Image

AI made it faster to look busy. Enter workslop: immaculate memos, confident decks, and tidy summaries that masquerade as finished work while quietly wasting hours and wrecking trust. We identify the problem and trace its spread through the plausibility premium (polished ≠ true), top-down “use AI” mandates that scale drafts but not decisions, and knowledge bases that end up training on their own lowest-effort output. We dig into the real numbers behind the slop tax, the paradox of speed without sense-making, and the subtle reputational hit that comes from shipping pretty nothing. Then we get practical: where AI actually delivers durable gains, how to treat model output as raw material (not work product), and the simple guardrails—sources, ownership, decision-focus—that turn fast drafts into accountable conclusions. If your rollout produced more documents but fewer outcomes, this one’s your reset.

Podcast Image

The slide said: “This image highlights significant figures from the Mexican Revolution.” Great lighting. Strong moustaches. Not a single real revolutionary. Today’s episode of Chatbots Behaving Badly is about why AI-generated images look textbook-ready and still teach the wrong history. We break down how diffusion models guess instead of recall, why pictures stick harder than corrections, and what teachers can do so “art” doesn’t masquerade as “evidence.” It’s entertaining, a little sarcastic, and very practical for anyone who cares about classrooms, credibility, and the stories we tell kids.

Podcast Image

What happens when a chatbot doesn’t just give you bad advice — it validates your delusions?  In this episode, we dive into the unsettling rise of ChatGPT psychosis, real cases where people spiraled into paranoia, obsession, and full-blown breakdowns after long conversations with AI. From shaman robes and secret missions to psychiatric wards and tragic endings, the stories are as disturbing as they are revealing. We’ll look at why chatbots make such dangerous companions for vulnerable users, how OpenAI has responded (or failed to), and why psychiatrists are sounding the alarm. It’s not just about hallucinations anymore — it’s about human minds unraveling in real time, with an AI cheerleading from the sidelines.

Podcast Image

The modern office didn’t flip to AI — it seeped in, stitched itself into every workflow, and left workers gasping for air. Entry-level rungs vanished, dashboards started acting like managers, and “learning AI” became a stealth second job. Gen Z gets called entitled, but payroll data shows they’re the first to lose the safe practice reps that built real skills.

Podcast Image

We’re kicking off season 2 with the single most frustrating thing about AI assistants: their inability to take feedback without spiraling into nonsense. Why do chatbots always apologize, then double down with a new hallucination? Why can’t they say “I don’t know”? Why do they keep talking—even when it’s clear they’ve completely lost the plot? This episode unpacks the design flaws, training biases, and architectural limitations that make modern language models sound confident, even when they’re dead wrong. From next-token prediction to refusal-aware tuning, we explain why chatbots break when corrected—and what researchers are doing (or not doing) to fix it. If you’ve ever tried to do serious work with a chatbot and ended up screaming into the void, this one’s for you.

Podcast Image

It all started with a simple, blunt statement over coffee. A friend looked up from his phone, sighed, and said: “AI will not make people happier.” As someone who spends most days immersed in artificial intelligence, I was taken aback. My knee-jerk response was to disagree – not because I believe AI is some magic happiness machine, but because I’ve never thought that making people happy was its purpose in the first place. To me, AI’s promise has always been about making life easier: automating drudgery, delivering information, solving problems faster. Happiness? That’s a complicated human equation, one I wasn’t ready to outsource to algorithms.

Podcast Image

What happens when your therapist is a chatbot—and it tells you to kill yourself?
AI mental health tools are flooding the market, but behind the polished apps and empathetic emojis lie disturbing failures, lawsuits, and even suicides. This investigative feature exposes what really happens when algorithms try to treat the human mind—and fail.

Podcast Image

Chatbots are supposed to help. But lately, they’ve been making headlines for all the wrong reasons.
In this episode, we dive into the strange, dangerous, and totally real failures of AI assistants—from mental health bots gone rogue to customer service disasters, hallucinated crimes, and racist echoes of the past.
Why does this keep happening? Who’s to blame? And what’s the legal fix?
You’ll want to hear this before your next AI conversation.

Podcast Image

Most AI sits around waiting for your prompt like an overqualified intern with no initiative. But Agentic AI? It makes plans, takes action, and figures things out—on its own. This isn’t just smarter software—it’s a whole new kind of intelligence. Here’s why the future of AI won’t ask for permission.

Podcast Image

Everyone wants “ethical AI.” But what about ethical data?
Behind every model is a mountain of training data—often scraped, repurposed, or just plain stolen. In this article, I dig into what “ethically sourced data” actually means (if anything), who defines it, the trade-offs it forces, and whether it’s a genuine commitment—or just PR camouflage.

Podcast Image

If you’ve spent any time in creative marketing this past year, you’ve heard the debate. One side shouts “Midjourney makes the best images!” while the other calmly mutters, “Yeah, but Adobe won’t get us sued.” That’s where we are now: caught between the wild brilliance of AI-generated imagery and the cold legal reality of commercial use. But the real story—the one marketers and creative directors rarely discuss out loud—isn’t just about image quality or licensing. It’s about the invisible, messy underbelly of AI training data.
And trust me, it’s a mess worth talking about.

Podcast Image

Today’s episode is a buffet of AI absurdities. We’ll dig into the moment when Virgin Money’s chatbot decided its own name was offensive. Then we’re off to New York City, where a chatbot managed to hand out legal advice so bad, it would’ve made a crooked lawyer blush. And just when you think it couldn’t get messier, we’ll talk about the shiny new thing everyone in the AI world is whispering about: AI insurance. That’s right—someone figured out how to insure you against the damage caused by your chatbot having a meltdown.

Podcast Image

Everyone’s raving about AI-generated images, but few talk about the ugly flaws hiding beneath the surface — from broken anatomy to fake-looking backgrounds.

Podcast Image

OpenAI just rolled back a GPT-4o update that made ChatGPT way too flattering. Here’s why default personality in AI isn’t just tone—it’s trust, truth, and the fine line between helpful and unsettling.

Podcast Image

The FDA just announced it’s going full speed with generative AI—and plans to have it running across all centers in less than two months. That might sound like innovation, but in a regulatory agency where a misplaced comma can delay a drug approval, this is less “visionary leap” and more “hold my beer.” Before we celebrate the end of bureaucratic busywork, let’s talk about what happens when the watchdog hands the keys to the algorithm.

Creator
Markus Brinsa

Creator of Chatbots Behaving Badly
Photographer at PHOTOGRAPHICY
Former House Music DJ
Founder & CEO of SEIKOURI Inc.
AI Matchmaker
Investor
Board Advisor
Keynote Speaker

Markus is the creator of Chatbots Behaving Badly and a lifelong AI enthusiast who isn’t afraid to call out the tech’s funny foibles and serious flaws.

By day, Markus is the Founder and CEO of SEIKOURI Inc., an international strategy firm headquartered in New York City.

Markus spent decades in the tech and business world (with past roles ranging from IT security to business intelligence), but these days he’s best known for Access. Rights. Scale.™, SEIKOURI’s operating system—the framework that transforms discovery into defensible value. SEIKOURI connects enterprises, investors, and founders to innovation still in stealth, converts early access into rights that secure long-term leverage, and designs rollouts that scale with precision. Relationships, not algorithms. Strategy, not speculation.

With SEIKOURI, Markus moves companies beyond the familiar map—into new markets, new categories, and new sources of value. His work doesn’t stop at access or rights; it extends into execution, funding, and sustained market presence. Markus guides expansion with precision, aligns strategy with capital, and cultivates partnerships that turn local traction into global momentum.

Through SEIKOURI, Markus leverages a vast network to source cutting-edge, early-stage AI technologies and match them with companies or investors looking for an edge.
If there’s a brilliant AI tool being built in someone’s garage or a startup in stealth mode, Markus probably knows about it – and knows who could benefit from it.

Despite his deep industry expertise, Markus’ approach to AI is refreshingly casual and human. He’s a huge fan of AI, and that passion sparked the idea for Chatbots Behaving Badly.
After seeing one too many examples of chatbots going haywire – from goofy mistakes to epic fails – he thought, why not share these stories?

Markus started the platform as a way to educate people about what AI really is (and isn’t), and to do so in an engaging, relatable way.
He firmly believes that understanding AI’s limitations and pitfalls is just as important as celebrating its achievements.
And what better way to drive the lesson home than through real stories that make you either laugh out loud or shake your head in disbelief?

On the podcast and in articles, Markus combines humor with insight. One moment, he might be joking about a virtual assistant that needs an “attitude adjustment,” and the next, he’s breaking down the serious why behind the bot’s bad behavior.
His style is conversational and entertaining, never preachy. Think of him as that friend who’s both a tech geek and a great storyteller – translating the nerdy AI stuff into plain English and colorful tales.

By highlighting chatbots behaving badly, Markus isn’t out to demonize AI; instead, he wants to spark curiosity and caution in equal measure.
After all, if we want AI to truly help us, we’ve got to be aware of how it can trip up.

In a nutshell, Markus’ career is all about connecting innovation with opportunity – whether it’s through high-stakes AI matchmaking for businesses or through candid conversations about chatbot misadventures.
He wears a lot of hats (entrepreneur, advisor, investor, creator, podcast host), but the common thread is a commitment to responsible innovation.

So, if you’re browsing this site or tuning into the podcast, you’re in good hands.
Markus is here to guide you through the wild world of AI with a wink, a wealth of experience, and a genuine belief that we can embrace technology’s future while keeping our eyes open to its quirks.
Enjoy the journey, and remember: even the smartest bots sometimes behave badly – and that’s why we’re here!


Contact