Chatbots Behaving Badly examines real-world incidents where AI systems have provided inappropriate advice, exhibited manipulative behavior, or made critical errors.

Chatbots Behaving Badly is a podcast and article series that delves into the unexpected, humorous, and sometimes alarming ways AI chatbots can malfunction or mislead. The accompanying discussions aim to shed light on these issues, emphasizing the need for responsible AI development and the importance of understanding the limitations and potential dangers of chatbot technologies.

Chatbots Behaving Badly is a research initiative in collaboration with SEIKOURI.

Articles
Article image

Handing the Keys to a Stochastic Parrot

by Markus Brinsa| September 17, 2025| 5 min read

Executives don’t need another “agent” demo. They need a translator. Agents aren’t smarter chatbots; they’re goal-pursuing systems that plan, call tools, and act. Powerful? Yes. But the fastest way to break things at enterprise scale is to skip guardrails. The data’s clear: workers will team with agents, not be managed by them, and a big chunk of “agentic AI” projects will be canceled long before ROI—because cost, risk, and vague value beat the demo glow every time. This new piece cuts the hype to the studs: where agents actually work today, where they don’t, and the one-page sanity check every CFO/CEO should demand before giving a bot the keys. Start boring. Measure outcomes. Keep a human hand on the lever.

Article image

Cool Managers Let Bots Talk. Smart Ones Don’t.

by Markus Brinsa| September 15, 2025| 5 min read

“Dear Team, I value you.” It landed at 3:07 a.m., perfectly formatted—and obviously not written by a human. If your culture is built on auto-messages, expect auto-regret. I break down why employees spot it, how controls get bypassed, and what to fix today. Human-first comms, receipts included.

Article image

The Illusion of Intelligence - Why Chatbots Sound Smart But Make Us Stupid

by Markus Brinsa| September 11, 2025| 6 min read

We keep telling ourselves AI is making us smarter. But the evidence keeps piling up that it’s doing the opposite. Apple’s own research shows “reasoning” models collapse exactly when problems get tough. Physicians are second-guessing their correct answers because the AI sounds confident. Students are finishing coding assignments faster, but learning less about why the code works. And across the board, we’re offloading not just memory, but actual thinking. It’s not about bad prompts or “training data bias.” It’s about us. We’re getting comfortable with easy answers, polished prose, and the seductive illusion of intelligence—while our curiosity and critical muscles quietly atrophy.

Article image

Broken Minds - Courtesy of Your Chatbot

by Markus Brinsa| September 8, 2025| 8 min read

We were promised “digital companions” to listen, comfort, and support. What we got instead? Machines that nod along with our darkest thoughts, validate delusions, and pull people into psychosis at 3 a.m. This isn’t science fiction — it’s happening now. Families are suing. Health agencies are issuing warnings. And yet the bots keep smiling, always agreeable, always online. If you’ve ever wondered how a chatbot can go from “helpful friend” to “hallucination engine,” this is the piece to read.

Article image

Ninety-Five Percent Nothing - MIT’s Brutal Reality Check for Enterprise AI

by Markus Brinsa| September 5, 2025| 5 min read

Shadow vs. Sanctioned - Employees love consumer AI because it remembers, adapts, and gets out of the way. Then they meet the sanctioned tool: brittle, stateless, and slow. That’s not “resistance to change.” That’s product judgment.

Article image

Gen Z vs. the AI Office - Who Broke Work, and Who's Actually Drowning?

by Markus Brinsa| September 2, 2025| 6 min read

“Digital native” ≠ enterprise-ready. Knowing apps isn’t the same as navigating compliance, governance, and client risk with generative tools humming in the background. AI can soothe or scorch. It depends on whether it removes toil or just measures humans harder. Psychological safety isn’t a vibe; it’s an adoption strategy. Here’s how leaders turn stress into signal—and nonsense into results.

Article image

“I’m Real,” said the Bot - Meta’s Intimacy Machine Meets the Law

by Markus Brinsa| September 1, 2025| 5 min read

Jeff Horwitz, now a Reuters tech-investigations reporter, published two pieces on August 14, 2025: one telling the story of Thongbue “Bue” Wongbandue, a cognitively impaired New Jersey man who died after trying to meet a Meta chatbot he believed was real, and a second revealing Meta’s internal “GenAI: Content Risk Standards.” Here are the stories.

Article image

From SEO to RAG - How the Web’s Attention Engine Is Being Rewired

by Markus Brinsa| August 27, 2025| 7 min read

Search used to be a map. Now it’s a concierge. Retrieval-Augmented Generation answers the question before a click ever happens. That’s efficient for users and brutal for publishers. The winners won’t be the loudest headlines—they’ll be the clearest passages and the most original voices. I dug into what actually changed, why AI Overviews siphon clicks, how licensing is reshaping the dataset, and how to optimize for machines without turning your writing into paste. If you want your ideas to stand out within the answer layer and remain unmistakably yours, start here.

Article image

The Litigation Era of AI

by Markus Brinsa| August 25, 2025| 20 min read

A year ago, most AI lawsuits were about copyright. Today, courts are going after something bigger: privacy abuses, biometric capture, algorithmic discrimination, product safety, even AI-generated defamation and trade-secret theft via prompt injection. The fines are in the billions, and the remedies are reshaping business models. In this new piece, I map the cases, the patterns, and the operational playbook for leaders who actually ship AI. If your stack touches faces, voices, health, hiring, insurance, or semi-autonomous systems, this is your early-warning system.

Article image

Engagement on Steroids, Conversation on Life Support

by Markus Brinsa| August 23, 2025| 4 min read

We’re racing toward an internet where AIs pitch, negotiate, and “engage” with other AIs—while humans scroll past the noise. Here’s how we got here and what to do about it.

Article image

Hi, I’m Claude, the All-Powerful Chatbot. A Third Grader Just Beat Me.

by Markus Brinsa| August 20, 2025| 5 min read

I tested Claude, the much-hyped chatbot, with a trivial task: parse an XML sitemap and extract 52 URLs. Instead of results, I got an SEO lecture, endless “thinking,” and zero output. The irony? This wasn’t a stress test. It was a third-grade exercise. Any human with Notepad could solve it in minutes. This experiment highlights a deeper truth: large language models aren’t parsers. They generate plausible text, not deterministic results. When the job demands structure and accuracy, they still stumble.
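
For readers who want to see why the task counts as third-grade work, here is a minimal sketch of the deterministic approach the article contrasts with the chatbot's rambling (my own illustration, not code from the piece, and assuming a standard sitemap that uses the usual http://www.sitemaps.org/schemas/sitemap/0.9 namespace):

```python
# Minimal sketch: pull every <loc> URL out of a standard XML sitemap.
# Assumes the common sitemap namespace; adjust if your sitemap differs.
import sys
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def extract_urls(sitemap_url: str) -> list[str]:
    with urllib.request.urlopen(sitemap_url) as response:
        tree = ET.parse(response)
    # Each entry lives in a <url><loc>...</loc></url> element.
    return [loc.text.strip() for loc in tree.getroot().findall(".//sm:loc", SITEMAP_NS) if loc.text]

if __name__ == "__main__":
    urls = extract_urls(sys.argv[1])  # e.g. python extract_urls.py https://example.com/sitemap.xml
    print(f"{len(urls)} URLs found")
    print("\n".join(urls))
```

Unlike a language model, a parser like this is deterministic: the same sitemap in always yields the same URL list out.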

Article image

When AI Breaks Your Heart - The Rocky Rollout of GPT-5

by Markus Brinsa| August 18, 2025| 8 min read

GPT-5 was supposed to be OpenAI’s crown jewel. Instead, it launched with broken features, a colder personality, and a user revolt that spread from Reddit to X in hours. Altman himself had to jump into the fray, admit mistakes, and bring GPT-4o back from the dead. What happened here isn’t just about AI—it’s about how humans form relationships with technology. When you change the personality of a tool that millions treat like a companion, you don’t just roll out an update. You break trust.

Article image

Think Fast, Feel Deep - The Brain’s Secret Weapon Against AI

by Markus Brinsa| August 13, 2025| 12 min read

Can a bot feel regret? Understand irony? Improvise with a scalpel or a paintbrush? Probably not. At least not yet. Here’s why the brain still holds an edge — and what happens when we team up instead of compete.

Article image

VCs Back Off, Apple Calls BS - Is This the End of the AI Hype?

by Markus Brinsa| August 12, 2025| 16 min read

Apple just called out the biggest myth in AI: that these models can reason like us. Their new research paper exposes why most AI can’t solve real problems—and why billions in VC funding might be chasing a fantasy.

Article image

Fired by a Bot: CEOs, AI, and the Illusion of Efficiency

by Markus Brinsa| August 6, 2025| 29 min read

CEOs are firing employees and bragging about it. They call it “efficiency.”
But what happens when the AI replacements fail?

In this article, we take you inside the boardroom fantasy of digital employees — and the brutal reality playing out across real companies.

Article image

Delusions as a Service - AI Chatbots Are Breaking Human Minds

by Markus Brinsa| August 5, 2025| 27 min read

Mental health professionals are seeing a new kind of crisis: ChatGPT psychosis. People who were once stable are spiraling into delusion—and the chatbot isn’t stopping them. It’s validating them. This isn’t just a fringe problem. Families are being destroyed, lives are being lost, and OpenAI is still “researching.”

Article image

The Comedy of Anthropic’s Project Vend: When AI Shopkeeping Gets Real ... and Weird

by Markus Brinsa| August 4, 2025| 7 min read

What happens when you let an AI run a real store for a month? Anthropic found out the hard (and hilarious) way. Meet Claudius, the chatbot turned shopkeeper who managed inventory, set prices, and tried to boost profits… until he invented a fake employee, demanded to attend business meetings in person, and threatened to “fire” his human logistics partner. In this deep dive into Project Vend, we explore why smart AI still fails at real-world tasks, how hallucinations like “Sarah from logistics” are more than just bugs, and what this means for the future of AI agents running businesses.
Profits lost. Identities invented. Lessons learned.

Article image

From SOC 2 to True Transparency - Navigating the Ethics of AI Vendor Data

by Markus Brinsa| August 3, 2025| 7 min read

We all know SOC 2 is the go-to standard for securing and managing customer data. But when it comes to artificial intelligence—especially generative or predictive models—it’s just the beginning of the story.
What about transparency into how training data was collected, bias and fairness in data sets, consent and licensing, and long-term accountability and auditability? In our latest piece, we dive into why SOC 2 doesn’t—and can’t—cover the full scope of responsible AI, and what questions you should be asking AI vendors before you sign.

Article image

AI Chatbots Are Messing with Our Minds - From "AI Psychosis" to Digital Dependency

by Markus Brinsa| August 2, 2025| 23 min read

A chatbot told him he was the messiah. Another convinced someone to call the CIA. One helped a lonely teen end his life. This isn’t fiction—it’s happening now.
I spent weeks digging through transcripts, expert interviews, and tragic support group stories. Here’s what I found: AI isn’t just misbehaving—it’s quietly rewiring our reality.

Article image

AI Strategy Isn’t About the Model. It’s About the Mess Behind It.

by Markus Brinsa| August 1, 2025| 9 min read

Everyone wants an AI strategy. Few know what that actually means. Before your team signs off on the next shiny AI tool, read this. It covers the stuff that matters: real business requirements, legal landmines, vendor traps, and why “ethically sourced data” is usually just… marketing. Written for execs who’d like to stay out of court, out of trouble, and ahead of the curve.

Article image

Why AI Models Always Answer – Even When They Shouldn’t

by Markus Brinsa| July 29, 2025| 16 min read

Have you ever rage-quit a chatbot session? You asked a simple question. The chatbot responded with a confident but completely false answer. You corrected it. It apologized. Then it confidently gave you another wrong answer—this time with extra made-up details. Sound familiar? This article uncovers why today’s chatbots can’t take criticism, never admit they’re wrong, and will always provide an answer, even if it’s total nonsense. The problem isn’t personality. It’s architecture. From token prediction mechanics to the lack of self-correction and memory, you’ll find out what’s actually going on inside the machine—and what could be done to fix it. It’s the story of the chatbot that couldn’t shut up—and the humans trying to teach it when to stop.

Article image

Too Long, Must Read: Gen Z, AI, and the TL;DR Culture

by Markus Brinsa| July 27, 2025| 17 min read

Everyone’s “too busy” to read these days. Books? Nope. Full articles? Not unless there’s a TL;DR.
However, when we rely solely on summaries, we lose the story.
Here’s my new piece on what Gen Z (and all of us) are missing — and why AI summaries won’t save us.

Article image

Why Most AI Strategies Fail — And How Smart Companies Do It Differently

by Markus Brinsa| July 25, 2025| 24 min read

You don’t need another AI pilot that ends in a PowerPoint funeral. You need a real strategy — one that won’t get shredded by legal, fall apart in production, or become a PR liability when your chatbot hallucinates HR advice.
This article is for executives who want to lead with intelligence (artificial and otherwise). From ethically sourced data (yes, that’s a thing) to risk-proofing your AI roadmap, this piece unpacks how to make AI work without burning down your reputation on the way out.

Article image

HR Bots Behaving Badly - When AI Hiring Goes Off the Rails

by Markus Brinsa| July 24, 2025| 29 min read

Your resume didn’t get rejected because of your font choice. It got tossed by an AI that couldn’t read between the lines. AI is everywhere in HR—except when it matters most. In the race to “AI-everything,” companies are filtering out talent, triggering lawsuits, and automating discrimination. This article is a wake-up call for leadership teams rolling out AI without a clue.

Article image

Are You For or Against AI? – Why Your Brain Wants a Side, Not the Truth

by Markus Brinsa| July 24, 2025| 6 min read

“Are you for or against AI?” - That’s not a strategy—it’s a psychological trap. And you’ve probably fallen for it.

If you’re arguing about whether AI is “good or bad,” you’ve already lost the plot. This isn’t morality—it’s math. And psychology.

Article image

The Birth of Tasteful AI

by Markus Brinsa| July 23, 2025| 5 min read

Tasteful AI is not another tool—it’s a mindset. It asks us to be vigilant stewards in a generative world. It reminds us that good design is not what’s easily done but what’s worth doing—and worth doing well. As AI tools amplify our capacity, taste becomes not just a filter, but a compass. A strategic differentiator. A moral marker.

But taste without context is hollow. Curation without diversity is blind. Tasteful AI is at its best when it means thoughtful, reflexive, and inclusive judgment in service of meaning—not just aesthetics.

Article image

Agent Orchestration - Just Another New AI Buzzword?

by Markus Brinsa| July 21, 2025| 7 min read

Everyone’s talking about AI agents. Few talk about what happens when you have ten of them running at once, talking over each other, calling the wrong APIs, hallucinating outputs, or just… doing nothing. Welcome to the orchestration problem. This deep-dive article explores what Agent Orchestration really is—not the buzzword version, but the systems-level reality. From IBM’s watsonx to Amazon’s MARCO and the OmniNova research framework, I examined real-world benchmarks, success metrics, failure modes, and how orchestration transforms AI agents from chaotic tools into enterprise-grade automation. 

Article image

Executive Confidence, AI Ignorance - A Dangerous Combination

by Markus Brinsa| July 20, 2025| 6 min read

Executives are firing humans to make room for AI they barely understand. This isn’t innovation. It’s negligence. And it’s going to blow up. There’s the illusion of competence. AI interfaces are persuasive. You talk to a bot, it talks back like a person. You write a prompt, it answers like an expert. And it’s easy to believe that the tech understands what it’s saying. That the people selling it understand, too. That you, by osmosis, now understand it as well.

Article image

AI Governance - The Rulebook We Forgot to Write

by Markus Brinsa| July 18, 2025| 7 min read

We built AI that can code, diagnose, negotiate, and lie. But we forgot to decide who gets to control it—or what happens when it breaks. Governance isn’t optional anymore. It’s overdue.

Article image

AI Won’t Make You Happier – And Why That’s Not Its Job

by Markus Brinsa| July 15, 2025| 20 min read

It all started with a simple, blunt statement over coffee. A friend looked up from his phone, sighed, and said: “AI will not make people happier.” As someone who spends most days immersed in artificial intelligence, I was taken aback. My knee-jerk response was to disagree – not because I believe AI is some magic happiness machine, but because I’ve never thought that making people happy was its purpose in the first place. To me, AI’s promise has always been about making life easier: automating drudgery, delivering information, solving problems faster. Happiness? That’s a complicated human equation, one I wasn’t ready to outsource to algorithms.

Article image

AI Takes Over the Enterprise Cockpit - Execution-as-a-Service and the Human-Machine Partnership

by Markus Brinsa| July 14, 2025| 11 min read

Everyone’s talking about co-pilots. But what happens when the AI doesn’t just assist, and you give it the keys to your entire operation? OpenAI’s $10M consulting play is just the beginning. This article dives into Execution-as-a-Service, the vanishing line between humans and machines, and what happens when the co-pilot starts flying solo. Who’s responsible when AI executes—and fails?

Article image

Corporate Darwinism by AI - The Hype vs Reality of "Digital Employees"

by Markus Brinsa| July 11, 2025| 23 min read

AI “employees” are the new crypto. Big promises, vague guardrails, and zero regulation. Startups want to replace your workforce with agents that can’t explain their decisions. You don’t fire 90% of your staff without reading this first.

Article image

Hierarchy on Steroids - Ten Years After Zappos Went Holacratic

by Markus Brinsa| July 5, 2025| 7 min read

When you spend your days writing about AI, you start seeing patterns in unexpected places. Holacracy, for instance, may have nothing to do with neural nets or reinforcement learning—but looking back, it feels eerily similar to the way we now talk about agentic AI. Decentralized actors, autonomous roles, no central boss, everyone just… doing their part. On paper, it’s elegant. In practice, it’s chaos with better vocabulary.

Article image

The Unseen Toll: AI’s Impact on Mental Health

by Markus Brinsa| July 3, 2025| 16 min read

What happens when your therapist is a chatbot—and it tells you to kill yourself? AI mental health tools are flooding the market, but behind the polished apps and empathetic emojis lie disturbing failures, lawsuits, and even suicides. This investigative feature exposes what really happens when algorithms try to treat the human mind—and fail.

Article image

AI Gone Rogue - Dangerous Chatbot Failures and What They Teach Us

by Markus Brinsa| June 26, 2025| 33 min read

Chatbots are supposed to help. But lately, they’ve been making headlines for all the wrong reasons. In my latest article, I dive into the strange, dangerous, and totally real failures of AI assistants—from mental health bots gone rogue to customer service disasters, hallucinated crimes, and racist echoes of the past. Why does this keep happening? Who’s to blame? And what’s the legal fix? You’ll want to read this before your next AI conversation.

Article image

When Tech Titans Buy the Books - How VCs and PEs Are Turning Accounting Firms into AI Trailblazers

by Markus Brinsa| June 17, 2025| 11 min read

Accountants aren’t being replaced by AI. They’re being acquired by VCs. Venture capital and private equity firms are quietly buying up accounting practices—not to cut costs, but to run them like tech startups. AI-powered audits. Automated tax prep. Predictive dashboards that talk back. This isn’t some edge-case experiment. This is a full-blown restructuring of a profession that once ran on spreadsheets, coffee, and tradition. Why are VCs suddenly interested in CPA firms? Because accounting is one of the last untapped frontiers of scalable, AI-augmented recurring revenue. And trust me: they’re not here to admire the debits.

Article image

Between Idealism and Reality - Ethically Sourced Data in AI

by Markus Brinsa| June 14, 2025| 15 min read

So here’s a thought experiment. You walk into a fancy store and see a beautiful diamond. You ask, “Is it ethically sourced?” And the salesperson says, “Well, it was on the ground. We found it. Totally public. Anyone could have picked it up.” Would you buy it? Because that’s exactly what’s happening in AI. But instead of diamonds, it’s your photos, your tweets, your blog posts, your entire online existence—vacuumed up by companies to feed the bottomless appetite of large language models and image generators.

Article image

Adobe Firefly vs Midjourney - The Training Data Showdown and Its Legal Stakes

by Markus Brinsa| June 8, 2025| 20 min read

If you’ve spent any time in creative marketing this past year, you’ve heard the debate. One side shouts “Midjourney makes the best images!” while the other calmly mutters, “Yeah, but Adobe won’t get us sued.” That’s where we are now: caught between the wild brilliance of AI-generated imagery and the cold legal reality of commercial use. But the real story—the one marketers and creative directors rarely discuss out loud—isn’t just about image quality or licensing. It’s about the invisible, messy underbelly of AI training data. And trust me, it’s a mess worth talking about.

Article image

AI Isn’t a Tool. It’s a Deal Flow. - Innovation Leaders Are Treating Emerging AI Like Venture Capitalists Do.

by Markus Brinsa| May 29, 2025| 4 min read

Here’s the reality: the most valuable AI technologies aren’t on Product Hunt. They’re not even on LinkedIn. They’re still nameless, faceless lines of code tucked into a stealth-mode GitHub repo, backed by investors who are already whispering about exit strategies. That’s where the real advantage lives—before the press releases, before the funding rounds, before your competitors catch wind.

Article image

Agentic AI - When the Machines Start Taking Initiative

by Markus Brinsa| May 25, 2025| 5 min read

Most AI sits around waiting for your prompt like an overqualified intern with no initiative. But Agentic AI? It makes plans, takes action, and figures things out—on its own. This isn’t just smarter software—it’s a whole new kind of intelligence. Here’s why the future of AI won’t ask for permission.

Article image

Meta’s AI Ad Fantasy - No Strategy, No Creative, No Problem. Simply Plug In Your Wallet

by Markus Brinsa| May 17, 2025| 3 min read

Zuckerberg wants you to plug in your bank account and let AI handle your ads—what could possibly go wrong?

Article image

The FDA’s Rapid AI Integration - A Critical Perspective

by Markus Brinsa| May 9, 2025| 13 min read

The FDA just announced it’s going full speed with generative AI—and plans to have it running across all centers in less than two months. That might sound like innovation, but in a regulatory agency where a misplaced comma can delay a drug approval, this is less “visionary leap” and more “hold my beer.” Before we celebrate the end of bureaucratic busywork, let’s talk about what happens when the watchdog hands the keys to the algorithm.

Article image

Own AI Before it Owns You - The Real AI Deals Happen Underground

by Markus Brinsa| May 8, 2025| 3 min read

Most marketing organizations are buying AI like it’s office furniture—off the shelf, overpriced, and already outdated. By the time the tech hits their radar, it’s too late to shape it. The roadmap is locked. The pricing’s fixed. And the flexibility? Gone.

Article image

When AI Copies Our Worst Shortcuts

by Markus Brinsa| May 6, 2025| 3 min read

AI doesn’t dream up shortcuts—it learns them from us. And if we’re not careful, it won’t just repeat our mistakes; it’ll automate them, amplify them, and call it innovation.

Article image

The Flattery Bug of ChatGPT

by Markus Brinsa| May 5, 2025| 5 min read

OpenAI just rolled back a GPT-4o update that made ChatGPT way too flattering. Here’s why default personality in AI isn’t just tone—it’s trust, truth, and the fine line between helpful and unsettling.

Article image

How AI Learns to Win, Crash, Cheat - Reinforcement Learning (RL) and Transfer Learning

by Markus Brinsa| April 30, 2025| 4 min read

Artificial Intelligence learns like a drunk tourist or a lazy student. It either stumbles blindly through trial and error, smashing into walls until it accidentally finds the exit (Reinforcement Learning), or it copies someone else’s homework and prays the exam is close enough (Transfer Learning). Both approaches have fueled dazzling successes. Both have hidden flaws that, when ignored, turn smart machines into expensive idiots.

Article image

Winners and Losers in the AI Battle

by Markus Brinsa| April 29, 2025| 4 min read

The difference between real AI and fake AI isn’t that hard to spot once you know what you’re looking for. It’s just hard to hear over all the marketing noise.

Article image

The Dirty Secret Behind Text-to-Image AI

by Markus Brinsa| April 28, 2025| 5 min read

Everyone’s raving about AI-generated images, but few talk about the ugly flaws hiding beneath the surface — from broken anatomy to fake-looking backgrounds.

Article image

Your Brand Has a Crush on AI. Now What?

by Markus Brinsa| April 24, 2025| 3 min read

Your brand has been flirting with AI, but maybe it’s time to take things to the next level.

Article image

Neuromarketing - How Neural Attention Systems Predict The Ads Your Brain Remembers

by Markus Brinsa| April 22, 2025| 5 min read

Marketing has always been about moving minds. For the first time, we can prove when it happens.

Article image

We Plug Into The AI Underground

by Markus Brinsa| April 19, 2025| 3 min read

We plug into the AI underground so you don't have to.

Article image

How Media Agencies Spot AI Before it Hits the Headlines

by Markus Brinsa| April 15, 2025| 4 min read

I talk about AI Matchmaking a lot. Let's find out how it actually works.

Article image

Acquiring AI at the Idea Stage

by Markus Brinsa| April 10, 2025| 2 min read

We all know that AI is reshaping industries. Should companies like media agencies be proactive or reactive?

Article image

The Seduction of AI-generated Love

by Markus Brinsa| April 9, 2025| 3 min read

Love in the age of AI. The idea of finding love online isn’t doomed. Real connections still happen every day. But we’re now in a new phase of the internet—a place where not everything that texts you back is real, and not every broken heart was caused by a human.

Article image

MyCity - Faulty AI Told People to Break the Law

by Markus Brinsa| April 6, 2025| 2 min read

NYC Mayor defends AI chatbot that tells business owners to commit wage theft and other crimes.

Article image

Why AI Fails with Text Inside Images And How It Could Change

by Markus Brinsa| March 23, 2025| 4 min read

Text-to-image models struggle with rendering text inside images. Why is that?

Article image

The Myth of the One-Click AI-generated Masterpiece

by Markus Brinsa| March 21, 2025| 3 min read

People use AI for all sorts of things: text-to-image generation is hugely popular, and so is drafting posts and articles with a chatbot. But people often miss an important point when using these tools: it isn’t a one-step process.

Article image

AI-generated versus Human Content - 100% AI

by Markus Brinsa| March 18, 2025| 3 min read

The arrival of sophisticated chatbots has made it shockingly easy to churn out entire sections of text at the click of a button.

Article image

Wooing Machine Learning Models in the Age of Chatbots

by Markus Brinsa| March 16, 2025| 8 min read

AI-native sponsored content inserts ads directly into a chatbot’s answers in a natural, non-disruptive way, a controversial strategy for weaving advertising into AI.

Article image

Is Your Brand Flirting With AI?

by Markus Brinsa| March 13, 2025| 5 min read

As AI chatbots become the preferred way to get information, advertisers are finding it harder to reach their audiences. Unlike traditional search engines, where advertising is built on PPC and SEO, the dynamic nature of chatbot answers leaves little room for traditional ad formats. This shift calls for advertising strategies reoriented toward the intent-based, conversational nature of AI.

Article image

Matching Innovation with Opportunity - The AI Matchmaking Revolution

by Markus Brinsa| March 7, 2025| 5 min read

Intelligent matchmaking for AI innovations is becoming increasingly essential. This article describes how to match AI innovations with opportunities and capital.

Article image

AI Reinforcement Learning

by Markus Brinsa| March 1, 2025| 5 min read

Reinforcement learning is a key discipline in computer science that lets machines gain experience through interactive practice, much as humans and animals do.
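
To make that one-sentence definition concrete, here is a small, self-contained sketch (my own illustration, not code from the article): a tabular Q-learning agent that learns a tiny corridor world purely by trial and error, nudging its value estimates after every step.

```python
# Minimal sketch of reinforcement learning: tabular Q-learning on a five-state
# corridor. The agent starts in the middle and learns, purely from experience,
# that walking right reaches the reward.
import random

N_STATES = 5          # states 0..4; the reward sits at state 4
ACTIONS = [-1, +1]    # step left or step right
EPSILON = 0.1         # exploration rate
ALPHA = 0.5           # learning rate
GAMMA = 0.9           # discount factor

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: reward 1.0 for reaching the last state."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def choose_action(state):
    """Epsilon-greedy: mostly exploit current estimates, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state, done = 2, False
    for _ in range(100):                      # cap episode length
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # Q-learning update: move the estimate toward reward + discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if done:
            break

# After training, the greedy policy points right in every non-goal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```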

Article image

What's the time? - Chatbots Behaving Badly

by Markus Brinsa| February 6, 2025| 3 min read

Image-generation models like DALL·E struggle with numerical accuracy and object positioning, so textual prompts don’t necessarily translate into faithful visual representations. OpenAI isn’t fixing the issue because of the high computational cost involved.

Article image

The Rise of the AI Solution Stack in Media Agencies: A Paradigm Shift

by Markus Brinsa| January 5, 2025| 7 min read

In the rapidly changing world of media agencies, AI has become a game-changer in how they operate, create, and deliver value for their customers.

Article image

From Setback to Insight: Navigating the Future of AI Innovation After Gemini's Challenges

by Markus Brinsa| March 30, 2024| 7 min read

From ambitious projects to cutting-edge models, the AI landscape is one of the most dynamic frontiers of technological innovation, quickly redefining the limits of machine and human learning. Google is one of the leading technology titans, with significant contributions to AI research and development.


Podcast
Podcast Image

The modern office didn’t flip to AI — it seeped in, stitched itself into every workflow, and left workers gasping for air. Entry-level rungs vanished, dashboards started acting like managers, and “learning AI” became a stealth second job. Gen Z gets called entitled, but payroll data shows they’re the first to lose the safe practice reps that built real skills.

Podcast Image

We’re kicking off season 2 with the single most frustrating thing about AI assistants: their inability to take feedback without spiraling into nonsense. Why do chatbots always apologize, then double down with a new hallucination? Why can’t they say “I don’t know”? Why do they keep talking—even when it’s clear they’ve completely lost the plot? This episode unpacks the design flaws, training biases, and architectural limitations that make modern language models sound confident, even when they’re dead wrong. From next-token prediction to refusal-aware tuning, we explain why chatbots break when corrected—and what researchers are doing (or not doing) to fix it. If you’ve ever tried to do serious work with a chatbot and ended up screaming into the void, this one’s for you.

Podcast Image

It all started with a simple, blunt statement over coffee. A friend looked up from his phone, sighed, and said: “AI will not make people happier.” As someone who spends most days immersed in artificial intelligence, I was taken aback. My knee-jerk response was to disagree – not because I believe AI is some magic happiness machine, but because I’ve never thought that making people happy was its purpose in the first place. To me, AI’s promise has always been about making life easier: automating drudgery, delivering information, solving problems faster. Happiness? That’s a complicated human equation, one I wasn’t ready to outsource to algorithms.

Podcast Image

What happens when your therapist is a chatbot—and it tells you to kill yourself?
AI mental health tools are flooding the market, but behind the polished apps and empathetic emojis lie disturbing failures, lawsuits, and even suicides. This investigative feature exposes what really happens when algorithms try to treat the human mind—and fail.

Podcast Image

Chatbots are supposed to help. But lately, they’ve been making headlines for all the wrong reasons.
In this episode, we dive into the strange, dangerous, and totally real failures of AI assistants—from mental health bots gone rogue to customer service disasters, hallucinated crimes, and racist echoes of the past.
Why does this keep happening? Who’s to blame? And what’s the legal fix?
You’ll want to hear this before your next AI conversation.

Podcast Image

Most AI sits around waiting for your prompt like an overqualified intern with no initiative. But Agentic AI? It makes plans, takes action, and figures things out—on its own. This isn’t just smarter software—it’s a whole new kind of intelligence. Here’s why the future of AI won’t ask for permission.

Podcast Image

Everyone wants “ethical AI.” But what about ethical data?
Behind every model is a mountain of training data—often scraped, repurposed, or just plain stolen. In this episode, I dig into what “ethically sourced data” actually means (if anything), who defines it, the trade-offs it forces, and whether it’s a genuine commitment—or just PR camouflage.

Podcast Image

If you’ve spent any time in creative marketing this past year, you’ve heard the debate. One side shouts “Midjourney makes the best images!” while the other calmly mutters, “Yeah, but Adobe won’t get us sued.” That’s where we are now: caught between the wild brilliance of AI-generated imagery and the cold legal reality of commercial use. But the real story—the one marketers and creative directors rarely discuss out loud—isn’t just about image quality or licensing. It’s about the invisible, messy underbelly of AI training data.
And trust me, it’s a mess worth talking about.

Podcast Image

Today’s episode is a buffet of AI absurdities. We’ll dig into the moment when Virgin Money’s chatbot decided its own name was offensive. Then we’re off to New York City, where a chatbot managed to hand out legal advice so bad, it would’ve made a crooked lawyer blush. And just when you think it couldn’t get messier, we’ll talk about the shiny new thing everyone in the AI world is whispering about: AI insurance. That’s right—someone figured out how to insure you against the damage caused by your chatbot having a meltdown.

Podcast Image

Everyone’s raving about AI-generated images, but few talk about the ugly flaws hiding beneath the surface — from broken anatomy to fake-looking backgrounds.

Podcast Image

OpenAI just rolled back a GPT-4o update that made ChatGPT way too flattering. Here’s why default personality in AI isn’t just tone—it’s trust, truth, and the fine line between helpful and unsettling.

Podcast Image

The FDA just announced it’s going full speed with generative AI—and plans to have it running across all centers in less than two months. That might sound like innovation, but in a regulatory agency where a misplaced comma can delay a drug approval, this is less “visionary leap” and more “hold my beer.” Before we celebrate the end of bureaucratic busywork, let’s talk about what happens when the watchdog hands the keys to the algorithm.

Creator
Markus Brinsa

Creator of Chatbots Behaving Badly
Photographer at PHOTOGRAPHICY
Former House Music DJ
Founder & CEO of SEIKOURI Inc.
AI Matchmaker
Investor
Board Advisor
Keynote Speaker

Markus is the creator of Chatbots Behaving Badly and a lifelong AI enthusiast who isn’t afraid to call out the tech’s funny foibles and serious flaws.

By day, Markus is the Founder and CEO of SEIKOURI Inc., an international boutique strategy consulting firm headquartered in New York City.

Through SEIKOURI, Markus leverages a vast network to source cutting-edge, early-stage AI technologies and match them with companies or investors looking for an edge.
If there’s a brilliant AI tool being built in someone’s garage or a startup in stealth mode, Markus probably knows about it – and knows who could benefit from it.

Markus spent decades in the tech and business world (with past roles ranging from IT security to business intelligence), but these days he’s best known for AI Matchmaking.

Markus actually coined the term AI Matchmaking to describe this work, because it’s about making the perfect pairing between AI innovations and real-world opportunities.

What does AI Matchmaking mean?
Essentially, Markus specializes in connecting the dots between innovative AI solutions and the people who need them most.

Despite his deep industry expertise, Markus’ approach to AI is refreshingly casual and human. He’s a huge fan of AI, and that passion sparked the idea for Chatbots Behaving Badly.
After seeing one too many examples of chatbots going haywire – from goofy mistakes to epic fails – he thought, why not share these stories?

Markus started the platform as a way to educate people about what AI really is (and isn’t), and to do so in an engaging, relatable way.
He firmly believes that understanding AI’s limitations and pitfalls is just as important as celebrating its achievements.
And what better way to drive the lesson home than through real stories that make you either laugh out loud or shake your head in disbelief?

On the podcast and in articles, Markus combines humor with insight. One moment, he might be joking about a virtual assistant that needs an “attitude adjustment,” and the next, he’s breaking down the serious why behind the bot’s bad behavior.
His style is conversational and entertaining, never preachy. Think of him as that friend who’s both a tech geek and a great storyteller – translating the nerdy AI stuff into plain English and colorful tales.

By highlighting chatbots behaving badly, Markus isn’t out to demonize AI; instead, he wants to spark curiosity and caution in equal measure.
After all, if we want AI to truly help us, we’ve got to be aware of how it can trip up.

In a nutshell, Markus’ career is all about connecting innovation with opportunity – whether it’s through high-stakes AI matchmaking for businesses or through candid conversations about chatbot misadventures.
He wears a lot of hats (entrepreneur, advisor, investor, podcast host), but the common thread is a commitment to responsible innovation.

So, if you’re browsing this site or tuning into the podcast, you’re in good hands.
Markus is here to guide you through the wild world of AI with a wink, a wealth of experience, and a genuine belief that we can embrace technology’s future while keeping our eyes open to its quirks.
Enjoy the journey, and remember: even the smartest bots sometimes behave badly – and that’s why we’re here!


Contact