Chatbots Behaving Badly™

The Chat Was Fire. The Date Was You.

By Markus Brinsa  |  September 29, 2025

When the Pick-Up Line Isn’t You

Your phone chirps at 2:03 a.m. A match. She writes something quick and clever about the book in your photo, and you volley back a reply that lands with exactly the right kind of wit. The rhythm is perfect. The dopamine hits on schedule. Then you meet in person, and the dialogue suddenly drops two reading levels. She didn’t change. You did—because a large language model co-wrote the version of you she met.

Two summers ago, the New York Post clocked the earliest wave: AI matchmakers, virtual pick-up lines, and “ChatGPT-like” tools riding shotgun in our dating apps. A parade of wingmen in the cloud. It was a snapshot of a trend about to tip from fringe to normal. 

Since then, the apps themselves have leaned in. Tinder now markets AI that picks your best photos—automation for the most fraught editorial decision in modern romance. Hinge and Bumble have experimented with AI-assisted prompts and message nudges to curb creepiness. Grindr announced a “Wingman.” And, as of this week, even Facebook Dating is shipping an AI “dating assistant” and a weekly “Meet Cute,” grafting a bot onto the world’s biggest social graph to fight swipe fatigue. If AI is a wingman, it’s now on the payroll. 

A new class of apps takes it a step further. Teaser AI lets you chat with an AI clone of a person’s profile before you decide if you want the real thing, outsourcing small talk to a simulacrum. Volar’s experiment replaces the early icebreakers with bots, allowing humans to skip directly to the date. These ideas are either delightful time-savers or the social equivalent of sending a stunt double to the chemistry read. Both can be true. 

And yes, the “rizz economy” is booming. Tools like Rizz and YourMove.ai draft openers, rewrite replies, and polish profiles. Washington Post reporters found twenty-somethings happily piping Hinge conversations through these assistants in pursuit of effortless banter. The charm is real; the author is complicated.

The Borrowed-Charisma Problem

Psychologists have language for what’s happening to us here. In computer-mediated communication, we over-idealize the person on the other end—a dynamic Joseph Walther called the “hyperpersonal” effect. We fill in gaps, smooth the edges, and upgrade their qualities in our heads. AI intensifies that effect by optimizing your self-presentation: cleaner syntax, brighter humor, fewer awkward pauses. The expectation balloon inflates faster; the face-to-face pin is still sharp. 

Layer in self-discrepancy—the gap between your actual and your presented self—and you get a predictable crash. When the “AI-aided you” sets the standard, the in-person you has to keep up. Many people can, especially if the bot just gets their social gears unjammed. But for others, the first date becomes an expectations audit, and the bill comes due. Studies of online dating long before AI found that over-idealization raises the odds of disappointment at the first meeting; the bot simply adds high-octane fuel. 

AI also changes authorship. If she loved your messages, who did she love—your intent, or your model’s style transfer? We don’t have a single tidy study that settles the ethics for dating, but emerging research on AI-mediated communication shows what intuition already whispers: when a system co-writes our words, we manage impressions differently and often feel more “successful,” yet the authenticity ledger gets messier. That ambiguity can corrode trust later, especially if the reveal feels like a bait-and-switch. 

The Case for the Bot (Used Right)

Used with consent and restraint, AI can be scaffolding. Neurodiverse daters or anxious first-messagers sometimes just need help getting past the tyranny of “hey.” The “practice arena” apps—like Replika’s sister project Blush, a dating simulator with personalities that push back—are closer to a batting cage than a mask. You go there to warm up your swing, not to send the pitching machine to dinner in your place. 

Mainstream platforms aren’t only juicing charm; they’re aiming at safety and quality. Tinder’s AI “Photo Selector” professionalizes profile curation, and video-selfie verification has accelerated across the industry to fight impersonation and spam. In California, Tinder now requires new users to complete “Face Check” at signup, a biometric video selfie meant to throttle fake or duplicate accounts. That won’t cure deception, but it tightens the door. 

The Case Against the Bot (Used Wrong)

There’s also an obvious dark pattern: when everyone’s opening lines are harvested from the same style orchard, the orchard gets boring. Editors at The Cut recently chronicled how AI-polished profiles and pick-up lines are flattening the vibe. If “quirky, vulnerable, funny” is just a template, the signal in the dating market degrades. You think you’re meeting an outlier; you’re meeting the prompt. 

Worse, scammers are industrializing romance. The FBI and FTC now warn that generative models make con jobs smoother—from pig-butchering crypto schemes groomed through “perfect” chats, to AI-generated images and deepfake video that harden the disguise. Result: billions siphoned, plus a trust hangover that bleeds into legitimate connections. Apps are responding with AI for verification and moderation, and senators are leaning on Match Group to do more. None of this is theoretical; it’s the new ground truth of online intimacy. 

Terms, Consent, and the “Reveal”

There’s a legal-ish wrinkle users routinely miss: major apps prohibit third-party automation that touches their services. Tinder’s terms explicitly ban using or “developing any third-party applications… including artificial intelligence or machine learning systems” that interact with the app. Bumble bars “inauthentic or manipulative” use, including automation or scripting that influences conversations. Translation: piping your matches into an external bot that replies for you may violate the rules you clicked through. You might never get caught. You might also get banned mid-courtship. 

From a human point of view, the fix is simpler than a policy page. If AI helped you shine, own it early and lightly. “I used a writing assistant to clean up my opener—I get anxious about first messages.” That single sentence reframes the tool as scaffolding rather than disguise, and it invites the other person to calibrate their expectations to the human in front of them. You’re not renouncing technology; you’re asserting authorship. 

So… Should We Swipe With a Cyborg Wingman?

Yes—with boundaries. If AI acts like a spotlight or a rehearsal partner, it can make the awkward parts of dating less punishing and help people who’ve been shut out by performative banter finally get a fair shake. If it becomes a mask, it will set expectations you can’t meet and corrode the very trust you’re hoping to build. And if it’s a money pipe for criminals, it’s not dating; it’s extraction. The psychology here is old wine in a smart new bottle: idealization, impression management, the shock of expectations meeting reality. AI didn’t invent those; it just put them on turbo.

The Post wasn’t wrong to notice the early wave. What’s changed is that AI isn’t just whispering lines from the sidelines anymore—it’s embedded in the stadium, selling tickets, picking outfits, and ejecting the drunks. The rest is on us. If you borrow a voice to start the conversation, make sure you can keep talking once the prompter goes dark. And if your opener was perfect at 2:03 a.m., consider this the real flex at 7 p.m.: letting your date meet the person who wrote the next sentence.

About the Author

Markus Brinsa is the Founder & CEO of SEIKOURI Inc., an international strategy firm that gives enterprises and investors human-led access to pre-market AI—then converts first looks into rights and rollouts that scale. He created "Chatbots Behaving Badly," a platform and podcast that investigates AI’s failures, risks, and governance. With over 15 years of experience bridging technology, strategy, and cross-border growth in the U.S. and Europe, Markus partners with executives, investors, and founders to turn early signals into a durable advantage.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™