Chatbots Behaving Badly™

HR Bots Behaving Badly - When AI Hiring Goes Off the Rails

By Markus Brinsa  |  July 24, 2025

It sounds like the setup for a bad joke: “My resume walks into a bar and the bartender says, ‘Sorry, we don’t serve your kind.’” Except it’s not a joke—it’s reality for too many job seekers. In 2025, more and more candidates are finding that their applications vanish into a black hole before a human ever lays eyes on them. The culprit isn’t a biased recruiter who didn’t like your LinkedIn photo or your quirky cover letter. It’s often an algorithm with no common sense that decided, for reasons only it knows, that you weren’t worth a hiring manager’s time. Meanwhile, HR teams are pulling their hair out because their shiny new AI hiring tools are filtering out too many good candidates. And let’s not forget the employees (including HR professionals themselves) who worry the next resume to be tossed out might be their own – not by a manager, but by a machine learning model gunning for their job.

Welcome to the strange new world of AI in HR, where promises and pitfalls are piling up in equal measure. Investors and boards are demanding “AI-everywhere” to boost productivity and shareholder value, convinced that this is the silver bullet for talent woes. But as we’ll see, a blind rollout of AI can just as easily backfire – leading to missed talent, embarrassing failures, even lawsuits and regulatory smackdowns that tank value instead of creating it. In this long-form exploration, we’ll travel through recent cautionary tales of AI HR projects gone wrong, examine the legal and privacy landmines (from the US to Europe), and outline how to harness AI in healthy doses with human oversight. Buckle up, because even if this is a serious topic, we’re going to tackle it with a bit of humor and a human touch. After all, behind every algorithm is a human who really should know better.

The Great AI Hiring Hype (and the Hard Fall)

Executives around the globe have caught a serious case of AI fever. Over the past two years, artificial intelligence has rocketed from obscure data-science project to boardroom obsession, touted as the magic bullet for everything from recruiting and onboarding to performance reviews. Corporate leaders and investors see rivals boasting about AI adoption and fear missing the boat – so they demand their own HR teams implement AI everywhere, often with a vague mandate of “we need to do AI because that’s what everyone is doing.” In fact, global corporate spending on AI solutions is on track to reach nearly $200 billion by the end of this year. Surveys show over 60% of HR leaders say AI is a top priority, illustrating the intense pressure to ride this trend.

Yet for all the hype, the reality is sobering: many AI-in-HR projects are crashing and burning. One recent analysis found that nearly half of companies with AI projects had abandoned most of them in 2025. In other words, after pouring time and money into AI, a huge number of organizations quietly pulled the plug when results didn’t live up to promises. That’s not just wasted budget – it’s a strategic failure and a hit to leadership credibility. As a Fortune/IBM survey highlighted, only about 25% of AI projects succeed as expected, meaning a whopping 75% fall short or outright fail.

Why the disappointment? The problem usually isn’t the AI technology itself – it’s how it’s rolled out. Too often, companies chase the hype without clear goals or understanding, slapping AI onto broken HR processes and expecting miracles. Without proper oversight or alignment with how people actually work, the tools end up unused, or worse, they amplify existing problems like bias. As one HR observer put it, it’s the old story of “garbage in, garbage out”: if your data or processes are flawed, an algorithm will dutifully magnify those flaws. Unfortunately, many leaders have learned this the hard way. More than half of senior executives in a late-2024 survey confessed they felt like failures in leading AI initiatives, and over 50% said their employees were exhausted and overwhelmed by the breakneck pace of AI changes.

At this point, we’re riding the downside of the classic hype cycle – after the inflated expectations, here comes the trough of disillusionment. The big risk is reputational: a high-profile AI faceplant can make leadership look reckless and erode trust among employees and customers. Nobody wants to be the next headline about an “AI disaster”. For boards and investors hungry for AI-driven gains, it’s time to recognize that throwing algorithms at a problem without a plan is a recipe for wrecking, not revolutionizing, the workplace. As we’ll see, nowhere is this more evident than in the recruiting trenches, where AI’s promise to streamline hiring has often backfired spectacularly.

The Resume Black Hole – Qualified Candidates, Gone Missing

One of the earliest and loudest complaints about AI in hiring is the mysterious fate of the missing resume. If you’ve ever sent out dozens of applications and heard nothing but crickets, you’re not alone – and you might have been filtered out by a robot. Today’s hiring process is heavily automated: AI-driven Applicant Tracking Systems (ATS) are used by the majority of medium and large employers to screen and rank applications long before any human recruiter gets involved. These systems scan resumes for keywords, years of experience, education and more, and then an algorithm decides who makes the initial cut. The intention is good (handle high volumes efficiently), but the outcome? Potentially 9 out of 10 resumes get tossed out automatically – never reaching a real person’s eyes. Yes, you read that right: by some estimates, up to 90% of job applications never make it to a hiring manager because an AI judged them unworthy.

Think about what that means for both candidates and employers. Talented people are being rejected thanks to minor quirks of formatting or phrasing – the algorithmic equivalent of a typo. For example, an ATS might be confused by a creative resume layout with tables or graphics, so it just fails to parse your information. Or it doesn’t recognize that “M.S. in Comp Sci” is the same as “Master of Science in Computer Science” because it was programmed to match exact keywords. If your wording doesn’t mirror the job description exactly, the bot might conclude you lack required skills when in fact you have them. A human reviewer would catch these nuances; a naive AI won’t. The result is that perfectly qualified candidates get lost in translation – their resumes filtered out for reasons unrelated to their ability to do the job.
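
To make that failure mode concrete, here is a minimal sketch in Python of naive exact-keyword screening versus the same check with a small synonym layer. The required-keyword list, synonym map, and resume line are invented for illustration only; this is not any real ATS vendor's logic.

```python
# A minimal sketch of the exact-keyword failure mode described above.
# The required-keyword list, synonym map, and resume text are invented
# for illustration; this is not any real ATS vendor's logic.

REQUIRED_KEYWORDS = {"master of science in computer science", "python"}

SYNONYMS = {
    "m.s. in comp sci": "master of science in computer science",
    "msc computer science": "master of science in computer science",
}

resume = "M.S. in Comp Sci, six years building Python data pipelines."


def naive_screen(text: str) -> bool:
    """Exact substring matching: misses equivalent phrasings entirely."""
    text = text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)


def normalized_screen(text: str) -> bool:
    """Same check, after mapping known synonyms onto canonical phrases."""
    text = text.lower()
    for variant, canonical in SYNONYMS.items():
        text = text.replace(variant, canonical)
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)


print(naive_screen(resume))       # False -- the qualified candidate is rejected
print(normalized_screen(resume))  # True  -- the same resume now passes
```

Even this tiny synonym layer is brittle, which is the broader point: a human reader resolves these equivalences instantly, while a keyword filter only knows what someone remembered to teach it.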

HR teams have started to realize they’re shooting themselves in the foot with these rigid algorithms. In one survey, 88% of employers admitted they believe highly qualified candidates are being weeded out simply because their resumes aren’t ATS-friendly. In other words, the system is so focused on exact criteria that it’s missing people with slightly unconventional backgrounds or those who didn’t cram their application with the “right” keywords. The irony here is rich: companies adopted AI to find better talent faster, but poor implementation means they might be locking out the very talent they seek.

A famous early example of this phenomenon was Amazon’s ill-fated recruiting AI. Back in the 2010s, Amazon built an algorithm to screen resumes, aiming to automate their hiring for tech roles. But in 2018 it was revealed that the system had taught itself a nasty bias: it started penalizing resumes that mentioned the word “women” (as in “women’s chess club captain”) and downgrading graduates of women’s colleges. Why? Because the AI was trained on past hiring data in which male candidates were overrepresented – so it “learned” an insidious lesson that male applicants were preferable. The project was swiftly scrapped once these chauvinistic quirks came to light, but it became the cautionary tale of AI inadvertently automating discrimination.

While Amazon’s case was an extreme (and embarrassing) example, it exposed a broader issue: many AI screening tools are effectively automating old biases present in the data or criteria they’re given. And even when overt bias isn’t the problem, a lack of contextual understanding is. For instance, an algorithm can’t yet gauge “potential” or “culture fit” or the value of a non-traditional career path the way a human might. An HR director at a Fortune 500 company recently lamented that their AI system was rejecting applicants who didn’t meet an absurdly specific checklist, even though some of those people could have been great hires after a brief training. “The algorithm has no common sense,” they sighed – it couldn’t see the forest for the trees in candidate profiles.

The takeaway? AI isn’t (yet) a substitute for human judgment in spotting talent. If companies rely on it too heavily, they risk throwing away diamonds in the rough. Until AI can appreciate a candidate’s unique story – or at least until it’s calibrated and monitored well – we’re going to keep seeing qualified folks fall into the resume black hole. And each of those lost candidates is a lost opportunity for the business.

When HR Algorithms Go Off the Rails: Cautionary Tales

If the resume black hole is frustrating, some of the more dramatic AI failures in HR have been downright facepalm-worthy. Let’s tour a few high-profile examples from the last couple of years where AI in HR went completely sideways, causing headaches for everyone involved. Consider this the HR edition of “Chatbots Behaving Badly.”

Reading through these examples, a common theme emerges: lack of proper oversight and testing. In each case, the AI did something a savvy human could have flagged as a bad idea – if only someone had been watching closely. Whether it’s blatant bias, legal ignorance, or just contextual stupidity, these failures were predictable. As one tech pundit quipped, “We’ve summoned the genie but we’re not paying attention to the wishes.” Before unleashing an AI system, you have to ask: what could go wrong? (Because something always goes wrong.) And after deploying, you must monitor it like a hawk.

When AI HR integrations fail, the consequences range from embarrassing to catastrophic. Companies have faced public outrage, regulatory investigations, and multimillion-dollar settlements. The wrong move can tank employee morale (imagine discovering your employer’s algorithm thinks you’re unfit for a promotion because you’re deaf, as in the Intuit case) or spark boycotts from candidates who feel the process is rigged. For investors pushing AI to boost share value, nothing will sink that value faster than an AI scandal that forces a company into apology mode – or into court.

Humans vs. Machines: Job Security in the Age of AI

Beyond the direct hiring process, there’s another elephant in the room whenever AI enters the workplace: Will the robots take my job? This concern hangs heavy over HR teams and employees alike. After all, if an algorithm can shortlist candidates or answer HR inquiries, what stops it from eventually replacing HR staff or other roles entirely? It’s a valid question – and it’s causing a fair bit of anxiety on both sides of the hiring desk.

Let’s start with the employees. Recent surveys show that AI anxiety is widespread in the workforce. A late-2023 study by EY found a staggering 75% of workers are concerned that AI will make certain jobs obsolete, and about two-thirds (65%) are explicitly worried that their own job could be replaced by AI. That’s right: roughly two out of every three people are looking at ChatGPT or some HR bot and thinking, “Is that thing coming for my paycheck?” Not only that, but employees fear AI could hurt their career growth in less direct ways too. In the same study, 72% thought AI might negatively impact their salary or pay (perhaps by automating higher-paying roles or creating a surplus of labor), and 67% worried they’d miss out on promotions if they don’t master AI tools. Essentially, workers feel they’re in a race to upskill or be left behind – a tough spot that HR departments and leaders need to manage with empathy and training.

Now, what about HR professionals themselves? You’d think the folks implementing these systems would feel secure, but even many HR practitioners privately fret about their future in an AI-driven world. The truth is, certain traditional HR tasks (screening resumes, scheduling interviews, answering routine employee questions) are highly automatable. Forward-looking HR teams see this as an opportunity: offload the drudgery to AI and free up human HR for strategic, high-touch work. But not everyone is convinced their organization will navigate that transition gracefully. A recent Gallup poll of CHROs (Chief Human Resource Officers) found 72% believe AI will eliminate more jobs than it creates in the near term. Even if, in the long run, AI creates new roles (as many economists predict), there’s palpable fear of a bumpy ride where a lot of people could lose out in the interim.

The flipside of the fear is that humans bring irreplaceable strengths to HR and people management. AI might beat us at sorting data or even at detecting patterns of attrition risk, but it can’t (so far) replicate empathy, complex judgment, and the personal touch that good HR professionals provide. In fact, many of the failures we described earlier (from biased hiring algorithms to tone-deaf chatbots) underscore how badly things go when humans are taken out of the loop. Rather than making HR jobs obsolete, these fiascos highlight the need for humans to work alongside AI as supervisors, interpreters, and mitigators. A well-trained HR analyst who understands AI can use it as a powerful tool – double-checking its suggestions, overriding it when needed, and focusing on the human elements of talent management that no machine can handle alone.

A case in point: when Klarna reversed its AI-first customer service strategy, it wasn’t an “AI or humans” decision – it became an “AI and humans” solution. Klarna still values automation, but they discovered the hard way that people are essential for quality service. The company opted to bring back human agents (even if in flexible or freelance roles) to complement the AI, aiming for the best of both worlds. This hybrid approach is increasingly seen as the model for HR as well: let AI do the heavy lifting on repetitive tasks, while humans handle the nuanced, strategic, and compassionate work.

For employees worried about job security, the message from forward-thinking leaders is “adapt and evolve.” Smart companies are investing in re-skilling their workforce so that people can move into new roles that AI can’t do – or into roles managing and improving the AI itself. We’re already seeing new job titles like “HR AI Trainer” or “Ethical AI Officer” emerging in some organizations. Indeed, the real revolution isn’t AI itself – it’s how organizations choose to work with AI. Those that find the right balance will not only keep their people employed, they’ll likely have more engaged, higher-skilled teams as AI takes over the drudge work.

Still, this is cold comfort to someone who’s just watched a company implement an AI system that does half of what they used to do. The key for HR leaders is transparency and involvement. If you’re rolling out AI in, say, performance management or training, involve employees in that process. Explain what the AI will do, what it won’t do, and how people’s roles will shift. Offer training so employees can actually leverage the AI tools (remember, a significant chunk of workers feel their organizations aren’t teaching them how to use AI ethically or effectively). By bringing employees along, you replace fear with curiosity – or at least with managed fear.

Bottom line: AI will undoubtedly change HR and many jobs, but a failed integration (one done carelessly, without human-centric design) is likely to cause more havoc – lost talent, legal trouble, demoralized staff – than if you had done nothing at all. In contrast, a well-managed integration, with humans firmly in control of the off switch, can augment and elevate HR work. Which side of that coin you end up on depends on the choices leaders make today.

Legal and Privacy Landmines: Why “Set it and Forget it” Won’t Fly

So far we’ve focused on the practical and ethical pitfalls of AI in HR, but let’s address the big legal elephant in the room. Rolling out AI in hiring or HR without proper control isn’t just risky from a business standpoint – it can land you in serious legal and regulatory trouble. In fact, regulators on both sides of the Atlantic have made it clear that when it comes to employment decisions, you can’t simply defer to “the algorithm”. If something goes wrong, your company will own that problem, in court if necessary.

Discrimination and bias laws are the most obvious area. In the U.S., there are decades-old laws prohibiting employment discrimination on the basis of race, sex, age, disability, etc. It doesn’t matter if a biased decision was made by a human manager or a machine-learning model – it’s illegal either way. We’ve seen this in the earlier examples: iTutorGroup’s settlement for age bias and the Workday lawsuit. The EEOC (the federal agency enforcing workplace discrimination laws) has been quite vocal that using AI won’t shield employers from liability. If your hiring AI ends up disproportionately screening out, say, older applicants or minority candidates, you could violate the Age Discrimination in Employment Act, Title VII of the Civil Rights Act, the Americans with Disabilities Act, and so on. Even if the bias was unintentional or emerged from a black-box algorithm, ignorance is no defense. As EEOC Chair Charlotte Burrows said, “Even when technology automates the discrimination, the employer is still responsible”.

On top of federal laws, states are jumping in with their own rules. For instance, New York City recently implemented a law (NYC Local Law 144) that requires employers to conduct bias audits on AI hiring tools and to notify candidates when AI is being used in the assessment process. Companies that fail to do so can face fines – in NYC’s case, up to about $1,500 per day of non-compliance. Illinois has a law regarding AI analysis of video interviews (aimed at tools like HireVue), requiring consent and disclosure. These are just the tip of the iceberg; as of mid-2025, more than 20 U.S. states or local jurisdictions have passed laws or regulations imposing obligations on employers using AI for hiring or employee data. The patchwork is growing, even as the federal stance remains a bit inconsistent.
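
For readers wondering what a “bias audit” actually computes, here is a minimal sketch of the core number regulators care about: each group’s selection rate and its impact ratio against the most-selected group. The applicant and pass-through counts below are invented, and a real audit under NYC Local Law 144 also involves an independent auditor, prescribed demographic categories, and public posting of results; this only illustrates the arithmetic.

```python
# A minimal sketch of the arithmetic behind a hiring-tool bias audit:
# per-group selection rates and the impact ratio against the most-selected
# group. The applicant and pass-through counts are invented; a real audit
# under NYC Local Law 144 also involves an independent auditor, prescribed
# demographic categories, and public posting of the results.

applicants = {"group_a": 400, "group_b": 300, "group_c": 150}
advanced   = {"group_a": 120, "group_b":  60, "group_c":  18}

selection_rate = {g: advanced[g] / applicants[g] for g in applicants}
best_rate = max(selection_rate.values())

for group, rate in selection_rate.items():
    impact_ratio = rate / best_rate
    # The "four-fifths rule" (impact ratio below 0.8) is a common red flag.
    flag = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

The math is trivial; the hard part is collecting honest data, running the audit regularly, and acting on what it shows rather than filing it away.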

Now, pivot to data privacy, which is a huge deal especially in Europe. In the EU, personal data of candidates and employees is protected by the General Data Protection Regulation (GDPR). If you use AI to process someone’s data (like analyzing their resume or video interview), you must comply with GDPR’s strict requirements on transparency, data minimization, purpose limitation, and security. Perhaps most relevant is GDPR’s Article 22, which gives individuals the right not to be subject to solely automated decisions that have a significant effect on them (like, say, not being hired for a job), unless certain conditions are met. What does that mean? In plain terms, if you are hiring in Europe and let an AI make the decision without any human review, you could be violating GDPR. European regulators and courts have already shown they’re willing to act on this. In a prominent 2023 case, the Amsterdam Court of Appeals ruled that Uber’s automated “robo-firing” of drivers violated GDPR because there wasn’t meaningful human involvement in the decision. Uber had to provide transparency and potentially reinstate or compensate drivers whose accounts were terminated by an algorithmic fraud detection system. The principle carries over to hiring: a purely algorithmic rejection with no human check and no chance for the candidate to appeal can be a legal ticking time bomb in the EU.

Furthermore, the EU isn’t stopping at GDPR. Regulators there have forged ahead with the EU AI Act, a sweeping regulation specifically targeting AI systems. Under this law (which entered into force in August 2024, with requirements phasing in through 2025-2026), AI tools used for recruitment, hiring, promotion, or termination are classified as “high-risk”. That designation means companies deploying such tools in Europe will have to meet strict standards for risk assessment, transparency, and human oversight. For example, providers might have to conduct conformity assessments before putting an HR AI system on the EU market, and employers using them will need to keep detailed logs and documentation. The AI Act also bans outright certain “unacceptable” AI uses, like social scoring systems or predictive policing. While hiring AI isn’t banned, any whiff of something like scoring candidates’ “social behavior” could be seen as crossing into risky territory. Notably, as of February 2025, EU companies (and non-EU companies doing business in the EU) were required to eliminate any AI practices deemed unacceptable and train employees on compliant AI use. If that sounds like a big compliance lift – it is. But ignore it at your peril: violations of the EU AI Act can draw fines up to €35 million or 7% of global revenue, even higher than GDPR’s cap of 4% of global turnover for its most serious violations.

Speaking of fines and penalties, consider the differences in scale. In Europe, regulators love their big fines for big companies – think of those multi-hundred-million-euro GDPR fines in recent years. As mentioned, GDPR can hit 4% of worldwide annual revenue for serious infractions. For a Fortune 500, that could be billions of dollars. The EU AI Act raises the stakes to 7% for the worst offenses. That’s “shareholder value” vaporized in an instant if you mess up. In the U.S., penalties for AI misuse have been more case-by-case: settlements like iTutorGroup’s $365k or whatever damages might come from private lawsuits (which, in class actions, can still reach the millions). However, U.S. companies should note that while regulatory fines might be lower per violation, American juries can award punitive damages in discrimination cases that far exceed actual damages – not to mention legal fees and reputational damage. Also, if you’re a global company, you might end up dealing with both U.S. and EU regimes, doubling your trouble.

It’s not only discrimination and privacy. Other legal gotchas include data security (if you’re feeding personal data into an AI, you better secure it – a breach of candidate or employee data invites lawsuits and regulatory fines on the cybersecurity front) and duty to accommodate disabilities. The Intuit promotion case we discussed is a prime example: a deaf employee alleges the AI interview platform didn’t offer proper captioning, thus denying her a fair shot and violating disability discrimination laws. Employers must remember that using AI doesn’t absolve them from accommodations – you may need to provide alternative assessments for those who can’t engage with the AI in the standard way (e.g., if a video interview AI can’t handle sign language, you’d better have a human alternative).

Finally, transparency requirements are rising. In many jurisdictions (several U.S. states, for example, and under the EU AI Act), organizations must inform candidates when AI is being used and in some cases get consent for it. For instance, an Illinois law requires you to notify and get consent from applicants for AI video interviews. If a candidate says “no, I don’t want an algorithm judging me,” you may need to offer a human process as an alternative. Failing to do so could lead to legal complaints or at least bad press. And from a fairness perspective, transparency is key: people are more accepting of AI tools if they know how they work and how decisions are made. When decisions feel like they came from a black box, distrust grows – and that’s when people lawyer up.

US vs. Europe in summary: Europe is putting up hard guardrails – comprehensive privacy laws and AI-specific regulations that mandate human oversight and carry heavy fines. The U.S., lacking a federal AI law, is taking a patchwork approach: some guidance here, some local laws there, and the use of existing discrimination laws to reel in the worst abuses. The current U.S. regulatory climate is in flux; at one point the federal government was considering more AI guidance, but political shifts have trended towards less regulation in 2025. As a result, states like New York, California, Illinois, and even some unlikely ones (hello, Alabama bias audit law!) are stepping in. It’s a bit of a Wild West compared to the EU. But don’t interpret that as a free pass – the legal consequences of an uncontrolled AI rollout in HR can be dire in any jurisdiction.

To put it bluntly: a failed AI integration in HR not only risks bad outcomes, it risks courtroom drama and massive fines. As one report noted, companies worldwide could collectively lose hundreds of billions of dollars by 2025 due to AI implementation failures – once you factor in wasted investment, legal penalties, and remediation costs. So if the motivation for pushing AI is ROI and shareholder value, know that getting it wrong will have the opposite effect. An AI screw-up in HR can mean costly lawsuits, regulator audits, negative press, employee distrust, and sunk costs to fix or scrap systems. The only winning move is to play carefully – which brings us to our final act: how to do AI in HR right, or at least less badly.

AI in HR, Done Right: A Human-Centric Playbook

By now, you might be thinking AI in HR is a minefield best avoided entirely. Not so fast – despite all the horror stories, AI can deliver real benefits in HR when used wisely. The key is to implement it in healthy doses with plenty of human oversight and common sense. It’s absolutely possible to leverage AI to speed up drudgery and improve decision-making without creating a dystopian hiring process or a legal quagmire. The difference lies in how you introduce and manage these tools. Think of AI like a powerful medication: administered correctly, it can cure ailments; administered haphazardly, it can cause harm. So, what does a good “prescription” for AI in HR look like? Here’s a playbook for executives, HR leaders, and even investors to consider:

When AI in HR is done right, the results can indeed be impressive. Mundane tasks get streamlined – freeing HR professionals to focus on strategic initiatives and genuine human connection. Candidates get quicker responses and maybe even a more personalized experience. Bias can be reduced (yes, AI can help here too, by highlighting inconsistent hiring patterns or flagging potentially biased language in job descriptions). But none of this happens by accident or simply by purchasing a fancy system. It takes a deliberate strategy: humans and AI working in tandem, each doing what they do best. As one HR tech expert wisely noted, “The real revolution is working alongside AI, not bowing down to it”.
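
As a small illustration of that last point, here is a sketch of flagging potentially exclusionary wording in a job description. The term list is a tiny invented sample, not a validated lexicon (real tools draw on much larger, researched word sets), but it shows the mechanic.

```python
# A minimal sketch of flagging potentially exclusionary wording in a job
# description. The term list is a tiny invented sample, not a validated
# lexicon; real tools draw on much larger, researched word sets.

import re

FLAGGED_TERMS = {
    "rockstar": "consider a neutral term like 'expert'",
    "ninja": "consider 'specialist'",
    "young and energetic": "age-coded; describe the work instead",
    "aggressive": "consider 'proactive' or 'ambitious'",
}


def review_job_description(text: str) -> list[str]:
    """Return flagged phrases with suggested alternatives."""
    findings = []
    for term, suggestion in FLAGGED_TERMS.items():
        if re.search(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            findings.append(f"'{term}' -> {suggestion}")
    return findings


job_ad = "We want a young and energetic coding ninja for our aggressive growth team."
for finding in review_job_description(job_ad):
    print(finding)
```

Notice that the tool only suggests; a person still decides what the ad should say. That division of labor is the whole playbook in miniature.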

In the end, human judgment, empathy, and oversight are the secret sauce that turns an AI deployment from a risky bet into a competitive advantage. HR is, fundamentally, about humans – nurturing talent, building culture, and navigating complex social dynamics in organizations. AI is a powerful tool in the HR toolbox, but it’s not a replacement for the human touch. As executives and investors eagerly fund AI initiatives, they must also invest in the less glamorous side: governance, training, and change management. It might not be as exciting as watching an algorithm “automate hiring,” but it’s the difference between a successful integration and a headline-grabbing debacle.

Conclusion: Walking the Tightrope with Eyes Wide Open

AI in HR is a classic high-reward, high-risk proposition. On one hand, you have the enticing promise – faster hiring, better matches, less paperwork, more insights, maybe even removal of human biases. On the other, the perils – good candidates inadvertently rejected, biased models perpetuating inequality, privacy violations, bewildered employees, and the ever-present specter of lawsuits and reputational damage if things go wrong. The past two years have provided a crash course in what happens when organizations charge into AI without understanding it: we’ve seen brilliant successes in some corners, but also spectacular failures that read like cautionary tales from a dark comedy.

For the C-suite readers and investors: pushing your HR teams to “implement AI everywhere, ASAP” might seem like visionary leadership in the boardroom, but in practice it’s like telling a chef to use a blowtorch for every dish because it’s a hot new tool. The results will vary, and you might set the kitchen on fire in the process. Responsible leadership in 2025 means tempering enthusiasm with due diligence. By all means, explore AI’s benefits – there are real gains to be had – but insist on guardrails, ask the uncomfortable questions (“how do we know this algorithm isn’t sexist?”), and listen to the people who actually work with these tools daily (your HR staff and the applicants going through the system). Remember that shareholder value can just as easily be destroyed by a bungled AI rollout as created by a successful one. There is no faster way to turn a stock price sour than becoming the poster child for an AI-driven discrimination or privacy scandal.

For the HR professionals: you are the bridge between the technology and the people. It’s a tough spot – you’re asked to champion innovation and efficiency, but also to uphold fairness, morale, and compliance. Don’t be afraid to pump the brakes when an AI proposal seems half-baked or when the folks upstairs don’t grasp the nuances. Your intuition and expertise are incredibly valuable in this AI era. You know that hiring isn’t just data – it’s intuition, potential, and fit. So be the human advocate in the room. If an algorithm says “reject,” it’s okay to say, “let’s double-check.” If leadership says “replace our recruiters with a chatbot,” you can reply, “how about we let the chatbot handle the midnight queries about PTO balance, and keep humans for the hard conversations?” In other words, use AI to enhance your work, not eclipse it.

And for all the employees out there biting your nails about robots coming for your jobs: it’s true that change is afoot, and some jobs will evolve or even disappear. But history shows technology creates new opportunities even as it disrupts. The fact you’re reading an article about AI in HR means you’re already ahead of many in thinking about these issues. Stay curious and keep learning – become the person in your organization who knows how to use the AI tools, or better yet, who helps make them fair and effective. HR isn’t going to turn into Skynet overnight; it’s going to need real people to guide it for the foreseeable future.

In a sense, we’re all walking a tightrope: leveraging AI’s advantages in HR without falling prey to its pitfalls. It is possible to achieve that balance, but only with eyes wide open and a willingness to adjust course when wobbles happen. If we do it right, maybe in a few years we’ll be telling success stories of AI in HR – how it found a great candidate everyone else overlooked, how it helped eliminate a bias we humans kept missing, how it freed up time for HR to actually talk to people instead of pushing paperwork. Until then, consider yourself warned (and informed) about the rocky road we have to navigate.

AI can be a trusted assistant or a loose cannon. The deciding factor is us. As a cheeky final thought: the next time someone says “our AI hiring tool will revolutionize everything,” feel free to respond with a wink, “Sure – just make sure it doesn’t revolutionize us into a legal settlement or a viral news story.” In HR, as in life, common sense and a bit of humor go a long way – even with the fanciest of algorithms.

About the Author

Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact.

©2025 Copyright by Markus Brinsa | Chatbots Behaving Badly™