
Gibberish on the Record - AI note-takers are creeping into child protection

The new superpower nobody asked for

Social work has never been short on urgency. The decisions are difficult, the paperwork is endless, and the margin for error is effectively zero. So when someone shows up promising to shave hours off admin, the pitch lands like relief.

In parts of England and Scotland, that promise is being delivered by AI note-takers that record meetings and spit out transcripts and summaries. Tools like Magic Notes and Microsoft Copilot are used to turn messy, emotional, complex conversations into something that fits within official case systems. The idea is simple: less typing, more time with people.

Then the transcript starts talking about fishfingers.

That’s not a joke. In interviews collected during an eight-month study shared with The Guardian, social workers described AI outputs that replaced a child’s description of parents fighting with unrelated words such as “fishfingers or flies or trees.” Others said the tools produced “gibberish.” And in one of the most alarming examples, a summary implied suicidal ideation when the client had never discussed anything of the kind.

In most workplaces, that would be embarrassing. In child protection, it’s radioactive.

Why hallucinations hit harder in social care

The danger isn’t just that the AI gets words wrong. It’s that it gets reality wrong in a context where records don’t sit quietly in a folder. They travel. They get shared. They shape risk assessments, care plans, referrals, and court narratives. They become the “what happened” that other professionals inherit.

Social work documentation has a special kind of power. It’s not a diary. It’s an instrument. A poorly written sentence can harden into a label. A misheard phrase can become a risk marker. A fabricated detail can change the trajectory of a family. 

AI note-takers don’t need to be malicious to do damage. They only need to be confidently wrong in the kind of way that looks plausible when you’re exhausted and rushing to close a case. That’s why the most chilling part of this story isn’t the hallucination itself. It’s the workflow around it.

The human in the loop who is out of time

If you ask vendors and adopters what keeps this safe, you’ll hear the same phrase, delivered like a magic spell: human in the loop. The AI is only a draft. The professional checks it. Nothing to worry about.

The problem is that “human in the loop” is not a control. It’s a hope. It’s a staffing model. It’s a mood. The reporting around this study highlights how uneven the checking actually is. Some social workers described spending serious time reviewing transcripts. Others described spending minutes. And the moment you can cut and paste a neat-looking summary into a system, the temptation isn’t just to trust it. It’s to treat it like it came from you.

At that point, the tool isn’t saving time. It’s shifting responsibility. The risk doesn’t disappear. It gets pushed downhill, onto the person with the least time and the most liability. 

That dynamic is exactly what the Ada Lovelace Institute flagged more broadly in its work on AI transcription tools in social care: adoption is accelerating, evaluations are often focused on efficiency, and accountability for output quality tends to land on frontline workers, even though they don’t control the model design, the tuning, or the guardrails.

In other words, the system is being introduced as an operational fix, and governed like a convenience feature.

This is not “just transcription”

A normal transcription error is a typo. A missed word. A name spelled wrong. Annoying, but usually obvious.

These tools are different because they often combine two stages. First, automated speech recognition turns audio into text. Then a large language model reshapes that text into summaries and structured notes, sometimes in the tone and format your organization expects. That second stage is where things get weird, because the model isn’t merely repeating. It’s interpreting.
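
To make that concrete, here is a minimal sketch of the two-stage shape in Python. Every name in it is hypothetical: transcribe() and summarize() stand in for whatever ASR and LLM services a given product actually wires together, and the canned outputs exist only to show where invention can enter.

# Stage 1: automated speech recognition. Errors here are typo-like:
# misheard words, misspelled names, usually visible on a careful read.
def transcribe(audio_path: str) -> str:
    return "Child says mum and dad were shouting last night."  # canned output

# Stage 2: a language model reshapes the transcript into the expected tone
# and format. This is where interpretation enters, because the model is
# generating new text rather than repeating old text. A real system would
# send the transcript and a prompt template to an LLM here.
def summarize(transcript: str, template: str) -> str:
    return f"Summary ({template}): Parental conflict reported; child appeared distressed."

if __name__ == "__main__":
    transcript = transcribe("visit_recording.wav")
    print(summarize(transcript, "case-note"))
    # "appeared distressed" is nowhere in the transcript: stage two added it

Nothing constrains stage two to stay inside what stage one produced, and that gap is where the fishfingers come from.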

Interpretation is not neutral. It introduces assumptions, narrative glue, and sometimes invented connective tissue that feels “right” to the model even when it never happened. In social care, that’s not a feature. That’s a governance problem wearing a productivity badge.

The most revealing quote in the reporting is the one about the moment it stops being your assessment and starts being the AI’s. That’s the entire issue in one sentence. Social work requires reflective practice. The writing is part of the thinking. If the tool does the thinking-shaped part for you, you don’t just lose accuracy. You lose process. And then you still sign your name.

The politics of “incredible time savings”

This story also has a familiar soundtrack: public leaders praising innovation while the frontline quietly deals with what innovation actually does on Tuesday afternoon.

The Guardian notes that the UK prime minister previously championed time-saving transcription technology in social work. That political enthusiasm makes sense. It’s clean. It’s optimistic. It photographs well. It implies modernization without mentioning budgets.

But AI doesn’t fix underfunding. It re-routes it.

If the reason you need an AI note-taker is that your workforce is stretched to the point of failure, then your quality controls are already under stress before the AI arrives. That’s the worst possible environment to introduce a system that sometimes produces plausible nonsense.

What goes wrong next if nobody slows down

If AI transcription errors become normal, you get a new kind of institutional decay. You get records that feel polished but drift away from lived reality. You get disagreements about what was said that turn into “the system says,” which is the bureaucratic version of “the computer is always right.” You get families trying to correct a narrative that was never theirs. You get professionals spending time defending documentation instead of improving outcomes. You get court-adjacent paperwork contaminated by machine-generated assumptions. 

And you get a quiet shift in culture: people stop treating the record as something you craft carefully, and start treating it as something you approve quickly. That’s how hallucinations scale. Not because the model is powerful, but because the workflow is permissive.

What “safe enough” actually looks like

The goal here isn’t to ban tools. It’s to stop pretending that a draft is harmless. If councils want the efficiency benefits, they have to treat AI transcription like a high-risk system, because in child protection and adult social care, it is. That means the controls have to be structural, not motivational.

It means keeping and securing the original audio as the true source record, with clear retention rules. It means requiring verification steps that can’t be skipped, and designing interfaces that make review unavoidable rather than optional. It means preventing direct cut-and-paste into final records without explicit confirmation that the content has been checked against the audio. It means auditing error rates in the real world, including how performance changes with accents, background noise, and distress. It means documenting where the tool was used in a case file so downstream readers know what they’re looking at. It means clear accountability for procurement decisions and model updates, not just for the social worker who clicked “accept.”
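
To show what a control that can’t be skipped looks like, here is a minimal sketch in Python. It assumes a hypothetical case-record API; CaseNote, verify, and commit_note are invented names, not any vendor’s actual interface.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CaseNote:
    text: str
    ai_generated: bool = True            # provenance travels with the record
    verified_against_audio: bool = False
    verified_by: str | None = None

class VerificationError(Exception):
    pass

audit_log: list[dict] = []               # who checked what, and when

def verify(note: CaseNote, reviewer: str) -> None:
    # The review step: the flag flips only via an explicit, logged action.
    note.verified_against_audio = True
    note.verified_by = reviewer
    audit_log.append({"event": "verified", "by": reviewer,
                      "at": datetime.now(timezone.utc).isoformat()})

def commit_note(note: CaseNote) -> None:
    # The gate: a structural control, not a motivational one.
    if note.ai_generated and not note.verified_against_audio:
        raise VerificationError("AI-drafted note not checked against audio.")
    audit_log.append({"event": "committed", "verified_by": note.verified_by,
                      "at": datetime.now(timezone.utc).isoformat()})

if __name__ == "__main__":
    note = CaseNote(text="Summary of home visit...")
    try:
        commit_note(note)                # blocked: no quiet cut-and-paste path
    except VerificationError as err:
        print(f"Blocked: {err}")
    verify(note, reviewer="j.smith")
    commit_note(note)                    # allowed, with provenance on the record

The point is not this exact code. The point is that the check lives in the system, where it cannot be skipped on a busy afternoon, rather than in a policy document, where it can.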

Most of all, it means being honest about what these tools are doing. They’re not dictation machines. They’re narrative engines.

In social work, narratives are power. If you outsource them to a system that sometimes invents fishfingers, you’re not modernizing the paperwork. You’re modernizing the risk.


©2026 Copyright by Markus Brinsa | Chatbots Behaving Badly™

Sources

  1. The Guardian - Social workers’ AI tool makes ‘gibberish’ transcripts of accounts from children theguardian.com
  2. Ada Lovelace Institute - Scribe and prejudice? Exploring the use of AI transcription tools in social care adalovelaceinstitute.org
  3. Ada Lovelace Institute - AI transcription is rapidly being rolled out across social work, but current approaches to ethics and evaluation are limited and light-touch adalovelaceinstitute.org
  4. Ada Lovelace Institute - What is an AI transcription tool? adalovelaceinstitute.org
  5. UK Digital Marketplace - Magic Notes pricing document (PDF) assets.applytosupply.digitalmarketplace.service.gov.uk

About the Author