Chatbots Behaving Badly
Software Update With a Scalpel - When “smart” medical devices start acting like consumer tech
This episode is based on the article "The Accuracy Discount - What happens when “80% is fine” gets anywhere near a human skull" written by Markus Brinsa.

AI didn’t march into the operating room with a dramatic entrance. It arrived the way risk usually arrives in 2026: as a “software update.”

In this episode of Chatbots Behaving Badly, the host breaks down the Reuters reporting on AI-enabled medical devices and what happens when machine-learning features get bolted onto tools that clinicians may treat as authoritative. The conversation quickly turns to the real hazard: not “evil AI,” but governance gaps. Validation that looks good on paper but not in real clinical conditions. Interfaces that make uncertainty feel like certainty. Update cycles that behave like consumer software while the stakes behave like neurosurgery.

Joined by guest Christy Walker, an independent researcher in healthcare technologies, the host unpacks why these risks are so hard to detect early, what defensible validation actually looks like, and why hospitals and vendors should treat AI-enabled changes as safety events, not feature releases. The future of AI in medicine may still be promising, but only if the industry stops confusing "AI-powered" with "clinically trustworthy."

