How to keep your head when everything on your screen can be faked—and how to prove what’s real.
The photo hits your feed at 09:03. The caption screams catastrophe. Your stomach drops, your thumb hovers, and for a few jittery seconds, the markets even flinch. Then officials say it never happened. The image was synthetic theater. We’ve been here before, and we’ll be here again. In a world where any pixel can lie and any voice can be borrowed, “digital authenticity” is no longer a nice-to-have; it’s the seatbelt you only notice when it saves your life. When I say authenticity, I don’t mean truth—I mean proof: proof of origin, proof of integrity, proof of context.
Authenticity is the chain that connects a thing to the hands that made it and the path it took to you. It rests on identity, integrity, provenance, and context: who authored it, whether it was altered, how it traveled, and what disclosures belong with it. It doesn't guarantee a claim is correct; it guarantees the artifact is genuinely itself. That's the difference between a signed press photo and a rumor with a ring light.
There are two grand strategies. Provenance attaches durable receipts to content at creation and updates them with each edit, so the story of the file becomes part of the file. Detection inspects a finished artifact and guesses how it was made. Detection is an arms race; provenance, when implemented end-to-end, is paperwork. You want microscopes, but you lead with receipts.
Content Credentials—built on the C2PA standard—let cameras, creative tools, and platforms cryptographically sign media and carry a tamper-evident history forward. When those signatures survive your publishing pipeline and your CDN refuses to strip them, you get something radically better than vibes: a checkable provenance card for every image, clip, and composite. You don’t have to trust a caption; you can verify one.
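If you want to feel the mechanics in your hands, here is a deliberately simplified sketch of the underlying idea: hash the bytes, bind a few claims to that hash, sign the bundle, and any later edit breaks the seal. This is not the C2PA manifest format (real Content Credentials involve certificate chains, nested manifests, and dedicated SDKs), and every name and value in it is invented for illustration.

```python
# A minimal sketch of signed provenance: bind a hash of the asset to a small
# "receipt" and sign both, so any later edit to the bytes is detectable.
# NOT the C2PA manifest format; real Content Credentials use the C2PA SDKs
# and certificate chains. All names here are hypothetical.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_receipt(asset_bytes: bytes, claims: dict, key: Ed25519PrivateKey) -> dict:
    payload = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "claims": claims,  # e.g. {"creator": "newsroom-cam-07", "action": "captured"}
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": key.sign(blob).hex()}

def verify_receipt(asset_bytes: bytes, receipt: dict, public_key) -> bool:
    # Re-hash the asset and re-check the signature; tampering breaks one or both.
    if hashlib.sha256(asset_bytes).hexdigest() != receipt["payload"]["asset_sha256"]:
        return False
    blob = json.dumps(receipt["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(receipt["signature"]), blob)
        return True
    except Exception:
        return False

key = Ed25519PrivateKey.generate()
photo = b"raw camera bytes go here"  # stand-in for real image bytes
receipt = make_receipt(photo, {"creator": "newsroom-cam-07", "action": "captured"}, key)
print(verify_receipt(photo, receipt, key.public_key()))  # True until anyone edits the bytes
```

The real standard does far more, chaining edit history across tools and vouching for the signer's identity, but the verify step is the same spirit: recompute, compare, and trust the math rather than the caption.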
Invisible watermarks are helpful, especially when they’re widely adopted across image, video, audio, and text. But they’re only one layer, and adversaries will try to break them. The serious play is watermarks plus signed provenance, not either/or.
Regulators and platforms are forcing context where it matters most. Policies now require clear disclosures when synthetic or “realistically altered” media might be mistaken for reality, and some platforms surface “captured on camera” labels derived from Content Credentials. This isn’t a cure-all; it’s scaffolding for a culture that expects receipts.
If a file is authentic but the sender is spoofed, the audience is still lost. That’s why inboxes demand authenticated mail (SPF, DKIM, DMARC) and why brands are shifting staff accounts to passkeys. It’s dull plumbing—and that’s exactly why it works.
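Auditing that plumbing takes about a minute. Here is a small sketch, assuming the third-party dnspython package is installed, that checks whether a domain publishes SPF and DMARC records; the domain is a placeholder, and DKIM is left out because its record lives under a selector you have to know in advance.

```python
# Quick check: does this domain publish SPF and DMARC policies?
# Requires the third-party `dnspython` package; "example.com" is a placeholder.
# (DKIM lives at <selector>._domainkey.<domain>, so you need the selector name.)
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode() for r in answers]

domain = "example.com"
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
print("SPF:  ", spf or "missing")
print("DMARC:", dmarc or "missing")
```

Missing or lax records are exactly how spoofed "official statements" end up looking legitimate in an inbox.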
A single convincing fake can outrun the correction by hours and the truth by days. We’ve watched market blips from a synthetic photo and elections probed by deepfake robocalls. Without provenance and policy, outrage becomes a denial-of-service attack on attention itself.
Every app that captures, edits, or hosts media is part of the chain of custody. If your tools can be tampered with, so can your provenance. The fix is the same mindset we use for supply chains: sign artifacts, log builds in public ledgers, and publish attestations so your editors know the tools in their hands are exactly what you shipped.
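What that looks like in practice is surprisingly small. Below is a hedged sketch of a build attestation: digest the artifacts, record the source commit and the builder, and sign the statement the same way the provenance receipt above was signed. Real pipelines usually lean on tooling in the Sigstore and in-toto family rather than hand-rolled scripts, and every path and identifier here is a placeholder.

```python
# A bare-bones build attestation: what was built, from which commit, by whom,
# with content digests, all signed. Placeholder paths and names throughout;
# production pipelines typically use Sigstore/in-toto-style tooling instead.
import json
import hashlib
import pathlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def attest(artifact_paths: list[str], commit: str, builder: str, key: Ed25519PrivateKey) -> dict:
    statement = {
        "builder": builder,
        "source_commit": commit,
        "artifacts": {
            p: hashlib.sha256(pathlib.Path(p).read_bytes()).hexdigest()
            for p in artifact_paths
        },
    }
    blob = json.dumps(statement, sort_keys=True).encode()
    return {"statement": statement, "signature": key.sign(blob).hex()}

# Usage (hypothetical artifact produced by a CI job):
# record = attest(["dist/editor-1.4.2.tar.gz"], "9f2c1ab", "ci-runner-03", Ed25519PrivateKey.generate())
# pathlib.Path("dist/editor-1.4.2.attestation.json").write_text(json.dumps(record, indent=2))
```

Publishing that record to a public, append-only log is what turns "trust us" into "check for yourself."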
The authenticity story gets bigger—and more interesting—the moment you look upstream at the AI that touches your work. An image or paragraph isn’t just a file anymore; it’s the tip of a process iceberg. To keep the promise of “authentic,” you need receipts for the inputs that trained the model, the model and pipeline that produced the output, and the output itself that audiences will see and share.
Start where the learning begins: the training set. If a model quietly learned from materials you wouldn’t publish or license, output-level labels are too little, too late. Responsible builders are documenting the composition of their corpora, the collection windows, the licensing posture, the share of synthetic versus human-created material, and the mechanisms to remove or quarantine data later. They’re keeping a changelog for datasets, naming their crawlers in public, and maintaining a record of what’s in and what’s out so rightsholders can actually check. That paper trail isn’t performative; it’s how you prove your model wasn’t smuggled into existence.
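A dataset changelog entry doesn't need to be elaborate to be useful. Here is an illustrative sketch, with every field name, date, and number invented for the example, of the kind of record that makes those questions answerable later.

```python
# One illustrative changelog entry for a training corpus. All names, dates,
# and percentages are hypothetical; the point is that each question a
# rightsholder might ask has a field that answers it.
dataset_changelog_entry = {
    "dataset": "newsroom-corpus",          # hypothetical corpus name
    "version": "2025.03",
    "collection_window": ["2023-01-01", "2024-12-31"],
    "sources": {"licensed_archives": 0.62, "public_domain": 0.23, "synthetic": 0.15},
    "licenses": ["CC-BY-4.0", "in-house editorial license"],
    "crawlers": ["newsroom-crawler/1.2"],  # named publicly, honors robots.txt
    "removed_this_version": 184,           # records pulled after takedown requests
    "quarantined_pending_review": 37,
    "contact_for_removal": "data-steward@example.com",
}
```

Version it, sign it like any other artifact, and keep the old entries around; the history is the point.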
Then move to the model itself. Treat weights like code. Sign them. Record the training configuration, major checkpoints, augmentation steps, and the evaluation suite used to judge performance. Keep attested logs for fine-tunes and safety updates, because a model’s personality can change meaningfully with a small nudge to the diet or the guardrails. This is where the industry’s “model cards” and “datasheets for datasets” stop being academic niceties and start being your warranty.
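Hashing multi-gigabyte weights is the unglamorous first step, and it's worth doing in a streaming fashion so you never load the whole file at once. A sketch, with hypothetical file names and metadata, that produces the digest you would then sign and pin alongside the training record:

```python
# Stream a large weights file through SHA-256 so huge checkpoints never have
# to fit in memory. The resulting digest goes into the signed model record.
# File names and metadata below are hypothetical.
import hashlib

def digest_file(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# model_record = {
#     "weights_sha256": digest_file("checkpoints/brandgen-7b-v3.safetensors"),
#     "base_model": "brandgen-7b-v2",
#     "finetune_data_version": "2025.03",
#     "eval_suite": "internal-redteam-v5",
# }
```

Sign that record like the receipts above, and a quiet weekend fine-tune can no longer masquerade as the model you actually evaluated.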
Finally, carry the receipts to the audience. When an output lands in your CMS, it should arrive with an attached record that declares whether AI was involved, how it was involved, and what edits happened afterward. Content Credentials already have a vocabulary for “AI-generated” and “AI-assisted” assertions, which means the same verify button that works for a photojournalist’s raw can also tell you that a brand image started life in a text-to-image model and was retouched in post. Disclosure isn’t a confession; it’s a sign of professionalism.
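As a rough illustration, not the literal C2PA assertion schema, the disclosure that rides along with an asset in a CMS can be as simple as a few fields the verify button knows how to read; every value below is a placeholder.

```python
# A simplified disclosure record attached to an asset in a CMS. It mirrors the
# spirit of "AI-generated" / "AI-assisted" assertions, not the exact C2PA
# schema; every value is a placeholder.
disclosure = {
    "asset_id": "img-2025-00431",
    "origin": "ai_generated",          # or "ai_assisted", "captured"
    "generator": "text-to-image model (vendor and version recorded separately)",
    "post_edits": ["retouch", "crop"],
    "human_reviewer": "photo-desk",
    "credentials_attached": True,      # signed provenance travels with the file
}
```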
Executives keep asking for a one-page checklist. Here’s the idea without bullets or bureaucracy: an AI Bill of Materials reads like a passport stamp for your model. It says where the data came from, when it was collected, what share was synthetic, which licenses apply, and who can revoke what; it names the crawlers used and the domains that contributed the most; it specifies the model version, the fine-tune lineage, the safety filters in place, and the evaluation sets that gave you confidence; it includes signatures for the weights, the configs, and the outputs you ship to your audience. If someone challenges a claim, you don’t scramble—you open the folder.
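If that paragraph were a folder, its index might look like the sketch below: a hedged, illustrative AI-BOM whose field names and values are invented for the example, not lifted from any formal standard.

```python
# An illustrative AI Bill of Materials. Field names and values are invented;
# the shape is what matters: data, model, and outputs each carry receipts.
ai_bom = {
    "data": {
        "corpus_version": "2025.03",
        "collection_window": ["2023-01-01", "2024-12-31"],
        "synthetic_share": 0.15,
        "licenses": ["CC-BY-4.0", "in-house editorial license"],
        "crawlers": ["newsroom-crawler/1.2"],
        "top_contributing_domains": ["example-archive.org", "example-wire.com"],
        "revocation_contact": "data-steward@example.com",
    },
    "model": {
        "version": "brandgen-7b-v3",
        "finetune_lineage": ["brandgen-7b-base", "brandgen-7b-v2"],
        "safety_filters": ["prompt-filter-v4", "output-classifier-v2"],
        "eval_sets": ["internal-redteam-v5", "public-benchmark-suite"],
        "weights_signature": "ed25519:<hex digest>",
    },
    "outputs": {
        "credentials": "Content Credentials attached at export",
        "disclosure_default": "ai_generated unless an editor overrides it",
    },
}
```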
If you publish, begin at capture and keep the label alive. Enable Content Credentials in-camera or in your tools, and configure your pipeline so the signature survives your CDN and your CMS. If you build or buy models, demand an AI-BOM, not a shrug. If you're a platform, don't strip provenance by default, and put a verify affordance next to media so audiences can check without leaving. If you run communications, authenticate your mail and move your teams to passkeys. If you ship software, sign your builds and keep a public notebook of how they were made. None of this guarantees truth. It does give your audience (and your future self) something sturdier than anxiety.
No standard will stop motivated people from lying, but standards can make lying expensive, traceable, and slower to spread. We shouldn’t promise perfect detection; we should promise accountability by design. That means receipts you can verify, histories you can audit, and warnings where ambiguity remains. The point isn’t to halt deception altogether; it’s to keep reality competitive.
Chatbots Behaving Badly has one rule for “real”: if it matters, make it checkable. Your post can be satirical; your photo can be staged; your podcast can have an AI-polished script. Just carry your paperwork. When the next viral “event” hits your feed, you should be able to answer three questions in under ten seconds: Who made this? What changed? Who says so?
If you can’t, don’t share it. If you can, share the receipts with it.