There is something almost beautifully insulting about being impersonated by software and then being told this was really a form of respect.
That, in essence, was the pitch behind the now-disabled “Expert Review” feature from Grammarly, the company since renamed Superhuman. The idea was that users could get AI-generated feedback supposedly inspired by named experts, including journalists, authors, and public intellectuals.
Some of those people were alive. Some were dead. Many had never heard of the feature. None of them, as far as the public reporting showed, had agreed to become little editorial ghosts floating around inside a writing tool.
The Verge found that its own staff, including editor-in-chief Nilay Patel, had been turned into these synthetic review personas without permission. WIRED separately reported that the feature offered users AI reviews from famous writers and thinkers without consent.
This story combines three modern instincts into one tidy little corporate incident. First, the irresistible tech urge to ship something creepy and call it innovative. Second, the managerial belief that public content is communal paste, ready to be scraped, remixed, and stapled to a revenue feature. Third, the now-standard AI defense strategy: act as though everyone is overreacting to a harmless experiment right up until the moment the experiment explodes.
Then comes the truly elegant flourish. The fake expert comments reportedly appeared with names and even verification-style cues that made the whole thing look official enough to cross the line from “inspired by” into something much closer to impersonation.
That was central to Patel’s confrontation with Superhuman CEO Shishir Mehrotra on Decoder, where Patel pressed him on why the company thought a feature that used real people’s names without permission would go over well. Mehrotra admitted the feature “was not a good feature,” said it had little usage, and said the company killed it quickly, though he kept relitigating where attribution ends and impersonation begins.
The most revealing part of this whole mess is not that the feature existed. By now, AI companies trying something ethically deranged and then calling it a product test barely qualifies as weather. The revealing part is the logic underneath it.
Superhuman’s early public defense, as quoted by The Verge, was that the feature did not claim direct participation or endorsement from the named experts and merely provided suggestions inspired by publicly available work. That defense is doing a lot of cardio. Because ordinary humans do not experience this as a subtle philosophical exercise about attribution norms. They experience it as, “Why is this company using a real person’s name to generate advice that person did not write?” That is not an edge case. That is the entire case.
And this is where AI product culture keeps telling on itself. The builders often behave as if public writing is not expression but raw material. If you published it, let it be indexed, or dared to have influence in public, then congratulations: your work has been reclassified as training slurry. Your identity is now apparently a UX layer.
Your voice is a feature set. Your professional credibility is a convenient costume.
When Patel confronted Mehrotra on Decoder, the exchange got at a much larger tension. Mehrotra tried to frame the issue partly around the idea that using someone’s name can be a form of attribution, because online creators want to be linked and cited. That sounds plausible for about seven seconds, until you remember that citation and simulation are not the same thing.
A hyperlink sends people to your work. A fake AI editor wearing your name sends people to a machine’s approximation of what some product team thinks you might say. That is not tribute. That is ventriloquism with product-market fit aspirations.
One of the nastiest details in the reporting is how banal the whole mechanism was. This was not some sci-fi android replacing an investigative reporter at a newspaper. It was a sidebar feature in a consumer writing tool. A little helper. A little nudge. A little “what would this famous person say?” box. The kind of thing product people love because it looks small enough to avoid triggering adult supervision. But the smallness is the trick.
The AI industry has become very good at hiding major boundary violations inside minor conveniences.
Add a badge, add a name, add a vaguely official interface, and suddenly you are no longer just offering generic machine feedback. You are smuggling authority. You are renting trust you did not build.
That matters because most users will not parse the ontology of “AI-generated suggestions inspired by public works.” They will parse the interface. If software shows them a real person’s name next to editorial guidance, especially with design choices that suggest legitimacy, many will assume there is some actual relationship there. Some kind of approval. Some kind of participation. Some kind of reality.
The interface does the lying long before the terms of service arrive to explain that technically, legally, cosmically, no one promised anything.
The Verge also reported that the feature’s sources could be difficult to inspect and sometimes linked to spammy or archived copies instead of actual source pages, which makes the whole “explore more deeply” justification look even thinner.
This is the larger AI con in miniature.
The system borrows the outer shell of human authority while quietly replacing the human part with predictive paste. Then everyone acts shocked that the public feels tricked.
The sequence here was also depressingly familiar. The feature launched in August 2025. Backlash mounted after reporting revealed how it worked. Superhuman first moved toward an opt-out approach, then disabled the feature entirely. The company later said it wanted to rebuild the concept so experts would have real control over whether and how they were represented. WIRED and The Verge both reported that the feature was shut down after criticism, with the company apologizing and acknowledging it had missed the mark.
There it is again: consent, but after deployment. Participation, but retroactive. Control, but only once somebody gets caught.
This pattern should have a proper industry label by now. Not “move fast and break things.” More like “ship first and discover ethics through public humiliation.” It is one of AI’s signature management styles. Launch the synthetic thing. Borrow the reputation. See if anyone screams. If they do, apologize and announce a new framework where creators will finally be empowered. By then, of course, the company has already tested the boundary it actually cared about: not whether the idea was right, but whether it was survivable.
What makes the Superhuman story especially useful is that it strips away the usual grand rhetoric. This was not framed as Artificial General Intelligence (AGI), scientific transformation, or civilizational uplift. It was writing software. Productivity software, even. Which is precisely why it matters. When identity misuse shows up in the boring tools, the normal tools, the stuff people use to write emails and documents, that tells you the industry has already normalized a very dangerous assumption: if AI can create the effect of expertise, many companies no longer see a pressing need to secure the expert.
Mehrotra’s broader defense on Decoder was not simply that the old feature was clumsy. It was that there is still a future in which experts participate in platforms like this more directly, perhaps shaping AI versions of themselves and monetizing them. He compared aspects of the debate to earlier platform fights over creator economics. Nieman Lab’s write-up of the interview captured that larger framing as Superhuman tried to move from unauthorized editorial mimicry toward a model of creator-controlled AI participation.
That is the part worth watching, because it tells us where this does not end.
The industry keeps stumbling into the same destination from different directions: a world in which people become licensable interfaces. Not merely quoted, not merely cited, but packaged. Their judgment flattened into a subscription layer. Their style translated into a product tier. Their identity turned into a reusable asset that software can deploy at scale.
Some creators will absolutely opt in to that, and some may do well from it. Fine. Adults can make contracts. But the Superhuman episode exposed the temptation underneath the cleaner future story. If platforms think there is money in synthetic expertise, they will keep testing how much of a person they can capture before the person objects.
The unauthorized version is not a bug in the business model. It is the first draft of it.
That is why this incident is bigger than one creepy feature. It is about the industry trying to renegotiate what a public identity is worth. Once upon a time, using somebody’s name and authority in a commercial product without permission was the sort of thing that immediately sounded bad in the meeting room. In AI, it too often survives long enough to become a launch.
The easiest way to dismiss this story is to say the feature was buried, the product was clumsy, the company backed down, the market corrected, moving on. But that would miss the signal.
The signal is that a real company with real executives and roughly 1,500 employees, by Mehrotra’s own count on Decoder, got far enough into the process of turning real writers into synthetic editorial ornaments that the thing shipped. It had names. It had interface polish. It had a theory. It had, somewhere deep in the machinery, enough internal approval to become customer-facing.
That is not random weirdness. That is institutional judgment, briefly made visible.
And institutional judgment is the story now. The next generation of AI failures will not all arrive as hallucinations or chatbot breakdowns. Many will arrive as governance failures disguised as convenience features. Little identity shortcuts. Little consent shortcuts. Little credibility shortcuts. Little moments when software companies decide it is probably fine to simulate the presence of a human because the user mainly wants the effect anyway.
That is the logic that corrodes trust fastest. Not the spectacular robot uprising. The smaller insult. The quiet replacement. The moment your name, your face, your work, or your authority becomes a software component before anyone bothers to ask.
The industry did not just impersonate a few writers here. It revealed a deeper ambition: it wants the benefits of human credibility without the friction of actual humans. And, like most modern scams, it briefly tried to pass that ambition off as a feature.