Spellcheck used to be the most boring part of writing. It sat quietly in the background, underlining obvious mistakes, asking polite questions about typos you already suspected were there. You either accepted the correction or ignored it, and everyone went on with their day.
For more than a decade, Grammarly positioned itself not just as spellcheck, but as judgment. It didn’t merely flag errors; it evaluated tone, confidence, clarity, even intent. It promised to make your writing “better,” not just correct. For non-native English speakers especially, that promise mattered. Grammarly wasn’t just a tool. It was reassurance.
Over the last few weeks, Grammarly began quietly ignoring one of the most basic rules in written English: American English and British English are not the same thing. Words like “favourite,” “organise,” and “colour” slipped through documents configured explicitly for American English. Apple’s built-in spellchecker caught them instantly. Grammarly did not.
This wasn’t a rare edge case. It was systematic. And when pressed, Grammarly’s own chatbot admitted the problem wasn’t user error. It was a known issue caused by “recent updates.”
At that point, Grammarly stopped being an editor and became something else entirely.
There is something deeply revealing about a grammar tool that treats dialect rules as optional. American and British spellings are not stylistic flourishes. They are conventions. You pick one so readers don’t trip over inconsistencies, editors don’t roll their eyes, and your credibility doesn’t quietly erode.
That failure is not accidental. It's the byproduct of how modern AI writing tools work. Grammarly no longer operates as a deterministic rules engine. It is a probabilistic suggestion system trained on massive amounts of mixed-dialect text. The more it leans into "understanding" language, the less rigid it becomes about enforcing it. To a model trained that way, "favourite" and "favorite" are both simply common.
And common is not the same as correct.
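The deterministic approach is worth spelling out, because it shows why this class of bug should be impossible. A rules engine is just a lookup: same input, same flag, every time, with no probability threshold to drift. Here is a minimal sketch of that idea, using a tiny illustrative word list (a hypothetical three-entry mapping, not a real dialect dictionary):

```python
import re

# Tiny illustrative mapping of British spellings to American ones.
# A real checker would load a full dialect dictionary; this is a sketch.
BRITISH_TO_AMERICAN = {
    "favourite": "favorite",
    "organise": "organize",
    "colour": "color",
}

def flag_british_spellings(text: str) -> list[tuple[str, str]]:
    """Return (found, suggested) pairs for British spellings in text
    configured for American English. Purely deterministic: the same
    word always produces the same flag."""
    flags = []
    for word in re.findall(r"[a-zA-Z]+", text):
        suggestion = BRITISH_TO_AMERICAN.get(word.lower())
        if suggestion:
            flags.append((word, suggestion))
    return flags

print(flag_british_spellings("My favourite colour scheme"))
# -> [('favourite', 'favorite'), ('colour', 'color')]
```

A probabilistic model has no table like this to consult. It ranks suggestions by likelihood learned from training data, so a spelling that is frequent in that data can sail through even when the user's configured dialect forbids it.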
Spelling issues are annoying, but they are not the real problem. The real problem begins when Grammarly confidently rewrites sentences that were already correct.
Writers have reported Grammarly suggesting synonym swaps that subtly change intent, tone, or factual emphasis. A sentence meant to express caution becomes assertive. A nuanced argument becomes blunt. A careful distinction collapses into something vaguely adjacent.
This is where Grammarly’s AI crosses from assistance into hallucination. Not hallucination in the dramatic sense of inventing facts, but in the quieter sense of inventing intent. Grammarly assumes it knows what you were trying to say and optimizes toward that assumption.
Sometimes it guesses right. Often it does not.
For experienced writers, this is irritating. For less confident writers, especially non-native speakers, it is dangerous.
Grammarly markets itself as a safety net for people who don’t fully trust their English. That’s exactly the group least equipped to challenge its suggestions.
When Grammarly proposes a rewrite, the interface frames it as improvement. Green checkmarks signal success. Confidence meters go up. Resistance feels irrational. After all, the machine must know better.
But Grammarly does not understand nuance. It does not understand irony, subtext, or rhetorical pacing. It does not know when repetition is intentional or when a “weak” word is doing important work. It does not know your audience.
When pattern-matching overrides authorial intent, the result is writing that sounds polished and wrong at the same time. Readers sense it immediately. The text feels off. Sometimes it becomes unintentionally funny. Sometimes it becomes misleading. Sometimes it earns exactly the kind of laughter or complaints writers were trying to avoid by using Grammarly in the first place.
The deeper issue here is not Grammarly. It’s automation bias. When software presents suggestions with confidence, humans defer. Grammarly’s interface does not say “maybe.” It says “improve.” Over time, writers internalize the idea that correctness comes from approval, not judgment.
That’s how standards erode quietly. Not through spectacular failure, but through normalization. A spelling error here. A meaning shift there. A dialect inconsistency nobody notices until publication.
Grammarly didn’t suddenly become useless. It became untrustworthy. And those are very different failures.
After enough missed spellings and context-breaking suggestions, many users do what they never expected to do. They turn Grammarly off and fall back to built-in spellcheckers like Apple's.
Not because those tools are smarter, but because they are honest. They correct what they know how to correct and stay quiet about the rest.
That silence matters. A tool that knows its limits is safer than one that pretends not to have any.
Grammarly likes to position itself as an editor. It is not. Editors understand context, audience, and intent. They ask questions. They hesitate. They explain.
That distinction matters more now than ever, as AI tools expand from correcting mistakes to reshaping language itself. If a grammar tool can’t reliably enforce the rules you explicitly selected, it’s not an editor. It’s a pattern matcher with a confidence problem.
And trusting it blindly is how “favorite” quietly becomes “favourite,” and nobody notices until it’s too late.