Grammarly Is Not Your Editor

Spellcheck used to be the most boring part of writing. It sat quietly in the background, underlining obvious mistakes, asking polite questions about typos you already suspected were there. You either accepted the correction or ignored it, and everyone went on with their day.

Then Grammarly happened.

For more than a decade, Grammarly positioned itself not just as spellcheck, but as judgment. It didn’t merely flag errors; it evaluated tone, confidence, clarity, even intent. It promised to make your writing “better,” not just correct. For non-native English speakers especially, that promise mattered. Grammarly wasn’t just a tool. It was reassurance.

That’s why it’s unsettling when the reassurance starts lying.

Over the last few weeks, Grammarly has quietly begun ignoring one of the most basic rules of written English: American English and British English are not the same thing. Words like “favourite,” “organise,” and “colour” slipped through documents configured explicitly for American English. Apple’s built-in spellchecker caught them instantly. Grammarly did not.
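
To see how basic this check is, here is a minimal sketch of the kind of deterministic dialect lookup a conventional spellchecker performs. The word list is a tiny invented sample for illustration, not Grammarly’s or Apple’s actual implementation:

```python
# Toy dialect check: flag British spellings in a document set to American English.
# The mapping below is a tiny invented sample, not a real dictionary.
BRITISH_TO_AMERICAN = {
    "favourite": "favorite",
    "organise": "organize",
    "colour": "color",
}

def flag_british_spellings(text: str) -> list[tuple[str, str]]:
    """Return (found, suggested) pairs for British spellings in an en-US document."""
    findings = []
    for word in text.lower().split():
        stripped = word.strip(".,;:!?\"'")  # drop trailing punctuation
        if stripped in BRITISH_TO_AMERICAN:
            findings.append((stripped, BRITISH_TO_AMERICAN[stripped]))
    return findings

print(flag_british_spellings("My favourite colour is blue."))
# [('favourite', 'favorite'), ('colour', 'color')]
```

A lookup table and a loop. That is the entire difficulty of the problem Grammarly stopped solving.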

This wasn’t a rare edge case. It was systematic. And when pressed, Grammarly’s own chatbot admitted the problem wasn’t user error. It was a known issue caused by “recent updates.”

At that point, Grammarly stopped being an editor and became something else entirely.

When correctness becomes optional

There is something deeply revealing about a grammar tool that treats dialect rules as optional. American and British spellings are not stylistic flourishes. They are conventions. You pick one so readers don’t trip over inconsistencies, editors don’t roll their eyes, and your credibility doesn’t quietly erode.

When a system allows those rules to blur, it sends a message: close enough is good enough.

That message is not accidental. It’s the byproduct of how modern AI writing tools work. Grammarly no longer operates as a deterministic rules engine. It is a probabilistic suggestion system trained on massive amounts of mixed-language text. The more it leans into “understanding” language, the less rigid it becomes about enforcing its rules.

In other words, Grammarly doesn’t always know what’s wrong anymore. It knows what’s common.

And common is not the same as correct.
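
To make that distinction concrete, here is a hedged sketch. The corpus frequencies are invented for the example; the point is that a frequency-driven check and a dialect-dictionary check can disagree about the same word:

```python
# Hypothetical illustration of why "common" diverges from "correct".
# The frequencies below are invented; a real model's training mix is unknown.
MIXED_CORPUS_FREQUENCY = {"favorite": 0.61, "favourite": 0.39}
EN_US_DICTIONARY = {"favorite"}

def probabilistic_ok(word: str, threshold: float = 0.2) -> bool:
    # A frequency-driven system accepts anything it has seen often enough,
    # regardless of which dialect the document is set to.
    return MIXED_CORPUS_FREQUENCY.get(word, 0.0) >= threshold

def deterministic_ok(word: str) -> bool:
    # A rules engine consults the dialect dictionary the user selected. Full stop.
    return word in EN_US_DICTIONARY

for word in ("favorite", "favourite"):
    print(word, "| probabilistic:", probabilistic_ok(word),
          "| deterministic:", deterministic_ok(word))
# favorite | probabilistic: True | deterministic: True
# favourite | probabilistic: True | deterministic: False
```

“Favourite” is common in mixed training data, so a probabilistic system waves it through. A rules engine keyed to your chosen dialect rejects it every time.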

Suggestions that rewrite meaning, not mistakes

Spelling issues are annoying, but they are not the real problem. The real problem begins when Grammarly confidently rewrites sentences that were already correct.

Writers have reported Grammarly suggesting synonym swaps that subtly change intent, tone, or factual emphasis. A sentence meant to express caution becomes assertive. A nuanced argument becomes blunt. A careful distinction collapses into something vaguely adjacent.

Nothing breaks. No red error appears. The sentence still looks fine. It just no longer says what the author meant.

This is where Grammarly’s AI crosses from assistance into hallucination. Not hallucination in the dramatic sense of inventing facts, but in the quieter sense of inventing intent. Grammarly assumes it knows what you were trying to say and optimizes toward that assumption.

Sometimes it guesses right. Often it does not.

For experienced writers, this is irritating. For less confident writers, especially non-native speakers, it is dangerous.

The trap non-native writers fall into

Grammarly markets itself as a safety net for people who don’t fully trust their English. That’s exactly the group least equipped to challenge its suggestions.

When Grammarly proposes a rewrite, the interface frames it as improvement. Green checkmarks signal success. Confidence meters go up. Resistance feels irrational. After all, the machine must know better.

But Grammarly does not understand nuance. It does not understand irony, subtext, or rhetorical pacing. It does not know when repetition is intentional or when a “weak” word is doing important work. It does not know your audience.

It knows patterns.

When those patterns override authorial intent, the result is writing that sounds polished and wrong at the same time. Readers sense it immediately. The text feels off. Sometimes it becomes unintentionally funny. Sometimes it becomes misleading. Sometimes it earns exactly the kind of laughter or complaints writers were trying to avoid by using Grammarly in the first place.

Automation bias dressed up as confidence

The deeper issue here is not Grammarly. It’s automation bias. When software presents suggestions with confidence, humans defer. Grammarly’s interface does not say “maybe.” It says “improve.” Over time, writers internalize the idea that correctness comes from approval, not judgment.

That’s how standards erode quietly. Not through spectacular failure, but through normalization. A spelling error here. A meaning shift there. A dialect inconsistency nobody notices until publication.

Grammarly didn’t suddenly become useless. It became untrustworthy. And those are very different failures.

Why turning it off feels like progress

After enough missed spellings and context-breaking suggestions, many users do what they never expected to do. They turn Grammarly off.

They fall back to simpler tools. Built-in spellcheckers. Manual rereading. Actual editing.

Not because those tools are smarter, but because they are honest. They correct what they know how to correct and stay quiet about the rest.

That silence matters. A tool that knows its limits is safer than one that pretends not to have any.
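
That contract is easy to state in code. A minimal sketch, assuming nothing about any real tool’s internals: correct only what is in the dictionary, and say nothing otherwise:

```python
# The "honest tool" contract: correct what you know, abstain from the rest.
# The dictionary is an invented stand-in for a real dialect wordlist.
KNOWN_CORRECTIONS = {"favourite": "favorite", "colour": "color"}

def suggest(word: str) -> str | None:
    """Return a correction only when one is actually known; None means silence."""
    return KNOWN_CORRECTIONS.get(word)

for word in ("favourite", "nuance", "subtext"):
    fix = suggest(word)
    print(f"{word!r} -> {fix!r}" if fix else f"{word!r} -> no opinion")
```

Returning “no opinion” is not a weakness. It is the feature Grammarly traded away for confidence.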

The editor Grammarly never was

Grammarly likes to position itself as an editor. It is not. Editors understand context, audience, and intent. They ask questions. They hesitate. They explain.

Grammarly suggests. Confidently. Relentlessly. Sometimes correctly. Sometimes not.

That distinction matters more now than ever, as AI tools expand from correcting mistakes to reshaping language itself. If a grammar tool can’t reliably enforce the rules you explicitly selected, it’s not an editor. It’s a pattern matcher with a confidence problem.

And trusting it blindly is how “favorite” quietly becomes “favourite,” and nobody notices until it’s too late.


© 2026 Markus Brinsa | Chatbots Behaving Badly™

Sources

  1. Grammarly Support - Select between British English, American English, Canadian English, Australian English, and Indian English support.grammarly.com
  2. Grammarly Blog - How to Select Your English Dialect grammarly.com
  3. Reddit r/Grammarly - “Grammarly, I prefer British English over American …” (includes a response stating Grammarly’s generative AI may not recognize the language preference) reddit.com
  4. Reddit r/Grammarly - “Grammarly is completely reversing the meaning of my writing!” (user reports meaning reversal, including removal of “not”) reddit.com
  5. ScienceDirect (Heliyon) - Exploring the use of Grammarly in assessing English academic writing (study reporting false positives and limitations) sciencedirect.com
  6. SamR’s Musings (Grinnell College) - Bad advice from Grammarly: Repeated words cs.grinnell.edu
  7. The Verge - Grammarly can now fix your Spanish and French grammar (coverage of Grammarly’s AI expansion and rewrite behavior across languages) theverge.com
  8. The Verge - Grammarly is changing its name to Superhuman (coverage of Grammarly’s shift into a broader AI agent/productivity suite) theverge.com
  9. TechRadar - Grammarly has rebranded as Superhuman, launching a new AI assistant that works across 100+ apps techradar.com
  10. Jisc National Centre for AI (UK higher-ed) - From Typos to Tone: Exploring Current Tools for Writing Support (includes Grammarly generative AI rewrite examples and behavior) nationalcentreforai.jiscinvolve.org

About the Author