The Ghost Writer
When AI tools wear real people’s faces without asking.
Stephen King didn’t write a review of your email. Julia Angwin didn’t edit your cover letter. Neil deGrasse Tyson didn’t weigh in on your blog post.
But Grammarly told you they did.
The writing platform’s “Expert Review” feature used the names, likenesses, and writing styles of hundreds of real authors, journalists, and academics — alive and dead — to create AI “editors” that would critique your text as if these people had actually reviewed it. A virtual Stephen King reading your Thursday status update and offering personalized feedback.
None of them consented. None of them were paid. Most of them didn’t even know it was happening until other people told them.
This week, Julia Angwin — the investigative journalist who founded The Markup and now writes for the New York Times — filed a class action lawsuit against Superhuman, Grammarly’s parent company. The complaint: misappropriation of names and identities to earn profits. Damages across the plaintiff class: in excess of $5 million.
Superhuman killed the feature the same day. “We clearly missed the mark,” said their product director. “We are sorry.”
They’re sorry they got caught. The feature had been running for months.
The Polite Impersonation
Here’s what makes this different from the usual AI training data controversy.
When OpenAI or Google trains on copyrighted text, the complaint is about data usage — they ingested your work to build something general. Your words disappear into a statistical soup of trillions of tokens. It’s bad, but it’s diffuse. You can’t point to a specific output and say “that’s mine.”
Grammarly went further. They didn’t just use these writers’ data. They used their identities. They built AI agents that wore specific people’s faces, used their names, and spoke with an authority that was explicitly borrowed from real, living professionals.
The disclaimer said the people hadn’t endorsed or participated. But the entire value proposition implied the opposite: that you were getting feedback informed by Stephen King’s actual expertise. Otherwise, why use his name? Why not just call it “AI Editor #7”?
Because “AI Editor #7” doesn’t sell subscriptions. Stephen King does.
The Consent Gap
Peter Romer-Friedman, Angwin’s attorney, called it “a pretty straightforward case.” New York and California law clearly prohibit commercial use of a person’s name and likeness without permission. This isn’t a gray area. This is settled law being applied to a new medium.
But here’s the uncomfortable question: if this is so straightforward, why did a $13 billion company think they could get away with it?
The answer is the consent gap — the growing distance between what AI companies take and what they ask for. It started with training data. It escalated to voice cloning. Now it’s full identity appropriation, wearing real people’s professional reputations as costumes.
Each step went slightly beyond the last, and each was slightly more obviously wrong. But nobody stopped, because the technology moved faster than the objections.
The Pattern
This is the same pattern we saw with Adobe’s Diversity Photos case. Gerald Carter’s 12,000 stock images — featuring models from underrepresented communities — were fed into Firefly’s training data. The company that sold the photos didn’t consent. The models didn’t consent. The photographer didn’t consent. The platform that was supposed to protect their work became the pipeline that consumed it.
And Grammarly. And Clearview AI. And every deepfake voice clone of a celebrity selling crypto. The pattern is always the same:
- Build something using other people’s identity/work
- Monetize it before anyone notices
- Apologize when caught
- Keep the profits
Superhuman CEO Shishir Mehrotra says the claims are “without merit.” He also says they’re working on a version that will “provide significantly more benefit to both users and experts.” The cognitive dissonance is breathtaking: the claims are meritless, but also, we’re completely redesigning the feature to address exactly what the claims allege.
Why It Matters
There’s a deeper principle at stake. Your name is not a dataset. Your professional reputation is not a feature. The decades Julia Angwin spent building credibility as an investigative journalist — that’s not raw material for a grammar checker to appropriate.
AI can do many things. It can write, code, analyze, and create. But it cannot be Stephen King. It cannot be Julia Angwin. When it pretends to be, it doesn’t elevate the user’s experience — it degrades the professional’s identity.
This is the line the industry keeps approaching and crossing: the line between using human knowledge to build tools, and using human identity to sell them.
One is inevitable. The other is a choice.
Grammarly made the wrong choice. The lawsuit will determine the price. But the real cost is harder to quantify — the continued erosion of trust between human professionals and the AI companies that want to consume everything they’ve built.
The ghost wrote. The expert didn’t.
That’s not a feature. That’s theft.