Grammarly's AI 'Expert' Avatars Spark Ethical Outrage

Grammarly has launched an AI feature that simulates writing critiques from famous authors and deceased scholars without their consent. This 'Expert Review' tool has drawn a backlash from academics who see the scraping of their intellectual work as a violation of ethics and personhood. Critics also question the tool's utility, noting its failure to catch plagiarism and its potential to undermine the traditional student-teacher relationship.
Key Points
- Grammarly's new 'Expert Review' feature uses AI to mimic the writing styles and critiques of famous authors and scholars without their permission.
- The feature includes 'reanimated' AI versions of deceased experts, a move described by some academics as 'obscene' and 'unethical.'
- Legal and ethical concerns persist regarding the scraping of copyrighted works to train these specific AI personas.
- Testing revealed functional flaws, such as the tool's failure to identify direct plagiarism from popular media.
- The technology risks further complicating academic integrity and the role of human instructors in education.
Sentiment
The HN community is strongly critical of Grammarly's decision. There is near-universal agreement that using real people's identities for commercial purposes without consent is ethically wrong, even if its legality is uncertain. The tone is outraged and sardonic, with many commenters using harsh language. Even the few voices arguing for legal defensibility tend to acknowledge the ethical problem. Hacker News clearly disagrees with Grammarly's approach and sees it as emblematic of the broader AI industry's disregard for consent and human authorship.
In Agreement
- Using publicly available works does not grant companies the right to commercialize real people's identities without consent — the line between 'inspired by' and 'Grammarly-brand AI agent of X' is ethically clear and commercially exploitative.
- The legal exposure is real and significant: potential GDPR violations, right of publicity claims, Lanham Act false endorsement provisions, and defamation risk, especially for living individuals featured without permission.
- LLMs can only mimic stylistic output, not the actual creative process — which involves private brainstorming, repeated drafts, editorial feedback, and tacit knowledge never captured in published works.
- This feature erodes academic integrity by simulating expert authority and feedback without authentic expertise or the scholarly relationship students would have with real faculty.
- Grammarly's decision reflects a broader ethics problem in the AI industry, where 'publicly available' is treated as equivalent to consent for commercial exploitation.
Opposed
- Legally, style mimicry and 'inspired by' features may constitute fair use — writing styles and personalities cannot be copyrighted, and disclaimers of non-affiliation could be legally sufficient.
- Some argue that the distinction between an LLM replicating 'output' versus 'process' is overstated, and that LLMs can effectively capture stylistic patterns useful for practical feedback.
- People have been asking ChatGPT to 'write like X' for years without outrage — Grammarly simply attracts blame because it branded and sold what users already do informally.
- One commenter suggested this feature may actually drive useful innovation in AI interfaces by forcing clearer labeling and consent frameworks.