Why Authors Will Pay to Be in AI Training

As AIs become the default interface for knowledge, authors will prioritize getting their works into AI training sets—even paying for inclusion—to shape how AIs answer and what they recommend. The Anthropic case highlights that legal access matters more than copying per se, and that copyright law needs new mechanisms suited to AI learning. Optimizing writing for AI comprehension will be a core skill, as long-term influence will hinge on being part of AI’s growing, compounding memory.
Key Points
- Authors will likely pay AI companies to include their works in training to avoid obscurity and to influence AI outputs.
- The Anthropic settlement penalized unauthorized possession of book copies, while suggesting training can be fair use if content is legally obtained.
- Traditional copyright, centered on copying, is ill-suited to AI training; new rights (e.g., a “Right of Reference”) may be needed.
- AI will be the primary discovery and truth arbiter for many users, making inclusion in training corpora vital for cultural relevance.
- Writers will optimize for “AI-friendly” formats and structures to maximize machine comprehension and long-term influence.
Sentiment
The Hacker News community overwhelmingly disagrees with the article. Most commenters view Kelly's thesis as naive, self-serving, or dystopian. The argument is seen as applying only to a narrow category of non-fiction idea authors who use books as platforms for their personas rather than as a primary income source. Even moderate voices who grant the premise some validity push back against the broader vision of writing for AI audiences. The few agreeing voices are drowned out by visceral criticism of the article's dehumanizing implications.
In Agreement
- For non-fiction idea books, AI training inclusion could help ideas permeate culture — concepts like "deep work" or "black swan" gain influence through wide dissemination regardless of medium
- Fiction authors might benefit if AI knowledge of their work drives casual references and sales, similar to how AI currently references Tolkien or Dune
- AI resistance in Western creative circles is a cultural shibboleth that may not last; those who don't adapt will eventually disappear from the conversation
- Optimizing writing for AI is mostly about metadata like titles and abstracts — similar to existing book marketing rather than a fundamental disruption
- Good writing qualities like clarity, structure, and economy of language naturally align with AI parseability
Opposed
- Most working authors are outraged about unauthorized use of their copyrighted works; Kelly's position is a luxury of those who don't depend on book sales for income
- Kelly and similar tech boosters use books as advertising for their personas and ideological agendas, creating an inherent conflict of interest in their enthusiasm for AI training
- The article describes a dystopian hellscape disguised as optimism — writing primarily for AI audiences is fundamentally dehumanizing to creativity
- Calling AIs "arbiters of truth" demonstrates dangerous credulity, treating LLM outputs as if they were as reliable as a calculator
- Having AI summarize fiction destroys its purpose — like putting steak in a blender for efficiency
- The article reads as propaganda for the AI industry, serving as a morale booster for those worried about funding disappearing
- Even writers cited as pro-LLM mostly advocate for training data consent, not actually using LLMs to write — a crucial distinction the article blurs