Steady Progress, Sudden Displacement: From Horses and Chess to Claude

Progress is often steady on paper but feels sudden at the point of human equivalence. Combustion engines and chess engines posted roughly linear gains, yet displaced horses and grandmasters abruptly. The author's experience with Claude suggests the same pattern for AI: rapid automation of his own work at far lower cost, driving swift replacement.
Key Points
- Technological capability often improves steadily, but equivalence to humans (or animals) and the subsequent displacement tend to arrive suddenly.
- Historical examples: horses were largely unaffected by centuries of engine progress until a rapid decline between roughly 1930 and 1950; chess engines improved linearly yet flipped from below-human play to dominance over grandmasters within about a decade.
- AI investment is rising steadily (about 2% of U.S. GDP annually, doubling over recent years), setting the stage for capability jumps that feel abrupt at the point of human equivalence.
- At Anthropic, Claude quickly displaced most internal Q&A work: within six months, volume went from humans answering ~4,000 questions/month to Claude answering ~30,000/month, cutting the human-answered load by 80%.
- Automation economics favor rapid displacement: Claude’s per-word cost is roughly 1,000× lower than the author’s and cheaper than the lowest-cost human labor.
Sentiment
The community is predominantly skeptical of the article's more alarming conclusions. While many commenters find the horse-and-chess pattern intellectually interesting, they contest the strength of the evidence, challenge the author's objectivity as an Anthropic insider, and point to persistent technical limitations (hallucinations, architectural constraints). The discussion is heated and polarized: a vocal minority accepts the displacement thesis, while a larger majority pushes back with counterarguments about historical patterns, economic feedback loops, and current AI limitations.
In Agreement
- The horse and chess analogies are compelling: steady metric improvement can mask an approaching discontinuity in which crossing human equivalence tips suddenly into displacement, and the article's internal Anthropic figure of 80% question deflection is striking evidence of the pattern.
- AI is categorically different from prior automation because it automates cognitive work ('thinking'), not just physical labor; when agriculture mechanized, displaced workers could retreat into other kinds of work, but there is no obvious refuge once mental tasks themselves are automated.
- We likely haven't seen serious AI-driven unemployment yet mainly because of adoption lag and institutional inertia, not because AI is incapable; like Wile E. Coyote, we are suspended in mid-air before gravity kicks in.
- The transition could be far faster for knowledge workers than it was for horses, because software replicates and scales almost instantly, whereas prior displacements required decades of capital investment in physical machinery.
- The 'more work than resources' argument, that companies would simply redeploy displaced workers, historically had limits: bank tellers did eventually decline after ATMs, and demand for switchboard operators was eventually fully saturated.
Opposed
- The author works at Anthropic and has a direct commercial incentive to promote fear of AI displacement to drive product adoption—the article should be read as motivated reasoning dressed up as analysis.
- LLM hallucinations are a deep architectural problem: models confidently assert wrong answers, which makes them unsuitable for high-stakes autonomous tasks, and this has not improved meaningfully across model generations.
- The article cherry-picks a favorable case—routing internal onboarding Q&A away from humans—which is low-hanging fruit (RAG over a knowledge base) and not evidence of general job automation capability.
- The horse analogy is flawed in key ways: horses didn't vote or organize politically, and the historical decline of horse populations was slower than implied (decades, not years), which would give humans more time to adapt and regulate.
- Historical technological transitions have always created more jobs than they destroyed; even if AI displaces current roles, it will generate new categories of work. The Keynesian 15-hour workweek never materialized because humans kept finding new kinds of work and kept wanting more.
- Removing junior developers doesn't free up seniors—it forces seniors to do junior work and simultaneously babysit AI tools, destroying the talent pipeline and reducing overall team capability.
- B2C businesses require consumers; if AI eliminates enough jobs, purchasing power collapses, businesses lose customers, and the whole system unravels—corporate self-interest will prevent runaway automation.