After the GenAI Bubble: Fewer Layoffs, Persistent Hallucinations, and Pragmatic Code Gen

Added Oct 1, 2025

Bray predicts that hallucinations won't be solved under current LLM paradigms and that mass layoffs from AI won't occur, because reverse-centaur workflows create costly, low-quality output while centaur workflows don't deliver outsized gains. He expects a painful AI-bubble deflation that largely harms investors but not the broader economy. In software, code generation will become routine wherever tests can validate the results. For Bray, the real reasons to resist GenAI are the grifter-driven hype, the environmental costs, and the exploitative labor behind it.

Key Points

  • Hallucinations are intrinsic to current LLM training and won’t be “fixed” without a fundamental breakthrough.
  • Mass layoffs from AI won’t happen: reverse-centaur workflows create costly workslop, and centaur workflows don’t yield enough gains to justify firing millions.
  • The AI investment bubble will pop (likely by 2026), inflicting large financial damage—mostly on investors—but not crashing the broader economy.
  • Software development will adopt code generation as a routine tool, especially where results can be validated by compilers/tests; benefits cluster around app logic, big APIs, CSS, and SQL (see the sketch after this list).
  • The strongest reason to resist GenAI is the grifter-led hype and its environmental and labor harms; once the bubble deflates, their imagined future won’t prevail.
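To make the fourth point concrete, here is a minimal sketch of the validation loop that claim assumes: LLM-generated code is only accepted once a deterministic check passes against known data. The query string, schema, and fixture rows below are hypothetical stand-ins, not taken from the article.

```python
import sqlite3

# Hypothetical LLM-generated SQL: total order value per customer, largest first.
GENERATED_SQL = """
SELECT customer, SUM(amount) AS total
FROM orders
GROUP BY customer
ORDER BY total DESC
"""

def validate_generated_sql(sql: str) -> bool:
    """Run the generated query against a small fixture and compare to known-good output."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("alice", 30.0), ("bob", 10.0), ("alice", 5.0)],
    )
    expected = [("alice", 35.0), ("bob", 10.0)]
    try:
        rows = conn.execute(sql).fetchall()
    except sqlite3.Error:
        return False  # malformed SQL fails validation outright
    finally:
        conn.close()
    return rows == expected

if __name__ == "__main__":
    # Accept the generated code only if the check passes; otherwise re-prompt or fall back.
    print("accepted" if validate_generated_sql(GENERATED_SQL) else "rejected")
```

The same pattern generalizes to compilers, type checkers, and test suites: the argument is that code generation pays off in domains with an objective pass/fail signal, because bad output is cheap to catch and discard.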

Sentiment

The overall sentiment of the Hacker News discussion is predominantly critical and skeptical of Tim Bray's predictions. Some specific points found agreement, such as the inherent nature of some hallucinations and the presence of hype, but most commenters presented strong counter-arguments against the article's more definitive claims: that hallucinations are unfixable, that mass layoffs won't happen, and that the impact on low-level software will be limited.

In Agreement

  • Hallucinations are an inherent part of statistical models, making a complete fix computationally impossible or impractical for general-purpose LLMs due to reality's long-tail nature.
  • The 'optimal number of hallucinations is far more than zero,' implying they serve a necessary purpose in creative or summary-generating use cases.
  • The S&P 500's growth being concentrated in a few 'Mag 7' stocks suggests that if AI revenues falter, the broader index could see a predictable downturn, supporting the idea of a concentrated bubble.
  • The GenAI space is accurately characterized by a 'panoply of grifters and chancers and financial engineers' driven by hype.

Opposed

  • Hallucinations are fixable; research is still young (only ~10 years), there are strong incentives to solve them, and historically, complex problems have taken much longer to crack.
  • The definition of 'hallucination' is nuanced; LLMs are already improving at *not* presenting made-up facts as real in factual queries, and creative tasks inherently request non-real information.
  • Mass layoffs or significant job displacement *will* occur and are already evident in sectors like translation, call centers, boilerplate coding, and AI art, leading to societal impact.
  • The premise that GenAI's central goal is the elimination of knowledge workers is incorrect; the technology is a 'cool toy' whose applications are still being discovered.
  • AI *will* significantly help with low-level infrastructure code, as it's a valuable domain for recursive self-improvement with objective quality metrics, and models are already proficient, enabling cheaper optimization.
  • AI valuations do not necessarily require mass layoffs; even marginal increases in economic growth can justify large investments.
  • AI is evolving at a rate far faster than biological evolution, and agents are already demonstrating problem-solving capabilities superior to many humans, inevitably reshaping humanity's trajectory.
  • The article's predictions are too vague to be falsifiable, acting more as 'sentiment' than concrete, measurable forecasts.