LLMs: Great Demos, Little Real-World Value

Added Oct 1, 2025
Article: Negative · Community: Divisive

LLMs make it easy to craft impressive demos but often fail to deliver consistent value in day-to-day work. The hype-fueled expectation that rapid model improvements would bridge this gap is waning. Without becoming indispensable, AI products will see poor renewals, threatening the economics of today’s AI boom.

Key Points

  • Demoware wins on curated demos but disappoints in everyday use; LLMs supercharge this dynamic by making great demos trivial.
  • LLMs possess broad but shallow competence, leading to failures in real-world scenarios such as student engagement, edge-case support, and complex coding.
  • AI hype has accelerated adoption, but the assumption that rapid model improvement would soon fix shortcomings is fading.
  • A practical test of value is indispensability: if removed, would work suffer? Most AI tools fail this test today.
  • Subscription-based software needs sustained real value to retain customers; without it, renewals—and massive GPU-driven AI bets—are at risk.

Sentiment

The Hacker News community is notably divided on whether LLMs are demoware. The most upvoted and substantive comments push back against the article's blanket dismissal, with prominent voices offering detailed firsthand accounts of LLMs delivering consistent real-world value. However, a substantial contingent agrees with the article's broader economic argument, pointing to the lack of measurable productivity gains and questioning whether the massive capital investment will pay off. The discussion reveals a fault line between individual practitioners who report genuine utility and macro-level skeptics who see no systemic evidence of transformative value.

In Agreement

  • Self-reported productivity gains from LLM users are unreliable — people routinely mistake the impression of learning or productivity for actual improvement, similar to endorsing pseudoscience
  • No measurable macro-economic productivity gains have materialized despite widespread LLM adoption, and removing AI hype reveals an economy actually trending downward
  • The absence of an open source renaissance or significant LLM-authored libraries and dependencies undermines claims that LLMs create real value at scale
  • LLM improvement follows a sigmoid curve, not exponential growth — the initial rapid gains create misleading extrapolations about future capability
  • If most users cannot get reliable output from LLMs, the problem lies with the tool, not the users

Opposed

  • LLMs are demonstrably excellent at math tutoring — providing clear explanations, step-by-step verification, and instant problem generation with near-perfect accuracy on calculus-level content
  • Millions of credible professionals use LLMs daily for real work; dismissing this is contradictory when the tools are clearly being adopted at massive scale
  • GitHub data shows hundreds of thousands of LLM-generated pull requests with high merge rates, proving real-world code production beyond demos
  • LLMs have improved dramatically — the gap between 2024 and 2025 models is enormous, with reasoning capabilities enabling gold-medal performance on math olympiad problems
  • LLMs serve as force multipliers for experienced professionals, enabling context-switching and parallel task execution even if individual tasks take slightly longer
  • Anything LLMs produce that can be statically checked — math proofs, compilable code — can be reliable when proper verification is built around the output
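The last point above sketches a concrete pattern: treat LLM output as untrusted, and accept it only after it passes mechanical checks. A minimal illustration in Python, assuming a hypothetical `accept_if_verified` helper and a stand-in `snippet` for what an LLM might return (neither appears in the discussion itself):

```python
# Sketch of "build verification around the output": the generated code is
# untrusted, and is accepted only if it (1) compiles and (2) passes a
# caller-supplied behavioral test. Names here are illustrative, not from
# any real tool.

def accept_if_verified(generated_code: str, test) -> bool:
    """Return True only if the code compiles and the test passes."""
    try:
        compiled = compile(generated_code, "<llm-output>", "exec")  # static check
    except SyntaxError:
        return False
    namespace = {}
    try:
        exec(compiled, namespace)     # load the definitions
        return bool(test(namespace))  # behavioral check
    except Exception:
        return False

# Hypothetical LLM output and a test for it:
snippet = "def add(a, b):\n    return a + b\n"
ok = accept_if_verified(snippet, lambda ns: ns["add"](2, 3) == 5)
```

The same shape generalizes to the commenters' other examples: a proof checker or a compiler plus test suite plays the role of `test`, so reliability comes from the verifier rather than from trusting the model.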