A Web Server With No App Code: LLM + 3 Tools

Added Nov 1, 2025
Article: Positive · Community: Neutral/Divisive

A minimal server delegates every HTTP request to an LLM equipped with database, web-response, and memory tools, and this setup delivered a functional CRUD contact manager with no application code. However, each request took 30–60 seconds and cost roughly $0.01–$0.05, and weak design memory and occasional hallucinations caused errors. The author argues these are problems of degree, not kind, implying that code may be largely transitional as LLM speed, cost, context, and reliability improve.

Key Points

  • Architecture: a minimal HTTP server defers all logic to an LLM with three tools—SQL database access, web response generation, and persistent feedback memory.
  • Demonstrated capability: the LLM independently created a working CRUD contact manager with schemas, safe SQL, REST-ish APIs, responsive UI, validation, and error handling.
  • Severe practical limits today: 30–60s latency per request and ~$0.01–$0.05 in token costs make it 300–6000× slower and 100–1000× more expensive than conventional apps.
  • Stability issues: weak design memory (UI drift) and occasional hallucinated SQL leading to 500 errors; 75–85% of time spent in reasoning.
  • Outlook: performance, cost, context, and error rates are improving; the author argues these are degree—not kind—problems, hinting that code as we know it may be transitional.
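The architecture in the key points above can be sketched minimally. This is a hypothetical illustration, not the author's actual implementation: the model is stubbed out with a scripted function, and all names (`query_database`, `respond`, `remember`, `handle_request`) are invented for the sketch. A real deployment would replace `fake_model` with an LLM API call that streams tool invocations.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
memory = []    # persistent feedback notes the model can consult on later requests
response = {}  # whatever the model chooses to send back to the client

def query_database(sql):
    """Tool 1: run SQL against the database and return rows."""
    cur = db.execute(sql)
    db.commit()
    return cur.fetchall()

def respond(status, body):
    """Tool 2: set the HTTP response for the current request."""
    response.update(status=status, body=body)

def remember(note):
    """Tool 3: persist a design note for future requests."""
    memory.append(note)

TOOLS = {"query_database": query_database, "respond": respond, "remember": remember}

def handle_request(request, model):
    """Feed the request (plus memory) to the model; execute each tool call it emits."""
    response.clear()
    for call in model(request, memory):
        TOOLS[call["tool"]](*call["args"])
    return dict(response)

def fake_model(request, memory):
    """Scripted stand-in for the LLM, acting out one plausible POST request."""
    yield {"tool": "query_database",
           "args": ["CREATE TABLE IF NOT EXISTS contacts (name TEXT)"]}
    yield {"tool": "query_database",
           "args": ["INSERT INTO contacts VALUES ('Ada')"]}
    yield {"tool": "remember", "args": ["contacts table uses a single name column"]}
    yield {"tool": "respond", "args": [200, json.dumps({"created": "Ada"})]}

result = handle_request({"method": "POST", "path": "/contacts"}, fake_model)
```

The point of the sketch is that the server contains no application logic at all: routing, schema design, and response formatting all live in the model's tool calls, which is also why every request pays the model's latency and token cost.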

Sentiment

The community is predominantly skeptical but respectful, with roughly 45% critical, 25% enthusiastic, and 30% constructively analytical. Critics focus on determinism, security, and resource waste as fundamental rather than incremental problems. Enthusiasts see trajectory and potential. The middle ground converges on hybrid approaches that essentially preserve traditional code but use LLMs for bootstrapping — which, as several commenters noted, undermines the original thesis.

In Agreement

  • The experiment demonstrates that LLMs can already perform application logic end-to-end, and the remaining barriers (speed, cost, reliability) are problems of degree that will shrink over time
  • This is a glimpse of a future where LLMs produce richer, more dynamic output with integrated storage — not recreating existing apps but enabling entirely new interaction patterns
  • For rarely-used functionality, waiting for an LLM response may be more practical than writing and maintaining dedicated code
  • The hybrid code-as-cache approach — where LLMs generate deterministic code on first request — could be the practical middle ground that makes this viable
  • Code itself may be a transitional artifact, a 'hack' for communicating intent to machines that will eventually understand intent directly
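The code-as-cache idea from the list above can be sketched as follows, with all names hypothetical: the model is consulted only on a route's first request to synthesize handler source, which is then compiled, cached, and run deterministically on every later request. The "generated" source here is canned rather than produced by a real LLM.

```python
handlers = {}          # route -> compiled handler function
generation_calls = []  # tracks how often we fell back to the (slow) model path

def generate_handler(route):
    """Stand-in for an LLM call that writes handler source for a route.
    Returns a canned echo handler instead of genuinely generated code."""
    generation_calls.append(route)
    src = "def handler(request):\n    return {'route': %r, 'echo': request}" % route
    namespace = {}
    exec(src, namespace)  # compile the 'generated' source into a callable
    return namespace["handler"]

def dispatch(route, request):
    """Slow generation path on a route's first request; cached path afterwards."""
    if route not in handlers:
        handlers[route] = generate_handler(route)
    return handlers[route](request)

first = dispatch("/contacts", {"q": 1})   # triggers generation
second = dispatch("/contacts", {"q": 2})  # served from cache, deterministic
```

This is the middle ground critics point to: once outputs are cached as code, the system behaves like ordinary software with an unusually expensive first request, which is why several commenters saw it as undermining the no-code thesis rather than fulfilling it.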

Opposed

  • Replacing deterministic behavior with non-deterministic inference is fundamentally backwards — users expect tools to behave consistently every time
  • Security is a fatal flaw: an LLM between user requests and a database creates trivially exploitable attack surfaces that no amount of prompt engineering can fully close
  • The energy and computational costs are absurd — LLMs consume orders of magnitude more resources than traditional code for the same output
  • The approach essentially reinvents software development less efficiently: if the LLM caches its outputs as code, you just have slower, more expensive code generation
  • The comparison to early demos like the Mother of All Demos is misleading — improvements in AI capability are far more uncertain than improvements in hardware performance
  • Nobody is asking for non-deterministic applications; the real demand is for reliable, predictable software that works the same way every time