A Web Server With No App Code: LLM + 3 Tools

Added Nov 1, 2025

A minimal server delegates every HTTP request to an LLM equipped with database, web-response, and memory tools; with no application code at all, it successfully delivered a functional CRUD contact manager. However, requests took 30–60 seconds and cost roughly $0.01–$0.05 each, with weak design memory and occasional hallucinations causing errors. The author argues these are problems of degree, not kind, implying that application code may be largely transitional as LLM speed, cost, context, and reliability improve.

Key Points

  • Architecture: a minimal HTTP server defers all logic to an LLM with three tools—SQL database access, web response generation, and persistent feedback memory.
  • Demonstrated capability: the LLM independently created a working CRUD contact manager with schemas, safe SQL, REST-ish APIs, responsive UI, validation, and error handling.
  • Severe practical limits today: 30–60s latency per request and ~$0.01–$0.05 in token costs make it 300–6000× slower and 100–1000× more expensive than conventional apps.
  • Stability issues: weak design memory (UI drift across requests) and occasionally hallucinated SQL leading to 500 errors; 75–85% of each request's time is spent in model reasoning.
  • Outlook: performance, cost, context, and error rates are improving; the author argues these are degree—not kind—problems, hinting that code as we know it may be transitional.
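
The three-tool loop described above can be sketched roughly as follows. All names here (`handle_request`, `query_database`, `respond`, `remember`) are illustrative assumptions rather than the author's actual code, and `llm` is a stand-in callable for a real model API:

```python
import json
import sqlite3

# Assumed architecture: every HTTP request is handed to an LLM that may
# call exactly three tools until it chooses to emit a web response.

DB = sqlite3.connect(":memory:")
MEMORY = []  # persistent feedback memory (here: an in-process list)

def tool_query_database(sql):
    """Tool 1: run SQL against the application database."""
    cur = DB.execute(sql)
    if cur.description is None:  # DDL / writes return no rows
        return []
    cols = [c[0] for c in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

def tool_remember(note):
    """Tool 3: store feedback/design decisions for future requests."""
    MEMORY.append(note)
    return {"stored": True}

def handle_request(method, path, body, llm):
    """Ask the LLM what to do next; stop when it returns a response.
    `llm` is any callable mapping a transcript to the next tool action."""
    transcript = [{"role": "user",
                   "content": f"{method} {path}\n{body}\nmemory: {MEMORY}"}]
    while True:
        action = llm(transcript)          # model decides the next step
        if action["tool"] == "respond":   # Tool 2: emit the HTTP response
            return action["status"], action["html"]
        if action["tool"] == "query_database":
            result = tool_query_database(action["sql"])
        elif action["tool"] == "remember":
            result = tool_remember(action["note"])
        transcript.append({"role": "tool",
                           "content": json.dumps(result, default=str)})
```

The loop structure also makes the latency figure concrete: each tool round-trip adds a full model call, so even a simple page view can involve several sequential LLM invocations.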

Sentiment

Overall, the Hacker News discussion reflects a mixed but cautiously optimistic sentiment regarding the article. Many commenters express excitement and foresee a future where LLMs dramatically change software development by eliminating traditional coding. However, a significant portion raises serious, fundamental concerns about the immediate practicality and long-term viability of LLMs as direct application runtimes, particularly regarding determinism, security, cost, and performance.

In Agreement

  • The experiment creatively demonstrates how the definition of 'boilerplate code' will shift, emphasizing that LLMs provide a new modality for evaluating user intent and could lead to a 'post-code' world where application logic disappears.
  • The current limitations of speed, cost, consistency, and reliability are perceived as temporary engineering challenges ('problems of degree, not kind') that will diminish as LLM inference becomes faster, cheaper, and context windows grow.
  • Hybrid approaches, where LLMs generate stable internal representations (like UI specs or code fragments) that are cached or 'compiled' for deterministic execution, could address performance and consistency issues while still avoiding manual coding.
  • Some argue that users are already accustomed to non-deterministic software behavior due to frequent A/B testing and auto-updates, suggesting that fluid, customizable UIs could be beneficial for personalization.
  • The ultimate goal of software is to fulfill user intent ('solutions' rather than 'web apps'), and LLMs offer a path to direct intent execution, potentially making traditional coded determinism the 'wrong approach' for complex problems.
  • The project is viewed as a brilliant thought experiment and a glimpse into the future trajectory of the web, potentially leading to an 'Internet moment' for AI, making complex tasks faster, more efficient, and cheaper.
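
The hybrid idea above (caching LLM-generated artifacts for deterministic replay) can be sketched minimally. `render` and `generate_ui` are hypothetical names introduced here, and a real system would also need cache invalidation when user intent changes:

```python
import hashlib

CACHE = {}

def render(route, generate_ui):
    """Assumed hybrid design: the LLM generates a UI spec once per route;
    subsequent requests replay the cached spec deterministically."""
    key = hashlib.sha256(route.encode()).hexdigest()
    if key not in CACHE:
        # Expensive, non-deterministic LLM call happens only on a miss.
        CACHE[key] = generate_ui(route)
    # Cache hits are cheap and return byte-identical output,
    # addressing both the latency and the UI-drift objections.
    return CACHE[key]
```

This is essentially memoization with the LLM as a one-time "compiler": the probabilistic step runs once, and the served artifact is stable thereafter.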

Opposed

  • The inherent non-determinism of LLMs is a fatal flaw for web applications, as users require consistent, predictable behavior, especially for critical functions like banking.
  • The security implications are severe, with the feedback mechanism essentially being a 'built-in prompt injection' vulnerability, making the system highly susceptible to malicious inputs and difficult to diagnose or fix when it fails.
  • The current performance and cost are catastrophically impractical, consuming excessive energy and resources, making conventional, deterministic coding methods vastly more efficient and economical.
  • Poor UI consistency and lack of 'object permanence' are fundamental issues for user experience, as humans naturally expect interfaces to remain stable across interactions.
  • The LLM, despite claims of 'no business logic,' is implicitly performing these functions in a probabilistic and inefficient manner, suggesting that core components like routing and controllers are not truly eliminated but merely obscured.
  • Skepticism exists about whether LLMs possess the fundamental 'intelligence' needed to reliably perform complex, deterministic logic, suggesting that throwing more computing power at the problem may not solve inherent limitations.
  • Constantly describing detailed intent for every interaction through natural language is considered a poor user experience compared to the efficiency of clicking on pre-defined links and buttons.