LLMs Don’t Code—They Compile Your Prompts
The author argues that AI coding is best understood as a compiler workflow: prompts are the source code and outputs are compilations, not autonomous programming. Because English is an imprecise medium for specifying software, results are brittle and non-deterministic, and perceived gains often fail to translate into real productivity. Instead of hype-driven ‘vibe coding,’ we should treat AI as a tool and invest in better programming languages, compilers, and libraries.
Key Points
- AI programming should be modeled as a compiler: prompts are code, outputs are compiled results, and iteration is like recompilation.
- English is a poor programming language—imprecise, non-deterministic, and highly non-local—making AI coding workflows brittle.
- Perceived productivity gains often mask real slowdowns; hype around ‘vibe coding’ mirrors past tech bubbles.
- LLMs add value primarily through search, optimization, and pattern reuse; the human remains the programmer.
- AI may eventually absorb some programming tasks, as compilers and spreadsheets once did, but it should be treated as a workflow tool, not an autonomous coder.
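The compiler analogy in the first point can be sketched as a loop: the prompt is source code, the model is a non-deterministic compiler, and iteration is recompilation until tests pass. A minimal Python sketch, where `llm_compile`, `passes_tests`, and `compile_until_green` are hypothetical names and the model call is stubbed with a seeded RNG:

```python
import random


def llm_compile(prompt: str, seed: int) -> str:
    """Hypothetical stand-in for an LLM call. Real models are
    non-deterministic; here a seeded RNG simulates run-to-run variance
    (this stub ignores the prompt's content entirely)."""
    rng = random.Random(seed)
    # Half the time the "compiled" function is wrong.
    body = "return a + b" if rng.random() < 0.5 else "return a - b"
    return f"def add(a, b):\n    {body}\n"


def passes_tests(code: str) -> bool:
    """The test suite is the executable spec; the English prompt is not."""
    ns: dict = {}
    exec(code, ns)
    return ns["add"](2, 3) == 5


def compile_until_green(prompt: str, max_attempts: int = 20) -> str:
    """Iteration as recompilation: rerun the same prompt until tests pass."""
    for attempt in range(max_attempts):
        candidate = llm_compile(prompt, seed=attempt)
        if passes_tests(candidate):
            return candidate
    raise RuntimeError("prompt never compiled to passing code")


code = compile_until_green("write an add function")
```

Because the "compiler" is non-deterministic, the only reliable specification is the test run against each candidate, which is the article's point about English being a poor source language.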
Sentiment
Mixed but leaning pragmatic. Many find AI coding tools genuinely useful for boilerplate, exploration, and acceleration within disciplined workflows, while a substantial minority shares the article’s skepticism about hype, non-determinism, eroded understanding, and long-term talent-pipeline risks.
In Agreement
- AI is a tool akin to a compiler or junior dev: it needs precise, structured guidance and works best on boilerplate and well-known patterns.
- English is a poor, non-deterministic specification language; small prompt changes can unpredictably affect outcomes.
- LLMs succeed via pattern reuse, search, and optimization—not true creative understanding or design.
- Vibe coding can replace critical thinking with a gambler’s mentality; useful mainly for POCs, ad hoc scripts, and low-stakes tasks.
- Perceived productivity can outpace actual throughput or quality; a cited study finds users feel ~20% more productive but are ~19% slower.
- AI use can erode understanding of one’s codebase and shift effort to heavy code review and reading; rigorous workflows and oversight are required.
- Hype parallels self-driving: geofenced successes aren’t the promised universal autonomy; huge spend has often outpaced realistic capability.
- The real opportunity is better languages, compilers, libraries, and tooling design—not burning billions on hype.
Opposed
- Many practitioners report large real productivity gains (sometimes 4x) by offloading repetitive coding, which accelerates delivery and reduces procrastination.
- LLM autocomplete and chat integrate seamlessly and are the biggest day-to-day wins; even seniors benefit when scoping tasks well.
- AI can improve discipline by forcing upfront thinking, decomposition, and clearer specs; it’s an effective personal assistant and research aide.
- AI broadens access: non-experts can build useful apps; startups can move faster; the tool reveals weak seniors rather than creating them.
- Self-driving is working in multiple cities; calling it ‘obviously nonsense’ ignores operational deployments and potential viable niches.
- Non-determinism is acceptable at higher abstraction levels, similar to delegating to humans; the point is working outcomes, not exact internals.
- Research is mixed and often small-n; cherry-picking studies is unhelpful—experience and process changes matter.
- Architecting smaller, independent components and adding guardrails lets LLMs work better on larger systems.
- The ‘anti-AI hype’ narrative dominates online discourse; many real users quietly find AI indispensable despite warts.