The End of Writing Code: A Developer's AI Productivity Explosion

A veteran software engineer describes their journey from AI skeptic to a developer who now uses LLMs for virtually all coding. By focusing on outcomes rather than the manual process of writing code, they have achieved unprecedented productivity across a wide range of technical projects. However, they warn that this shift moves the development bottleneck to testing and documentation verification, which now demand far more rigor.
Key Points
- A shift in mindset from valuing 'beautiful code' to prioritizing problem-solving and business value.
- The transition from writing code manually to acting as a prompt engineer and code reviewer.
- A significant increase in project velocity, enabling the completion of dozens of complex personal and professional tasks in under a year.
- The realization that AI-generated code is maintainable through tests and re-prompting rather than traditional design patterns.
- The identification of testing and documentation verification as the new primary bottlenecks in software development.
Sentiment
Mixed with a contentious undercurrent. Many HN commenters corroborated the article's productivity claims with their own AI-built projects and experiences, but a substantial skeptical contingent challenged whether these outputs represent real value, raised concerns about skill atrophy and code quality, and pushed back on the cultural dynamics around AI evangelism. Neither side dominated cleanly, and the debate generated more heat than light in several threads.
In Agreement
- Multiple commenters independently validated the article's productivity claims with their own AI-augmented accomplishments, from weekend tax dashboards to 11 open-source projects built with minimal manual coding.
- AI is particularly valuable for one-off personal tools: commenters noted they can now scratch their own itches exactly the way they want rather than settling for imperfect open-source alternatives.
- Using AI to write tools and scripts rather than having AI directly compute results is a widely-agreed best practice, leading to verifiable, reusable outputs.
- Reviving abandoned open-source projects is a compelling new AI use case: one commenter successfully modernized an abandoned web editor and added missing features with minimal manual effort.
- Several developers echoed the article's core claim that not typing code doesn't mean losing connection to the codebase, as reviewing and directing AI output keeps engineers engaged with the system.
- The economics of AI assistance enable software experimentation that was previously too costly: people can now build bespoke tools, use them, and discard them without traditional maintenance overhead.
Opposed
- Most AI-generated personal projects are abandonware: code piles with rotting dependencies that nobody will touch again, representing cheap dopamine rather than genuine productivity gains.
- LLMs are good at generating code (the nodes in a software graph) but fundamentally unable to maintain the implicit relationships, assumptions, and emergent behaviors (the edges) that define real maintainability; the claim that 'sufficient tests' enable AI code regeneration therefore breaks down for software with real users.
- Skill atrophy is a genuine risk: heavy AI use trains developers to depend on tools rather than developing the deep understanding needed to work effectively on complex systems.
- AI should not be trusted for high-stakes domains like taxes without expert verification, since hallucinations can lead to costly errors like audits and penalties.
- The 'fictional encyclopedia' and similar AI-powered misinformation projects are actively harmful, deliberately blurring truth and fiction with no redeeming value.
- The enthusiast framing that skeptics will be 'left behind' is condescending and meaningless: these tools are not hard to adopt, and skeptics who choose not to use them are making a conscious decision, not falling behind through ignorance.
- Privacy concerns about uploading sensitive financial and personal data to commercial AI services are legitimate and routinely dismissed too casually by AI enthusiasts.
- The AI compute bubble is economically unsustainable, with compute heavily subsidized right now, and a reckoning within 18 months seems plausible.