Back to Hand Coding: Why Agentic AI Still Ships Plausible Slop

Added Jan 26
Article: Negative
Community: Positive/Mixed

Initial excitement about AI coding gives way to systemic issues when tackling real projects. Detailed specs don’t rescue agents, because agents can’t evolve a design over time and produce code that looks good locally but fails holistically. As a result, Mo returns to hand-coding, citing better speed, accuracy, creativity, and integrity across the full development lifecycle.

Key Points

  • Early successes with AI coding are deceptive; larger, real-world tasks expose misalignments and brittle decisions.
  • Upfront, detailed specs don’t solve the problem because engineering specs must evolve through discovery, which current agents can’t manage well over time.
  • Agent outputs can look impressive in isolation and PRs but often degrade the codebase’s overall coherence, patterns, and structural integrity.
  • The dynamic mirrors vibewriting: plausible local text that fails at the chapter or system level.
  • Given quality, safety, and user trust requirements, hand-coding proves more effective and efficient when the full cost and lifecycle are considered.

Sentiment

The community is strongly sympathetic to the article's position. The vast majority of commenters agree that over-reliance on AI coding tools is dangerous, particularly for learning and skill development. The tone is thoughtful and concerned rather than hostile — most commenters speak from personal experience rather than ideology. Disagreements tend to be about degree rather than direction: few defend unrestricted vibe coding, but some argue experienced developers can use AI responsibly as a productivity tool. The discussion has a notable educational dimension, with multiple CS teachers and examiners weighing in with firsthand observations about AI's impact on students.

In Agreement

  • AI-generated code looks plausible in isolation but creates architecturally incoherent codebases — resembling 'vibewriting' where convincing paragraphs don't add up to a sound chapter
  • Students and junior developers who rely on AI miss critical learning that comes from struggling through problems themselves, leading to people who can recite theory but can't explain their own code
  • Skill atrophy is real and insidious: developers report diminishing patience, eroding pride of ownership, and collapsing tolerance for thinking through hard problems after extended AI use
  • LLM-generated code 'has no theory' unlike traditional abstractions like frameworks — there's no documentation to consult when things go wrong, and nobody understands why the AI made particular choices
  • The debugging paradox: if debugging is twice as hard as writing code, and you weren't even capable of writing it yourself, how will you ever fix it when things break?
  • AI introduces subtle bugs in complex domains (concurrency, state management, legacy APIs) that are harder to debug than they would have been to write correctly by hand
  • Vibe-coded projects exhibit accelerated codebase decay compared to hand-written code

Opposed

  • AI is just another abstraction layer in computing's long history — each generation feared the next level of abstraction (assembly to C, C to Python) and the world didn't fall apart
  • Experienced developers can productively pair with AI by reading every diff and maintaining architectural control, using it as a 'mech suit' that amplifies existing skills
  • Learning to work effectively with AI is itself a valuable new skill; mid-career developers who shun AI entirely risk falling behind
  • Not everyone needs deep foundational knowledge — for many developers building CRUD apps and REST APIs, AI-assisted development is perfectly adequate
  • The fault lies with uncritical usage rather than the tool itself; proper AI use requires constraints, 'skills', and careful prompting to avoid slop