Agentic Engineering: Patterns for Mastering AI Coding

Added Mar 4
Article: Positive | Community: Divisive

Simon Willison outlines a comprehensive set of 'Agentic Engineering Patterns' designed to optimize the use of AI coding agents. The framework emphasizes that while code has become cheap to produce, maintaining quality requires strict TDD and the use of agents for deep code explanation. The guide serves as a roadmap for transitioning from manual coding to high-level AI orchestration.

Key Points

  • The economic shift where code generation is now inexpensive allows for a more iterative and experimental approach to software development.
  • Rigorous testing frameworks, specifically Red/Green TDD, are essential for verifying the accuracy of agent-generated code.
  • AI agents are highly effective tools for code comprehension, providing linear walkthroughs and answering interactive questions about logic.
  • Developers should focus on high-level orchestration and maintaining deep knowledge of the system while delegating repetitive coding tasks to agents.
  • Success with AI agents requires specific, documented prompting patterns and a disciplined workflow to ensure reliability.
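The Red/Green TDD loop mentioned above can be sketched in a few lines. This is an illustrative example, not from the guide itself; the `slugify` function and its behavior are assumptions chosen to keep the sketch small.

```python
# Red phase: write a failing test first. An agent is then asked to make
# this test pass without modifying the test itself, so the test acts as
# a deterministic guardrail on agent-generated code.
def test_slugify():
    assert slugify("  Agentic Engineering ") == "agentic-engineering"

# Green phase: the minimal implementation that satisfies the test
# (in practice, produced by the agent and accepted only once green).
def slugify(title):
    return title.strip().lower().replace(" ", "-")

test_slugify()  # passes: the cycle is green, so the change is accepted
```

The key discipline is ordering: the human (or a reviewed prompt) fixes the expected behavior in the test before the agent writes any implementation, which prevents the agent from "passing" by drifting the specification.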

Sentiment

The community response is broadly receptive to the guide's core ideas while maintaining healthy skepticism. Most commenters acknowledge AI coding tools provide real value but push back on hype about agents replacing deep engineering understanding. There is strong agreement that good engineering practices are even more important in agentic workflows, but disagreement about whether codifying them into named patterns is helpful or premature. The discussion is constructive rather than hostile, with genuine knowledge-sharing about practical workflows alongside pointed criticism of over-formalization.

In Agreement

  • Test-driven development and test harnesses are the most critical patterns for effective agentic coding, providing deterministic guardrails that prevent agents from drifting
  • The patterns are essentially good software engineering practices (testing, documentation, modularity) that happen to also make agents more effective — and that's a feature, not a bug
  • Agent-assisted coding delivers genuine productivity gains, especially for boilerplate, unfamiliar codebases, and heavily-patterned work in strongly-typed languages
  • Planning before execution (spec-driven development, plan mode) dramatically improves agent output quality and reduces wasted iterations
  • Integration tests and end-to-end tests have become much more viable because the cost of writing them with AI is negligible, shifting the testing pyramid

Opposed

  • The guide risks creating an Agile-for-AI consulting industry that over-formalizes simple, common-sense advice with fancy terminology and certification schemes
  • Many developers find agents frustratingly unreliable — hallucinating APIs, looping on simple issues, and producing code they would not be comfortable shipping
  • Outsourcing understanding to LLMs creates dangerous cognitive debt where engineers cannot debug or reason about code they are responsible for
  • For experienced developers who know their codebase well, manual coding is often faster than the prompt-wait-verify-reprompt cycle
  • These patterns will be obsolete within months as models rapidly improve, making codification premature and potentially misleading