The Annotated Plan Workflow for AI Coding

Added Feb 22
Article: Positive · Community: Positive, Divisive

The author describes a 'plan-first' workflow for AI coding that requires a reviewed markdown specification before any code generation begins. By iterating through an annotation cycle that refines the research and implementation plans, developers keep strict architectural control and avoid wasted effort. The approach turns AI-assisted development from a chaotic chat-based interaction into a structured, predictable engineering pipeline.

Key Points

  • Never allow the AI to write code until a written implementation plan has been reviewed and approved.
  • Use persistent markdown files as shared mutable state to provide precise, inline feedback during the planning stage.
  • Force deep research by using specific language like 'intricacies' and 'deeply' to prevent the AI from surface-level skimming.
  • The 'Annotation Cycle' allows for iterative refinement of the plan, ensuring the AI understands business constraints and technical preferences.
  • Implementation should be a mechanical process where the AI follows a finalized todo list without needing further creative input.
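The key points above can be made concrete with a sketch of what such a plan file might look like mid-cycle. The file name, section layout, and the `> REVIEW:` annotation convention here are illustrative assumptions, not conventions prescribed by the article:

```markdown
<!-- plan.md — hypothetical example of the persistent markdown file
     acting as shared mutable state between developer and AI -->

## Research Findings (AI-written)
The payments module uses a strategy pattern; every provider
implements `PaymentProvider.charge()`.

> REVIEW: Partly wrong — the Stripe provider bypasses the strategy
> interface. Re-read src/payments/ deeply before revising the plan.

## Implementation Plan (AI-written, developer-annotated)
- [ ] 1. Add a `RefundCapable` interface alongside `PaymentProvider`
- [ ] 2. Implement it for the Stripe and PayPal providers

> REVIEW: Approved. Keep step 1 in its own commit.
```

Once every annotation is resolved and the checklist approved, implementation becomes the mechanical step described in the final key point: the AI works through the todo list without further creative input.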

Sentiment

The community is notably split. While the high upvote count suggests broad interest and general approval of the topic, the comments themselves reveal significant polarization. Supporters appreciate the practical, structured approach and validate it from their own experience. Detractors range from those who think the article restates the obvious to those who question whether AI coding workflows produce real value at all. The discussion is more contentious than the upvote count would suggest.

In Agreement

  • The planning-execution separation mirrors established software engineering practices and produces consistently better results than ad-hoc prompting
  • Emphatic language in prompts does influence LLM behavior through the attention mechanism, steering the model toward expert-level training data segments
  • Having the AI research the codebase first and write findings to a document prevents 'reasonable-but-wrong' architectural assumptions
  • The annotation cycle — editing the plan directly rather than describing changes verbally — gives developers precise control over what gets implemented
  • Long context sessions with a persistent plan document work well because the plan acts as an anchor even after context compaction

Opposed

  • This is just conventional Claude Code usage repackaged as a novel workflow — plan mode and spec-driven coding are already well-established patterns
  • Emphatic prompt language is cargo cult superstition with no rigorous statistical evidence proving it changes output quality
  • Pasting open-source reference code into LLM prompts raises serious licensing and ethical concerns, effectively using AI as a license filter
  • The entire AI coding field is driven by anecdata and commercial hype rather than empirical evidence of productivity gains
  • Modern models are already trained through reinforcement learning to read code thoroughly, making explicit 'read deeply' instructions redundant