Ship Faster by Treating AI as a Forgetful Junior Dev

Added Sep 2, 2025
Article: Positive · Community: Neutral/Divisive

A Sanity staff engineer now lets AI generate most initial code while he focuses on architecture, review, and coordination. His effective pattern is a three-attempt loop, bolstered by strong context (CLAUDE.md, MCP integrations), disciplined task management, and a staged review process. Despite costs and pitfalls, he finds the ROI compelling, shifting the role from code ownership to problem ownership.

Key Points

  • Treat AI like a junior developer who forgets between sessions; expect a three-attempt workflow where only the third is typically shippable.
  • Solve the context problem with CLAUDE.md files and MCP integrations (Linear, docs, non-prod DBs, codebase, GitHub) to jump-start from attempt two.
  • Run multiple AI threads deliberately: don’t parallelize the same problem space, track in Linear, and mark human-edited code.
  • Adopt a layered review: AI reviews first for tests/bugs, then engineer reviews architecture/business logic, then normal team review.
  • ROI is strong (2–3x faster shipping) despite $1k–$1.5k/month/engineer cost; main risks are lack of learning, overconfidence, and context limits.
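The CLAUDE.md file mentioned above is plain markdown that the agent reads at the start of every session. A minimal hypothetical sketch (the project stack, commands, and conventions here are invented for illustration, not taken from the article):

```markdown
# Project context for Claude

## Stack
- TypeScript monorepo, pnpm workspaces
- API: Node + Fastify; UI: React

## Commands
- `pnpm test` — run unit tests (always run before proposing a diff)
- `pnpm lint --fix` — apply formatting and lint rules

## Conventions
- No default exports; prefer named exports
- All DB access goes through the shared db package; never query Postgres directly
- Tasks are tracked in Linear; reference the ticket ID in commit messages
```

Because the model forgets between sessions, a file like this substitutes for the onboarding a human junior would only need once, which is what lets a fresh session start at roughly attempt two instead of attempt one.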

Sentiment

Cautiously mixed, leaning slightly skeptical. While many commenters acknowledge genuine utility for specific tasks, the most upvoted comments take measured or deflating positions. The community is roughly evenly split between enthusiasts, skeptics, and pragmatists, with a notable undercurrent of fatigue at having the same AI coding conversation repeatedly. Supporters tend to emphasize workflow discipline and domain-specific wins, while detractors focus on unverifiable claims, quality concerns, and the risk of deskilling.

In Agreement

  • Multi-pass planning workflows with detailed specs unlock AI productivity that can compress weeks of work into days, especially for greenfield projects, UI code, and boilerplate
  • The 'forgetful junior developer' framing maps naturally to how senior engineers already manage teams, and the same mentoring skills transfer directly to AI orchestration
  • At engineering salary levels, even modest productivity gains easily justify AI tool costs, similar to how enterprise software seats routinely cost far more per engineer
  • CLAUDE.md files, MCP integrations, and structured context management are critical for getting past the initial garbage output and reaching usable results faster
  • AI tools are transformative for solo developers and small teams who can now ship MVPs and production features that would previously have required hiring additional engineers

Opposed

  • Elaborate prompt-review-iterate loops are essentially coding with extra steps in an imprecise, verbose meta-language, adding overhead rather than removing it
  • AI coding tools fail completely in embedded systems, firmware, custom C++ idioms, and other niche domains where the training data is sparse or the constraints are hardware-specific
  • The apparent productivity gains may mask growing technical debt, since developers end up with code they don't intimately understand and can't maintain as effectively
  • Junior developers using AI tools aren't building foundational skills and can't competently review AI output, creating a trust and competency gap that threatens long-term team development
  • Claims of transformative productivity lack verifiable evidence — blog posts and anecdotes from people with incentives to promote AI tools don't constitute proof
  • A staff engineer having 80% of their code written by a tool that 'doesn't learn' raises questions about what kind of work that engineer is actually doing