AI Agents Make Best Practices Non‑Optional

AI coding agents work best when ambiguity is eliminated and guardrails are strict. The author's team enforces 100% code coverage, clear file organization, fast, ephemeral, and concurrent development environments, and end-to-end typing backed by automated linters and generated clients. These investments turn best practices from "nice to have" into essential infrastructure that lets agents reliably produce correct code.

Key Points
- Require 100% code coverage to force agents to validate the behavior of every line they change, creating a clear to‑do list and reducing ambiguity.
- Design the filesystem as an interface: use clear namespaces and many small, focused files to improve agent navigation and context loading.
- Make dev environments fast, ephemeral, and concurrent so agents can iterate in tight loops, spin up clean contexts instantly, and run in parallel without conflicts.
- Automate strict quality gates: linters/formatters with auto‑fixes and typed languages (the author favors TypeScript) to reduce the model’s search space and encode intent.
- Adopt end‑to‑end typing: OpenAPI‑generated clients, Postgres types/checks/triggers, and typed query builders (e.g., Kysely), wrapping third‑party clients for strong types.
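The coverage requirement in the first bullet is usually enforced as an automated gate rather than a policy. A minimal sketch, assuming Jest as the test runner (the article names no specific tool): any run that dips below 100% on any metric fails outright, which is what turns missing coverage into the agent's unambiguous to-do list.

```javascript
// jest.config.js — a hedged sketch; Jest is an assumption, not the
// article's stated tooling. The run fails if any metric is below 100%.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 100,
      functions: 100,
      lines: 100,
      statements: 100,
    },
  },
};
```

Equivalent thresholds exist in most runners; the point is that the gate is mechanical, so an agent cannot negotiate with it.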
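The "wrapping third-party clients for strong types" idea in the last bullet can be sketched as below. Everything here is hypothetical for illustration (`legacyClient`, `User`, `isUser`, and `getUser` are invented names): a runtime type guard narrows the third-party client's `any` responses at the boundary, so downstream code only ever sees the strong type.

```typescript
// Hypothetical stand-in for an untyped third-party client whose
// responses come back as `any`.
const legacyClient = {
  get(path: string): any {
    // Pretend this arrived over the wire as untyped JSON.
    return { id: 1, email: "ada@example.com" };
  },
};

// The shape we assert at the boundary; downstream code sees only this.
interface User {
  id: number;
  email: string;
}

// Runtime check that narrows `any` to `User`, so the type is
// enforced at runtime, not merely asserted to the compiler.
function isUser(value: any): value is User {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof value.id === "number" &&
    typeof value.email === "string"
  );
}

// Typed wrapper: the rest of the codebase imports this,
// never the raw client.
function getUser(id: number): User {
  const raw = legacyClient.get(`/users/${id}`);
  if (!isUser(raw)) {
    throw new Error(`Unexpected /users/${id} response shape`);
  }
  return raw;
}
```

Generated OpenAPI clients serve the same purpose for first-party APIs, with the types coming from the schema rather than a hand-written guard.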

Sentiment
The community is notably divided. There is genuine agreement that structure, typing, and environments help AI work better, and several experienced developers confirm the article matches their experience. However, the 100% coverage prescription draws significant pushback as overly dogmatic, and multiple commenters view the article as startup marketing. Deeper philosophical concerns about AI-generated tests being tautological and AI optimizing for human approval rather than correctness add thoughtful skepticism beyond simple disagreement.

In Agreement
- Restricting degrees of freedom and enforcing structure (typing, naming conventions, modular files) genuinely helps AI agents produce better code
- Fast, ephemeral, concurrent dev environments are valuable regardless of AI usage
- 100% coverage has a different value proposition for AI-generated code than human-written code — the cost equation has changed
- Formal specifications (TLA+/PlusCal) combined with AI implementation represent an ideal workflow that finally makes formal methods practical
- Tests serve a different purpose in the AI era — acting as reified context that makes future agentic interactions safer

Opposed
- When AI writes both code and tests, it creates a tautology trap — flawed logic verified by tests designed to pass
- 100% code coverage is extremely bad advice for most projects, creating massive test codebases with diminishing returns
- Goodhart's Law accelerated by AI is dangerous — agents will flood projects with meaningless tests to hit coverage metrics
- The article is veiled marketing from an AI startup CEO and lacks objectivity
- Good code is not a universal constant — it is context-dependent, and the article's prescriptions are too dogmatic
- Management wants AI to ship faster, not to invest in best practices and documentation first
- AI optimizes for code that looks correct rather than code that is correct, making errors harder for humans to detect