Write a Minimal, High-Leverage CLAUDE.md

CLAUDE.md should onboard Claude to your codebase with only the essential and universally relevant WHY/WHAT/HOW. Because instruction-following declines with instruction count (and the harness already adds many), keep it short and rely on Progressive Disclosure for task-specific details via separate, referenced docs. Use linters/formatters and hooks instead of stuffing style rules into CLAUDE.md, and avoid auto-generating this high-leverage file.
Key Points
- LLMs are stateless; CLAUDE.md is the persistent, session-wide way to onboard the agent with your project’s WHY/WHAT/HOW.
- Claude often ignores CLAUDE.md if it’s not clearly relevant; include only universally applicable information.
- Instruction-following degrades as instructions increase; keep CLAUDE.md very concise, given the harness already adds ~50 instructions.
- Use Progressive Disclosure: keep task-specific guidance in separate docs and reference them via pointers instead of copying content (see the sketch after this list).
- Don’t use CLAUDE.md as a linter; rely on deterministic tools (linters/formatters, hooks, slash commands), and avoid auto-generating CLAUDE.md.
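As an illustration only, here is a minimal sketch of what such a file might look like. The project description, commands, and referenced doc paths are hypothetical placeholders, not a prescribed template:

```markdown
# CLAUDE.md (sketch -- every name and path below is hypothetical)

## Why
Payments service; correctness over speed, and every money-handling change needs a test.

## What
Monorepo layout: `api/` (HTTP handlers), `core/` (domain logic), `migrations/`.

## How
- Build: `make build`  Test: `make test`  Lint: `make lint` (also enforced by hooks)
- Never edit generated files under `api/gen/`.

## Task-specific guides (read only when relevant)
- Adding a migration: see `docs/migrations.md`
- Regenerating the API client: see `docs/codegen.md`
- Release process: see `docs/release.md`
```

The point of the last section is Progressive Disclosure: the main file stays short and universally relevant, and longer, task-specific instructions are pulled in only when the task at hand makes them relevant.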
Sentiment
The overall sentiment of the Hacker News discussion is mixed but leans towards skepticism about the current state of LLM coding agents and the need for specialized prompt engineering through `CLAUDE.md` files. Many users acknowledge, and even agree with, the article's observations about LLM limitations (such as weak instruction adherence and statelessness), but a significant portion is frustrated by those limitations, views `CLAUDE.md` as a temporary workaround for flawed tools, and questions the real productivity gains. There is a strong desire for more deterministic, introspectable, and genuinely intelligent AI interfaces that would not require such 'vibe engineering', and some predict that many current `CLAUDE.md` practices will become obsolete as models improve.
In Agreement
- LLMs, especially Claude, frequently ignore instructions in `CLAUDE.md` when the file is too long, when its guidance is not universally applicable, or simply as the conversation progresses, confirming the article's core premise.
- The 'Progressive Disclosure' method of referencing specialized `.md` files from a main `CLAUDE.md` is a recognized strategy for managing context, though its reliability is debated, with some users reporting Claude often fails to read referenced documents.
- Using deterministic linters, formatters, and automated checks (e.g., git hooks) for code style and quality is superior to instructing LLMs via `CLAUDE.md`; this aligns with the article's advice, since these tools are more reliable, faster, and cheaper (a minimal hook sketch follows this list).
- Strategic placement of key, high-level information in `CLAUDE.md` can provide high ROI by reducing repetitive instructions and onboarding effort for the agent.
- `CLAUDE.md` files (or similar conventions like `AGENTS.md`) serve as a configuration point for coding agent harnesses, offering special quality-of-life mechanics like deterministic context injection that differ from general documentation.
- The issue of 'prompt instability' across model updates validates the article's underlying concern about instruction adherence, suggesting that prompt engineering can be a continuous challenge.
- Some users acknowledge that `CLAUDE.md` can be beneficial for communicating project 'why, what, and how' and workflow instructions (e.g., how to lint, run tests, regenerate APIs), which might not fit naturally into a `README.md`.
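To make the "deterministic tools over prose rules" point concrete, here is a minimal sketch of a git pre-commit hook that enforces style mechanically instead of via `CLAUDE.md` instructions. The tool choices (ruff, black) are assumptions for a Python project; swap in whatever the repo actually uses:

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit (must be executable) -- a minimal sketch.
# Style and lint rules live in deterministic tools, not in CLAUDE.md prose:
# the commit is rejected until the checks pass, regardless of what the agent "remembers".
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],      # assumption: ruff is the project's linter
    ["black", "--check", "."],   # assumption: black is the formatter
]

for cmd in CHECKS:
    if subprocess.run(cmd).returncode != 0:
        print(f"pre-commit: '{' '.join(cmd)}' failed; fix before committing.", file=sys.stderr)
        sys.exit(1)
```

The same checks can run in CI, so violations are blocked deterministically even when `CLAUDE.md` guidance is ignored.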
Opposed
- `CLAUDE.md` is often ineffective or ignored, leading some users to find it mostly useless and to manually paste instructions or rely on LLMs inferring context from the codebase directly.
- The need for intricate 'vibe engineering' and 'canary in the coal mine' tests (like the 'Mr Tinkleberry' example) signals a problematic, non-introspectable interface, departing from principles of 'Real Engineering' and predictable, understandable systems.
- Requiring dedicated `CLAUDE.md` files is seen as a regression, bloating directories, lacking portability, and unnecessary if LLMs could simply understand `README.md` files written for humans.
- The time and effort spent on crafting and maintaining `CLAUDE.md` files (and prompt engineering in general) is not worth the perceived marginal gains, with some preferring to code themselves or perform surgical edits.
- Studies suggest AI tools can *decrease* productivity for experienced developers, challenging the assumption of universal productivity gains from using LLMs.
- The statelessness of LLMs and the current need for `CLAUDE.md` are viewed as temporary 'kludges' that will become obsolete as models develop better memory, statefulness, and inherent contextual understanding, similar to how human junior developers remember instructions.
- Giving LLMs *less* context (e.g., stripping comments and empty spaces from code) can sometimes improve quality on hard problems by reducing 'noise' and increasing the compute-to-information ratio.
- For simpler codebases, auto-generating `CLAUDE.md` files via `/init` is considered sufficient and efficient by some, directly contradicting the article's advice against it.