Make Claude Code Remember: Auto-Capture and Sync Your Preferences

claude-reflect automatically captures your corrections and preferences during coding sessions and queues them with confidence scores. With /reflect, you review the queue and sync approved learnings to CLAUDE.md and AGENTS.md at both global and project scope. Smart filtering, duplicate checks, and semantic dedup keep your memory files clean and reusable.
Key Points
- Automatic hooks capture corrections and positive feedback with confidence scoring and queue them for later review.
- /reflect enables human-in-the-loop processing, with options to apply, select, or review items; additional commands cover history scanning, sync targets, review, and semantic dedup.
- Multi-target sync updates ~/.claude/CLAUDE.md (global), project CLAUDE.md, and AGENTS.md to propagate learnings across tools and projects.
- Smart filtering excludes questions and one-off instructions; duplicate and semantic deduplication keep CLAUDE.md concise and consistent.
- Installation requires Claude Code CLI, jq, and Python 3; tips include using 'remember:' markers and running /reflect after commits.
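The capture-and-queue flow described above can be sketched in a few lines of Python. This is a minimal illustration, not claude-reflect's actual implementation: the cue patterns, weights, queue path, and function names are all assumptions made up for this example.

```python
import json
import re
from pathlib import Path

# Hypothetical queue location; the real tool's paths and format may differ.
QUEUE = Path.home() / ".claude" / "reflect-queue.jsonl"

# Toy keyword heuristics standing in for the tool's confidence scoring.
CORRECTION_CUES = [
    (r"\bremember:\s*", 0.95),          # explicit 'remember:' marker, highest confidence
    (r"\b(always|never)\b", 0.80),      # strong standing preference
    (r"\b(actually|instead)\b", 0.60),  # likely a correction of prior output
]

def score(message: str) -> float:
    """Return a confidence score for whether a message is a reusable correction."""
    # Smart filtering: questions and one-off asks never qualify.
    if message.rstrip().endswith("?"):
        return 0.0
    best = 0.0
    for pattern, weight in CORRECTION_CUES:
        if re.search(pattern, message, re.IGNORECASE):
            best = max(best, weight)
    return best

def capture(message: str, threshold: float = 0.5) -> bool:
    """Queue a message for later /reflect review if it scores above threshold."""
    s = score(message)
    if s < threshold:
        return False
    QUEUE.parent.mkdir(parents=True, exist_ok=True)
    with QUEUE.open("a") as f:
        f.write(json.dumps({"text": message, "confidence": s}) + "\n")
    return True
```

Nothing is written to CLAUDE.md at capture time; entries only sit in the queue until a human approves them during /reflect review.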
Sentiment
The Hacker News community is moderately interested in the problem claude-reflect addresses but divided on the implementation approach. Most commenters agree that preserving corrections across sessions is valuable, yet many prefer manual curation over automated capture. The specific implementation drew targeted technical criticism. Overall, the reception is cautiously positive with substantive constructive feedback and an engaged, responsive author.
In Agreement
- Automated capture catches implicit corrections users don't consciously document, complementing manual workflows
- Human-in-the-loop review gives users control over what gets added to configuration files
- Even the Claude Code team uses CLAUDE.md corrections, validating the general approach of persistent project context
- Corrections can be consolidated into simpler explanations over time, keeping context manageable
- Project-specific preferences can't be captured by general model training, so local persistence is necessary
Opposed
- Growing context and prompt size causes LLM performance degradation — a phenomenon described as 'context rot'
- Manual documentation workflows with session logs and phase reviews may already capture most valuable insights without automation
- CLAUDE.md should remain short and under full human control; automated additions risk uncontrolled bloat
- Grep-based pattern matching for detecting positive feedback is too simplistic and fragile
- Linters, tests, and hooks are more reliable enforcement mechanisms than documentation that the LLM may not consult
- Claude doesn't always read reference materials linked from CLAUDE.md, undermining the entire documentation-based approach
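The grep fragility criticism is easy to demonstrate. Below is a naive keyword matcher of the kind commenters object to; the patterns are hypothetical, not the ones claude-reflect actually uses.

```python
import re

# A naive positive-feedback detector: match praise keywords anywhere.
POSITIVE = re.compile(r"\b(great|perfect|nice|thanks)\b", re.IGNORECASE)

def looks_positive(message: str) -> bool:
    return bool(POSITIVE.search(message))

# False positive: negation flips the meaning, but the keyword still matches.
looks_positive("that's not great, the tests still fail")  # True, wrongly

# False negative: genuine praise with no keyword goes undetected.
looks_positive("exactly what I needed")  # False, wrongly
```

Both failure modes are why critics argue that a review step (or a model-based classifier) is needed on top of any pattern matching.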