Make Claude Code Remember: Auto-Capture and Sync Your Preferences

Added Jan 4, 2026

Claude-reflect automatically captures your corrections and preferences during coding sessions and queues them with confidence scores. With /reflect, you review and sync approved learnings to CLAUDE.md and AGENTS.md across global and project scopes. Smart filtering, duplicate checks, and semantic dedup keep your memory clean and reusable.

Key Points

  • Automatic hooks capture corrections and positive feedback with confidence scoring and queue them for later review.
  • /reflect enables human-in-the-loop processing, with options to apply, select, or review items; additional commands cover history scan, targets, review, and semantic dedupe.
  • Multi-target sync updates ~/.claude/CLAUDE.md (global), project CLAUDE.md, and AGENTS.md to propagate learnings across tools and projects.
  • Smart filtering excludes questions and one-off instructions; duplicate and semantic deduplication keep CLAUDE.md concise and consistent.
  • Installation requires Claude Code CLI, jq, and Python 3; tips include using 'remember:' markers and running /reflect after commits.

Sentiment

The overall sentiment of the discussion is cautiously positive. While there's clear enthusiasm for the concept of durable, curated memory for LLMs and recognition of the problem Claude-reflect aims to solve, a significant portion of the discussion expresses valid concerns about "context rot" and the crucial need for human oversight and strict curation. The author's responsiveness to feedback and explanation of the tool's human-in-the-loop approach helped to address some of these reservations, leading to a constructive dialogue rather than outright disagreement.

In Agreement

  • The concern about context rot is real, but proper structure and curation of `CLAUDE.md` can mitigate it, making size less of an issue.
  • The practice of adding corrections to a `CLAUDE.md` file is valid, even endorsed by the Claude Code team themselves, provided there is curation.
  • A human-in-the-loop review and approval process is crucial for managing changes and preventing unwanted context accumulation.
  • The concept of knowledge extraction from conversational agents is valuable and should be more widespread.
  • Separating frequently needed high-signal information from general reference material within context helps maintain conciseness.
  • The tool's ability to catch *implicit* corrections that users don't consciously document is a significant and complementary benefit to existing explicit documentation workflows.

Opposed

  • There is a strong concern that LLMs degrade as the context or prompt size grows, leading to "context rot."
  • Some users prefer not to use a `CLAUDE.md` at all, relying instead on linters, tests, or subdividing tasks into smaller chunks to manage recurring needs.
  • A viewpoint exists that `CLAUDE.md` should be kept as short as possible and be under strict human control, never touched by the AI, serving as the user's primary way to influence development direction.
  • Skepticism was expressed about the effectiveness or realism of the tool's positive pattern detection (e.g., "perfect!", "exactly right").