Recall: Persistent Redis Memory for Claude (v1.5)

Added Oct 8, 2025

Recall adds durable, searchable memory to Claude using Redis, so important context persists across sessions and survives context limits. It offers a comprehensive toolkit—core memory ops, smart context, relationships/graphs, version history/rollback, templates, categories, and advanced search—with workspace isolation and optional global sharing. Strong security guidance, easy setup via npx, and low cost make it suitable for both individual developers and teams.

Key Points

  • Persistent memory for Claude stored in Redis, isolated by workspace with optional global sharing (isolated/global/hybrid modes).
  • Rich toolset: store/search/update/delete, smart context retrieval, session summaries, export/import, duplicate merging, relationships/graphs, and v1.5 features like version history/rollback, templates, categories, and advanced search.
  • Simple install via npx and straightforward Claude MCP configuration; supports local or cloud Redis.
  • Emphasis on security and compliance: require authentication, TLS, access controls, auditing, and careful handling of sensitive data.
  • Designed for individuals and teams to build shared, reusable knowledge with fast performance and low operating cost.
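To make the core idea concrete, here is a minimal sketch of the store/search pattern with workspace isolation described above. It is illustrative only: it uses a plain dict in place of a live Redis connection, and the key layout (`memory:{workspace}:{id}`) and method names are assumptions, not Recall's actual schema or API.

```python
class MemoryStore:
    """Toy workspace-isolated memory store.

    Uses an in-memory dict as a stand-in for Redis; a real version
    would swap self._db for a redis.Redis() client and use indexed
    search rather than a linear scan.
    """

    def __init__(self):
        self._db = {}  # stand-in for Redis key/value storage

    def _key(self, workspace, mem_id):
        # Hypothetical key layout: "memory:{workspace}:{id}"
        return f"memory:{workspace}:{mem_id}"

    def store(self, workspace, mem_id, text):
        self._db[self._key(workspace, mem_id)] = text

    def search(self, workspace, query):
        # Naive substring search scoped to a single workspace,
        # so one project's memories never leak into another's.
        prefix = f"memory:{workspace}:"
        return [text for key, text in self._db.items()
                if key.startswith(prefix) and query.lower() in text.lower()]


store = MemoryStore()
store.store("proj-a", "1", "Use Postgres 16 for the billing service")
store.store("proj-b", "1", "Billing service is deprecated")
print(store.search("proj-a", "billing"))
# Only proj-a's memory is returned; proj-b is isolated.
```

The workspace prefix in the key is what delivers the isolated mode; a hybrid or global mode would simply add a shared prefix that every workspace is allowed to query.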

Sentiment

The sentiment is mixed. While there's an acknowledgment of the problem Recall aims to solve and some excitement about its potential, there's also significant critical inquiry and skepticism regarding the implementation's complexity, potential for context bloat, intrusiveness, and the inherent challenges of memory maintenance (pruning and updating).

In Agreement

  • The problem of Claude starting conversations from scratch and losing context due to limits is a significant pain point for daily coding workflows.
  • If Recall 'delivers,' it could be a '100% game changer' for LLM interaction.
  • Persistent memory can prevent context from becoming 'brittle' and losing precision over time, providing an 'always there' way for Claude to access information without feeling bloated.

Opposed

  • Memory management might be better handled outside of inference time, using an LLM as a judge for metaprompting, rather than through intrusive tooling inside the context window.
  • Instead of a complex system, simpler solutions like using context files in markdown format, as seen in projects like SpecKit, might suffice.
  • Adding 27 tools might 'bloat' the already crowded context window; simpler tools like 'Save Memory' and 'Search Memory' or a listener on a directory of markdown files could be more efficient.
  • The fundamental problem with memory in LLMs lies in 'pruning and updating'; a searchable text database becomes 'wildly out of touch with reality quickly' without proper mechanisms for this.
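The "simpler solution" some commenters favor can be sketched in a few lines: a directory of markdown context files searched on demand, instead of a many-tool memory server. This is a hedged illustration of that suggestion, not code from Recall or SpecKit; the function name and file layout are invented for the example.

```python
from pathlib import Path
import tempfile


def search_notes(notes_dir, query):
    """Return (filename, line) pairs from *.md files matching the query.

    A case-insensitive linear scan: no server, no schema, and 'pruning'
    is just editing or deleting a markdown file by hand.
    """
    hits = []
    for md in Path(notes_dir).glob("*.md"):
        for line in md.read_text(encoding="utf-8").splitlines():
            if query.lower() in line.lower():
                hits.append((md.name, line.strip()))
    return hits


# Demo against throwaway files in a temp directory:
tmp = Path(tempfile.mkdtemp())
(tmp / "api.md").write_text("Auth uses JWT tokens\n", encoding="utf-8")
(tmp / "db.md").write_text("Postgres runs on port 5432\n", encoding="utf-8")
print(search_notes(tmp, "jwt"))
# Matches only the line in api.md.
```

The trade-off the thread identifies applies here too: this keeps the context window lean, but nothing in it solves pruning or updating; stale notes go out of date just as quickly as stale Redis entries.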