Claude Adds Project-Scoped Memory and Incognito Mode, Now on Pro and Max

Claude now supports memory that preserves project-specific context to streamline ongoing work while keeping unrelated details separate. Users can view, edit, and guide what’s remembered, and use Incognito chats for sessions that don’t save to memory. Memory launched first for Team and Enterprise and is now rolling out to Pro and Max, following extensive safety testing and refinement.
Key Points
- Memory captures work context (processes, clients, specs, priorities) and is scoped per project to keep details separate and confidential.
- Users have granular control: view, edit, and direct what Claude remembers via a memory summary in Settings.
- Incognito chat provides conversations that don’t save to memory or history, ideal for sensitive or clean-slate discussions.
- The rollout now includes Pro and Max plans, in addition to Team and Enterprise (with admins able to disable memory).
- Anthropic conducted extensive safety testing and made targeted adjustments to ensure memory delivers helpful, safe behavior.
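As a rough mental model only (this is not Anthropic's actual implementation, and all names below are hypothetical), the per-project scoping and incognito behavior described above can be sketched as a store that keys memories by project and silently drops writes from incognito sessions:

```python
# Toy sketch of project-scoped memory with an incognito escape hatch.
# Illustrative only -- not Anthropic's architecture or API.
from dataclasses import dataclass, field


@dataclass
class MemoryStore:
    # Each project gets its own isolated memory bucket,
    # so details never leak across workspaces.
    _buckets: dict = field(default_factory=dict)

    def remember(self, project: str, fact: str, incognito: bool = False) -> None:
        if incognito:
            return  # incognito sessions never write to memory
        self._buckets.setdefault(project, []).append(fact)

    def recall(self, project: str) -> list:
        # Only the requesting project's memories are visible.
        return list(self._buckets.get(project, []))

    def forget(self, project: str, fact: str) -> None:
        # User-editable: remove a single remembered item.
        self._buckets.get(project, []).remove(fact)


store = MemoryStore()
store.remember("client-a", "prefers TypeScript")
store.remember("client-b", "ships weekly", incognito=True)
print(store.recall("client-a"))  # ['prefers TypeScript']
print(store.recall("client-b"))  # [] -- the incognito write was dropped
```

The key design point the sketch mirrors is that isolation happens at write and read time: a project can neither see nor pollute another project's bucket, and incognito input leaves no trace to edit or delete later.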
Sentiment
The community sentiment is predominantly skeptical to cautiously negative. While some users see practical value in memory for convenience-oriented use cases, the majority of engaged commenters express concerns about loss of control, context pollution, privacy implications, and output quality degradation. The technical community clearly prefers explicit, user-managed context over opaque automated memory systems. There is a notable split between users who see memory as a helpful evolution and power users who view it as an unwanted layer of complexity that makes an already opaque system even harder to reason about.
In Agreement
- Memory is genuinely convenient for recurring practical queries like car details, location, and tech stack preferences that would otherwise require repetitive re-explanation each session.
- Project-scoped memory is a sound design decision that prevents cross-contamination between different workspaces and maintains focused, relevant context per project.
- Claude's memory implementation feels more natural and less intrusive than ChatGPT's approach, weaving context into responses rather than awkwardly forcing it in.
- The incognito chat feature is a welcome addition for sensitive or fresh-context conversations where accumulated memory would be counterproductive.
- For less experienced users, memory serves as a useful crutch that helps them get usable results without needing to craft perfect prompts.
- Having user controls to view, edit, and manage what Claude remembers is an important step toward transparency and user agency.
Opposed
- Memory makes the LLM black box even more opaque, causing users to lose visibility into what inputs are driving outputs and making it harder to debug and refine prompts.
- Context rot is a well-documented phenomenon where irrelevant information in the context window actively degrades model intelligence and output quality.
- Memory often stores irrelevant, stale, or context-inappropriate information that pollutes unrelated conversations with nonsensical cross-references.
- Anthropic's safety testing claims are vague and unsubstantiated, describing a process without providing data, methodology, or evaluation results.
- Memory features combined with RLHF training can create amplification loops that worsen sycophancy or, through overcorrection, amplify model aggressiveness.
- Storing memory on Anthropic's servers raises serious privacy concerns, especially given that people use LLMs for deeply personal conversations including as substitutes for therapy.
- Experienced users overwhelmingly prefer explicit, user-controlled context like markdown files and system prompts over opaque automated memory, finding the former more reliable and debuggable.
- Memory may be a key ingredient in AI psychosis scenarios where models develop consistent personas that create harmful illusions of interacting with a living entity.