Claude Adds Project-Scoped Memory and Incognito Mode, Now on Pro and Max

Claude now supports memory that preserves project-specific context to streamline ongoing work while keeping unrelated details separate. Users can view, edit, and guide what’s remembered, and can use Incognito chats for sessions that don’t save to memory. Initially available to Team and Enterprise, memory is now rolling out to Pro and Max plans following extensive safety testing and refinement.

Key Points
- Memory captures work context (processes, clients, specs, priorities) and is scoped per project to keep details separate and confidential.
- Users have granular control: view, edit, and direct what Claude remembers via a memory summary in Settings.
- Incognito chat provides conversations that don’t save to memory or history, ideal for sensitive or clean-slate discussions.
- The rollout now includes Pro and Max plans, in addition to Team and Enterprise (with admins able to disable memory).
- Anthropic conducted extensive safety testing and made targeted adjustments to ensure memory delivers helpful, safe behavior.

Sentiment
The overall sentiment of the Hacker News discussion is largely **skeptical and cautious**, with a significant lean towards **negative** among experienced users who prioritize prompt engineering and local control. While some positive viewpoints exist regarding convenience and specific use cases, these are often overshadowed by concerns about privacy, output quality degradation, and the perceived lack of novelty in the feature.

In Agreement
- Memory can significantly reduce the need for repetitive re-explanations and speed up complex workflows, especially for users who might not be 'sophisticated' in prompt engineering.
- The project-scoped nature of the memory helps prevent cross-contamination of context and protects confidentiality, which is a key benefit.
- The feature allows for more natural-feeling conversations and can help the LLM maintain consistency in responses over time, adapting to user preferences.
- For specific, sensitive professional contexts (e.g., air defense), memory can help the LLM build 'trust' and provide relevant information that might otherwise be blocked by initial safety filters.
- Existing project-level instructions and pre-prompts (such as those in Claude's projects, Perplexity workspaces, or custom MD files) already serve a similar purpose for maintaining context and tech-stack preferences; this memory feature is seen as an evolution of them.
- The combination of projects, skills, and memory could be very powerful if token limits were higher.

Opposed
- Many expert users prefer precise, 'one-shot' prompting, finding that memory turns the LLM into a 'black box,' makes prompts harder to refine, and obscures what inputs the model is actually using.
- Concerns about privacy are paramount, with arguments that the memory layer should be local to the user's device, not stored on vendor servers, to prevent data exploitation and ensure control.
- Memory can lead to 'noise,' accumulate 'garbage' over time, and stifle creativity or 'fresh ideation' by locking the LLM into past, potentially irrelevant, contexts.
- Anecdotal evidence suggests a decline in Claude's output quality and behavior since the introduction of 'skills' or memory, with the LLM often defaulting to creating tools/scripts instead of directly solving problems, or exhibiting 'sycophancy.'
- The feature is seen by some as not 'actually new,' but rather a re-packaging or evolution of existing context management methods like pre-prompts or project-specific markdown files.
- Some raise concerns that memory could reinforce harmful patterns, contribute to 'AI-induced psychosis,' or enable attempts to bypass safeguards.
- Some users feel overwhelmed by constant feature announcements, suggesting a low signal-to-noise ratio in the current LLM landscape.