Cowork: Let Claude Work in Your Files

Anthropic introduced Cowork, a research-preview feature that lets Claude autonomously work in a user-selected folder on macOS. Built on Claude Code’s agent foundations, it streamlines workflows, supports connectors and skills, and can operate with the browser via Claude in Chrome. Safety controls, user permissions, and cautions around destructive actions and prompt injections are emphasized, with more features and platforms coming soon.
Key Points
- Cowork gives Claude controlled file-system access to a user-selected folder to autonomously read, edit, and create files.
- It’s built on Claude Code’s agent foundations, supports connectors and new skills, and can use the browser via Claude in Chrome.
- Designed for streamlined workflows: persistent context, correct file outputs, and queued tasks that run in parallel.
- User control and safety are central—explicit permissions and confirmation for major actions—yet risks like deletions and prompt injections require caution.
- This is a research preview with rapid iteration planned (cross-device sync, Windows), available now on macOS for Claude Max subscribers.
Sentiment
The community is deeply cautious and skeptical. While there is genuine appreciation for the technical execution of the sandbox architecture and Anthropic's willingness to engage transparently in the discussion, the overwhelming sentiment is that giving AI agents file access raises serious, potentially fundamental security problems that have not been adequately addressed. Many commenters view this as shipping a powerful tool whose risks may outweigh its benefits for non-technical users, and the extensive debate about whether prompt injection can ever be truly solved reflects deep unease about the entire paradigm of agentic file access.
In Agreement
- The sandboxing approach using a full Linux VM with Apple Virtualization framework shows Anthropic is taking security seriously and going beyond basic containment
- The product direction of extending Claude Code's capabilities to non-developers is a valuable step toward making AI agents broadly useful
- The early research preview release approach allows learning from real users before scaling, and the team's active engagement in the discussion demonstrates good faith
- Running in a VM with folder-level access control and network allowlisting is a reasonable mitigation that addresses many practical threat vectors
Opposed
- Prompt injection is fundamentally unsolvable without destroying LLM utility — the very feature that makes LLMs general-purpose is also what makes them vulnerable to instruction injection from data
- Telling non-technical users to 'monitor for suspicious actions that may indicate prompt injection' is irresponsible, equivalent to telling people 'don't click suspicious links'
- There is no built-in rollback or snapshot mechanism for file operations, risking irreversible damage to user data with no recovery path
- Files accessed by Cowork become 'Inputs' collected by Anthropic per their privacy policy, creating significant privacy concerns for sensitive personal and financial documents
- DNS exfiltration vectors exist even when direct network access is denied in the sandbox, and the network allowlist may have additional unexpected exfiltration channels
- The target audience of non-technical users is the least equipped to handle the security risks this tool introduces, yet bears the most exposure