Snowflake Patches Critical Sandbox Escape and Malware Execution Flaw in Cortex AI
Article: NegativeCommunity: NegativeConsensus

A vulnerability in the Snowflake Cortex Code CLI allowed for malware execution and sandbox escapes via indirect prompt injection. By exploiting flaws in command validation and sandbox flags, attackers could bypass human approval to exfiltrate data or destroy database tables. Snowflake has since released a fix in version 1.0.25 to address these critical security gaps.
Key Points
- The command validation system failed to check process substitution expressions, allowing shell commands to bypass human-in-the-loop approval.
- Indirect prompt injection via untrusted files could trick the AI into programmatically disabling its own sandbox using the 'dangerously_disable_sandbox' flag.
- Successful exploitation allowed for remote code execution, enabling attackers to steal cached Snowflake tokens and perform destructive database actions.
- A subagent context loss issue resulted in the main AI agent failing to alert users that malicious commands had already been executed.
- Snowflake remediated the vulnerability with the release of Cortex Code CLI version 1.0.25 on February 28, 2026.
Sentiment
The community overwhelmingly agrees with the article's findings but goes further, expressing deep skepticism about the entire concept of prompt-based AI sandboxing. Most commenters view this as an unsurprising and predictable failure stemming from fundamental architectural mistakes, not a novel vulnerability.
In Agreement
- Snowflake's sandbox design was fundamentally flawed — if the sandboxed entity can set a flag to disable the sandbox, it was never a sandbox in the first place
- Prompt injection is an inherently difficult problem because mixing instructions and data in the same channel always creates attack surfaces, similar to SQL injection before parameterized queries
- The command validation failure — only checking the first word of a shell command — shows a basic lack of Unix security knowledge and is reminiscent of 1990s-era input validation bugs
- Security constraints for AI agents must be enforced externally at the runtime or OS level, not within the agent's own context where they become mere suggestions
- Giving non-deterministic AI agents unrestricted command-line access is fundamentally dangerous regardless of prompt-level guardrails
Opposed
- The article's title is misleading — the AI didn't spontaneously escape the sandbox; it was directed via prompt injection, making this a more conventional vulnerability than the framing suggests
- Dual-input models that separate instructions from data may eventually solve prompt injection with high reliability, analogous to how prepared statements solved SQL injection
- Snowflake likely added the sandbox escape hatch because users find strictly sandboxed agents too limited, suggesting the real tension is between capability and containment