The Ethics of Sloppypasta: Why You Shouldn't Forward Raw AI Text
Sharing unvetted AI-generated text is a breach of etiquette that forces recipients to do the work of verification and distillation that the sender skipped. This 'sloppypasta' erodes professional trust and reduces the sender's own understanding of the topic through cognitive debt. To maintain healthy communication, users must read, verify, and edit AI output before sharing it, preferably as a link rather than a wall of text.
Key Points
- AI-generated text creates an effort asymmetry where the sender's 'free' output becomes the recipient's 'expensive' burden to read and verify.
- Sharing raw AI output erodes interpersonal trust because recipients cannot distinguish between human expertise and plausible-sounding hallucinations.
- Delegating the writing process entirely to AI leads to cognitive debt, reducing the sender's own comprehension and retention of the subject matter.
- Effective AI etiquette requires users to read, verify, and distill AI responses before sharing them to ensure they add value rather than noise.
- To maintain professional courtesy, AI content should be disclosed and shared as links or attachments to avoid 'filibustering' digital conversations.
Sentiment
The community overwhelmingly agrees with the article. The discussion is filled with visceral frustration from professionals encountering sloppypasta daily — in Jira tickets, Slack messages, PR reviews, and design documents. Even the most prominent dissenting voice explicitly agrees that slop is bad, questioning only whether etiquette-based solutions or technological ones are more effective. The emotional resonance is high, with multiple commenters describing the behavior as disrespectful, offensive, and grounds for firing.
In Agreement
- AI-generated Jira tickets, design docs, and PR descriptions create a massive workload for recipients while the sender invests minimal effort, fundamentally breaking the assumption that creation costs more than review
- Sending raw AI output is disrespectful, comparable to an LMGTFY ('Let Me Google That For You') link — if the recipient wanted an AI answer, they would have asked the AI themselves
- People who delegate writing to AI suffer cognitive atrophy, forgetting what they supposedly learned because they never did the actual work of thinking through the problem
- Open source projects are being flooded with AI-generated PRs and issues that disguise themselves as genuine contributions, overwhelming maintainers who previously relied on effort as a natural filter
- The polished appearance of AI text makes it harder to quickly identify low-effort work, skewing the economics in favor of careless contributors who now have plausible deniability
- When people ask questions in forums or professional contexts, they want human perspectives and subjective experiences that AI cannot provide — not a regurgitation of generic information they could get themselves
Opposed
- The internet was never a bastion of quality content, and trying to enforce AI etiquette norms is futile — building better tools to filter and manage attention would be more practical
- AI can serve legitimate purposes like rubber-ducking, exploratory research, and generating drafts that are then heavily edited, so blanket condemnation of AI-assisted communication goes too far
- Some AI-assisted content like YouTube videos adds genuine value by enabling knowledgeable experts who lack production skills to share their expertise
- Velocity is everything — AI-generated code and documentation are the new norm, and people who resist them are the ones falling behind