The Etiquette of AI: Why You Must Curate Your Chatbot's Output

Added Mar 3
Article: Neutral
Community: Positive/Mixed
The author proposes a social rule against sharing unedited AI-generated text, arguing that it lacks human intention and burdens the reader. To communicate effectively, one should either write manually to clarify their own thinking or strictly curate AI output for brevity and relevance. By adding human context to AI summaries, we can maintain respectful and efficient professional interactions.

Key Points

  • Human communication is valuable because it conveys a person's developed beliefs and intentions, which raw AI output often obscures.
  • Unedited AI text creates an energy asymmetry where the reader spends more effort deciphering a message than the sender spent generating it.
  • Writing is a thinking process that helps authors understand their own points of view; bypassing it leads to lower quality communication.
  • Effective curation of AI output requires applying human awareness to prioritize information and remove needless words.
  • In professional settings like Pull Requests, human-written blurbs should be used to frame and endorse agent-generated summaries.

Sentiment

The community broadly agrees with the article's anti-slop stance, but the discussion is muddied by the fact that a large majority of commenters responded to the title rather than the article's actual content, turning it into a debate about customer support chatbots. Among those who engaged with the actual argument, there is strong agreement that uncurated AI output in professional and personal communication is a problem. A smaller contingent pushes back, arguing that AI quality will improve and that the real issue is curation standards rather than AI itself. The thread carries a notable undercurrent of frustration — both at AI slop in general and at fellow commenters for not reading the article.

In Agreement

  • LLMs act as 'misunderstanding amplifiers': they sound confident but lack the context to convey jargon or nuance correctly, so pasting their output without curation propagates misunderstandings.
  • AI-generated PR descriptions are verbose, filled with meaningless fluff, leak irrelevant implementation details, and critically lack the 'why' context that only a human author can provide.
  • If a person can provide enough context to an LLM to produce useful output, they should just communicate that context directly; the LLM middleman only adds noise the reader must filter out.
  • Writing is an essential exercise for clear thinking, and offloading it entirely to LLMs causes both the writer's understanding and their writing ability to atrophy.
  • Pasting unedited chatbot output into human conversations is fundamentally rude and dismissive, demonstrating that the sender values their own time over the recipient's.
  • The 'just send me the prompt' philosophy resonated strongly: readers would rather see the human's actual inputs than the LLM's expanded, filler-laden output.
  • AI slop is more insidious than previous forms of low-quality internet content because it is more widespread, more socially acceptable, and harder to identify.

Opposed

  • What matters is quality and signal, not whether content is AI-generated; if AI output were genuinely better and more useful than human writing, nobody would object to it.
  • LLMs can legitimately add value by taking terse context and making it more accessible to readers who lack background knowledge, and larger output is not necessarily slop.
  • AI writing serves an accessibility function for people who cannot express their ideas well on their own, and blanket rejection of AI-assisted writing is exclusionary.
  • Complaints about AI-generated content reflect an aging readership resistant to technological change rather than a genuine quality problem, and AI content quality will rapidly improve.
  • People cannot reliably distinguish well-crafted AI-generated text from human writing, and claims of obvious 'AI smell' suffer from survivorship bias.
  • The article's framing is overly idealistic about the quality and efficiency of typical human-to-human communication; humans also produce verbose, unclear, low-signal writing.
  • AI-generated PR descriptions that lack design intent are a process problem, not an AI problem; teams should require intent regardless of authorship.