Don’t Cite Chatbots as Proof

Added Oct 30, 2025
Article: Negative · Community: Neutral · Divisive

The article argues that chatbot answers are not facts but probabilistic word predictions that can sound authoritative while being wrong. It cautions against citing AI outputs as proof, likening chatbots to a well-read but unsourced narrator. Readers are urged to seek verifiable sources and treat AI content as a starting point, not a conclusion.

Key Points

  • LLM responses are word predictions, not verified facts.
  • Chatbots can produce convincing but inaccurate or fabricated information.
  • The analogy of a well-read narrator who cannot cite sources captures LLMs' strength in fluency and their weakness in reliability and attribution.
  • Do not present chatbot output as authoritative proof; treat it as a starting point, not the final say.
  • Multiple studies and reports document hallucinations, overtrust, and consequences of misusing AI-generated content.

Sentiment

The community largely agrees with the core message that LLM outputs should not be treated as authoritative evidence, but is notably critical of the article's execution. Many find the 'next word prediction' framing intellectually dishonest, the tone too snarky to be persuasive, and the site unlikely to reach the people who need it most. The discussion is more nuanced than a simple agree/disagree split, with many commenters occupying a middle ground that acknowledges LLM limitations while pushing back against oversimplified dismissals.

In Agreement

  • LLMs fabricate citations, including nonexistent academic papers and journal issues, making their outputs fundamentally unreliable as authoritative sources
  • LLMs are optimized for sycophancy and plausibility rather than accuracy, producing confident-sounding text regardless of correctness
  • The process by which LLMs generate text is fundamentally different from reasoning or knowledge retrieval, so even correct outputs are arrived at through an unreliable mechanism
  • People citing ChatGPT to override domain experts is a real and growing workplace problem
  • LLM outputs should be treated as starting points for investigation, not as evidence or proof
  • LLMs present poor information with the same certainty as good information, and when corrected, produce more bad information with empty apologies

Opposed

  • The article's 'just predicting the next word' framing is reductive and a non-sequitur — it could be used to dismiss any information medium
  • Modern LLMs with tool use (Gemini, Perplexity, ChatGPT with search) can and do provide real, verifiable citations through web search
  • LLMs are often correct and continue to improve, and all sources including peer review require verification — AI is not uniquely untrustworthy
  • The site is too snarky and passive-aggressive to reach its target audience, and will only serve as self-affirmation for people who already agree
  • The real issue is digital literacy and critical thinking, not the specific medium of LLMs — this mirrors the old 'don't cite Wikipedia' rule
  • Everyone already knows LLMs are not perfectly reliable, making the site redundant and condescending