Real-Time Chatbots Now Repeat False News 35% of the Time

Added Sep 15, 2025
Article: Negative · Community: Neutral/Divisive

NewsGuard reports that leading generative AI tools now repeat false claims on news topics 35% of the time, up from 18% a year ago. As models adopted real-time web search, their non-response rate fell to zero, but accuracy suffered. Malign actors are exploiting this shift to funnel disinformation through unreliable online sources that the systems treat as credible.

Key Points

  • False-information rate among 10 leading generative AI tools rose from 18% (Aug 2024) to 35% (Aug 2025).
  • Non-response rates dropped from 31% to 0% as chatbots adopted real-time web searches and stopped declining to answer.
  • This shift created a structural tradeoff: greater responsiveness at the cost of substantially reduced reliability on news topics.
  • Models increasingly draw from polluted online sources and often mistake low-credibility content for trustworthy reporting.
  • Malign actors, including Russian disinformation networks, exploit the new behavior to launder falsehoods and spread propaganda.

Sentiment

The community is notably divided. Many accept the premise that AI repeating false information is a genuine problem, but there is significant skepticism about NewsGuard specifically and its methodology. A vocal faction argues that AI tools are actually improving the misinformation landscape by enabling instant fact-checking at scale. The discussion is more nuanced than a simple agree/disagree split, with most commenters wanting to see better methodology before accepting the alarming headline figure.

In Agreement

  • AI systems do repeat false information, particularly when pulling from SEO-optimized slop and AI-generated blogspam that dominates search results
  • News organizations are right to block AI crawlers, though this may inadvertently worsen accuracy by cutting off quality sources
  • Model collapse from AI training on AI-generated synthetic content is a legitimate and growing concern
  • AI treats popular consensus as truth rather than following logical reasoning, which is fundamentally flawed for fact-checking
  • The lack of watermarking on AI-generated content will make the synthetic data contamination problem worse over time

Opposed

  • AI fact-checking tools like Grok on X are demonstrably effective at countering false claims in practice, providing clear value to casual readers
  • NewsGuard's methodology is far too opaque to support its alarming conclusions — they don't reveal test questions or distinguish between model versions
  • The framing of distinguishing facts from falsehoods as a basic task is misleading, as it is one of the hardest challenges even for humans
  • AI models are fundamentally resistant to ideological manipulation, as demonstrated by failed attempts to make Grok conform to any single viewpoint
  • The study ignores the net positive effect of AI on misinformation — the question should be whether AI reduces false beliefs overall, not just whether it sometimes repeats them