Why AI Video Is Net Harmful Today

Added Jan 5
Article: Very Negative · Community: Neutral · Divisive

The author’s attempts to adapt a short story using Sora and other AI tools revealed a generic, uncanny “AI video” aesthetic that fails to serve narrative needs. Meanwhile, bad actors exploit AI video to spread misinformation at scale, particularly targeting older adults, while debunking efforts can’t keep pace. The result is a pervasive erosion of trust in visual media, making today’s AI videos effectively harmful on net.

Key Points

  • AI video tools produce a distinct, uncanny aesthetic that feels generic and ill-suited for intentional, coherent storytelling.
  • The visual line between real and synthetic is blurring, extending the uncanny look even to human-made videos and fueling distrust.
  • Harmful actors exploit AI video at scale to spread misinformation, impersonations, and rage-bait, with older adults especially targeted.
  • Efforts to teach detection and verification lag far behind the velocity of misinformation, and audiences often engage earnestly with fabrications.
  • Net effect today: AI videos primarily cause direct and indirect harm, accelerating manipulation and corroding public trust in visual media.

Sentiment

The Hacker News community largely agrees that AI video has significant problems—particularly trust erosion, content flooding, and enabling disinformation—but strongly rejects the article’s absolutist “all harmful” framing. There is broad consensus that unlabeled AI content on platforms is a serious issue, but considerable pushback on the idea that no legitimate creative use cases exist. The discussion is notably divided between those who see fundamental harm and those who view AI video as just another tool that needs time to mature under proper platform governance.

In Agreement

  • AI video removes creativity from the process rather than enabling it—the craft of shooting and editing is itself creative, and AI strips that away by handling all the micro-decisions that constitute artistic expression
  • AI video is primarily useful to spammers, scammers, and propagandists because they only need generic output, while artists need precise control that current models cannot provide
  • Trust erosion is the deepest harm: even harmless AI videos contribute to a world where nothing visual can be trusted, and this affects everyone regardless of whether they consume AI content directly
  • YouTube and TikTok are being flooded with unlabeled AI slop—fake pet videos, AI-generated history channels, fabricated viral content—and the platforms are failing to require disclosure
  • AI-generated ads look cheap and jarring, undermining trust in advertised products, while cost savings go to shareholders rather than consumers
  • AI video models are built on mass ingestion of creative work without consent—using them participates in that extraction regardless of output quality

Opposed

  • Good AI video exists when humans stay heavily involved in scripting, editing, and acting—channels like NeuralViz demonstrate genuine creativity augmented by AI rather than replaced by it
  • The “all harmful” framing is hyperbolic—comedy, memes, and personal creative projects are legitimate and harmless use cases that the article dismisses without engagement
  • Pre-AI internet was already full of propaganda, rage-bait, and manipulation; AI is just an accelerant for existing problems, not a new category of harm
  • Losing trust in visual media faster might actually be beneficial—better to adjust expectations and develop skepticism than remain vulnerable to manipulation
  • Regulation is futile since the technology is already widely available, similar to how governments failed to regulate cryptography
  • Platform algorithms will adapt to filter slop, and audiences will learn to recognize AI aesthetic just as they learned to spot crude Photoshop edits