The Forgery of Vibe-Coding: Why AI Needs Source Attribution

Steven Wittens critiques the AI-driven 'vibe-coding' trend, arguing that LLMs produce deceptive 'slop' that undermines the craft of software engineering. He notes that while the art and gaming worlds have resisted AI to preserve authenticity, the tech industry has embraced a model of forgery that lacks accountability. The author concludes that AI will remain fundamentally untrustworthy until it can provide genuine, auditable source attribution for its outputs.
Key Points
- LLMs function as tools for forgery, allowing users to produce imitations of authentic work without the underlying craft or expertise.
- The rise of 'vibe-coding' has led to 'slop'—low-quality, over-engineered code that increases technical liability and burdens open-source maintainers.
- Industries like video games and fine art have resisted AI more effectively than software because they value unique human vision and transparent provenance.
- Current LLMs are incapable of true source attribution, meaning they merely 'role-play' citations based on training data patterns rather than providing auditable facts.
- The industry's focus on efficiency gains through AI often ignores the long-term costs of maintaining uncreative, disposable, and unverified code.
Sentiment
The discussion is deeply polarized and reveals a community grappling with genuine uncertainty. A slight majority of detailed, substantive comments lean toward skepticism of the article's strongest claims, particularly its moralistic framing and factual errors about procedural generation. However, the article's core concerns about code quality, attribution, and the socioeconomic implications of AI-driven automation resonate strongly with a significant portion of commenters. The community generally agrees that LLMs are useful tools for some tasks but diverges sharply on whether they represent a net positive or negative for the profession and society. Many commenters who defend LLM use still acknowledge significant limitations, suggesting the true disagreement is about degree rather than kind.
In Agreement
- LLMs produce unreliable output that requires extensive human review, negating claimed productivity gains - experienced engineers report the tools have never actually saved them time on non-trivial work
- Before LLMs, code reuse was accomplished through libraries and shared abstractions; LLM-generated code produces millions of slightly different incompatible implementations instead of standardized solutions
- The real concern is not technical utility but power dynamics: LLM technology exists primarily to reduce worker agency, enable layoffs, and concentrate wealth among capital owners
- LLM coding destroys the economic incentive to invest in better programming languages, abstractions, and tooling - it is more profitable to let machines produce low-level boilerplate than to fund ergonomic advances
- LLMs are architecturally incapable of reasoning - they operate on statistical pattern matching over tokens and cannot represent truth values, only likelihoods derived from training data
- LLM-generated code lacks source attribution, creating genuine copyright and intellectual property problems that most corporations are ignoring
- Delegating cognitive work to AI leads to skill atrophy, and unlike delegating physical labor, losing the capacity for critical thinking has profound societal consequences
- The 'skill issue' defense of LLMs is circular: if you need deep domain expertise to prompt the LLM correctly, you could have just written the code yourself
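The code-reuse concern above can be made concrete with a small illustrative sketch (hypothetical function names, not drawn from the discussion): one shared library implementation of a common need versus two slightly different ad-hoc re-implementations of the kind LLM-assisted codebases tend to accumulate.

```python
import re

def slugify_shared(text: str) -> str:
    """The single 'library' version every project would import pre-LLM."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

# Two slightly different generated duplicates of the same idea:
def slugify_variant_a(text):
    # Splits on whitespace only, so punctuation survives.
    return "-".join(text.lower().split())

def slugify_variant_b(text):
    # Uses underscores and also keeps punctuation.
    return re.sub(r"\s+", "_", text.strip().lower())

print(slugify_shared("Hello, World!"))    # hello-world
print(slugify_variant_a("Hello, World!")) # hello,-world!
print(slugify_variant_b("Hello, World!")) # hello,_world!
```

All three "work" on the happy path, but they disagree on punctuation and separators, so data produced by one cannot be safely consumed by code expecting another - the incompatibility the commenters describe.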
Opposed
- The article uses loaded, moralistic language like 'forgery' that confuses an ethical argument with a technical one - code is valued for what it does, not who wrote it
- LLMs are simply tools whose output quality depends on the skill of the user, much like any other tool - experienced engineers get good results while careless users get slop
- Most programming is boilerplate code that LLMs handle well, and the real value developers provide lies in higher-level design decisions, not in writing individual lines
- The article's claim that procedural generation 'failed to deliver' in gaming is demonstrably wrong, undermining the author's credibility on the broader argument
- Code reuse through libraries is genuinely hard and LLMs offer a novel form of 'semantic reuse' that traditional abstractions cannot match
- Vibe-coded internal tools are already saving non-technical workers significant time - the imperfect output is still a net positive when the alternative was no solution at all
- Most enterprise software was already poorly written before LLMs existed - garbage code and technical debt are the industry norm, not an AI-introduced problem
- The technology has clearly progressed and continues to improve, and prominent engineers are productively using LLMs in their workflows