AI’s Real Agenda: Power, Not Productivity

Added Nov 26, 2025

The author rejects LLMs not because they work poorly but because they intrinsically centralize power and erode human agency. Tools shape thought, and integrating corporate AI into our cognitive process invites surveillance-capitalist control while devaluing craft. Real resistance comes through community care, organizing, reducing algorithmic influence, learning, and making original work.

Key Points

  • Focusing on AI output quality misses the point; fundamental, structural harms exist regardless of performance.
  • Tools shape cognition—outsourcing writing and coding to LLMs subtly rewires thought and erodes self-directed expression.
  • Adoption is often coerced by workplace expectations, interface patterns, knowledge pollution, and social pressure.
  • AI systems concentrate power for surveillance-capitalist and authoritarian interests; their resource intensity is a feature, not a bug.
  • Undermining craft and skilled labor furthers centralization and alienation; resistance lies in care, organizing, learning, and making new things.

Sentiment

The Hacker News discussion is sharply polarized, reflecting a deep divide over the article's critical stance on LLMs. Strong arguments are made both for and against its central thesis, and no clear consensus emerges. Instead, the thread is marked by intense debate, passionate defenses of opposing views, and frequent critiques of the other side's logic or motivations.

In Agreement

  • LLMs devalue craftsmanship, leading to low-quality 'slop' code, a demoralizing 'vibecoding' cycle, and a loss of traditional hacker values that prioritize deep understanding and skilled work.
  • AI tools are intrinsically designed to reinforce existing power structures, consolidating capital for surveillance-capitalist interests and undermining human autonomy, echoing Marxian alienation.
  • The 'devaluation' of programming is a real threat, impacting job security for junior developers and transforming work into 'babysitting' AI-generated code, potentially leading to a decrease in critical thinking and collective intelligence.
  • LLMs introduce significant practical problems, such as shifting work and liability onto human reviewers, generating subtle bugs (e.g., security flaws), and polluting information ecosystems with unverified content.
  • The high resource intensity and centralization of LLM development contribute to an over-reliance on a few big tech companies, posing a threat to decentralization and creating significant opportunity costs for other innovations.
  • Many programmers identify with a 'hacker ethos' that inherently involves resisting corporate control and proprietary, opaque tools, making skepticism towards LLMs a natural extension of these values.
  • The current struggles in the tech job market are exacerbated by LLMs, compounding pressures from factors like past over-hiring and leading to a more challenging environment for developers.
  • LLMs represent a fundamental shift by automating 'agency' rather than just tasks, making them categorically different from, and more threatening than, traditional deterministic tools.

Opposed

  • LLMs are powerful, evolving tools that enhance productivity by automating busywork, allowing developers to focus on more complex and creative tasks, similar to other historical technological advancements like compilers or IDEs.
  • The 'devaluation of craft' argument is often dismissed as Luddite, reactionary, or an emotional response to inevitable technological progress, with programming jobs remaining relatively well-paid and comfortable compared to other professions.
  • Claims that LLMs are inherently tools for 'power and violence' or 'fascism' are seen as hyperbolic, politically biased, lacking nuance, or even hypocritical, as technology has always presented dualities.
  • Many experienced developers report substantial personal productivity gains from LLMs in tasks like boilerplate generation, refactoring, and smart search, without feeling their skills are atrophying, suggesting effective use requires skill and experience.
  • The hacker ethos should prioritize results and efficient problem-solving over ideological purity or rejecting effective tools, even if developed by large corporations, and can even involve using AI to fight power structures.
  • Concerns about LLM reliability and output quality are often overstated; human-written software also has bugs, and skilled users can effectively manage AI output through careful review and strategic prompting.
  • The current difficulties in the tech job market are primarily attributed to economic factors like the ZIRP-era over-hiring and subsequent market corrections, rather than LLMs fundamentally eliminating jobs.
  • LLM capabilities are rapidly improving, with claims of them 'plateauing' frequently contradicted by new model releases, making past skepticism seem outdated and ill-informed, and suggesting a future with even more capable models.