Stop AI Workslop to Unlock ROI
Many organizations see little ROI from generative AI because employees are producing workslop—outputs that look polished but create rework for others. Survey data show that workslop occurs frequently, imposes substantial time and cost burdens, and causes interpersonal fallout that undermines collaboration. Leaders must replace blanket AI mandates with clear guardrails, purpose-driven use, and collaborative norms to capture value.
Key Points
- AI adoption is high but ROI is low because employees produce workslop—polished yet shallow AI outputs that shift effort to recipients.
- Workslop is common and costly: 40% of employees encounter it monthly; each incident consumes nearly two hours, costing an estimated $186 per employee per month, or over $9 million annually in lost productivity for a 10,000-person firm (assuming 41% prevalence).
- The interpersonal toll is significant: recipients are left annoyed or confused and come to view senders as less competent and trustworthy, which harms collaboration.
- Blanket AI mandates encourage indiscriminate usage; leaders must instead provide nuanced guidance, guardrails, and norms tied to strategy and values.
- Mindsets matter: pilots (high agency and optimism) use AI to enhance creativity and outcomes, while passengers (low agency and optimism) use it to avoid work; organizations should foster pilot behaviors and integrate AI into collaborative workflows.
Sentiment
Overall, the Hacker News discussion demonstrates strong agreement with the article's premise that AI-generated 'workslop' is a significant and growing problem. Commenters widely corroborate the described negative impacts on productivity, collaboration, and employee morale through shared anecdotes and frustrations. While some suggest potential solutions, the prevailing sentiment is one of cynicism and concern regarding the current state of AI adoption, especially when driven by managerial mandates rather than thoughtful implementation.
In Agreement
- Many commenters validate the existence of 'workslop' through personal anecdotes, particularly of non-technical managers using AI to generate code or reports that appear functional but are deeply flawed and require significant effort to correct or explain.
- The sentiment that AI-generated work shifts cognitive burden from creator to receiver, causing a net loss in productivity (echoing Brandolini's law), is strongly supported, with specific examples like reviewing 2,000+ lines of AI-generated 'rats nest' code.
- The discussion reinforces the idea that AI can amplify existing corporate inefficiencies, leading to a cycle where AI-generated fluff is then summarized by AI, creating more 'bilge' rather than value.
- Commenters agree that uncritical AI adoption by managers, often for superficial 'busyness' or to appear 'cool,' leads to incomplete, incorrect, and verbose reports that lack the factual accuracy of human-generated work.
- The idea that AI-generated 'workslop' damages trust, erodes quality standards, and leads to frustration and a cynical view of corporate productivity is a pervasive theme.
- The concern that money spent on AI subscriptions isn't yielding a return and is instead adding to lost productivity aligns with the article's core argument about lack of measurable ROI.
Opposed
- Some commenters suggest that the problem of 'workslop' can be mitigated through clear policies, such as a 'no workslop policy,' or by simply demanding detailed explanations for any submitted work, implying the issue is manageable rather than inherently destructive.
- One commenter expresses optimism about the future potential of AI, imagining an 'army of mid-level engineers' that genuinely only need high-level instruction to reliably complete tasks, suggesting the current 'workslop' is a temporary phase before AI matures.