AI Help Speeds Coding, But Slows Learning—Especially Debugging

A randomized trial found that developers using AI while learning a new Python library scored 17 percentage points lower on immediate assessments (50% vs. 67%), with the gap widest in debugging, and gained only marginal, non-significant time savings. Qualitative analysis shows that heavy delegation to AI depresses learning, while using AI for explanations and conceptual questions preserves comprehension. The authors urge managers and tool designers to build and incentivize learning-oriented AI usage to maintain critical oversight skills.
Key Points
- In an RCT with 52 developers learning the Trio library, AI assistance led to significantly lower immediate mastery: 50% vs. 67% on a quiz (−17 percentage points; Cohen’s d = 0.738, p = 0.01), with the biggest gap in debugging.
- AI users finished only ~2 minutes faster on average, and the speed difference was not statistically significant, partly because participants spent substantial time composing their AI queries (up to 15 each).
- Interaction style mattered: heavy delegation or AI-led debugging correlated with low scores, while using AI to request explanations or ask conceptual questions correlated with higher comprehension.
- Findings suggest a trade-off: AI can boost productivity on familiar tasks but may impede learning and debugging skill formation when acquiring new tools.
- Managers and tool designers should intentionally incorporate learning-oriented features (e.g., explanation and study modes) to preserve skill development and oversight capacity; more longitudinal research is needed.
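The effect size cited above is Cohen's d: the difference in group means divided by the pooled standard deviation, so d ≈ 0.74 is a medium-to-large effect. A minimal sketch of the computation (the sample scores below are illustrative, not the study's data):

```python
from statistics import mean, stdev


def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd


# Illustrative quiz scores: means 4 and 3, both SDs 2, so d = (4 - 3) / 2 = 0.5.
print(cohens_d([2, 4, 6], [1, 3, 5]))  # → 0.5
```

By convention, d ≈ 0.2 is a small effect, 0.5 medium, and 0.8 large, which is why the study's 0.738 counts as a substantial difference despite the modest sample.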
Sentiment
The community broadly agrees with the study's findings that AI assistance hurts learning, especially debugging skills. There is genuine appreciation for Anthropic's transparency in publishing negative results about their own product. However, the discussion is pragmatic rather than alarmist — most commenters accept AI tools as inevitable infrastructure and focus on how to use them wisely rather than whether to use them at all. The sharpest disagreements are about implications: whether the learning trade-off matters in practice, whether junior developers are being permanently harmed, and whether the study's methodology supports the conclusions being drawn from it.
In Agreement
- AI assistance creates cognitive offloading that impedes genuine learning, particularly for debugging skills — the study confirms what many experienced developers have intuitively observed
- How you use AI matters enormously: asking conceptual questions and seeking explanations leads to much better outcomes than delegating code generation entirely
- Junior developers are at particular risk of never developing independent problem-solving skills if they rely heavily on AI from the start
- Programming is fundamentally about continuous learning, and tools that reduce the need for deep engagement with code undermine long-term competency
- The modest speed gains from AI assistance may not justify the significant learning costs, especially since the time savings were not statistically significant in this study
Opposed
- Anthropic has a conflict of interest in publishing this research: as the company selling these tools, its findings should be treated with skepticism until independently verified
- The study's small sample size and short-term assessment limit the strength of its conclusions, and the paper has noticeable quality issues in its figures
- AI dependency concerns are overblown — they mirror the same arguments made about calculators, compilers, IDEs, and the internet, all of which became essential infrastructure
- AI has dramatically improved team productivity in practice: better Jira tickets, PR descriptions, test coverage, and documentation, all essentially for free
- The analogy between learning assembly and learning to code without AI breaks down because AI is shifting the valuable skill toward requirement specification and system design rather than code writing