Knowledge Without Memory: Why LLMs Guess and Humans Don’t
September 10, 2025

Sloan contrasts an AI’s fabricated Ruby methods with a human’s ability to sense what they truly know by remembering how and when they learned it. He argues that LLMs lack experiential, structured memory, and that their weights and context windows are poor substitutes. Until AI can live in the world and accrue causally linked experiences, he contends, hallucinations will persist.
Key Points
- Humans rely on experiential, sedimentary memory that includes a felt sense of when and how knowledge was learned, helping us avoid unfounded guesses.
- LLMs hallucinate because they lack lived, episodic memory; their weights resemble inherited DNA, not accumulated experience.
- A context window is not true memory but a scratchpad of notes without ownership or continuity, leading to disorientation and fragility.
- The biological basis of human memory remains unresolved, highlighting its depth and complexity.
- Real solutions to hallucination likely require AI that lives in the world and builds stable, causally linked memories over time.
Sentiment
Mixed and nuanced, with a slight tilt away from the article’s experiential-memory thesis and toward ML-based explanations and confidence calibration.
In Agreement
- LLMs lack episodic, lived memory and thus cannot track the provenance of knowledge, leading to overconfident fabrication.
- Humans often ‘remember learning’ and weight information by source (personal experience > observation > reliable teaching > random text), enabling better self-calibration and willingness to say “I don’t know.”
- LLMs treat all text as statistics with no personal experience to cross-check, so they can’t notice when a claim lacks a learned basis.
- Reducing hallucinations likely requires systems that accumulate temporally structured, grounded experience rather than relying only on static training and transient context windows.
Opposed
- Hallucinations are primarily a bug in training objectives and benchmarks; ML-first fixes (e.g., better objectives, data, and evaluations) are more productive than anthropomorphic memory analogies.
- Models can learn confidence calibration during reinforcement learning, improving the correlation between stated confidence and accuracy (a minimal measurement sketch follows this list).
- Human memory is reconstructive and frequently wrong (e.g., eyewitness errors), so the article overstates humans’ ability to avoid confident mistakes via meta-memory.
- Brains may not ‘store’ memories in the intuitive way we feel; introspection about memory’s phenomenology is an unreliable guide to cognition.
- Strong text prediction often entails real-world knowledge (per compression arguments, sketched after this list), undermining the claim that LLMs are ‘just word statistics.’
- Anecdotal failures (e.g., mixing up legal constants and fabricating sources) highlight token-level or training issues rather than absence of experiential memory per se.
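The calibration point above is measurable in principle: a model's stated confidence can be compared against its empirical accuracy. Below is a minimal sketch, assuming we can elicit a confidence score alongside each answer and grade correctness afterward; the sample data and function names are hypothetical, for illustration only, not taken from the article or any specific model.

```python
# Minimal sketch of measuring confidence calibration.
# Each sample pairs a model's stated confidence (0-1) with whether
# the answer turned out to be correct. The data below is made up.

samples = [
    (0.95, True), (0.90, True), (0.85, False), (0.80, True),
    (0.60, True), (0.55, False), (0.40, False), (0.30, False),
]

def pearson(xs, ys):
    """Pearson correlation between stated confidence and correctness."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def expected_calibration_error(samples, bins=5):
    """Bin answers by stated confidence, then compare mean confidence
    with empirical accuracy in each bin (the standard ECE definition)."""
    buckets = [[] for _ in range(bins)]
    for conf, correct in samples:
        idx = min(int(conf * bins), bins - 1)
        buckets[idx].append((conf, correct))
    ece = 0.0
    for bucket in buckets:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(ok for _, ok in bucket) / len(bucket)
        ece += (len(bucket) / len(samples)) * abs(avg_conf - accuracy)
    return ece

confs = [c for c, _ in samples]
corrects = [float(ok) for _, ok in samples]
print("confidence/accuracy correlation:", round(pearson(confs, corrects), 3))
print("expected calibration error:", round(expected_calibration_error(samples), 3))
```

A well-calibrated model would show high correlation and low calibration error; an overconfident one would state high confidence on answers it gets wrong, which is exactly the failure mode the "I don't know" argument is about.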
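The compression argument above can also be made concrete with standard information theory; this framing is mine, not the article's. A model that predicts text well doubles as a compressor, and its only route to approaching the source's entropy is to capture the regularities, including factual ones, that generate the text.

```latex
% Code length a model q assigns to a text x_1,\dots,x_T via arithmetic coding
% (correct up to a small additive constant):
L(x) = -\sum_{t=1}^{T} \log_2 q(x_t \mid x_{<t}) \;\text{bits}

% Its per-token expectation under the true source p is the cross-entropy,
% which is minimized only as q approaches p:
H(p, q) = H(p) + D_{\mathrm{KL}}(p \,\|\, q) \;\ge\; H(p)
```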