Knowledge Without Memory: Why LLMs Guess and Humans Don’t

Added Sep 10, 2025
Article: Neutral · Community: Neutral · Divisive

Sloan contrasts an AI's fabricated Ruby methods with a human's ability to sense what they truly know by remembering how they learned it. He argues that LLMs lack experiential, structured memory, and that their weights and context windows are poor substitutes. Until AI can live in the world and accrue causally linked experiences, hallucinations will persist.

Key Points

  • Humans rely on experiential, sedimentary memory that includes a felt sense of when and how knowledge was learned, helping us avoid unfounded guesses.
  • LLMs hallucinate because they lack lived, episodic memory; their weights resemble inherited DNA, not accumulated experience.
  • A context window is not true memory but a scratchpad of notes without ownership or continuity, leading to disorientation and fragility (a minimal illustration follows this list).
  • The biological basis of human memory remains unresolved, highlighting its depth and complexity.
  • Real solutions to hallucination likely require AI that lives in the world and builds stable, causally linked memories over time.
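
The scratchpad point above is easy to see in code. The snippet below is a minimal sketch, not any real model's behavior: the token budget and the word-count stand-in for a tokenizer are assumptions chosen for brevity.

    # A context "window" keeps only the most recent text that fits a fixed budget.
    # Older notes fall off silently; nothing is consolidated into durable memory.
    def fit_to_window(messages: list[str], budget_tokens: int = 8000) -> list[str]:
        """Keep the newest messages that fit the budget; drop everything older."""
        kept: list[str] = []
        used = 0
        for msg in reversed(messages):          # walk from newest to oldest
            cost = max(1, len(msg.split()))     # crude stand-in for a real tokenizer
            if used + cost > budget_tokens:
                break                           # older history is simply lost
            kept.append(msg)
            used += cost
        return list(reversed(kept))             # restore chronological order

    history = [f"note {i}: something the agent once worked out" for i in range(10_000)]
    window = fit_to_window(history)
    print(len(history), "notes written,", len(window), "still visible")

Everything outside the returned window is gone as far as the model is concerned, which is the "notes without ownership or continuity" problem in one function.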

Sentiment

The community is engaged and finds the topic genuinely interesting, but opinion is notably divided. Many appreciate the article's poetic framing and core insight about episodic memory's role in knowledge confidence, but a substantial contingent pushes back on technical grounds — arguing the brain analogy is imprecise, the neuroscience claims are outdated, and hallucinations are better understood through ML mechanics than cognitive metaphors. The tone is constructive rather than hostile, with several long, detailed exchanges that advance the conversation.

In Agreement

  • A memory neuroscientist appreciates the episodic memory framing and suggests that reasoning models' multi-pass evidence search may be a step toward the 'I remember' (vs. 'I know') distinction the article describes
  • The distinction between LLMs as passive observers of text versus humans who learn through active real-world interaction resonates — LLMs lack the embodied, causal feedback loop essential for building reliable knowledge
  • Source-aware training research supports the article's intuition that humans naturally track who-claims-what, and LLMs could benefit from similar provenance tracking during training (a toy sketch of the idea follows this list)
  • Written documentation serves as an external knowledge cache, and without fundamental LLM changes, high-quality written knowledge is the best path to helping agents learn
  • The article captures something real about how humans sense the difference between solid knowledge and guesses, even if the mechanism is debated
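
The provenance point above can be made concrete with a small preprocessing sketch. The field names and header format here are assumptions for illustration, not the scheme used by any particular source-aware-training paper, and the two example claims are chosen only to echo the article's fabricated-Ruby-method anecdote.

    # Tag each training example with where the claim came from, so "who claims
    # what" is part of the text the model learns from.
    from dataclasses import dataclass

    @dataclass
    class Example:
        text: str
        source: str   # e.g. a domain, author, or document id
        date: str     # when the claim was made or collected

    def with_provenance(ex: Example) -> str:
        """Prepend a provenance header so the claim and its origin are learned together."""
        return f"[source: {ex.source} | date: {ex.date}]\n{ex.text}"

    corpus = [
        Example("Array#sum was added in Ruby 2.4.", source="ruby-lang.org", date="2016-12-25"),
        Example("Array#average is a built-in Ruby method.", source="random-forum-post", date="2013-03-02"),
    ]
    for ex in corpus:
        print(with_provenance(ex))

A model trained on text like this at least sees that the second claim rests on weaker authority: a mechanical cousin of the "I remember where I learned this" signal the article describes.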

Opposed

  • Hallucinations are better understood through first-principles ML analysis (training objectives, benchmarks) than through cognitive science metaphors — systematic research is more productive than analogies
  • Hallucinations aren't a 'bug' or pathology — they're normal LLM outputs that happen to be unhelpful, and the term itself is misleading because it implies a fixable malfunction
  • A software engineer with temporal lobe epilepsy reports that poor episodic memory doesn't prevent functional semantic memory or the ability to sense uncertainty, directly challenging the article's core premise
  • The article overstates neuroscience ignorance — recent research has identified neural correlates of working and episodic memory, and the field is further along than the article suggests
  • Humans also hallucinate and confabulate regularly (false memories, unreliable eyewitness testimony), so the human-LLM contrast is overdrawn
  • Models can already learn calibration through reinforcement learning, showing positive correlation between stated confidence and accuracy (a minimal version of that check is sketched below)
  • Comparing LLMs and brains is inherently misguided — the Transformer was designed as a language model, not a cognitive architecture, and anthropomorphizing it leads to confused analysis
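
The calibration claim in the last point boils down to a simple check: does a model's stated confidence correlate with being right? A minimal, hedged sketch follows; the numbers are made up for illustration, and statistics.correlation requires Python 3.10+.

    # Correlate stated confidence with whether each answer was correct.
    # A clearly positive value suggests the confidence signal carries real information.
    from statistics import correlation

    stated_confidence = [0.95, 0.90, 0.80, 0.60, 0.55, 0.30, 0.20]
    was_correct       = [1,    1,    1,    1,    0,    0,    0]

    print(f"confidence/accuracy correlation: {correlation(stated_confidence, was_correct):.2f}")

This is only a sanity check, not proof of calibration; it says nothing about whether the confidence values are well scaled, but it is the kind of evidence the opposed point appeals to.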