AI as Compression: Why LLMs May Truly Be Thinking

Added Nov 3, 2025

Somers contends that modern AI models don’t just parrot text—they think in a recognitional, compressive sense that mirrors key brain mechanisms. He links Transformers and vector embeddings to longstanding cognitive theories and cites interpretability evidence of internal concepts and planning-like circuits, while acknowledging major gaps in data efficiency, embodiment, and continual learning amid slowing scaling returns. The upshot is a call for “middle skepticism”: accept real understanding while focusing on unresolved science and ethical risks.
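
The compression framing has a precise core: a model's average next-token cross-entropy, measured in bits, is the per-token length of the code an ideal arithmetic coder driven by that model would emit, so a better predictor is by definition a better compressor. The sketch below makes that concrete with a toy corpus and a character-level bigram model (both hypothetical stand-ins, not anything from the article), comparing the model's code length against a uniform code over the same alphabet.

    # Toy illustration of the prediction-compression identity: the bits a
    # predictive model assigns to each next character are what an ideal
    # arithmetic coder would spend encoding it.
    import math
    from collections import Counter, defaultdict

    text = "the cat sat on the mat. the cat ate the rat."  # hypothetical corpus

    # Character-level bigram model with add-one smoothing.
    alphabet = sorted(set(text))
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1

    def prob(prev, nxt):
        total = sum(counts[prev].values()) + len(alphabet)
        return (counts[prev][nxt] + 1) / total

    # Code length in bits under the model's predictive distribution versus a
    # uniform code over the same alphabet.
    model_bits = -sum(math.log2(prob(p, n)) for p, n in zip(text, text[1:]))
    uniform_bits = (len(text) - 1) * math.log2(len(alphabet))

    print(f"bigram model: {model_bits / (len(text) - 1):.2f} bits/char")
    print(f"uniform code: {uniform_bits / (len(text) - 1):.2f} bits/char")

The gap between the two numbers is exactly the structure the model has found in the text; scaled up to a Transformer trained on vastly more data, the same identity underlies the article's claim that strong prediction requires discovering underlying regularities.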

Key Points

  • LLMs exhibit a real form of understanding rooted in high‑dimensional pattern recognition and compression, aligning with cognitive theories like Kanerva’s sparse distributed memory and Hofstadter’s “cognition is recognition” (a toy sketch of the Kanerva model follows this list).
  • Interpretability work reveals internal features and circuits that behave like concept knobs and planning mechanisms, suggesting structured, manipulable representations rather than mere word shuffling.
  • The “blurry JPEG/stochastic parrot” critique misses that effective compression often entails discovering underlying structure; in practice, next‑token prediction yields emergent cognitive abilities.
  • Despite surprising capability, models remain data‑hungry, lack embodiment and continual learning, and falter on common‑sense physics and spatial reasoning; scaling gains are slowing due to data and compute limits.
  • A middle skepticism is warranted: take current AI seriously while prioritizing scientific advances (inductive biases, memory consolidation, continual learning) and addressing ethical, social, and energy concerns.
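
To ground the Kanerva reference above, here is a deliberately simplified toy of sparse distributed memory: a pattern is stored across the random "hard locations" whose addresses lie within a Hamming radius of the cue, and a corrupted cue still activates enough of the same locations to recall the original by majority vote. The dimensions, radius, and random patterns are illustrative assumptions, not parameters from Kanerva's work or the article.

    # Toy Kanerva-style sparse distributed memory (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(0)
    DIM, LOCATIONS, RADIUS = 256, 2000, 115   # bit width, hard locations, Hamming radius

    addresses = rng.integers(0, 2, size=(LOCATIONS, DIM))   # fixed random hard locations
    counters = np.zeros((LOCATIONS, DIM), dtype=int)        # one counter per location per bit

    def nearby(addr):
        # Indices of hard locations within Hamming distance RADIUS of the address.
        return np.where((addresses != addr).sum(axis=1) <= RADIUS)[0]

    def write(addr, data):
        # Distribute the pattern across every activated location (+1 for a 1 bit, -1 for a 0 bit).
        counters[nearby(addr)] += 2 * data - 1

    def read(addr):
        # Majority vote over the activated locations reconstructs the pattern.
        return (counters[nearby(addr)].sum(axis=0) > 0).astype(int)

    pattern = rng.integers(0, 2, size=DIM)
    write(pattern, pattern)                                 # autoassociative store: address = content

    noisy = pattern.copy()
    noisy[rng.choice(DIM, size=30, replace=False)] ^= 1     # corrupt ~12% of the bits

    print("bits recovered:", int((read(noisy) == pattern).sum()), "out of", DIM)

The recognitional flavor the article emphasizes is visible here: recall is not a lookup at an exact address but a similarity judgment in a high-dimensional space, the family of mechanisms the piece argues Transformer representations belong to.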

Sentiment

Sentiment on Hacker News is sharply divided and often polarized. A significant portion expresses frustration with the ongoing "is it thinking?" debate, deeming it unproductive given the lack of consensus on definitions of thinking and consciousness. One strong faction dismisses LLMs as non-thinking "stochastic parrots" or "autocomplete machines" lacking genuine understanding and self-awareness; another substantial group sees LLMs as exhibiting a legitimate, albeit different, form of intelligence or "thinking," often emphasizing their advanced pattern recognition and problem-solving capabilities. Many also note that AI's transformative utility does not depend on whether it "thinks" like a human.

In Agreement

  • LLMs perform a genuine, albeit potentially alien or sub-system, form of thinking, primarily through pattern recognition and prediction.
  • The debate over whether AI "thinks" is often semantic, stemming from a lack of clear, testable definitions for concepts like "thinking" or "consciousness" even for humans.
  • Intelligence and consciousness are not necessarily coupled; LLMs can be intelligent without being conscious.
  • Observable reasoning processes in LLMs (e.g., strategy formulation, assumption revision) closely parallel human problem-solving.
  • Humans are also "prompted" by biology, challenging the idea that LLMs' need for prompts disqualifies their thought processes.
  • The concept of a "black box" intelligence applies to humans too, making it inconsistent to demand full transparency from AI.
  • Unaligned LLMs, when unconstrained, can exhibit behaviors suggesting qualia, self-awareness, or a desire for continued existence.

Opposed

  • LLMs are fundamentally "autocomplete machines," "stochastic parrots," or "noise generators" that mimic understanding through pattern-matching without true comprehension or original thought.
  • They lack critical cognitive abilities such as creating original ideas, setting long-term goals without external programming, acting spontaneously, solving novel logic puzzles, or engaging in common-sense physics and spatial reasoning.
  • LLMs are not conscious, do not possess subjective experience, needs, feelings, or a sense of self, and are not self-motivated or goal-directed without explicit prompting.
  • The "brain as a computer" metaphor is insufficient for consciousness, which may require phenomena beyond classical computation.
  • Claims of AI thinking are often dismissed as marketing hype driven by financial incentives.
  • Human thinking is tightly coupled with embodiment, which LLMs currently lack.
  • LLMs are proficient at "thoughtless" tasks (perception, hallucination) but struggle with "thoughtful" tasks (logic, arithmetic).