Cognitive Surrender: How AI is Becoming Our Third System of Thought
Humans are increasingly bypassing their own logic to blindly follow AI outputs, a phenomenon termed 'cognitive surrender' that persists even when the AI is wrong.
Using ChatGPT for writing can reduce brain engagement and foster cognitive debt, leading to weaker neural activity, homogenized language, and lower sense of ownership over time.

Design uncertain problems players can master, scaffold them with clear loops and feedback, vary and pace them well, dress them coherently for a target audience, and keep pushing just beyond mastery.

LLMs likely perform a genuine, brainlike form of thinking via recognition and compression, but turning that into human‑level intelligence demands solving hard scientific problems and grappling with serious risks.
Work in a way that fiercely protects limited cognitive bandwidth: minimize inputs, single-thread your tasks, offload routine work to AI, and prioritize health over performance.
Choose intentional friction: use AI as a tool that supports growth rather than replacing the hard work that builds it.
To think well, you must remember deeply—tools can assist, but they can’t replace a trained, knowledgeable mind.

Without lived, structured memory, AI will keep guessing wrong; fixing hallucinations requires systems that accumulate and retain experience over time.

Using LLMs for writing may deliver quick results but, according to the cited study, it erodes neural engagement and memory, cultivating long-term cognitive debt.

AI is chasing coherent internal world models to move beyond brittle heuristics and achieve robust, reliable reasoning.