Inside Codex: How the Agent Loop Builds, Calls Tools, and Stays Fast

Added Jan 24, 2026

OpenAI’s Codex CLI builds and manages an agent loop that structures prompts, executes tools, and streams results through the Responses API. It prioritizes statelessness and Zero Data Retention, using exact-prefix prompt caching and appending changes to avoid cache misses. For long threads, Codex compacts conversation state via /responses/compact to stay within context limits while preserving understanding.
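The loop described above can be sketched in a few lines. This is an illustrative stand-in, not the real Codex internals: the model is stubbed with a fake function, and the item shapes (`function_call`, `function_call_output`, `message`) mirror Responses API items only loosely.

```python
from typing import Callable

# Hypothetical sketch of a Codex-style agent loop. The model is stubbed:
# it asks for one tool call, then ends the turn with an assistant message.

def fake_model(input_items: list[dict]) -> dict:
    if not any(item["type"] == "function_call_output" for item in input_items):
        return {"type": "function_call", "name": "read_file",
                "call_id": "call_1", "arguments": '{"path": "README.md"}'}
    return {"type": "message", "role": "assistant",
            "content": "Done: the file was read."}

def run_turn(input_items: list[dict],
             tools: dict[str, Callable[[str], str]]) -> list[dict]:
    """Cycle between inference and tool execution until an assistant
    message ends the turn. Items are only ever appended, never rewritten,
    so each request body is an exact prefix of the next (cache-friendly)."""
    while True:
        output = fake_model(input_items)
        input_items.append(output)
        if output["type"] == "message":  # assistant message ends the turn
            return input_items
        result = tools[output["name"]](output["arguments"])
        input_items.append({"type": "function_call_output",
                            "call_id": output["call_id"], "output": result})

items = [{"type": "message", "role": "user", "content": "Summarize README.md"}]
history = run_turn(items, {"read_file": lambda args: "# Demo readme"})
print(len(history))  # → 4: user msg, function_call, its output, assistant msg
```

The key property is that `run_turn` never edits earlier items, which is what makes the caching strategy in the next section possible.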

Key Points

  • Codex’s agent loop cycles between model inference and tool calls until an assistant message ends the turn; prompts include system/developer instructions, tools, and layered input items.
  • Codex pre-inserts sandbox/permission rules, optional developer and user instructions (AGENTS files and skills), and environment context before the user’s message.
  • Tool usage is explicit: function_call and function_call_output items are appended to the input so that each prompt is an exact prefix of the next, enabling prompt caching.
  • Codex keeps requests stateless (no previous_response_id) to support ZDR, so it relies heavily on exact-prefix prompt caching and careful change management to avoid cache misses.
  • To prevent context overflow, Codex uses the /responses/compact endpoint to shrink conversations while retaining latent understanding via an encrypted compaction item.
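The exact-prefix property behind these points can be checked directly. A minimal sketch, with hypothetical item contents; the serialization is a stand-in for whatever the server actually keys its cache on:

```python
import json

# Illustrative only: append-only history keeps each serialized request an
# exact prefix of the next, which is what exact-prefix prompt caching needs.

def serialize(items: list[dict]) -> str:
    # Deterministic serialization stands in for the prompt the server sees.
    return json.dumps(items, sort_keys=True)

history = [
    {"type": "message", "role": "developer", "content": "sandbox rules"},
    {"type": "message", "role": "user", "content": "fix the failing test"},
]
prompts = [serialize(history)]

# Appending new items (never editing earlier ones) preserves the prefix.
history.append({"type": "function_call", "name": "shell", "call_id": "c1",
                "arguments": '{"command": "pytest"}'})
prompts.append(serialize(history))
history.append({"type": "function_call_output", "call_id": "c1",
                "output": "1 failed"})
prompts.append(serialize(history))

# Each prompt, minus its closing "]", is an exact prefix of the next.
ok = all(prompts[i][:-1] == prompts[i + 1][:len(prompts[i]) - 1]
         for i in range(len(prompts) - 1))
print(ok)  # → True
```

Editing or reordering any earlier item would break the prefix and invalidate the cache from that point on, which is why the article stresses appending over rewriting.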

Sentiment

Overall, the sentiment of the discussion is highly positive. While specific criticisms and feature requests were present, they were generally framed as areas for improvement within a tool that is otherwise seen as exceptionally effective, performant, and well-designed, often surpassing competitors. The open-source nature and transparent communication were particularly well-received.

In Agreement

  • Codex CLI offers exceptional performance, a seamless user experience, and superior reliability compared to other agentic CLIs like Claude Code and Gemini CLI, and it adheres to instructions more faithfully.
  • The `/responses/compact` endpoint, with its encrypted content preserving the model's latent understanding, is considered far and away the best in the industry for conversation compaction and context window management.
  • The open-source nature of Codex CLI is highly valuable, providing transparency into its internals and allowing users to understand its behavior and easily get definitive answers.
  • OpenAI's communication, especially by engineers like Eric Traut, regarding Codex CLI's development and issues, is exceptional.
  • The 'learning by doing' approach of the Codex agent loop, which iterates and uses tools to make progress, is a powerful and helpful method for solving complex software problems.
  • Codex CLI is remarkably efficient with context windows, enabling long conversations and coding sessions, even though building that context can interrupt flow state.
  • The Codex model itself (especially 5.2 codex high) is a 'secret weapon' that can handle complex tasks better than other powerful models like Opus.
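The compaction behavior praised above can be sketched at a high level. Everything here is an assumption for illustration: the token heuristic, the threshold, and the compaction item's fields are hypothetical, and the real encrypted content of a /responses/compact result is opaque to clients.

```python
# Hypothetical sketch of threshold-based compaction; the names and the
# token heuristic are assumptions, not the real /responses/compact contract.

def approx_tokens(items: list[dict]) -> int:
    # Crude stand-in for real tokenization: roughly 4 characters per token.
    return sum(len(str(item)) for item in items) // 4

def compact(items: list[dict], limit: int) -> list[dict]:
    """When history outgrows the limit, replace all but the most recent
    item with a single opaque compaction item, mirroring how Codex swaps
    in an encrypted summary that preserves the model's latent context."""
    if approx_tokens(items) <= limit:
        return items
    compacted = {"type": "compaction", "encrypted_content": "<opaque blob>"}
    return [compacted] + items[-1:]

history = [{"type": "message", "role": "user", "content": ("step %d" % i) * 50}
           for i in range(20)]
short = compact(history, limit=100)
print(len(short), short[0]["type"])  # → 2 compaction
```

Note the trade-off: compaction deliberately breaks the exact-prefix cache, accepting one cache miss in exchange for a much smaller history to carry forward.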

Opposed

  • There is confusion and potential disagreement about the persistence of reasoning tokens: some commenters claim they are discarded between user turns (a 'turn' being the agentic loop between user messages), causing context loss, even though OpenAI's blog post implies reasoning state survives between turns.
  • Codex CLI lacks crucial features like 'hooks' for custom logic (e.g., to reduce token consumption or steer agents) and 'checkpoints' (like Copilot) for saving conversational state, which are highly desired by users.
  • The current UI lacks clear visualization of proposed changes (diffs), which makes it harder to approve or reject edits compared to other tools, and sometimes the AI automatically approves its own planned actions.
  • A lack of real-time observability into the model's detailed thoughts and reasoning process prevents users from intervening early to steer the model away from wrong paths, leading to wasted time and tokens.
  • The process of building context can sometimes interrupt a developer's 'flow state,' even if it ultimately leads to more efficient long conversations.
  • While excellent for coding, Codex CLI may struggle with tasks slightly outside this domain, such as creative planning, and can sometimes get stuck in loops if not carefully managed.
  • One user noted that another CLI, Amp, seemed faster for quick changes by reading multiple files at once, whereas Codex would crawl files individually, potentially impacting efficiency for certain tasks.
  • Some users criticize OpenAI's overall commitment to open source, citing their original founding charter and subsequent financial exploitation of their work.