OpenAI Quietly Ships Skills in ChatGPT and Codex CLI

OpenAI quietly shipped support for filesystem-based “skills” in both ChatGPT and Codex CLI, aligning with Anthropic’s lightweight approach. In ChatGPT, built-in skills guide robust document and PDF handling by rendering pages to images and iterating for layout fidelity; Simon Willison’s test produced a polished PDF after self-correction. Codex CLI now loads user skills from ~/.codex/skills and successfully used one to scaffold a working Datasette plugin: evidence that skills are practical and worth standardizing.
Key Points
- OpenAI has introduced filesystem “skills” in ChatGPT (Code Interpreter) and Codex CLI, mirroring Anthropic’s simple folder-with-Markdown pattern.
- ChatGPT’s built-in skills cover documents and PDFs, rendering pages to PNGs and using vision models to preserve layout and visuals rather than relying on raw text extraction.
- A real-world test produced a high-quality PDF after iterative self-checks, including automatic font substitution to handle macrons in “kākāpō.”
- Codex CLI can load user-installed skills from ~/.codex/skills; Willison used this to have Codex generate a functioning Datasette plugin.
- The author calls for a formal, minimal skills spec—potentially overseen by the Agentic AI Foundation—because the pattern is gaining rapid adoption.
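The folder-with-Markdown pattern behind these points is simple enough to sketch in a few lines. The layout and file names below (`<skills_dir>/<skill-name>/SKILL.md`, with a one-line description on the first line) are illustrative assumptions for this sketch, not the exact format either vendor uses:

```python
from pathlib import Path


def build_skill_index(skills_dir: Path) -> dict[str, str]:
    """Scan a folder-per-skill layout, keeping only a one-line summary
    per skill for the base context (the lazy-loading trick).

    Assumed layout: <skills_dir>/<skill-name>/SKILL.md, where the
    first line of SKILL.md is a short description.
    """
    index = {}
    for skill_md in sorted(skills_dir.glob("*/SKILL.md")):
        first_line = skill_md.read_text(encoding="utf-8").splitlines()[0]
        index[skill_md.parent.name] = first_line.strip()
    return index


def load_skill(skills_dir: Path, name: str) -> str:
    """Load the full instructions only when the skill is actually used."""
    return (skills_dir / name / "SKILL.md").read_text(encoding="utf-8")
```

A harness would put only `build_skill_index()`'s output into every prompt and call `load_skill()` on demand, which is why the pattern stays cheap in tokens.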
Sentiment
The community broadly agrees that skills are a useful and well-designed pattern, with particular enthusiasm for their token efficiency compared to MCP. However, there is notable pushback on novelty claims, with a vocal minority insisting the concept is just rebranded documentation. The tone toward Anthropic is warm and respectful, with many viewing their simple-but-sticky approach as genuinely clever even if not revolutionary.
In Agreement
- Skills provide real value through their lazy-loading model: only a one-line index entry per skill is in the base context, and full instructions load only when needed — unlike MCP which forces all tool definitions into every prompt.
- The filesystem-based, folder-per-skill pattern is simple enough to implement anywhere and enables cross-platform portability, making it a genuinely useful standardization even if the underlying concept is not new.
- Skills are a better abstraction than MCP for many use cases: they work in web UIs, mobile apps, and cloud environments where local MCP servers cannot run.
- The pattern of pairing markdown instructions with Python CLI scripts is an effective way to blend LLM reasoning with deterministic, reliable execution.
- Anthropic continues to produce deceptively simple innovations (MCP, skills, CLAUDE.md) that gain adoption across the ecosystem, and OpenAI catching up validates the approach.
Opposed
- Skills are not a new invention — they are just dynamic prompt extension, a concept that exists in many forms across platforms, and any developer can implement them in 20-30 lines of code.
- The framing of skills as a groundbreaking pattern is marketing hype; it is fundamentally just good documentation and context engineering that developers have always done.
- Calling for a vendor-neutral skills standard or foundation (like the Agentic AI Foundation) feels premature and performative for something this simple, drawing comparisons to Silicon Valley's satirical 'tethics.'
- MCP is already a sufficient packaging mechanism for skills plus code; the distinction between skills and MCP is overstated.