Gemini 3 Pro Comes to Gemini CLI: 5 Ways to Supercharge Your Terminal

Gemini 3 Pro is integrated into the Gemini CLI, bringing advanced reasoning, agentic coding, and multimodal tool use to the terminal. It’s available now for Google AI Ultra subscribers and paid Gemini API users, with broader access rolling out via a waitlist; enable it by updating to CLI v0.16.x and turning on Preview features. The article demonstrates five practical use cases, from full app scaffolding and image-to-code to natural-language shell commands, code documentation, and end-to-end cloud debugging.
Key Points
- Gemini 3 Pro is now available in Gemini CLI, offering state-of-the-art reasoning, agentic coding, multimodal understanding, and advanced tool use.
- Immediate access for Google AI Ultra subscribers and paid Gemini API key holders; Code Assist Enterprise is coming soon and others can join the waitlist.
- Enable it by updating Gemini CLI to v0.16.x and setting the Preview features toggle to true; Gemini 3 Pro then becomes the default model.
- Five showcased workflows: full app scaffolding (including 3D), image-to-app UI generation, natural-language complex shell commands, auto-generated code documentation, and end-to-end cloud debugging across tools.
- Goal: make the terminal an intelligent partner that accelerates everyday tasks and complex engineering work.
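A minimal sketch of the enablement steps above. The `@google/gemini-cli` npm package name is the CLI's published package; the exact location of the Preview-features toggle is an assumption and may vary by release, so verify it in the CLI's own settings dialog:

```shell
# Update the Gemini CLI to the latest release (v0.16.x or newer)
npm install -g @google/gemini-cli@latest

# Confirm the installed version is v0.16.x or later
gemini --version

# Launch the CLI, then turn on Preview features from its in-app
# settings; per the article, Gemini 3 Pro then becomes the default.
gemini
```

Access still depends on your account tier: per the article, Google AI Ultra subscribers and paid Gemini API users get it immediately, while others join a waitlist.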
Sentiment
The community is predominantly skeptical. Most commenters prefer competing tools and dismiss the article as marketing content. While some acknowledge Gemini 3 Pro as a capable model, the CLI itself draws criticism for reliability issues, access restrictions, and a demo that inadvertently highlights LLM limitations rather than strengths.
In Agreement
- Gemini 3 Pro's rate limits are described as very generous, with Google claiming only a small fraction of power users will hit them
- Writing the CLI in TypeScript makes sense for deep integration with the JS/TS ecosystem, MCP tooling, and Electron-based editors
- The model is worth trying as a new entrant in the coding agent space
Opposed
- The article is a marketing listicle that should not have reached the HN front page
- The git bisect demo reveals a fundamental LLM problem: the model blindly complied with an incorrect request instead of pushing back
- Gemini CLI is unreliable, frequently crashing or becoming unavailable mid-session
- Access is artificially restricted to paid subscribers despite the model being generally available elsewhere
- Provider-specific CLI tools are a trap that limits choice, privacy, and freedom
- Other tools like Claude Code and Codex with GPT-5 are superior in practice
- Some paying subscribers still hit rate limits constantly