Gemini 3 Pro Comes to Gemini CLI: 5 Ways to Supercharge Your Terminal

Gemini 3 Pro is integrated into the Gemini CLI, bringing advanced reasoning, agentic coding, and multimodal tool use to the terminal. It’s available now for Google AI Ultra subscribers and paid Gemini API users, with broader access rolling out via a waitlist; enable it by updating to CLI v0.16.x and turning on Preview features. The article demonstrates five practical use cases, from full app scaffolding and image-to-code to natural-language shell commands, code documentation, and end-to-end cloud debugging.
Key Points
- Gemini 3 Pro is now available in Gemini CLI, offering state-of-the-art reasoning, agentic coding, multimodal understanding, and advanced tool use.
- Immediate access for Google AI Ultra subscribers and paid Gemini API key holders; Code Assist Enterprise is coming soon and others can join the waitlist.
- Enable by updating Gemini CLI to v0.16.x and toggling Preview features to true; Gemini 3 Pro then becomes the default (a minimal enablement sketch follows this list).
- Five showcased workflows: full app scaffolding (including 3D), image-to-app UI generation, generating complex shell commands from natural language, auto-generated code documentation, and end-to-end cloud debugging across tools.
- Goal: make the terminal an intelligent partner that accelerates everyday tasks and complex engineering work.
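A minimal sketch of the enablement steps, assuming the CLI is installed from the @google/gemini-cli npm package and that the Preview toggle lives in the CLI's settings file (the settings key shown is illustrative, not confirmed; check the official docs for the exact name):
```bash
# Update Gemini CLI to the latest release (v0.16.x or newer)
npm install -g @google/gemini-cli@latest

# Confirm the installed version
gemini --version

# Turn on Preview features, e.g. in ~/.gemini/settings.json
# (key name below is illustrative, not confirmed):
#   { "previewFeatures": true }

# With the preview enabled, launching the CLI defaults to Gemini 3 Pro
gemini
```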
Sentiment
The overall sentiment of the Hacker News discussion is mixed, leaning skeptical. There is an initial spark of interest in the Gemini 3 Pro release, but significant criticism quickly emerges: the launch's execution (access issues), prior negative experiences with the Gemini CLI, and fundamental concerns about the model's ability to interpret complex user intent accurately. Many users are satisfied with alternative AI coding tools and see no compelling reason to switch, suggesting Google's offering faces stiff competition and must prove its touted capabilities beyond marketing.
In Agreement
- Initial excitement and an intent to try the new Gemini 3 Pro CLI features upon learning of the release.
- Some users find Gemini (or similar TUI agents) to be interchangeable with other LLM tools, suggesting it performs adequately in certain contexts.
- One commenter defended the `git bisect` demo video's execution as technically correct in its sequence of commands, despite critiquing the video format itself.
Opposed
- Immediate frustration with the launch because many users could not access Gemini 3 Pro despite the announcement, leading one commenter to call it a "half-ass-launch."
- Criticism of previous Gemini CLI iterations for low rate limits, difficulty in steering the model (e.g., constant unwanted code comments), and frequent API unavailability.
- Skepticism about the value proposition, with some users stating they haven't found a compelling reason to switch from their currently favored AI coding tools like Claude Code or Codex CLI.
- General concerns about proprietary model provider CLIs, citing issues with less freedom of choice, privacy implications, and potentially restrictive fine print.
- A detailed critique of the `git bisect` demo video, arguing it exposes a fundamental LLM flaw: the model executes commands literally as requested, even when the user's intent is flawed or the command is inappropriate for the underlying goal, rather than asking for clarification or suggesting a better approach (a minimal bisect session is sketched after this list for reference).
- Technical dissatisfaction with the Gemini CLI being built in JavaScript, with a preference expressed for Rust, citing OpenAI's switch to Rust for its CLI as a positive example.
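For reference, a minimal `git bisect` session of the kind shown in the demo video (the revisions here are hypothetical); the critique above is that an agent should confirm this is the right tool for the user's actual goal before running it literally:
```bash
# Start a bisect between a known-bad and a known-good revision (illustrative refs)
git bisect start
git bisect bad HEAD        # the current commit exhibits the bug
git bisect good v1.2.0     # the last release known to work

# git checks out a midpoint commit; build/test it, then record the result
git bisect good            # or: git bisect bad

# Repeat until git reports the first bad commit, then return to the original branch
git bisect reset
```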