Gemini 3 Pro launches: agentic coding meets multimodal app building

Google introduces Gemini 3 Pro, a milestone model with stronger reasoning, coding, and multimodal performance, available in preview via the Gemini API and Google AI Studio. It powers agentic development through Google Antigravity, new bash tools, and deeper tool use, and enables one-shot “vibe coding” that turns a single prompt into a full app. The model pairs a 1M-token context window with improved visual, spatial, and video reasoning, while the API gains thinking-level and media-resolution controls plus stricter thought-signature validation for reliable multi-turn agent sessions.
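As a rough sketch of what those new API controls look like in practice, assuming the google-genai Python SDK at a version that exposes the Gemini 3 thinking-level control, and the preview model ID gemini-3-pro-preview (the prompt and parameter values here are illustrative, not from the announcement):

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # preview model ID at launch
    contents="Plan a module-by-module refactor of a large legacy service.",
    config=types.GenerateContentConfig(
        # Thinking-level control: trade latency and cost against reasoning depth.
        thinking_config=types.ThinkingConfig(thinking_level="high"),
        # Media-resolution control: caps tokens spent per image or video frame;
        # only takes effect when media parts are attached to `contents`.
        media_resolution=types.MediaResolution.MEDIA_RESOLUTION_LOW,
    ),
)
print(response.text)
```

For multi-turn builds, the stricter validation means the thought_signature returned on response parts should be sent back unmodified on later turns; the SDK's chat helper (client.chats.create) is designed to handle that bookkeeping automatically.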
Key Points
- Gemini 3 Pro launches as Google’s most capable model, surpassing prior versions in reasoning, coding, and multimodal benchmarks, with a 1M-token context window.
- Agentic coding is a core focus: 54.2% on Terminal-Bench 2.0, deep tool use, and integrations with Google Antigravity, Gemini CLI, Android Studio, and popular IDEs.
- Google Antigravity debuts as a free, cross-platform agentic development platform where autonomous agents plan and execute tasks across editor, terminal, and browser.
- The Gemini API adds client- and server-side bash tools, and now lets Google Search grounding and URL context be combined with structured outputs for robust agent workflows (see the sketch after this list).
- “Vibe coding” enables full-app generation from a single prompt; Google AI Studio’s Build mode and annotations streamline going from idea to AI-native app.
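A minimal sketch of the grounding-plus-structured-output pattern referenced above, under the same SDK assumptions; the schema, prompt, and URL are hypothetical:

```python
from google import genai
from google.genai import types
from pydantic import BaseModel

# Hypothetical output schema for illustration.
class ReleaseDigest(BaseModel):
    product: str
    headline_features: list[str]
    context_window_tokens: int

client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # assumed preview model ID
    contents=(
        "Using web search and the page at https://example.com/gemini-3-launch, "
        "summarize the release as structured data."
    ),
    config=types.GenerateContentConfig(
        tools=[
            types.Tool(google_search=types.GoogleSearch()),  # Search grounding
            types.Tool(url_context=types.UrlContext()),      # fetch the cited URL
        ],
        # Per the launch notes, structured output can now be combined
        # with the tools above in a single request.
        response_mime_type="application/json",
        response_schema=ReleaseDigest,
    ),
)
print(response.parsed)  # parsed into a ReleaseDigest instance
```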
Sentiment
The overall sentiment of the Hacker News discussion is cautiously skeptical and critical, heavily influenced by 'AI fatigue' and a general distrust of corporate hype and benchmarks. While some positive early impressions exist regarding specific capabilities, these are often overshadowed by widespread concerns about Google's product longevity, complex billing, and a demand for more transparency regarding model limitations and failure modes. The dominant tone suggests a pragmatic, 'show-me-the-data' approach rather than embracing immediate hype.
In Agreement
- Gemini 3.0 demonstrates significant improvements in spatial understanding and speed, particularly for 3D CAD model generation, with some preliminary evaluations suggesting it outperforms GPT-5 and GPT-5.1.
- The model's ability to generate 3D models from simple sketches is described as 'amazing' and a clear advancement over previous versions.
- There's a sentiment that Gemini 3 is a genuinely impressive release, potentially representing a major leap forward since GPT-4, possibly influencing competitors to accelerate their own releases.
- Gemini 2.5 was already effective as a 'study buddy' for complex courses, implying an expectation of even better performance from Gemini 3.
- Gemini 3 Pro’s pricing is noted as more competitive than Claude Sonnet 4.5’s, making it an attractive option for some users.
- Some acknowledge the rapid pace of AI advancement, noting that capabilities once considered 'magic' are now becoming commonplace.
Opposed
- Many users express 'AI fatigue' and skepticism that recent model releases, including Gemini 3, offer significant real-world improvements, often feeling over-hyped.
- There's a demand for transparency regarding failure modes alongside success stories, with some viewing marketing without this as mere advertisement.
- A strong distrust of industry hype, with advice to conduct private benchmarks rather than relying on community comments or marketing claims.
- Skepticism about the validity and utility of current benchmarks (e.g., SWE-Bench), which are often seen as gamed, not reflective of real-world performance, or potentially influenced by training data.
- Frustration with Google's convoluted and confusing subscription and API access models, making it difficult to understand how to use and pay for the models.
- Widespread concern about Google's reputation for discontinuing products, leading many to hesitate investing time in new Google services until their longevity is proven (the 'Google graveyard' effect).
- Questions about Gemini 3 Pro's specific performance on benchmarks like SWE Bench, noting it doesn't lead, and pointing out that even on Terminal-Bench 2.0, other models like GPT 5.1-codex might be superior.
- Some express disappointment that AI integrations continue to rely on older tools like Bash, rather than moving towards new paradigms.
- A general suspicion of vendor lock-in with these new tools, with some users wishing they could self-host or own their AI models.