Gemini 2.5 Pro is our most advanced model yet, excelling at coding and complex prompts.
Pro performance

- Enhanced reasoning: State-of-the-art on key math and science benchmarks.
- Advanced coding: Easily generate code for web development tasks.
- Natively multimodal: Understands input across text, audio, images, and video.
- Long context: Explore vast datasets with a 1-million-token context window.
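To make the 1-million-token window concrete, here is a minimal sketch of a pre-flight budget check for a large corpus. The 4-characters-per-token ratio is a common rule of thumb for English text, not the model's actual tokenizer, and the 64k reserve mirrors the output limit listed later on this page; real counts should come from the provider's token-counting endpoint.

```python
# Rough token-budget check before sending a large corpus to a
# long-context model. CHARS_PER_TOKEN is a heuristic assumption,
# not the real tokenizer.

CONTEXT_WINDOW = 1_000_000  # 1M-token input window
CHARS_PER_TOKEN = 4         # rough English-text heuristic

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(docs: list[str], reserve: int = 64_000) -> bool:
    """Check whether all documents fit, reserving room for the reply."""
    total = sum(estimate_tokens(d) for d in docs)
    return total <= CONTEXT_WINDOW - reserve

docs = ["lorem ipsum " * 10_000]  # ~120k characters -> ~30k tokens
print(fits_in_context(docs))      # well under the 1M budget
```

Anything flagged as over budget would need chunking, retrieval, or context caching rather than a single request.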
Preview

Native audio

Converse in more expressive ways with native audio outputs that capture the subtle nuances of how we speak. Seamlessly switch between 24 languages, all with the same voice.

- Natural conversation: Remarkable quality with appropriate expressivity and prosody, delivered at low latency so you can converse fluidly.
- Style control: Use natural-language prompts to adapt delivery within the conversation, steering it to adopt accents and produce a range of tones and expressions.
- Tool integration: Gemini 2.5 can use tools and function calling during dialogue, allowing it to incorporate real-time information or use custom developer-built tools.
- Conversation context awareness: Our system is trained to discern and disregard background speech, ambient conversations, and other irrelevant audio.
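The tool-integration loop above can be sketched locally. The declaration and dispatch shapes below loosely mirror the Gemini API's `functionCall` / `functionResponse` JSON, but no network call is made: the model's turn is a hypothetical payload written by hand so the round trip is runnable as-is.

```python
# Local simulation of a function-calling round trip: the model asks for
# a tool, the client executes it, and the wrapped result would be sent
# back for the model's final answer.

def get_weather(city: str) -> dict:
    """Developer-built tool the model can ask us to run (stubbed data)."""
    return {"city": city, "temp_c": 21, "condition": "sunny"}

TOOLS = {"get_weather": get_weather}

# Hypothetical model turn requesting a tool call (for illustration only).
model_turn = {"functionCall": {"name": "get_weather",
                               "args": {"city": "Paris"}}}

def dispatch(turn: dict) -> dict:
    """Execute the requested tool and wrap the result as a
    functionResponse for the next model turn."""
    call = turn["functionCall"]
    result = TOOLS[call["name"]](**call["args"])
    return {"functionResponse": {"name": call["name"], "response": result}}

print(dispatch(model_turn))
```

In a real dialogue session the `functionResponse` is appended to the conversation and the model produces a spoken or textual answer grounded in the tool's output.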
Trusted testers

Gemini 2.5 Pro Deep Think

We're making Gemini 2.5 Pro even better by introducing an enhanced reasoning mode called Deep Think. It uses our latest cutting-edge research in reasoning, including parallel thinking techniques, resulting in incredible performance.
Methodology

All Gemini results come from our own runs. USAMO 2025: https://matharena.ai. LiveCodeBench V6: o3 High from internal runs, since numbers are not available on the official leaderboard; o4-mini High: https://livecodebench.github.io/leaderboard.html (2/1/2025-5/1/2025). MMMU: self-reported by OpenAI.
Vibe-coding nature with 2.5 Pro

Images of nature transformed into code-based representations of their natural behavior.
Hands-on with 2.5 Pro
See how Gemini 2.5 Pro uses its reasoning capabilities to create interactive simulations and do advanced coding.
Benchmarks
Gemini 2.5 Pro leads common benchmarks by meaningful margins.
| Benchmark | Gemini 2.5 Pro Thinking | OpenAI o3 High | OpenAI o4-mini High | Claude Opus 4 32k thinking | Grok 3 Beta Extended thinking | DeepSeek R1 05-28 |
|---|---|---|---|---|---|---|
| Input price, $/1M tokens (no caching) | $1.25 ($2.50 > 200k tokens) | $10.00 | $1.10 | $15.00 | $3.00 | $0.55 |
| Output price, $/1M tokens | $10.00 ($15.00 > 200k tokens) | $40.00 | $4.40 | $75.00 | $15.00 | $2.19 |
| Reasoning & knowledge: Humanity's Last Exam (no tools) | 21.6% | 20.3% | 14.3% | 10.7% | — | 14.0%* |
| Science: GPQA diamond (single attempt) | 86.4% | 83.3% | 81.4% | 79.6% | 80.2% | 81.0% |
| Science: GPQA diamond (multiple attempts) | — | — | — | 83.3% | 84.6% | — |
| Mathematics: AIME 2025 (single attempt) | 88.0% | 88.9% | 92.7% | 75.5% | 77.3% | 87.5% |
| Mathematics: AIME 2025 (multiple attempts) | — | — | — | 90.0% | 93.3% | — |
| Code generation: LiveCodeBench (UI: 1/1/2025-5/1/2025, single attempt) | 69.0% | 72.0% | 75.8% | 51.1% | — | 70.5% |
| Code editing: Aider Polyglot | 82.2% (diff-fenced) | 79.6% (diff) | 72.0% (diff) | 72.0% (diff) | 53.3% (diff) | 71.6% |
| Agentic coding: SWE-bench Verified (single attempt) | 59.6% | 69.1% | 68.1% | 72.5% | — | — |
| Agentic coding: SWE-bench Verified (multiple attempts) | 67.2% | — | — | 79.4% | — | 57.6% |
| Factuality: SimpleQA | 54.0% | 48.6% | 19.3% | — | 43.6% | 27.8% |
| Factuality: FACTS grounding | 87.8% | 69.6% | 62.1% | 77.7% | 74.8% | — |
| Visual reasoning: MMMU (single attempt) | 82.0% | 82.9% | 81.6% | 76.5% | 76.0% | no MM support |
| Visual reasoning: MMMU (multiple attempts) | — | — | — | — | 78.0% | no MM support |
| Image understanding: Vibe-Eval (Reka) | 67.2% | — | — | — | — | no MM support |
| Video understanding: VideoMMMU | 83.6% | — | — | — | — | no MM support |
| Long context: MRCR v2 (8-needle), 128k (average) | 58.0% | 57.1% | 36.3% | — | 34.0% | — |
| Long context: MRCR v2 (8-needle), 1M (pointwise) | 16.4% | no support | no support | no support | no support | no support |
| Multilingual performance: Global MMLU (Lite) | 89.2% | — | — | — | — | — |
| Model information | 2.5 Pro |
|---|---|
| Model deployment status | General availability |
| Supported data types for input | Text, Image, Video, Audio, PDF |
| Supported data types for output | Text |
| Supported # tokens for input | 1M |
| Supported # tokens for output | 64k |
| Knowledge cutoff | January 2025 |
| Tool use | Function calling, Structured output, Search as a tool, Code execution |
| Best for | Reasoning, Coding, Complex prompts |
| Availability | Gemini app, Google AI Studio, Gemini API, Vertex AI |
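For developers reaching the model through the Gemini API, here is a minimal sketch of a `generateContent` request body, built but deliberately not sent (sending requires an API key). The endpoint and JSON shape follow the public REST API; the `maxOutputTokens` value is chosen here to match the 64k output limit in the table above.

```python
import json

# Build (but do not send) a minimal Gemini API generateContent request.
MODEL = "gemini-2.5-pro"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)

body = {
    "contents": [
        {"role": "user",
         "parts": [{"text": "Summarize the key benchmark results."}]}
    ],
    # 65536 = 64k, matching the output-token limit listed above.
    "generationConfig": {"maxOutputTokens": 65536},
}

print(ENDPOINT)
print(json.dumps(body, indent=2))
```

A real call would POST this body with an `x-goog-api-key` header; the same model is also reachable through the Google AI Studio and Vertex AI SDKs.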