LLM Rank.top

Gemini 1.5 Pro

Earlier-generation 2M-context Gemini still widely deployed.

Composite 67.9 · Rank #19 of 30 · Google · general-purpose · Closed / proprietary · Released 2024-05
Try Gemini 1.5 Pro → Compare with GPT-4o → Or route via OpenRouter →

At a glance

Provider             Google
Released             2024-05
Tier                 general-purpose
License              Closed / proprietary
Modalities           text, image, audio, video
Context window       2M tokens
Max output           8,192 tokens
API price · input    $1.25 / 1M tokens
API price · output   $5.00 / 1M tokens

Benchmark performance

How Gemini 1.5 Pro stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark            Gemini 1.5 Pro   GPT-5   Median
Chatbot Arena Elo    1300             1410    1320
MMLU-Pro             75.8             86.8    78.0
GPQA Diamond         59.1             87.3    65.0
MATH                 67.7             96.7    78.3
HumanEval            84.1             95.1    92.0
SWE-Bench Verified   N/A              74.9    49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't lock in to one provider.

OpenRouter routes your requests across Gemini 1.5 Pro, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
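OpenRouter exposes an OpenAI-compatible chat completions endpoint, so routing to Gemini 1.5 Pro is a matter of building the standard payload with the right model slug. A minimal sketch, assuming the slug `google/gemini-pro-1.5` (illustrative; check OpenRouter's live model catalog for the current identifier):

```python
# Builds an OpenRouter-style chat completions request (OpenAI-compatible
# format). The model slug below is an assumption; verify it in the catalog.
import json

def build_request(model: str, prompt: str, api_key: str) -> tuple[dict, dict]:
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

headers, payload = build_request("google/gemini-pro-1.5",
                                 "Summarize this changelog.", "sk-...")
print(json.dumps(payload, indent=2))
```

Swapping models is then a one-string change; the rest of the request stays identical across the 100+ models behind the same key.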

What does Gemini 1.5 Pro cost in practice?

API pricing is $1.25 per 1M input tokens and $5.00 per 1M output tokens. Assuming a 50/50 input/output split (and 30-day months), here is what that looks like at three workload sizes:

Volume            Per day   Per month   Per year
1M tokens/day     $3.13     $93.75      $1,141
10M tokens/day    $31.25    $937.50     $11,406
100M tokens/day   $312.50   $9,375      $114,063
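The table reduces to one formula: daily cost = (input tokens × $1.25 + output tokens × $5.00) per million, with the 50/50 split putting half the volume at each rate. A quick sketch to reproduce or re-run it for your own split:

```python
# Reproduces the cost table: $1.25 / 1M input tokens, $5.00 / 1M output
# tokens, 50/50 split; month = 30 days, year = 365 days.
IN_PRICE, OUT_PRICE = 1.25, 5.00  # USD per 1M tokens

def daily_cost(tokens_per_day_millions: float, input_share: float = 0.5) -> float:
    m = tokens_per_day_millions
    return m * input_share * IN_PRICE + m * (1 - input_share) * OUT_PRICE

for vol in (1, 10, 100):
    d = daily_cost(vol)
    print(f"{vol:>3}M tokens/day: ${d:,.2f}/day  ${d*30:,.2f}/month  ${d*365:,.0f}/year")
```

Adjusting `input_share` matters: output tokens cost 4x input tokens here, so a summarization-heavy workload (mostly input) runs much cheaper than a generation-heavy one at the same total volume.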

Strengths & weaknesses

Where it shines

No clear top-tier strengths — this is a mid-pack model.

Where it lags

  • MATH: 67.7 (rank #28 of 29, below average)
  • HumanEval: 84.1 (rank #27 of 30, below average)
  • Arena: 1300 (rank #19 of 27, mid-pack)

Best alternatives

The closest models to Gemini 1.5 Pro by tier and benchmark score:

Model               Provider     Score   $ in / out       Context   Action
GPT-4o              OpenAI       66.8    $2.50 / $10.00   128k      Try → · vs Gemini
Claude 3.5 Sonnet   Anthropic    69.1    $3.00 / $15.00   200k      Try → · vs Gemini
Mistral Large 2     Mistral AI   63.7    $2.00 / $6.00    128k      Try → · vs Gemini
GPT-4.1             OpenAI       74.5    $2.00 / $8.00    1M        Try → · vs Gemini
Claude 3.7 Sonnet   Anthropic    76.0    $3.00 / $15.00   200k      Try → · vs Gemini

Frequently asked questions

Is Gemini 1.5 Pro a good model?

Gemini 1.5 Pro scores 67.9 on the llmrank.top composite (rank #19 of 30). It trails the frontier — for top performance, look at GPT-5 (86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.

How much does Gemini 1.5 Pro cost?

Gemini 1.5 Pro is priced at $1.25 per 1M input tokens and $5.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $31.25/day or $11,406/year.

What is Gemini 1.5 Pro's context window?

2M tokens. That is large enough to ingest entire codebases or full books in one prompt.
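To sanity-check whether a given corpus fits in that window, a common rule of thumb is roughly 4 characters per token for English text and code. This is an approximation, not a tokenizer; real token counts vary by content and model:

```python
# Rough capacity check for a 2M-token context window, using the ~4 chars/token
# heuristic. For exact counts, run the provider's actual tokenizer instead.
def fits_in_context(total_chars: int, context_tokens: int = 2_000_000,
                    chars_per_token: float = 4.0) -> bool:
    estimated_tokens = total_chars / chars_per_token
    return estimated_tokens <= context_tokens

# A 5 MB codebase (~5,000,000 chars) is roughly 1.25M tokens:
print(fits_in_context(5_000_000))
```

Under these assumptions, anything up to about 8 MB of raw text fits; leave headroom for the system prompt and the model's output.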

Is Gemini 1.5 Pro open source?

No — it is a closed (proprietary) model accessed only via the provider API.

What is Gemini 1.5 Pro's SWE-Bench score?

Gemini 1.5 Pro has no reported score on SWE-Bench Verified, the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1300.

What are the best alternatives to Gemini 1.5 Pro?

Closest alternatives by tier and score: GPT-4o, Claude 3.5 Sonnet, Mistral Large 2. See the alternatives section on this page for side-by-side numbers.


Related: GPT-4o · Claude 3.5 Sonnet · Mistral Large 2 · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try Gemini 1.5 Pro now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open Google playground → Try via OpenRouter →