# GPT-5

OpenAI's flagship unified reasoning + chat model; the default ChatGPT model since August 2025.
## At a glance
| Provider | OpenAI |
|---|---|
| Released | 2025-08 |
| Tier | frontier |
| License | Closed / proprietary |
| Modalities | text, image, audio |
| Context window | 400k tokens |
| Max output | 128k tokens |
| API price · input | $1.25 / 1M tokens |
| API price · output | $10.00 / 1M tokens |
## Benchmark performance
How GPT-5 stacks up against the runner-up (o3) and the median model in the leaderboard:
| Benchmark | GPT-5 | o3 | Median |
|---|---|---|---|
| Chatbot Arena Elo | 1410 | 1380 | 1320 |
| MMLU-Pro | 86.8 | 85.7 | 78.0 |
| GPQA Diamond | 87.3 | 87.7 | 65.0 |
| MATH | 96.7 | 96.7 | 78.3 |
| HumanEval | 95.1 | 92.7 | 92.0 |
| SWE-Bench Verified | 74.9 | 71.7 | 49.0 |
Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.
OpenRouter routes your requests across GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
## What does GPT-5 cost in practice?
API pricing is $1.25 per 1M input tokens and $10.00 per 1M output tokens. Assuming a 50/50 input/output split (with 30-day months and 365-day years), here is what that looks like at three workload sizes:
| Volume | Per day | Per month | Per year |
|---|---|---|---|
| 1M tokens/day | $5.63 | $168.75 | $2,053 |
| 10M tokens/day | $56.25 | $1,688 | $20,531 |
| 100M tokens/day | $562.50 | $16,875 | $205,313 |
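The table above can be reproduced with a small calculator. The rates come from this page; the 50/50 split, 30-day month, and 365-day year are the same assumptions the table uses:

```python
# GPT-5 published rates (USD per 1M tokens), from the pricing table above.
INPUT_PER_M = 1.25
OUTPUT_PER_M = 10.00

def daily_cost(tokens_per_day: float) -> float:
    """Blended daily spend assuming half the tokens are input, half output."""
    millions = tokens_per_day / 1_000_000
    return (millions / 2) * INPUT_PER_M + (millions / 2) * OUTPUT_PER_M

for volume in (1e6, 10e6, 100e6):
    day = daily_cost(volume)
    print(f"{volume / 1e6:>5.0f}M tok/day  ${day:,.2f}/day  "
          f"${day * 30:,.2f}/mo  ${day * 365:,.2f}/yr")
```

Swapping in a different input/output ratio (most chat workloads are input-heavy) shifts the blended rate accordingly.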
## Strengths & weaknesses

### Where it shines
- Arena: 1410 (rank #1 of 27, top tier)
- SWE-Bench: 74.9 (rank #1 of 18, top tier)
- HumanEval: 95.1 (rank #2 of 30, top tier)
### Where it lags
No glaring weaknesses in published benchmarks, though it narrowly trails o3 on GPQA Diamond (87.3 vs 87.7) and only ties it on MATH (96.7).
## Best alternatives
The closest models to GPT-5 by tier and benchmark score:
| Model | Provider | Score | $ in / out | Context | Action |
|---|---|---|---|---|---|
| o3 | OpenAI | 83.7 | $2.00 / $8.00 | 200k | Try → · vs GPT-5 |
| Grok 4 | xAI | 83.6 | $3.00 / $15.00 | 256k | Try → · vs GPT-5 |
| Claude Opus 4.1 | Anthropic | 83.6 | $15.00 / $75.00 | 200k | Try → · vs GPT-5 |
| Gemini 2.5 Pro | Google | 80.9 | $1.25 / $10.00 | 2M | Try → · vs GPT-5 |
| o1 | OpenAI | 75.7 | $15.00 / $60.00 | 200k | Try → · vs GPT-5 |
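Since the alternatives span a wide price range, a blended 50/50 rate makes them easier to compare. The prices below are taken from the table above; the equal input/output split is an assumption:

```python
# (input, output) USD per 1M tokens, from the alternatives table.
prices = {
    "GPT-5": (1.25, 10.00),
    "o3": (2.00, 8.00),
    "Grok 4": (3.00, 15.00),
    "Claude Opus 4.1": (15.00, 75.00),
    "Gemini 2.5 Pro": (1.25, 10.00),
    "o1": (15.00, 60.00),
}

# Blended price per 1M tokens assuming a 50/50 input/output split.
blended = {model: (inp + out) / 2 for model, (inp, out) in prices.items()}

for model, price in sorted(blended.items(), key=lambda kv: kv[1]):
    print(f"{model:<17} ${price:.3f} per 1M blended tokens")
```

On this split, o3 is the cheapest of the group, with GPT-5 and Gemini 2.5 Pro tied just behind it.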
## Frequently asked questions

### Is GPT-5 a good model?
GPT-5 is currently the highest-scoring model on the llmrank.top composite at 86.0 — it leads the field of 30 models on the weighted average across Arena Elo, MMLU-Pro, GPQA, MATH, HumanEval, and SWE-Bench.
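For intuition, here is a toy composite. The site's actual weights and Elo normalization are not published on this page, so this equal-weight sketch (with an assumed 1000–1500 Elo range rescaled to 0–100) will not reproduce the 86.0 figure exactly:

```python
# Illustrative only: the real llmrank.top formula is not shown on this page.
def normalize_elo(elo: float, lo: float = 1000.0, hi: float = 1500.0) -> float:
    """Rescale an Arena Elo onto 0-100 (assumed range, not the site's)."""
    return 100.0 * (elo - lo) / (hi - lo)

def composite(scores: dict) -> float:
    """Equal-weight average of the six benchmarks used on this page."""
    parts = [
        normalize_elo(scores["arena_elo"]),
        scores["mmlu_pro"],
        scores["gpqa"],
        scores["math"],
        scores["humaneval"],
        scores["swe_bench"],
    ]
    return sum(parts) / len(parts)

gpt5 = {"arena_elo": 1410, "mmlu_pro": 86.8, "gpqa": 87.3,
        "math": 96.7, "humaneval": 95.1, "swe_bench": 74.9}
print(round(composite(gpt5), 1))
```

The gap between this toy result and the published 86.0 is exactly why the methodology page matters: weighting and normalization choices move composite scores by whole points.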
### How much does GPT-5 cost?
GPT-5 is priced at $1.25 per 1M input tokens and $10.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $56.25/day or $20,531/year.
### What is GPT-5's context window?
400k tokens. That covers most multi-document workloads.
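As a rough sanity check, the common ~4 characters per token heuristic for English text (an approximation, not a real tokenizer) puts a 400k-token window at roughly 1.6M characters:

```python
# Heuristic estimate only: real token counts depend on the model's tokenizer.
def fits_in_context(text: str, context_tokens: int = 400_000,
                    chars_per_token: float = 4.0) -> bool:
    """Estimate whether `text` fits in the context window."""
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= context_tokens

doc = "word " * 100_000   # ~500k characters, ~125k estimated tokens
print(fits_in_context(doc))
```

For production use, count tokens with the provider's tokenizer rather than a character heuristic; code and non-English text diverge sharply from the 4-chars rule of thumb.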
### Is GPT-5 open source?
No — it is a closed (proprietary) model accessed only via the provider API.
### What is GPT-5's SWE-Bench score?
GPT-5 scores 74.9% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1410.
### What are the best alternatives to GPT-5?
Closest alternatives by tier and score: o3, Grok 4, Claude Opus 4.1. See the alternatives section on this page for side-by-side numbers.
Related: o3 · Grok 4 · Claude Opus 4.1 · Full leaderboard
Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.
## Try GPT-5 now
Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.