GPT-4o mini
Compact GPT-4o-family model for cost-sensitive production workloads.
At a glance
| Provider | OpenAI |
|---|---|
| Released | 2024-07 |
| Tier | fast / cheap |
| License | Closed / proprietary |
| Modalities | text, image |
| Context window | 128k tokens |
| Max output | 16,384 tokens (16k) |
| API price · input | $0.15 / 1M tokens |
| API price · output | $0.60 / 1M tokens |
Benchmark performance
How GPT-4o mini stacks up against the current leader (GPT-5) and the median model on the leaderboard:
| Benchmark | GPT-4o mini | GPT-5 | Median |
|---|---|---|---|
| Chatbot Arena Elo | 1273 | 1410 | 1320 |
| MMLU-Pro | 64.9 | 86.8 | 78.0 |
| GPQA Diamond | 40.2 | 87.3 | 65.0 |
| MATH | 70.2 | 96.7 | 78.3 |
| HumanEval | 87.2 | 95.1 | 92.0 |
| SWE-Bench Verified | N/A | 74.9 | 49.0 |
Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.
OpenRouter routes your requests across GPT-4o mini, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
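A minimal sketch of what that looks like in code, assuming OpenRouter's OpenAI-compatible endpoint and the `openai/gpt-4o-mini` model slug (check OpenRouter's model list for the current slug):

```python
# Sketch: calling GPT-4o mini through OpenRouter's OpenAI-compatible API.
# The model slug and env var name are assumptions -- verify against OpenRouter's docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # swap the slug to A/B test another model on the same key
    messages=[{"role": "user", "content": "Summarize this release note in one sentence."}],
)
print(resp.choices[0].message.content)
```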
What does GPT-4o mini cost in practice?
API pricing is $0.15 per 1M input tokens and $0.60 per 1M output tokens. Assuming a 50/50 input/output split (30-day months, 365-day years), here is what that looks like at three workload sizes:
| Volume | Per day | Per month | Per year |
|---|---|---|---|
| 1M tokens/day | $0.38 | $11.25 | $136.88 |
| 10M tokens/day | $3.75 | $112.50 | $1,369 |
| 100M tokens/day | $37.50 | $1,125 | $13,688 |
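To plug in your own traffic mix, here is a small back-of-the-envelope calculator that reproduces the table above; the 50/50 split and 30/365-day periods are assumptions, not billing rules:

```python
# Back-of-the-envelope cost calculator for GPT-4o mini API usage.
INPUT_PRICE = 0.15   # $ per 1M input tokens
OUTPUT_PRICE = 0.60  # $ per 1M output tokens

def daily_cost(tokens_per_day: float, input_share: float = 0.5) -> float:
    """Blended daily cost in USD for a given token volume and input/output split."""
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * INPUT_PRICE + (1 - input_share) * OUTPUT_PRICE)

for volume in (1e6, 10e6, 100e6):
    day = daily_cost(volume)
    print(f"{volume / 1e6:>5.0f}M tokens/day -> "
          f"${day:,.2f}/day, ${day * 30:,.2f}/mo, ${day * 365:,.2f}/yr")
```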
Strengths & weaknesses
Where it shines
No clear top-tier strengths — this is a mid-pack model.
Where it lags
- MMLU-Pro: 64.9 (rank #28 of 29, below average)
- GPQA: 40.2 (rank #28 of 29, below average)
- MATH: 70.2 (rank #25 of 29, below average)
Best alternatives
The closest models to GPT-4o mini by tier and benchmark score:
| Model | Score | $ in / out | Context | Action |
|---|---|---|---|---|
| Gemini 2.0 Flash | 65.6 | $0.10 / $0.40 | 1M | Try → · vs GPT-4o mini |
| Claude 3.5 Haiku (Anthropic) | 56.2 | $0.80 / $4.00 | 200k | Try → · vs GPT-4o mini |
| o3-mini (OpenAI) | 72.7 | $1.10 / $4.40 | 200k | Try → · vs GPT-4o mini |
| Gemini 2.5 Flash | 73.3 | $0.30 / $2.50 | 1M | Try → · vs GPT-4o mini |
| GPT-5 mini (OpenAI) | 77.0 | $0.25 / $2.00 | 400k | Try → · vs GPT-4o mini |
Frequently asked questions
Is GPT-4o mini a good model?
GPT-4o mini scores 61.3 on the llmrank.top composite (rank #26 of 30). It trails the frontier — for top performance, look at GPT-5 (86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.
How much does GPT-4o mini cost?
GPT-4o mini is priced at $0.15 per 1M input tokens and $0.60 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $3.75/day or $1,369/year.
What is GPT-4o mini's context window?
128k tokens. For very long inputs, consider a 1M+ context model like Gemini 2.5 Pro or GPT-4.1.
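A quick sketch for checking whether a prompt will fit, assuming the `o200k_base` tokenizer from tiktoken approximates GPT-4o mini's tokenization:

```python
# Sketch: rough check that a prompt fits in GPT-4o mini's 128k-token context,
# leaving headroom for the output. Assumes the o200k_base encoding (GPT-4o family).
import tiktoken

CONTEXT_WINDOW = 128_000
MAX_OUTPUT = 16_384

enc = tiktoken.get_encoding("o200k_base")

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    prompt_tokens = len(enc.encode(prompt))
    return prompt_tokens + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("Summarize the attached contract. " * 1000))
```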
Is GPT-4o mini open source?
No — it is a closed (proprietary) model accessed only via the provider API.
What is GPT-4o mini's SWE-Bench score?
GPT-4o mini has no reported score on SWE-Bench Verified, the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1273.
What are the best alternatives to GPT-4o mini?
Closest alternatives by tier and score: Gemini 2.0 Flash, Claude 3.5 Haiku, o3-mini. See the alternatives section on this page for side-by-side numbers.
Related: Gemini 2.0 Flash · Claude 3.5 Haiku · o3-mini · Full leaderboard
Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.
Try GPT-4o mini now
Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.
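For a direct call instead of the playground, here is a minimal sketch with the official Python SDK, assuming the `gpt-4o-mini` model ID and an `OPENAI_API_KEY` in your environment:

```python
# Sketch: direct OpenAI API call to GPT-4o mini via the official Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give me three test cases for a URL parser."}],
    max_tokens=500,  # well under the 16,384-token output cap
)
print(resp.choices[0].message.content)
```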