# o3-mini

Compact reasoning model — fast and cheap with strong STEM capability.
## At a glance
| Attribute | Value |
|---|---|
| Provider | OpenAI |
| Released | 2025-01 |
| Tier | fast / cheap |
| License | Closed / proprietary |
| Modalities | text |
| Context window | 200k tokens |
| Max output | 100k tokens |
| API price · input | $1.10 / 1M tokens |
| API price · output | $4.40 / 1M tokens |
## Benchmark performance
How o3-mini stacks up against the current leader (GPT-5) and the median model in the leaderboard:
| Benchmark | o3-mini | GPT-5 | Median |
|---|---|---|---|
| Chatbot Arena Elo | 1325 | 1410 | 1320 |
| MMLU-Pro | 79.5 | 86.8 | 78.0 |
| GPQA Diamond | 75.0 | 87.3 | 65.0 |
| MATH | 92.0 | 96.7 | 78.3 |
| HumanEval | 88.0 | 95.1 | 92.0 |
| SWE-Bench Verified | 49.3 | 74.9 | 49.0 |
Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.
OpenRouter routes your requests across o3-mini, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
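Because OpenRouter exposes an OpenAI-compatible chat-completions API, switching between o3-mini and any other listed model is a one-string change in the request body. A minimal sketch of building such a request (the endpoint URL and the `openai/o3-mini` model slug are assumptions based on OpenRouter's public docs; verify both before use):

```python
import json

# Assumed OpenAI-compatible endpoint (check OpenRouter's docs):
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model, prompt, api_key):
    """Build the headers and JSON body for a chat-completions call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # e.g. "openai/o3-mini" — swap to A/B test another model
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, json.dumps(body)

# Same key, different model: only the slug string changes.
headers, payload = build_request("openai/o3-mini", "Hello", "sk-or-...")
```

POSTing `payload` with those headers to `OPENROUTER_URL` (e.g. via `urllib.request` or `requests`) completes the call.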
## What does o3-mini cost in practice?
API pricing is $1.10 per 1M input tokens and $4.40 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:
| Volume | Per day | Per month | Per year |
|---|---|---|---|
| 1M tokens/day | $2.75 | $82.50 | $1,004 |
| 10M tokens/day | $27.50 | $825.00 | $10,038 |
| 100M tokens/day | $275.00 | $8,250 | $100,375 |
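The table's figures follow from a simple blended-rate calculation, sketched below with the prices quoted above (the 50/50 input/output split is the same assumption the table uses; adjust `input_share` for your real traffic mix):

```python
def daily_cost(tokens_per_day, input_share=0.5,
               price_in=1.10, price_out=4.40):
    """Blended daily API cost in USD; prices are per 1M tokens."""
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * price_in
                       + (1 - input_share) * price_out)

cost = daily_cost(10_000_000)  # 10M tokens/day at 50/50 → $27.50/day
```

Output-heavy workloads (e.g. long generations from short prompts) skew toward the $4.40 output rate, so their effective per-token cost is higher than the table suggests.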
## Strengths & weaknesses

### Where it shines
No clear top-tier strengths — this is a mid-pack model.
### Where it lags
- HumanEval: 88.0 (rank #22 of 30, mid-pack)
- SWE-Bench: 49.3 (rank #11 of 18, mid-pack)
## Best alternatives
The closest models to o3-mini by tier and benchmark score:
| Model | Score | $ in / out | Context | Action |
|---|---|---|---|---|
| Gemini 2.5 Flash | 73.3 | $0.30 / $2.50 | 1M | Try → · vs o3-mini |
| GPT-5 mini (OpenAI) | 77.0 | $0.25 / $2.00 | 400k | Try → · vs o3-mini |
| Gemini 2.0 Flash | 65.6 | $0.10 / $0.40 | 1M | Try → · vs o3-mini |
| GPT-4o mini (OpenAI) | 61.3 | $0.15 / $0.60 | 128k | Try → · vs o3-mini |
| Claude 3.5 Haiku (Anthropic) | 56.2 | $0.80 / $4.00 | 200k | Try → · vs o3-mini |
## Frequently asked questions
### Is o3-mini a good model?
o3-mini scores 72.7 on the llmrank.top composite (rank #14 of 30), placing it mid-pack; GPT-5 tops the board at 86.0. Whether it's the right fit depends on your workload; see the use-case discussion on this page.
### How much does o3-mini cost?
o3-mini is priced at $1.10 per 1M input tokens and $4.40 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $27.50/day or $10,038/year.
### What is o3-mini's context window?
200k tokens. That covers most multi-document workloads.
### Is o3-mini open source?
No — it is a closed (proprietary) model accessed only via the provider API.
### What is o3-mini's SWE-Bench score?
o3-mini scores 49.3% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1325.
### What are the best alternatives to o3-mini?
Closest alternatives by tier and score: Gemini 2.5 Flash, GPT-5 mini, Gemini 2.0 Flash. See the alternatives section on this page for side-by-side numbers.
Related: Gemini 2.5 Flash · GPT-5 mini · Gemini 2.0 Flash · Full leaderboard
Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.
## Try o3-mini now
Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.