LLM Rank.top


GPT-4o mini

Compact GPT-4 family model for cost-sensitive production workloads.

Composite 61.3 Rank #26 of 30 OpenAI fast / cheap Closed / proprietary Released 2024-07
Try GPT-4o mini → Compare with Gemini 2.0 Flash → Or route via OpenRouter →

At a glance

Provider             OpenAI
Released             2024-07
Tier                 fast / cheap
License              Closed / proprietary
Modalities           text, image
Context window       128k tokens
Max output           16,384 tokens
API price · input    $0.15 / 1M tokens
API price · output   $0.60 / 1M tokens

Benchmark performance

How GPT-4o mini stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark            GPT-4o mini   GPT-5   Median
Chatbot Arena Elo    1273          1410    1320
MMLU-Pro             64.9          86.8    78.0
GPQA Diamond         40.2          87.3    65.0
MATH                 70.2          96.7    78.3
HumanEval            87.2          95.1    92.0
SWE-Bench Verified   N/A           74.9    49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't lock yourself into one provider.

OpenRouter routes your requests across GPT-4o mini, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
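The point of an OpenAI-compatible router is that switching models is a one-string change. A minimal sketch of what that looks like in practice — the endpoint URL is OpenRouter's documented base, but the second model slug is illustrative and may differ from OpenRouter's current catalog:

```python
import json

# OpenRouter exposes an OpenAI-compatible chat-completions endpoint;
# the same payload works for any model behind it.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Return the JSON body for a single-turn chat completion."""
    return {
        "model": model,  # only this string changes between providers
        "messages": [{"role": "user", "content": prompt}],
    }

# A/B test the same prompt across two models behind one API key
# (the Gemini slug here is an assumption — check OpenRouter's model list):
for model in ("openai/gpt-4o-mini", "google/gemini-2.0-flash-001"):
    payload = build_request(model, "Summarize this ticket in one line.")
    print(json.dumps(payload))
```

Send each payload as a POST to `OPENROUTER_URL` with your OpenRouter key in the `Authorization` header; the response shape matches OpenAI's.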

What does GPT-4o mini cost in practice?

API pricing is $0.15 per 1M input tokens and $0.60 per 1M output tokens. Assuming a 50/50 input/output split (a blended rate of $0.375 per 1M tokens) and 30-day months, here is what that looks like at three workload sizes:

Volume            Per day   Per month   Per year
1M tokens/day     $0.38     $11.25      $136.88
10M tokens/day    $3.75     $112.50     $1,369
100M tokens/day   $37.50    $1,125      $13,688
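The table above is straightforward to reproduce; the only assumptions are the 50/50 input/output split and 30-day months:

```python
# GPT-4o mini published prices (USD per 1M tokens)
INPUT_PRICE = 0.15
OUTPUT_PRICE = 0.60

def daily_cost(millions_per_day: float, input_share: float = 0.5) -> float:
    """USD per day at the given volume and input/output mix."""
    blended = input_share * INPUT_PRICE + (1 - input_share) * OUTPUT_PRICE
    return millions_per_day * blended

for volume in (1, 10, 100):
    day = daily_cost(volume)
    print(f"{volume:>3}M tokens/day  ${day:.2f}/day  "
          f"${day * 30:.2f}/mo  ${day * 365:,.2f}/yr")
```

Adjusting `input_share` matters: output tokens cost 4x input tokens, so a summarization workload (long in, short out) runs much cheaper than a generation-heavy one.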

Strengths & weaknesses

Where it shines

No clear top-tier strengths — this is a mid-pack model.

Where it lags

  • MMLU-Pro: 64.9 (rank #28 of 29, below average)
  • GPQA: 40.2 (rank #28 of 29, below average)
  • MATH: 70.2 (rank #25 of 29, below average)

Best alternatives

The closest models to GPT-4o mini by tier and benchmark score:

Model              Provider    Score   $ in / out      Context   Action
Gemini 2.0 Flash   Google      65.6    $0.10 / $0.40   1M        Try → · vs GPT-4o
Claude 3.5 Haiku   Anthropic   56.2    $0.80 / $4.00   200k      Try → · vs GPT-4o
o3-mini            OpenAI      72.7    $1.10 / $4.40   200k      Try → · vs GPT-4o
Gemini 2.5 Flash   Google      73.3    $0.30 / $2.50   1M        Try → · vs GPT-4o
GPT-5 mini         OpenAI      77.0    $0.25 / $2.00   400k      Try → · vs GPT-4o

Frequently asked questions

Is GPT-4o mini a good model?

GPT-4o mini scores 61.3 on the llmrank.top composite (rank #26 of 30). It trails the frontier — for top performance, look at GPT-5 (86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.

How much does GPT-4o mini cost?

GPT-4o mini is priced at $0.15 per 1M input tokens and $0.60 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $3.75/day or $1,369/year.

What is GPT-4o mini's context window?

128k tokens. For very long inputs, consider a 1M+ context model like Gemini 2.5 Pro or GPT-4.1.
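For a back-of-envelope check before sending a long document, the common ~4-characters-per-token heuristic for English text is usually close enough. This is an approximation, not the model's actual tokenizer (use a real tokenizer such as tiktoken for exact counts), and it reserves room for the 16,384-token max output:

```python
# Rough pre-flight check against GPT-4o mini's 128k context window.
CONTEXT_WINDOW = 128_000
CHARS_PER_TOKEN = 4  # heuristic for English text, not exact

def fits_in_context(text: str, reserved_for_output: int = 16_384) -> bool:
    """True if the estimated prompt size leaves room for a full response."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW - reserved_for_output

print(fits_in_context("x" * 400_000))  # ~100k tokens: fits
print(fits_in_context("x" * 600_000))  # ~150k tokens: does not fit
```

If this check fails routinely, that's the signal to chunk the input or move to a 1M+ context model.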

Is GPT-4o mini open source?

No — it is a closed (proprietary) model accessed only via the provider API.

What is GPT-4o mini's SWE-Bench score?

GPT-4o mini has no published score on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1273.

What are the best alternatives to GPT-4o mini?

Closest alternatives by tier and score: Gemini 2.0 Flash, Claude 3.5 Haiku, o3-mini. See the alternatives section on this page for side-by-side numbers.


Related: Gemini 2.0 Flash · Claude 3.5 Haiku · o3-mini · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try GPT-4o mini now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open OpenAI playground → Try via OpenRouter →