Phi-4
14B small open model — punches well above its weight on STEM benchmarks.
At a glance
| Provider | Microsoft |
|---|---|
| Released | 2024-12 |
| Tier | open-weights |
| License | Open-weights (MIT) |
| Modalities | text |
| Context window | 16,384 tokens (16K) |
| Max output | 16,384 tokens (16K) |
| API price · input | $0.07 / 1M tokens |
| API price · output | $0.14 / 1M tokens |
| Hugging Face | microsoft/phi-4 |
Benchmark performance
How Phi-4 stacks up against the current leader (GPT-5) and the median model in the leaderboard:
| Benchmark | Phi-4 | GPT-5 | Median |
|---|---|---|---|
| Chatbot Arena Elo | N/A | 1410 | 1320 |
| MMLU-Pro | 70.4 | 86.8 | 78.0 |
| GPQA Diamond | 56.1 | 87.3 | 65.0 |
| MATH | 80.4 | 96.7 | 78.3 |
| HumanEval | 82.6 | 95.1 | 92.0 |
| SWE-Bench Verified | N/A | 74.9 | 49.0 |
Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.
OpenRouter routes your requests across Phi-4, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
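For illustration, here is a minimal sketch of calling Phi-4 through OpenRouter's OpenAI-compatible chat completions endpoint. The model slug `microsoft/phi-4` and the environment-variable name are assumptions on our part; check OpenRouter's model catalog before relying on them.

```python
# Minimal sketch: query Phi-4 via OpenRouter's OpenAI-compatible API.
# Assumes the `openai` client library and an OPENROUTER_API_KEY environment variable;
# the model slug "microsoft/phi-4" should be verified against OpenRouter's model list.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="microsoft/phi-4",
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```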
What does Phi-4 cost in practice?
API pricing is $0.07 per 1M input tokens and $0.14 per 1M output tokens. Assuming a 50/50 input/output split (and a 30-day month, 365-day year), here is what that looks like at three workload sizes:
| Volume | Per day | Per month | Per year |
|---|---|---|---|
| 1M tokens/day | $0.11 | $3.15 | $38.33 |
| 10M tokens/day | $1.05 | $31.50 | $383.25 |
| 100M tokens/day | $10.50 | $315.00 | $3,833 |
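As a sanity check, the figures above follow from a simple blended-rate calculation. The sketch below reproduces them under the same 50/50 split, 30-day-month, 365-day-year assumptions:

```python
# Reproduce the pricing table: blended cost for a 50/50 input/output token split.
PRICE_IN = 0.07   # USD per 1M input tokens
PRICE_OUT = 0.14  # USD per 1M output tokens

def daily_cost(tokens_per_day: float) -> float:
    """Cost in USD for a day's traffic, half input tokens and half output tokens."""
    millions = tokens_per_day / 1_000_000
    return millions * (0.5 * PRICE_IN + 0.5 * PRICE_OUT)

for volume in (1_000_000, 10_000_000, 100_000_000):
    day = daily_cost(volume)
    print(f"{volume / 1e6:>4.0f}M tokens/day: "
          f"${day:.3f}/day  ${day * 30:,.2f}/month  ${day * 365:,.2f}/year")
```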
Strengths & weaknesses
Where it shines
No clear top-tier strengths — this is a mid-pack model.
Where it lags
- HumanEval: 82.6 (rank #28 of 30, below average)
- MMLU-Pro: 70.4 (rank #22 of 29, mid-pack)
- GPQA: 56.1 (rank #19 of 29, mid-pack)
Best alternatives
The closest models to Phi-4 by tier and benchmark score:
| Model | Score | $/1M in / out | Context | Action |
|---|---|---|---|---|
| Qwen2.5-Coder 32B (Alibaba) | 68.8 | $0.18 / $0.18 | 131,072 tokens | Try → · vs Phi-4 |
| DeepSeek V3 (DeepSeek) | 68.0 | $0.27 / $1.10 | 128K tokens | Try → · vs Phi-4 |
| DeepSeek R1 (DeepSeek) | 75.4 | $0.55 / $2.19 | 128K tokens | Try → · vs Phi-4 |
| Llama 3.1 405B Instruct (Meta) | 65.7 | $2.70 / $2.70 | 128K tokens | Try → · vs Phi-4 |
| Qwen2.5 72B Instruct (Alibaba) | 65.6 | $0.35 / $0.40 | 131,072 tokens | Try → · vs Phi-4 |
Frequently asked questions
Is Phi-4 a good model?
Phi-4 scores 71.2 on the llmrank.top composite (rank #15 of 30). It sits mid-pack rather than among the leaders (GPT-5 tops the board at 86.0), but it is a strong result for a 14B open-weights model. Whether it's the right fit depends on your workload; see the use-case discussion on this page.
How much does Phi-4 cost?
Phi-4 is priced at $0.07 per 1M input tokens and $0.14 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $1.05/day or $383.25/year.
What is Phi-4's context window?
16,384 tokens (16K). For very long inputs, consider a 1M+ context model like Gemini 2.5 Pro or GPT-4.1.
Is Phi-4 open source?
Yes. Phi-4 is released under the MIT license, and the weights can be downloaded from Microsoft or from Hugging Face (microsoft/phi-4).
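As a rough sketch (not an official quickstart), the weights can be loaded with the Hugging Face transformers library; the prompt, dtype, and generation settings below are illustrative choices, not recommendations from Microsoft.

```python
# Minimal sketch: load the open Phi-4 weights from Hugging Face with transformers.
# Assumes enough GPU memory for a 14B model and the `accelerate` package for device_map.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Explain why 17 is prime.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```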
What is Phi-4's SWE-Bench score?
Phi-4 has no reported score on SWE-Bench Verified, the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is also not reported.
What are the best alternatives to Phi-4?
Closest alternatives by tier and score: Qwen2.5-Coder 32B, DeepSeek V3, DeepSeek R1. See the alternatives section on this page for side-by-side numbers.
Related: Qwen2.5-Coder 32B · DeepSeek V3 · DeepSeek R1 · Full leaderboard
Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.
Try Phi-4 now
Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.