LLM Rank.top

Leaderboard · Model · Updated

DeepSeek R1

Open-weights reasoning model rivalling closed frontier models at a fraction of the price.

Composite 75.4 · Rank #11 of 30 · DeepSeek · Open-weights (MIT) · Released 2025-01
Try DeepSeek R1 → Compare with Phi-4 → Or route via OpenRouter →

At a glance

Provider · DeepSeek
Released · 2025-01
Tier · open-weights
License · Open-weights (MIT)
Modalities · text
Context window · 128k tokens
Max output · 32,768 tokens (32k)
API price · input · $0.55 / 1M tokens
API price · output · $2.19 / 1M tokens
Hugging Face · deepseek-ai/DeepSeek-R1

Benchmark performance

How DeepSeek R1 stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark · DeepSeek R1 · GPT-5 · Median
Chatbot Arena Elo 1357 1410 1320
MMLU-Pro 84.0 86.8 78.0
GPQA Diamond 71.5 87.3 65.0
MATH 97.3 96.7 78.3
HumanEval 92.0 95.1 92.0
SWE-Bench Verified 49.2 74.9 49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't get locked into one provider.

OpenRouter routes your requests across DeepSeek R1, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
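As a sketch of what single-key routing looks like in practice: OpenRouter exposes an OpenAI-compatible chat-completions API, so swapping models is a one-line change to the model slug. The endpoint path and the `deepseek/deepseek-r1` slug below are assumptions, not verified from this page:

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint

def build_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for OpenRouter.

    Switching from DeepSeek R1 to GPT-5, Claude, or Gemini means changing
    only the `model` slug; the payload shape and API key stay the same.
    """
    payload = {
        "model": model,  # e.g. "deepseek/deepseek-r1" (assumed slug)
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("deepseek/deepseek-r1", "Summarize this issue.", "sk-or-...")
# urllib.request.urlopen(req) would send it; omitted here since it needs a real key.
```

The same `build_request` call with `"openai/gpt-5"` or any other slug hits the same endpoint, which is the point of the single-API-key setup.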

What does DeepSeek R1 cost in practice?

API pricing is $0.55 per 1M input tokens and $2.19 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:

Volume · Per day · Per month · Per year
1M tokens/day · $1.37 · $41.10 · $500.05
10M tokens/day · $13.70 · $411.00 · $5,000.50
100M tokens/day · $137.00 · $4,110.00 · $50,005.00
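The arithmetic behind the table is simple to reproduce; a minimal sketch using the same assumptions as above (50/50 input/output split, 30-day month):

```python
def daily_cost(tokens_per_day: float, in_price: float, out_price: float,
               input_share: float = 0.5) -> float:
    """Cost in USD per day, with prices given per 1M tokens."""
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * in_price + (1 - input_share) * out_price)

# DeepSeek R1 list prices: $0.55 in / $2.19 out per 1M tokens.
day = daily_cost(10_000_000, 0.55, 2.19)
print(f"${day:.2f}/day, ${day * 30:.2f}/month, ${day * 365:,.2f}/year")
# → $13.70/day, $411.00/month, $5,000.50/year
```

Shifting `input_share` toward input-heavy workloads (long documents in, short answers out) pulls the blended rate down sharply, since input tokens cost a quarter as much as output tokens here.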

Strengths & weaknesses

Where it shines

  • MATH: 97.3 (rank #1 of 29, top tier)
  • MMLU-Pro: 84.0 (rank #7 of 29, above average)
  • HumanEval: 92.0 (rank #9 of 30, above average)

Where it lags

  • SWE-Bench: 49.2 (rank #12 of 18, mid-pack)

Best alternatives

The closest models to DeepSeek R1 by tier and benchmark score:

Model · Score · $ in / out · Context · Action
Phi-4 (Microsoft) · 71.2 · $0.07 / $0.14 · 16k · Try → · vs DeepSeek
Qwen2.5-Coder 32B (Alibaba) · 68.8 · $0.18 / $0.18 · 131k · Try → · vs DeepSeek
DeepSeek V3 (DeepSeek) · 68.0 · $0.27 / $1.10 · 128k · Try → · vs DeepSeek
Llama 3.1 405B Instruct (Meta) · 65.7 · $2.70 / $2.70 · 128k · Try → · vs DeepSeek
Qwen2.5 72B Instruct (Alibaba) · 65.6 · $0.35 / $0.40 · 131k · Try → · vs DeepSeek

Frequently asked questions

Is DeepSeek R1 a good model?

DeepSeek R1 scores 75.4 on the llmrank.top composite (rank #11 of 30). It is competitive with the leaders (GPT-5 tops the board at 86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.

How much does DeepSeek R1 cost?

DeepSeek R1 is priced at $0.55 per 1M input tokens and $2.19 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $13.70 per day, or about $5,000 per year.

What is DeepSeek R1's context window?

128k tokens. For very long inputs, consider a 1M+ context model like Gemini 2.5 Pro or GPT-4.1.
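A common back-of-the-envelope check before sending a long prompt: English text runs roughly 4 characters per token. That ratio is a rough heuristic, not DeepSeek's actual tokenizer, but it is enough to estimate whether a document fits the 128k window once you reserve room for the output:

```python
CONTEXT_WINDOW = 128_000    # DeepSeek R1 context window, in tokens
CHARS_PER_TOKEN = 4         # rough heuristic for English text (assumption)

def fits_in_context(text: str, reserved_output_tokens: int = 32_768) -> bool:
    """Estimate whether `text` plus the reserved output budget fits the window."""
    est_tokens = len(text) / CHARS_PER_TOKEN
    return est_tokens + reserved_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("hello " * 10_000))   # ~60k chars ≈ 15k tokens → True
print(fits_in_context("x" * 600_000))       # ≈ 150k tokens → False
```

For anything that fails this check, chunking the input or switching to a 1M+ context model is the usual move.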

Is DeepSeek R1 open source?

Yes — DeepSeek R1 is released under the MIT license, and the weights are downloadable from the provider or from Hugging Face (deepseek-ai/DeepSeek-R1).

What is DeepSeek R1's SWE-Bench score?

DeepSeek R1 scores 49.2% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1357.

What are the best alternatives to DeepSeek R1?

Closest alternatives by tier and score: Phi-4, Qwen2.5-Coder 32B, DeepSeek V3. See the alternatives section on this page for side-by-side numbers.


Related: Phi-4 · Qwen2.5-Coder 32B · DeepSeek V3 · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try DeepSeek R1 now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open DeepSeek playground → Try via OpenRouter →