LLM Rank.top

Leaderboard · Model · Updated

Qwen2.5-Coder 32B

A coding-tuned variant of Qwen2.5 — among the strongest open coding models under 50B parameters.

Composite 68.8 · Rank #17 of 30 · Alibaba · Open-weights (Apache-2.0) · Released 2024-11
Try Qwen2.5-Coder 32B → Compare with DeepSeek V3 → Or route via OpenRouter →

At a glance

Provider · Alibaba
Released · 2024-11
Tier · open-weights
License · Open-weights (Apache-2.0)
Modalities · text
Context window · 131,072 tokens
Max output · 8,192 tokens
API price · input · $0.18 / 1M tokens
API price · output · $0.18 / 1M tokens
Hugging Face · Qwen/Qwen2.5-Coder-32B-Instruct

Benchmark performance

How Qwen2.5-Coder 32B stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark · Qwen2.5-Coder 32B · GPT-5 · Median
Chatbot Arena Elo · N/A · 1410 · 1320
MMLU-Pro · 68.4 · 86.8 · 78.0
GPQA Diamond · 40.0 · 87.3 · 65.0
MATH · 83.1 · 96.7 · 78.3
HumanEval · 92.7 · 95.1 · 92.0
SWE-Bench Verified · N/A · 74.9 · 49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't lock yourself into one provider.

OpenRouter routes your requests across Qwen2.5-Coder 32B, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
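OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so switching models is a one-line change. A minimal sketch using only the standard library — the model slug `qwen/qwen-2.5-coder-32b-instruct` is an assumption here; check the OpenRouter model list for the exact id:

```python
import json
import os
import urllib.request

# OpenRouter's OpenAI-compatible endpoint; swap MODEL to A/B test any
# other model on the same API key.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "qwen/qwen-2.5-coder-32b-instruct"  # assumed slug — verify on openrouter.ai

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the HTTP request for a single-turn chat completion."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request(
    "Write a Python function that reverses a string.",
    os.environ.get("OPENROUTER_API_KEY", "sk-demo"),
)
# Send with urllib.request.urlopen(req) once a real key is set.
print(req.full_url)
```

Because the payload follows the OpenAI schema, the official OpenAI SDK also works against OpenRouter by pointing its base URL at `https://openrouter.ai/api/v1`.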

What does Qwen2.5-Coder 32B cost in practice?

API pricing is $0.18 per 1M input tokens and $0.18 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:

Volume · Per day · Per month · Per year
1M tokens/day · $0.18 · $5.40 · $65.70
10M tokens/day · $1.80 · $54.00 · $657.00
100M tokens/day · $18.00 · $540.00 · $6,570.00
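The table above is straightforward arithmetic and easy to adapt to your own input/output mix. The prices are the ones listed on this page; the 30-day month and 365-day year are conventions inferred from the table:

```python
PRICE_IN = 0.18   # USD per 1M input tokens (from this page)
PRICE_OUT = 0.18  # USD per 1M output tokens (from this page)

def daily_cost(tokens_per_day: float, input_share: float = 0.5) -> float:
    """Blended daily cost in USD for a given fractional input/output split."""
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * PRICE_IN + (1 - input_share) * PRICE_OUT)

for volume in (1_000_000, 10_000_000, 100_000_000):
    day = daily_cost(volume)
    # 30-day month and 365-day year, matching the table's conventions
    print(f"{volume / 1e6:.0f}M tokens/day: "
          f"${day:.2f}/day  ${day * 30:.2f}/mo  ${day * 365:.2f}/yr")
```

Since input and output are priced identically here, the split doesn't change the total; for models with asymmetric pricing, adjust `input_share` to match your workload.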

Strengths & weaknesses

Where it shines

  • HumanEval: 92.7 (rank #6 of 30, top tier)

Where it lags

  • GPQA: 40.0 (rank #29 of 29, below average)
  • MMLU-Pro: 68.4 (rank #25 of 29, below average)

Best alternatives

The closest models to Qwen2.5-Coder 32B by tier and benchmark score:

Model · Score · $ in / out · Context · Action
DeepSeek V3 (DeepSeek) · 68.0 · $0.27 / $1.10 · 128k · Try → · vs Qwen2.5-Coder
Phi-4 (Microsoft) · 71.2 · $0.07 / $0.14 · 16,384 · Try → · vs Qwen2.5-Coder
Llama 3.1 405B Instruct (Meta) · 65.7 · $2.70 / $2.70 · 128k · Try → · vs Qwen2.5-Coder
Qwen2.5 72B Instruct (Alibaba) · 65.6 · $0.35 / $0.40 · 131,072 · Try → · vs Qwen2.5-Coder
Llama 3.3 70B Instruct (Meta) · 64.7 · $0.23 / $0.40 · 128k · Try → · vs Qwen2.5-Coder

Frequently asked questions

Is Qwen2.5-Coder 32B a good model?

Qwen2.5-Coder 32B scores 68.8 on the llmrank.top composite (rank #17 of 30). It trails the frontier — for top performance, look at GPT-5 (86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.

How much does Qwen2.5-Coder 32B cost?

Qwen2.5-Coder 32B is priced at $0.18 per 1M input tokens and $0.18 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $1.80/day or $657.00/year.

What is Qwen2.5-Coder 32B's context window?

131,072 tokens. For very long inputs, consider a 1M+ context model like Gemini 2.5 Pro or GPT-4.1.
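A quick way to sanity-check whether a prompt will fit: the ~4-characters-per-token rule of thumb below is a rough approximation (exact counts require the model's own tokenizer, e.g. loaded from Qwen/Qwen2.5-Coder-32B-Instruct on Hugging Face):

```python
CONTEXT_WINDOW = 131_072  # tokens, from this page
MAX_OUTPUT = 8_192        # reserve headroom for the model's reply

def fits_in_context(prompt: str, chars_per_token: float = 4.0) -> bool:
    """Heuristic: estimate prompt tokens and leave room for the output."""
    est_tokens = len(prompt) / chars_per_token
    return est_tokens + MAX_OUTPUT <= CONTEXT_WINDOW

print(fits_in_context("def hello():\n    pass"))  # True
```

The 4-chars-per-token figure skews low for code-heavy input, so treat a near-limit result as "tokenize properly before sending".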

Is Qwen2.5-Coder 32B open source?

Yes — it is released under the Apache-2.0 license; the weights are downloadable from the provider or from Hugging Face.

What is Qwen2.5-Coder 32B's SWE-Bench score?

Qwen2.5-Coder 32B has no reported score on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is also not reported.

What are the best alternatives to Qwen2.5-Coder 32B?

Closest alternatives by tier and score: DeepSeek V3, Phi-4, Llama 3.1 405B Instruct. See the alternatives section on this page for side-by-side numbers.


Related: DeepSeek V3 · Phi-4 · Llama 3.1 405B Instruct · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try Qwen2.5-Coder 32B now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open Alibaba playground → Try via OpenRouter →