LLM Rank.top


Codestral 25.01

Mistral's coding specialist — top-tier FIM/autocomplete latency for IDE integration.

Composite rank: #30 of 30 · Mistral AI · General-purpose tier · Closed / proprietary · Released 2025-01
Try Codestral 25.01 → Compare with Command R+ → Or route via OpenRouter →

At a glance

Provider: Mistral AI
Released: 2025-01
Tier: general-purpose
License: Closed / proprietary
Modalities: text
Context window: 256k tokens
Max output: 8,192 tokens
API price · input: $0.30 / 1M tokens
API price · output: $0.90 / 1M tokens

Benchmark performance

How Codestral 25.01 stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark | Codestral 25.01 | GPT-5 | Median
Chatbot Arena Elo | N/A | 1410 | 1320
MMLU-Pro | N/A | 86.8 | 78.0
GPQA Diamond | N/A | 87.3 | 65.0
MATH | N/A | 96.7 | 78.3
HumanEval | 86.6 | 95.1 | 92.0
SWE-Bench Verified | N/A | 74.9 | 49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't get locked into one provider.

OpenRouter routes your requests across Codestral 25.01, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
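As a rough illustration of the single-key model, here is a minimal sketch using OpenRouter's OpenAI-compatible endpoint with the standard openai Python SDK. The model slugs shown ("mistralai/codestral-2501", "openai/gpt-5") are assumptions; check the OpenRouter model list for the exact identifiers.

```python
# Minimal sketch: routing requests through OpenRouter's OpenAI-compatible API
# with the standard `openai` Python SDK. Model slugs below are assumptions --
# verify them against OpenRouter's model list.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",  # one key covers every routed model
)

def ask(model: str, prompt: str) -> str:
    """Send the same prompt to any routed model and return the reply text."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# A/B the same prompt across providers without changing keys or client code.
print(ask("mistralai/codestral-2501", "Write a binary search in Python."))
print(ask("openai/gpt-5", "Write a binary search in Python."))
```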

What does Codestral 25.01 cost in practice?

API pricing is $0.30 per 1M input tokens and $0.90 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:

Volume | Per day | Per month | Per year
1M tokens/day | $0.60 | $18.00 | $219.00
10M tokens/day | $6.00 | $180.00 | $2,190.00
100M tokens/day | $60.00 | $1,800.00 | $21,900.00
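The table assumes a fixed 50/50 split; if your traffic skews differently, the arithmetic is easy to rerun. A minimal sketch (the daily_cost helper and its input_share parameter are ours, purely for illustration):

```python
# Back-of-the-envelope cost estimate for Codestral 25.01 at the listed rates,
# using the same 50/50 input/output split assumed in the table above.
INPUT_USD_PER_M = 0.30   # $ per 1M input tokens
OUTPUT_USD_PER_M = 0.90  # $ per 1M output tokens

def daily_cost(tokens_per_day: float, input_share: float = 0.5) -> float:
    """Blended daily cost in USD for a given total daily token volume."""
    millions = tokens_per_day / 1_000_000
    blended_rate = input_share * INPUT_USD_PER_M + (1 - input_share) * OUTPUT_USD_PER_M
    return millions * blended_rate

for volume in (1_000_000, 10_000_000, 100_000_000):
    per_day = daily_cost(volume)
    print(f"{volume / 1e6:>5.0f}M tokens/day: "
          f"${per_day:,.2f}/day  ${per_day * 30:,.2f}/month  ${per_day * 365:,.2f}/year")
```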

Strengths & weaknesses

Where it shines

No clear top-tier strengths in the benchmarks we track; on the data available, this is a mid-pack model.

Where it lags

  • HumanEval: 86.6 (rank #25 of 30, below average)

Best alternatives

The closest models to Codestral 25.01 by tier and benchmark score:

Model | Score | $ in / out | Context | Action
Command R+ (Cohere) | 47.0 | $2.50 / $10.00 | 128k | Try → · vs Codestral
Mistral Large 2 (Mistral AI) | 63.7 | $2.00 / $6.00 | 128k | Try → · vs Codestral
GPT-4o (OpenAI) | 66.8 | $2.50 / $10.00 | 128k | Try → · vs Codestral
Gemini 1.5 Pro (Google) | 67.9 | $1.25 / $5.00 | 2M | Try → · vs Codestral
Claude 3.5 Sonnet (Anthropic) | 69.1 | $3.00 / $15.00 | 200k | Try → · vs Codestral

Frequently asked questions

Is Codestral 25.01 a good model?

Codestral 25.01 is ranked among the 30 frontier and open LLMs tracked on llmrank.top. Public benchmark coverage for it is sparse, so we can't compute a full composite score; see the benchmark table above for the data that is available.

How much does Codestral 25.01 cost?

Codestral 25.01 is priced at $0.30 per 1M input tokens and $0.90 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $6.00/day or $2,190/year.

What is Codestral 25.01's context window?

256k tokens. That covers most multi-document workloads.

Is Codestral 25.01 open source?

No — it is a closed (proprietary) model accessed only via the provider API.

What is Codestral 25.01's SWE-Bench score?

Codestral 25.01 has no publicly reported score on SWE-Bench Verified, the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is also not reported.

What are the best alternatives to Codestral 25.01?

Closest alternatives by tier and score: Command R+, Mistral Large 2, GPT-4o. See the alternatives section on this page for side-by-side numbers.


Related: Command R+ · Mistral Large 2 · GPT-4o · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try Codestral 25.01 now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open Mistral AI playground → Try via OpenRouter →
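If you would rather script it than click through the playground, here is a minimal fill-in-the-middle sketch assuming the official mistralai Python SDK; the model name "codestral-latest" and the exact response shape are assumptions to verify against Mistral's docs.

```python
# Minimal fill-in-the-middle (FIM) sketch against the Mistral API -- the
# autocomplete mode the IDE integrations build on. Assumes the `mistralai`
# Python SDK; the model name "codestral-latest" is an assumption, so check
# the docs for the exact Codestral 25.01 identifier.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# The model fills the gap between `prompt` (code before the cursor) and
# `suffix` (code after the cursor).
resp = client.fim.complete(
    model="codestral-latest",
    prompt="def fibonacci(n: int) -> int:\n    ",
    suffix="\n\nprint(fibonacci(10))",
)
print(resp.choices[0].message.content)  # the generated infill
```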