LLM Rank.top


Command R+

Cohere's RAG-tuned 104B model — strong multilingual and tool use.

Composite 47.0 · Rank #29 of 30 · Cohere · general-purpose · Open-weights (CC-BY-NC-4.0) · Released 2024-08
Try Command R+ → Compare with Mistral Large 2 → Or route via OpenRouter →

At a glance

Provider: Cohere
Released: 2024-08
Tier: general-purpose
License: Open-weights (CC-BY-NC-4.0)
Modalities: text
Context window: 128k tokens
Max output: 4,096 tokens
API price · input: $2.50 / 1M tokens
API price · output: $10.00 / 1M tokens
Hugging Face: CohereForAI/c4ai-command-r-plus-08-2024

Benchmark performance

How Command R+ stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark | Command R+ | GPT-5 | Median
Chatbot Arena Elo | 1216 | 1410 | 1320
MMLU-Pro | 53.0 | 86.8 | 78.0
GPQA Diamond | 42.5 | 87.3 | 65.0
MATH | 30.7 | 96.7 | 78.3
HumanEval | 70.7 | 95.1 | 92.0
SWE-Bench Verified | N/A | 74.9 | 49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't lock yourself into one provider.

OpenRouter routes your requests across Command R+, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
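OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so switching models is a one-line change to the model slug. A minimal request-building sketch, assuming the slug `cohere/command-r-plus` and an `OPENROUTER_API_KEY` environment variable — verify the exact slug in OpenRouter's model list:

```python
# Sketch: build a request to OpenRouter's OpenAI-compatible endpoint.
# The model slug "cohere/command-r-plus" is an assumption — check
# openrouter.ai/models for the exact id before use.
import json
import os
import urllib.request

def build_request(prompt: str) -> urllib.request.Request:
    payload = {
        "model": "cohere/command-r-plus",  # assumed slug
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarize RAG in one sentence.")
```

Sending `req` via `urllib.request.urlopen(req)` returns standard OpenAI-style JSON; swapping in any other slug (e.g. a GPT or Claude model) requires no other code changes.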

What does Command R+ cost in practice?

API pricing is $2.50 per 1M input tokens and $10.00 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:

Volume | Per day | Per month | Per year
1M tokens/day | $6.25 | $187.50 | $2,281
10M tokens/day | $62.50 | $1,875 | $22,813
100M tokens/day | $625.00 | $18,750 | $228,125

Strengths & weaknesses

Where it shines

No clear top-tier strengths — this is a mid-pack model.

Where it lags

  • Arena: 1216 (rank #27 of 27, below average)
  • MMLU-Pro: 53.0 (rank #29 of 29, below average)
  • MATH: 30.7 (rank #29 of 29, below average)

Best alternatives

The closest models to Command R+ by tier and benchmark score:

Model | Provider | Score | $ in / out | Context | Action
Mistral Large 2 | Mistral AI | 63.7 | $2.00 / $6.00 | 128k | Try → · vs Command
GPT-4o | OpenAI | 66.8 | $2.50 / $10.00 | 128k | Try → · vs Command
Gemini 1.5 Pro | Google | 67.9 | $1.25 / $5.00 | 2M | Try → · vs Command
Claude 3.5 Sonnet | Anthropic | 69.1 | $3.00 / $15.00 | 200k | Try → · vs Command
GPT-4.1 | OpenAI | 74.5 | $2.00 / $8.00 | 1M | Try → · vs Command

Frequently asked questions

Is Command R+ a good model?

Command R+ scores 47.0 on the llmrank.top composite (rank #29 of 30). It trails the frontier — for top performance, look at GPT-5 (86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.

How much does Command R+ cost?

Command R+ is priced at $2.50 per 1M input tokens and $10.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $62.50/day or $22,813/year.

What is Command R+'s context window?

128k tokens. For very long inputs, consider a 1M+ context model like Gemini 2.5 Pro or GPT-4.1.
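A quick way to sanity-check whether a prompt fits the 128k window is the common ~4-characters-per-token heuristic. This is a rough estimate, not Cohere's actual tokenizer, so leave headroom:

```python
# Rough check of whether a prompt fits Command R+'s 128k-token context.
# Uses the common ~4 chars/token heuristic, not Cohere's real tokenizer.
CONTEXT_WINDOW = 128_000

def estimated_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits(text: str, reserve_for_output: int = 4_096) -> bool:
    """True if the prompt plus reserved output tokens fits the window."""
    return estimated_tokens(text) + reserve_for_output <= CONTEXT_WINDOW

print(fits("hello world"))
```

The `reserve_for_output` default matches the model's 4,096-token max output; for an exact count, tokenize with the model's own tokenizer from the Hugging Face repo.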

Is Command R+ open source?

Partly — the weights are freely downloadable from the provider or Hugging Face under the CC-BY-NC-4.0 license, but the non-commercial restriction makes it open-weights rather than open source in the OSI sense.

What is Command R+'s SWE-Bench score?

Command R+ has no reported score on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1216.

What are the best alternatives to Command R+?

Closest alternatives by tier and score: Mistral Large 2, GPT-4o, Gemini 1.5 Pro. See the alternatives section on this page for side-by-side numbers.


Related: Mistral Large 2 · GPT-4o · Gemini 1.5 Pro · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try Command R+ now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open Cohere playground → Try via OpenRouter →