LLM Rank.top


Grok 4

xAI's reasoning model — leading on Humanity's Last Exam at release.

Composite: 83.6 · Rank: #3 of 30 · Provider: xAI · Tier: frontier · License: Closed / proprietary · Released: 2025-07
Try Grok 4 → Compare with Claude Opus 4.1 → Or route via OpenRouter →

At a glance

Provider             xAI
Released             2025-07
Tier                 frontier
License              Closed / proprietary
Modalities           text, image
Context window       256k tokens
Max output           64k tokens
API price · input    $3.00 / 1M tokens
API price · output   $15.00 / 1M tokens

Benchmark performance

How Grok 4 stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark            Grok 4   GPT-5   Median
Chatbot Arena Elo    1378     1410    1320
MMLU-Pro             86.6     86.8    78.0
GPQA Diamond         87.7     87.3    65.0
MATH                 95.0     96.7    78.3
HumanEval            93.0     95.1    92.0
SWE-Bench Verified   72.0     74.9    49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't get locked into one provider.

OpenRouter routes your requests across Grok 4, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)

What does Grok 4 cost in practice?

API pricing is $3.00 per 1M input tokens and $15.00 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:

Volume            Per day   Per month   Per year
1M tokens/day     $9.00     $270        $3,285
10M tokens/day    $90.00    $2,700      $32,850
100M tokens/day   $900.00   $27,000     $328,500
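The table above is straightforward arithmetic on the published per-token rates. As a minimal sketch (the rates come from this page; the function name and the fixed 50/50 split are illustrative assumptions, and real workloads rarely split evenly):

```python
# Grok 4 API rates as listed on this page (USD per 1M tokens).
INPUT_PRICE = 3.00
OUTPUT_PRICE = 15.00

def daily_cost(millions_per_day: float, input_share: float = 0.5) -> float:
    """Estimated USD/day for a given daily token volume and input/output split."""
    blended = input_share * INPUT_PRICE + (1 - input_share) * OUTPUT_PRICE
    return millions_per_day * blended

for m in (1, 10, 100):
    d = daily_cost(m)
    print(f"{m}M tokens/day: ${d:,.2f}/day, ${d * 30:,.0f}/month, ${d * 365:,.0f}/year")
# → 1M tokens/day: $9.00/day, $270/month, $3,285/year
```

Adjusting `input_share` matters: an output-heavy workload (say 30/70) pays closer to the $15 output rate, so the blended price rises from $9.00 to $11.40 per 1M tokens.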

Strengths & weaknesses

Where it shines

  • GPQA: 87.7 (rank #2 of 29, top tier)
  • MMLU-Pro: 86.6 (rank #3 of 29, top tier)
  • HumanEval: 93.0 (rank #4 of 30, top tier)

Where it lags

No clear weaknesses across published benchmarks.

Best alternatives

The closest models to Grok 4 by tier and benchmark score:

Model             Provider    Score   $ in / out        Context   Action
Claude Opus 4.1   Anthropic   83.6    $15.00 / $75.00   200k      Try → · vs Grok
o3                OpenAI      83.7    $2.00 / $8.00     200k      Try → · vs Grok
GPT-5             OpenAI      86.0    $1.25 / $10.00    400k      Try → · vs Grok
Gemini 2.5 Pro    Google      80.9    $1.25 / $10.00    2M        Try → · vs Grok
o1                OpenAI      75.7    $15.00 / $60.00   200k      Try → · vs Grok

Frequently asked questions

Is Grok 4 a good model?

Grok 4 scores 83.6 on the llmrank.top composite (rank #3 of 30). It is in the top tier (the leader, GPT-5, scores 86.0). Whether it's the right fit depends on your workload — see the strengths & weaknesses section on this page.

How much does Grok 4 cost?

Grok 4 is priced at $3.00 per 1M input tokens and $15.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $90.00/day or $32,850/year.

What is Grok 4's context window?

256k tokens. That covers most multi-document workloads.

Is Grok 4 open source?

No — it is a closed (proprietary) model accessed only via the provider API.

What is Grok 4's SWE-Bench score?

Grok 4 scores 72.0% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1378.

What are the best alternatives to Grok 4?

Closest alternatives by tier and score: Claude Opus 4.1, o3, GPT-5. See the alternatives section on this page for side-by-side numbers.


Related: Claude Opus 4.1 · o3 · GPT-5 · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try Grok 4 now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open xAI playground → Try via OpenRouter →