LLM Rank.top


Claude 3.7 Sonnet

Hybrid reasoning Sonnet — toggleable extended thinking for hard problems.

Composite 76.0 · Rank #9 of 30 · Anthropic · general-purpose · Closed / proprietary · Released 2025-02
Try Claude 3.7 Sonnet → Compare with GPT-4.1 → Or route via OpenRouter →

At a glance

Provider: Anthropic
Released: 2025-02
Tier: general-purpose
License: Closed / proprietary
Modalities: text, image
Context window: 200k tokens
Max output: 64k tokens
API price · input: $3.00 / 1M tokens
API price · output: $15.00 / 1M tokens

Benchmark performance

How Claude 3.7 Sonnet stacks up against the current leader (GPT-5) and the median model in the leaderboard:

Benchmark · Claude 3.7 Sonnet · GPT-5 · Median
Chatbot Arena Elo · 1340 · 1410 · 1320
MMLU-Pro · 83.5 · 86.8 · 78.0
GPQA Diamond · 71.8 · 87.3 · 65.0
MATH · 89.0 · 96.7 · 78.3
HumanEval · 92.0 · 95.1 · 92.0
SWE-Bench Verified · 62.3 · 74.9 · 49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't lock yourself into one provider.

OpenRouter routes your requests across Claude 3.7 Sonnet, GPT-5, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)

What does Claude 3.7 Sonnet cost in practice?

API pricing is $3.00 per 1M input tokens and $15.00 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:

Volume · Per day · Per month · Per year
1M tokens/day · $9.00 · $270 · $3,285
10M tokens/day · $90.00 · $2,700 · $32,850
100M tokens/day · $900.00 · $27,000 · $328,500
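The table's arithmetic can be reproduced with a short sketch — a blended per-token rate under the same 50/50 input/output assumption (the function name and split parameter are illustrative, not part of any provider SDK):

```python
# Cost sketch for Claude 3.7 Sonnet API pricing ($3 / 1M input tokens,
# $15 / 1M output tokens), using the table's 50/50 input/output split.

PRICE_IN = 3.00    # USD per 1M input tokens
PRICE_OUT = 15.00  # USD per 1M output tokens

def daily_cost(tokens_per_day: float, input_share: float = 0.5) -> float:
    """Blended USD cost for one day of traffic at the given input share."""
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * PRICE_IN + (1 - input_share) * PRICE_OUT)

for volume in (1e6, 10e6, 100e6):
    day = daily_cost(volume)
    print(f"{volume / 1e6:>5.0f}M tokens/day -> ${day:,.2f}/day, "
          f"${day * 30:,.0f}/month, ${day * 365:,.0f}/year")
```

Shifting `input_share` toward 1.0 drops the blended rate toward $3/1M, which matters for retrieval-heavy workloads where prompts dwarf completions.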

Strengths & weaknesses

Where it shines

  • HumanEval: 92.0 (rank #8 of 30, above average)
  • MMLU-Pro: 83.5 (rank #8 of 29, above average)

Where it lags

No clear weaknesses across published benchmarks.

Best alternatives

The closest models to Claude 3.7 Sonnet by tier and benchmark score:

Model · Score · $ in / out · Context · Action
GPT-4.1 (OpenAI) · 74.5 · $2.00 / $8.00 · 1M · Try → · vs Claude
Claude Sonnet 4 (Anthropic) · 80.7 · $3.00 / $15.00 · 200k · Try → · vs Claude
Grok 3 (xAI) · 81.7 · $3.00 / $15.00 · 1M · Try → · vs Claude
Claude 3.5 Sonnet (Anthropic) · 69.1 · $3.00 / $15.00 · 200k · Try → · vs Claude
Gemini 1.5 Pro (Google) · 67.9 · $1.25 / $5.00 · 2M · Try → · vs Claude

Frequently asked questions

Is Claude 3.7 Sonnet a good model?

Claude 3.7 Sonnet scores 76.0 on the llmrank.top composite (rank #9 of 30). It is competitive with the leaders (GPT-5 tops the board at 86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.

How much does Claude 3.7 Sonnet cost?

Claude 3.7 Sonnet is priced at $3.00 per 1M input tokens and $15.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $90.00/day or $32,850/year.

What is Claude 3.7 Sonnet's context window?

200k tokens. That covers most multi-document workloads.
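To sanity-check whether a document set fits, a rough estimate is enough before calling the API. The sketch below uses the common ~4-characters-per-token heuristic — an approximation, not Anthropic's actual tokenizer, and real counts vary with content and language — and reserves room for the model's 64k-token output:

```python
# Rough fit check against Claude 3.7 Sonnet's 200k-token context window,
# using the ~4 chars/token heuristic (approximate; not the real tokenizer).

CONTEXT_WINDOW = 200_000
MAX_OUTPUT = 64_000

def approx_tokens(text: str) -> int:
    """Crude token estimate: about one token per 4 characters."""
    return max(1, len(text) // 4)

def fits_in_context(docs: list[str], reserved_for_output: int = MAX_OUTPUT) -> bool:
    """True if the combined prompt plus output budget stays under the window."""
    budget = CONTEXT_WINDOW - reserved_for_output
    return sum(approx_tokens(d) for d in docs) <= budget

print(fits_in_context(["word " * 10_000]))  # ~12.5k tokens of prompt -> True
```

For exact numbers, count tokens with the provider's own tooling before relying on an estimate like this.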

Is Claude 3.7 Sonnet open source?

No — it is a closed (proprietary) model accessed only via the provider API.

What is Claude 3.7 Sonnet's SWE-Bench score?

Claude 3.7 Sonnet scores 62.3% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1340.

What are the best alternatives to Claude 3.7 Sonnet?

Closest alternatives by tier and score: GPT-4.1, Claude Sonnet 4, Grok 3. See the alternatives section on this page for side-by-side numbers.


Related: GPT-4.1 · Claude Sonnet 4 · Grok 3 · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try Claude 3.7 Sonnet now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open Anthropic playground → Try via OpenRouter →