LLM Rank.top


Claude 3.5 Haiku

Compact, low-latency Claude for high-volume tasks.

Composite score 56.2 · Rank #28 of 30 · Anthropic · fast / cheap tier · Closed / proprietary · Released 2024-11
Try Claude 3.5 Haiku → Compare with GPT-4o mini → Or route via OpenRouter →

At a glance

Provider: Anthropic
Released: 2024-11
Tier: fast / cheap
License: Closed / proprietary
Modalities: text
Context window: 200k tokens
Max output: 8,192 tokens
API price · input: $0.80 / 1M tokens
API price · output: $4.00 / 1M tokens

Benchmark performance

How Claude 3.5 Haiku stacks up against the current leader (GPT-5) and the median model on the leaderboard:

Benchmark            Claude 3.5 Haiku   GPT-5   Median
Chatbot Arena Elo          1240          1410    1320
MMLU-Pro                   65.0          86.8    78.0
GPQA Diamond               41.6          87.3    65.0
MATH                       69.4          96.7    78.3
HumanEval                  88.1          95.1    92.0
SWE-Bench Verified         40.6          74.9    49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't lock yourself into one provider.

OpenRouter routes your requests across Claude 3.5 Haiku, GPT-5, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
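As a concrete illustration, here is a minimal sketch of a single-key request through OpenRouter's OpenAI-compatible chat-completions endpoint. The model slug and prompt are assumptions for the example; check OpenRouter's model list for the current identifier.

```python
# Minimal sketch: one chat request via OpenRouter's OpenAI-compatible
# endpoint. The model slug "anthropic/claude-3.5-haiku" is an assumption;
# check https://openrouter.ai/models for the current identifier.
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "anthropic/claude-3.5-haiku",
        "messages": [{"role": "user", "content": "Summarize this ticket in one line."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Swapping models is a one-string change to the "model" field, which is the point of routing through a single gateway.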

What does Claude 3.5 Haiku cost in practice?

API pricing is $0.80 per 1M input tokens and $4.00 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:

Volume            Per day   Per month    Per year
1M tokens/day     $2.40     $72.00       $876.00
10M tokens/day    $24.00    $720.00      $8,760.00
100M tokens/day   $240.00   $7,200.00    $87,600.00
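If your traffic doesn't split 50/50, the same arithmetic is easy to rerun. A minimal sketch, assuming a 30-day month and the listed prices:

```python
# Back-of-envelope cost model behind the table above: a 50/50
# input/output split at Claude 3.5 Haiku's listed per-token prices.
INPUT_PRICE = 0.80   # USD per 1M input tokens
OUTPUT_PRICE = 4.00  # USD per 1M output tokens

def daily_cost(tokens_per_day: float, input_share: float = 0.5) -> float:
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * INPUT_PRICE + (1 - input_share) * OUTPUT_PRICE)

for volume in (1e6, 10e6, 100e6):
    per_day = daily_cost(volume)
    print(f"{volume/1e6:>5.0f}M tokens/day: ${per_day:,.2f}/day  "
          f"${per_day*30:,.2f}/month  ${per_day*365:,.2f}/year")
```

Shifting input_share toward 1.0 (read-heavy workloads) pulls the blended rate toward the cheaper $0.80 input price.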

Strengths & weaknesses

Where it shines

No top-tier benchmark strengths — its appeal is price and latency for high-volume tasks rather than raw capability.

Where it lags

  • Arena: 1240 (rank #26 of 27, below average)
  • MMLU-Pro: 65.0 (rank #27 of 29, below average)
  • GPQA: 41.6 (rank #27 of 29, below average)

Best alternatives

The closest models to Claude 3.5 Haiku by tier and benchmark score:

Model              Provider   Score   $ in / out      Context   Action
GPT-4o mini        OpenAI     61.3    $0.15 / $0.60   128k      Try → · vs Claude
Gemini 2.0 Flash   Google     65.6    $0.10 / $0.40   1M        Try → · vs Claude
o3-mini            OpenAI     72.7    $1.10 / $4.40   200k      Try → · vs Claude
Gemini 2.5 Flash   Google     73.3    $0.30 / $2.50   1M        Try → · vs Claude
GPT-5 mini         OpenAI     77.0    $0.25 / $2.00   400k      Try → · vs Claude

Frequently asked questions

Is Claude 3.5 Haiku a good model?

Claude 3.5 Haiku scores 56.2 on the llmrank.top composite (rank #28 of 30). It trails the frontier — for top performance, look at GPT-5 (86.0). Whether it's the right fit depends on your workload; see the strengths & weaknesses section above.

How much does Claude 3.5 Haiku cost?

Claude 3.5 Haiku is priced at $0.80 per 1M input tokens and $4.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $24.00/day or $8,760/year.

What is Claude 3.5 Haiku's context window?

200k tokens. That covers most multi-document workloads.
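For a rough sense of what fits, here is a sketch using the common ~4-characters-per-token heuristic for English text. The ratio is an approximation, not the actual tokenizer; use the provider's token counter for real budgeting.

```python
# Rough fit check against the 200k-token context window, reserving room
# for the 8,192-token max output. CHARS_PER_TOKEN is a heuristic
# assumption (~4 chars/token for English), not an exact tokenizer.
CONTEXT_WINDOW = 200_000
MAX_OUTPUT = 8_192
CHARS_PER_TOKEN = 4

def fits(documents: list[str]) -> bool:
    est_tokens = sum(len(d) for d in documents) // CHARS_PER_TOKEN
    return est_tokens <= CONTEXT_WINDOW - MAX_OUTPUT

print(fits(["a" * 400_000]))  # ~100k estimated tokens -> True
```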

Is Claude 3.5 Haiku open source?

No — it is a closed (proprietary) model accessed only via the provider API.

What is Claude 3.5 Haiku's SWE-Bench score?

Claude 3.5 Haiku scores 40.6% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1240.

What are the best alternatives to Claude 3.5 Haiku?

Closest alternatives by tier and score: GPT-4o mini, Gemini 2.0 Flash, o3-mini. See the alternatives section on this page for side-by-side numbers.


Related: GPT-4o mini · Gemini 2.0 Flash · o3-mini · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try Claude 3.5 Haiku now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open Anthropic playground → Try via OpenRouter →
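If you'd rather call the API directly than use the playground, here is a minimal sketch using the Anthropic Python SDK (pip install anthropic). The model ID alias below is an assumption; check Anthropic's model documentation for the current identifier.

```python
# Minimal sketch: calling Claude 3.5 Haiku directly via the Anthropic
# Python SDK. The model ID "claude-3-5-haiku-latest" is an assumption;
# check Anthropic's model docs for the current identifier.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
message = client.messages.create(
    model="claude-3-5-haiku-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Classify this email as spam or not spam: ..."}],
)
print(message.content[0].text)
```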