LLM Rank.top


o1

First-generation OpenAI reasoning model (predecessor to o3).

Composite 75.7 · Rank #10 of 30 · OpenAI · Frontier tier · Closed / proprietary · Released 2024-12
Try o1 → Compare with Gemini 2.5 Pro → Or route via OpenRouter →

At a glance

Provider: OpenAI
Released: 2024-12
Tier: frontier
License: Closed / proprietary
Modalities: text, image
Context window: 200k tokens
Max output: 100k tokens
API price · input: $15.00 / 1M tokens
API price · output: $60.00 / 1M tokens

Benchmark performance

How o1 stacks up against the current leader (GPT-5) and the median model on the leaderboard:

Benchmark             o1      GPT-5   Median
Chatbot Arena Elo     1355    1410    1320
MMLU-Pro              83.5    86.8    78.0
GPQA Diamond          78.0    87.3    65.0
MATH                  94.8    96.7    78.3
HumanEval             89.5    95.1    92.0
SWE-Bench Verified    48.9    74.9    49.0

Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.

Don't lock yourself into one provider.

OpenRouter routes your requests across o1, GPT-5, Claude, Gemini, and 100+ other models behind a single API key — pay-as-you-go, no monthly minimum. Try OpenRouter → (affiliate · supports this site)
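A minimal sketch of that setup, using the OpenAI Python SDK pointed at OpenRouter's OpenAI-compatible endpoint (the key and prompt are placeholders; model slugs follow OpenRouter's provider/model convention):

```python
# pip install openai
from openai import OpenAI

# OpenRouter exposes an OpenAI-compatible API, so the stock SDK works
# unchanged: one key, any model on the router.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # placeholder: your OpenRouter key
)

response = client.chat.completions.create(
    model="openai/o1",  # swap to e.g. "google/gemini-2.5-pro" to A/B test
    messages=[{"role": "user", "content": "Outline a migration plan to microservices."}],
)
print(response.choices[0].message.content)
```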

What does o1 cost in practice?

API pricing is $15.00 per 1M input tokens and $60.00 per 1M output tokens. Assuming a 50/50 input/output split, a 30-day month, and a 365-day year, here is what that looks like at three workload sizes (a small cost-model sketch follows the table):

Volume             Per day     Per month    Per year
1M tokens/day      $37.50      $1,125       $13,688
10M tokens/day     $375.00     $11,250      $136,875
100M tokens/day    $3,750      $112,500     $1,368,750
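The arithmetic behind the table, as a short sketch (the 50/50 split, 30-day month, and 365-day year are the same assumptions used above):

```python
# Back-of-envelope cost model for o1 at list prices.
PRICE_IN = 15.00   # USD per 1M input tokens
PRICE_OUT = 60.00  # USD per 1M output tokens

def daily_cost(millions_per_day: float, input_share: float = 0.5) -> float:
    """USD per day at a given volume, assuming input_share of tokens are input."""
    return millions_per_day * (input_share * PRICE_IN + (1 - input_share) * PRICE_OUT)

for volume in (1, 10, 100):  # millions of tokens per day
    day = daily_cost(volume)
    print(f"{volume:>4}M/day: ${day:,.2f}/day · ${day * 30:,.0f}/month · ${day * 365:,.0f}/year")
```

The split matters more than it looks: output tokens cost 4x input tokens, so a generation-heavy 20/80 workload runs roughly a third more than the 50/50 figures above.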

Strengths & weaknesses

Where it shines

  • GPQA: 78.0 (rank #6 of 29, above average)
  • MATH: 94.8 (rank #6 of 29, above average)

Where it lags

  • SWE-Bench: 48.9 (rank #14 of 18, lower third)

Best alternatives

The closest models to o1 by tier and benchmark score:

Model             Provider    Score   $ in / out         Context   Action
Gemini 2.5 Pro    Google      80.9    $1.25 / $10.00     2M        Try → · vs o1
Claude Opus 4.1   Anthropic   83.6    $15.00 / $75.00    200k      Try → · vs o1
Grok 4            xAI         83.6    $3.00 / $15.00     256k      Try → · vs o1
o3                OpenAI      83.7    $2.00 / $8.00      200k      Try → · vs o1
GPT-5             OpenAI      86.0    $1.25 / $10.00     400k      Try → · vs o1

Frequently asked questions

Is o1 a good model?

o1 scores 75.7 on the llmrank.top composite (rank #10 of 30), putting it in the top third of the board but well behind the leader, GPT-5, at 86.0. Whether it's the right fit depends on your workload; see the use-case discussion on this page.

How much does o1 cost?

o1 is priced at $15.00 per 1M input tokens and $60.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $375.00/day or $136,875/year.

What is o1's context window?

200k tokens. That covers most multi-document workloads.
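If you want to verify a fit before sending, here is a rough sketch using the tiktoken tokenizer (o200k_base is the encoding family for recent OpenAI models; treat the counts as an approximation for o1):

```python
# pip install tiktoken
import tiktoken

CONTEXT_WINDOW = 200_000  # o1's window, per the table above

enc = tiktoken.get_encoding("o200k_base")

def fits_in_context(documents: list[str], reserve_for_output: int = 10_000) -> bool:
    """Rough check that documents fit o1's window, leaving headroom for output."""
    total = sum(len(enc.encode(doc)) for doc in documents)
    print(f"~{total:,} prompt tokens across {len(documents)} document(s)")
    return total + reserve_for_output <= CONTEXT_WINDOW

fits_in_context(["first report text...", "second report text..."])
```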

Is o1 open source?

No — it is a closed (proprietary) model accessed only via the provider API.

What is o1's SWE-Bench score?

o1 scores 48.9% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1355.

What are the best alternatives to o1?

Closest alternatives by tier and score: Gemini 2.5 Pro, Claude Opus 4.1, Grok 4. See the alternatives section on this page for side-by-side numbers.


Related: Gemini 2.5 Pro · Claude Opus 4.1 · Grok 4 · Full leaderboard

Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.

Try o1 now

Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.

Open OpenAI playground → Try via OpenRouter →
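For direct access, a minimal sketch with the official OpenAI Python SDK (assumes OPENAI_API_KEY is set in your environment; note that reasoning models take max_completion_tokens, which also covers hidden reasoning tokens, rather than max_tokens):

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": "Plan the steps to refactor a legacy cron job into a queue worker."}],
    max_completion_tokens=4_000,  # budget for visible output plus reasoning tokens
)
print(response.choices[0].message.content)
```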