GPT-4.1
1M-token context model optimized for coding and long-document tasks.
At a glance
| Provider | OpenAI |
|---|---|
| Released | 2025-04 |
| Tier | general-purpose |
| License | Closed / proprietary |
| Modalities | text, image |
| Context window | 1M tokens |
| Max output | 32,768 tokens |
| API price · input | $2.00 / 1M tokens |
| API price · output | $8.00 / 1M tokens |
Benchmark performance
How GPT-4.1 stacks up against the current leader (GPT-5) and the median model on the leaderboard:
| Benchmark | GPT-4.1 | GPT-5 | Median |
|---|---|---|---|
| Chatbot Arena Elo | 1380 | 1410 | 1320 |
| MMLU-Pro | 80.1 | 86.8 | 78.0 |
| GPQA Diamond | 66.3 | 87.3 | 65.0 |
| MATH | 87.0 | 96.7 | 78.3 |
| HumanEval | 92.0 | 95.1 | 92.0 |
| SWE-Bench Verified | 54.6 | 74.9 | 49.0 |
Numbers compiled from provider technical reports and Chatbot Arena. See methodology for the composite-score formula.
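The gaps in the table can be quantified directly. A minimal sketch, with the scores hardcoded from the table above, printing each benchmark's delta versus the leader and versus the median:

```python
# Benchmark scores from the table above: (GPT-4.1, GPT-5, leaderboard median).
scores = {
    "MMLU-Pro":           (80.1, 86.8, 78.0),
    "GPQA Diamond":       (66.3, 87.3, 65.0),
    "MATH":               (87.0, 96.7, 78.3),
    "HumanEval":          (92.0, 95.1, 92.0),
    "SWE-Bench Verified": (54.6, 74.9, 49.0),
}

for name, (gpt41, leader, median) in scores.items():
    # Negative first delta = behind the leader; positive second = above the median.
    print(f"{name}: {gpt41 - leader:+.1f} vs leader, {gpt41 - median:+.1f} vs median")
```

The pattern is consistent: at or above the median everywhere, with the largest leader gaps on GPQA Diamond and SWE-Bench Verified.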
What does GPT-4.1 cost in practice?
API pricing is $2.00 per 1M input tokens and $8.00 per 1M output tokens. Assuming a 50/50 input/output split, here is what that looks like at three workload sizes:
| Volume | Per day | Per month (30 days) | Per year (365 days) |
|---|---|---|---|
| 1M tokens/day | $5.00 | $150.00 | $1,825.00 |
| 10M tokens/day | $50.00 | $1,500.00 | $18,250.00 |
| 100M tokens/day | $500.00 | $15,000.00 | $182,500.00 |
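These figures follow from simple arithmetic on the published per-token prices. A minimal sketch, assuming the same 50/50 input/output split, 30-day months, and 365-day years:

```python
IN_PRICE = 2.00   # USD per 1M input tokens
OUT_PRICE = 8.00  # USD per 1M output tokens

def daily_cost(tokens_per_day: float, input_share: float = 0.5) -> float:
    """Blended daily cost in USD for a given volume and input/output split."""
    millions = tokens_per_day / 1_000_000
    return millions * (input_share * IN_PRICE + (1 - input_share) * OUT_PRICE)

for volume in (1_000_000, 10_000_000, 100_000_000):
    day = daily_cost(volume)
    print(f"{volume / 1e6:.0f}M tokens/day -> "
          f"${day:,.2f}/day, ${day * 30:,.2f}/month, ${day * 365:,.2f}/year")
```

Adjusting `input_share` matters: output tokens cost 4x input tokens here, so a generation-heavy 20/80 split nearly doubles the blended rate versus an 80/20 retrieval-heavy one.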
Strengths & weaknesses
Where it shines
- Arena: 1380 (rank #6 of 27, above average)
Where it lags
No clear weaknesses relative to the leaderboard median, though it trails the leader, GPT-5, by wide margins on GPQA Diamond (66.3 vs 87.3) and SWE-Bench Verified (54.6 vs 74.9).
Best alternatives
The closest models to GPT-4.1 by tier and benchmark score:
| Model | Score | Price in/out (per 1M) | Context | Action |
|---|---|---|---|---|
| Claude 3.7 Sonnet (Anthropic) | 76.0 | $3.00 / $15.00 | 200k | Try → · vs GPT-4.1 |
| Claude 3.5 Sonnet (Anthropic) | 69.1 | $3.00 / $15.00 | 200k | Try → · vs GPT-4.1 |
| Claude Sonnet 4 (Anthropic) | 80.7 | $3.00 / $15.00 | 200k | Try → · vs GPT-4.1 |
| Gemini 1.5 Pro (Google) | 67.9 | $1.25 / $5.00 | 2M | Try → · vs GPT-4.1 |
| Grok 3 (xAI) | 81.7 | $3.00 / $15.00 | 1M | Try → · vs GPT-4.1 |
Frequently asked questions
Is GPT-4.1 a good model?
GPT-4.1 scores 74.5 on the llmrank.top composite (rank #12 of 30). It is competitive with the leaders (GPT-5 tops the board at 86.0). Whether it's the right fit depends on your workload — see the use-case discussion on this page.
How much does GPT-4.1 cost?
GPT-4.1 is priced at $2.00 per 1M input tokens and $8.00 per 1M output tokens (USD). For a 10M-token-per-day workload split 50/50, that works out to roughly $50.00/day or $18,250/year.
What is GPT-4.1's context window?
1M tokens. That is large enough to ingest entire codebases or full books in one prompt.
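As a back-of-the-envelope check on whether a document fits in the window, a common rule of thumb is roughly 4 characters per English token. This is an assumption for illustration only; exact counts require the model's actual tokenizer:

```python
def rough_token_count(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate using the ~4-chars-per-token heuristic."""
    return int(len(text) / chars_per_token)

# A 300-page book at ~2,000 characters per page is ~600k characters,
# i.e. roughly 150k tokens, comfortably inside a 1M-token window.
book_chars = 300 * 2000
print(rough_token_count("x" * book_chars))  # prints 150000
```

By the same heuristic, a 1M-token window holds on the order of 4M characters of prose in a single prompt.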
Is GPT-4.1 open source?
No — it is a closed (proprietary) model accessed only via the provider API.
What is GPT-4.1's SWE-Bench score?
GPT-4.1 scores 54.6% on SWE-Bench Verified — the benchmark that measures real-world GitHub issue resolution. Its Chatbot Arena Elo is 1380.
What are the best alternatives to GPT-4.1?
Closest alternatives by tier and score: Claude 3.7 Sonnet, Claude 3.5 Sonnet, Claude Sonnet 4. See the alternatives section on this page for side-by-side numbers.
Related: Claude 3.7 Sonnet · Claude 3.5 Sonnet · Claude Sonnet 4 · Full leaderboard
Spotted out-of-date numbers? Open an issue — corrections usually ship within 24h.
Try GPT-4.1 now
Direct link to the official playground — or use OpenRouter to A/B test it against any other model on a single API key.
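For A/B testing two models on the same prompt, the request body is the standard OpenAI-style chat format. A sketch that only builds the payloads, without sending them; the model slugs are illustrative assumptions, and an actual call needs an API key and HTTP client:

```python
import json

def build_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request body for one model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same prompt, two hypothetical model slugs -- only the "model" field changes.
for model in ("openai/gpt-4.1", "anthropic/claude-3.7-sonnet"):
    payload = build_request(model, "Summarize this repository's README.")
    print(json.dumps(payload))
```

Because only the `model` field differs, responses can be diffed side by side while everything else about the request stays constant.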