Claude Opus 4.7 went GA (General Availability) on April 16, 2026 — immediately available on Claude.ai, Anthropic API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry. Pricing stays at $5/$25 per 1M tokens (same as Opus 4.6), but the model is stronger across nearly every dimension — beating both GPT-5.4 and Gemini 3.1 Pro on 12 out of 14 benchmarks.
The biggest talking points: Coding +13% on SWE-bench, Vision 3.3x (supports images up to 2,576px), a new xhigh effort level between high/max, and /ultrareview in Claude Code for deep code reviews (3 free sessions/month for Pro/Max). The catch — Opus 4.7 uses 0-35% more tokens for the same task, meaning actual costs can rise even though per-token pricing is unchanged.
What Is Claude Opus 4.7?
Claude Opus 4.7 is Anthropic's flagship frontier model, released GA on April 16, 2026 — just ~6 weeks after Opus 4.6 (March 5). Its market positioning is clear: "Mythos that's safe for general use."
Anthropic had unveiled Claude Mythos Preview on April 7 — a model so good at security it autonomously found a 17-year-old FreeBSD zero-day — but deemed "too dangerous" to release GA. Opus 4.7 is "Mythos's sibling, retrained with safeguards from Project Glasswing" so it can be used widely.
Platforms Supporting Opus 4.7 at GA
| Platform | How to Access | Best For |
|---|---|---|
| Claude.ai (web + desktop) | Log in and select Opus 4.7 in the model picker | General users, Pro, Max |
| Anthropic API | Model ID: claude-opus-4-7 | Developers, startups |
| Amazon Bedrock | Via AWS Console — multi-region support | Organizations on AWS |
| Google Vertex AI | Via GCP Console | Organizations on GCP |
| Microsoft Foundry | Via Azure AI Foundry | Organizations on Azure |
Multi-cloud advantage: No vendor lock-in worry — organizations on any major cloud can use Opus 4.7 through that cloud directly, no need to switch to the API.
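For direct API users, a minimal sketch of what a request to Opus 4.7 could look like. The model ID `claude-opus-4-7` comes from the table above; the request shape follows the standard Anthropic Messages API format, but treat the exact fields as an assumption to verify against the official docs.

```python
# Sketch: assemble a Messages API request body for Opus 4.7.
# Model ID is from the article; field names follow the standard
# Messages API shape (verify against Anthropic's API reference).
import json

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Return a request body targeting claude-opus-4-7."""
    return {
        "model": "claude-opus-4-7",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Summarize this PR in three bullet points.")
print(json.dumps(body, indent=2))
```

On Bedrock, Vertex AI, or Foundry the same payload is wrapped in that cloud's own SDK call, so only the transport layer differs.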
4 Key New Features of Opus 4.7
The new capabilities fall into 4 main groups — Coding, Vision, Effort Control, Code Review.
1. Coding +13% — SWE-bench Verified Jumps from 80.8% → 87.6%
Opus 4.7 clearly improves on software engineering tasks, especially hard and long-running ones (long agentic tasks). On Rakuten-SWE-Bench — a real production code benchmark — Opus 4.7 resolves 3x more tasks than Opus 4.6, with double-digit gains in both Code Quality and Test Quality.
Another important addition: Opus 4.7 performs self-verification on long agentic tasks — running code, checking output, and fixing itself without needing human guidance (at the cost of more tokens — see pricing section).
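The run-check-fix pattern described above can be sketched as a generic loop. Everything here is hypothetical illustration — the article does not specify Anthropic's internal mechanism — but it shows why each retry consumes extra tokens.

```python
# Illustrative self-verification loop: generate a candidate, verify it,
# feed the failure back, and retry. All names here are made up for the
# sketch; this is not Anthropic's actual implementation.
def self_verify(generate, check, max_attempts: int = 3):
    """Retry generation with feedback until the check passes."""
    feedback = None
    for attempt in range(1, max_attempts + 1):
        candidate = generate(feedback)
        ok, feedback = check(candidate)
        if ok:
            return candidate, attempt
    return None, max_attempts

# Toy example: the generator fixes an off-by-one after one round of feedback.
def generate(feedback):
    return 42 if feedback else 41

def check(value):
    return (value == 42, None if value == 42 else "expected 42")

result, attempts = self_verify(generate, check)
```

Each extra attempt is another full generation pass, which is one reason token usage rises relative to Opus 4.6 (see the pricing section).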
2. Vision 3.3x — Supports Images up to 2,576px / 3.75 MP
Previous Claude models capped images at ~800 pixels, making detailed diagrams, specs, or 4K screenshots appear blurry. Opus 4.7 pushes the long side to 2,576 pixels (~3.75 MP) — more than triple. This unlocks serious vision-heavy use cases:
- Screenshot / Computer Use — read dense UI text accurately
- Document understanding — invoices, POs, receipts
- Technical diagrams — system architecture, wireframes, ER diagrams
- Charts + graphs — Excel/dashboard screenshots with small numbers
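To gauge how much detail survives an upload, you can compute the downscale a given screenshot would undergo under the 2,576 px long-side cap stated above. This is a standalone arithmetic sketch, not an Anthropic API.

```python
# Sketch: fit image dimensions to the 2,576 px long-side limit
# the article cites for Opus 4.7, preserving aspect ratio.
MAX_LONG_SIDE = 2576

def fit_to_limit(width: int, height: int) -> tuple[int, int]:
    """Scale dimensions so the long side is at most MAX_LONG_SIDE."""
    long_side = max(width, height)
    if long_side <= MAX_LONG_SIDE:
        return width, height
    scale = MAX_LONG_SIDE / long_side
    return round(width * scale), round(height * scale)

# A 4K screenshot (3840x2160) lands at 2576x1449 -- far more readable
# than the ~800 px cap of earlier models.
print(fit_to_limit(3840, 2160))
```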
3. xhigh — New Effort Level Between High/Max
Previously the effort parameter had five settings: none → low → medium → high → max. The issue: "high" was sometimes too shallow, "max" too slow and expensive. Anthropic added xhigh as an intermediate — use it when you need "deeper than high but not as heavy as max."
Selection guide: FAQ → none/low, basic analysis → medium, routine coding/reasoning → high, multi-step complex problems → xhigh, research/hardest proofs → max.
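The selection guide above restated as a simple lookup. The level names (none/low/medium/high/xhigh/max) come from the article; the task-kind keys and the default are assumptions for illustration.

```python
# Sketch: map task kinds to effort levels per the selection guide.
# Level names are from the article; task-kind keys are hypothetical.
EFFORT_BY_TASK = {
    "faq": "low",
    "basic_analysis": "medium",
    "routine_coding": "high",
    "complex_multistep": "xhigh",
    "research_proof": "max",
}

def pick_effort(task_kind: str) -> str:
    """Return an effort level, defaulting to 'medium' for unknown kinds."""
    return EFFORT_BY_TASK.get(task_kind, "medium")
```

Routing like this at the application layer keeps easy traffic cheap while reserving xhigh/max for the work that actually needs it.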
4. /ultrareview — New Claude Code Command
Claude Code gets a new command /ultrareview that opens a dedicated review session — reads all the changes and surfaces bugs + design issues that a rigorous reviewer would catch. Anthropic offers 3 free sessions/month for Pro and Max subscribers.
Benchmarks — vs GPT-5.4 + Gemini 3.1 Pro
Opus 4.7 wins 12 of 14 benchmarks Anthropic reported — clear leadership in coding, agentic tools, and reasoning, with tight competition from Gemini 3.1 Pro on multimodal reasoning.
| Benchmark | Opus 4.7 | Opus 4.6 | GPT-5.4 | Gemini 3.1 Pro |
|---|---|---|---|---|
| SWE-bench Verified | 87.6% | 80.8% | - | 80.6% |
| SWE-bench Pro | 64.3% | 53.4% | 57.7% | 54.2% |
| GPQA Diamond | 94.2% | 91.3% | 94.4% | 94.3% |
| MCP-Atlas | +14.6pp | baseline | - | - |
| CharXiv-R | +13.6pp | baseline | - | - |
| Rakuten-SWE (production) | 3x tasks resolved | baseline | - | - |
Key Insight: GPQA Diamond (94.2%) is approaching saturation — all competitors are within 0.2 percentage points. The benchmarks that matter most are SWE-bench Pro + MCP-Atlas + CharXiv-R, where Opus 4.7 leads by wide margins.
/ultrareview in Claude Code — Deep Code Review
/ultrareview is a new slash command in Claude Code that opens a dedicated review session — different from regular chat in that it:
- Reads all the changes (git diff + surrounding context) before advising
- Finds complex bugs — race conditions, edge cases, security holes
- Critiques design decisions — architecture, naming, abstraction
- Assigns priority to each issue (critical → nit-pick)
Best-Fit Use Cases
| Scenario | Why /ultrareview Helps |
|---|---|
| Pre-PR review | Review a PR before sending to teammates — saves feedback round-trips |
| Critical paths (payment, auth) | High-stakes code — more review eyes help |
| Legacy refactor | Old code against new conventions — review pinpoints what to fix |
| Security review | Catches SQL injection, XSS, IDOR, race conditions humans easily miss |
Vision 3.3x — Read Documents 3x Clearer
Previously, Claude capped images at ~800 pixels, making long invoices, PDF spec sheets, or 4K screenshots "blurry" to the model. Opus 4.7 fixes this — supporting 2,576 pixels on the long side (~3.75 MP).
| Use Case | How It Changes |
|---|---|
| Computer Use (agentic) | Reads UI with tiny buttons + text accurately → automation doesn't miss |
| Documents / Invoices | Read long invoices, POs, receipts in one shot — great for integrating with ERP systems |
| Technical Diagrams | Architecture diagrams, wireframes, ER diagrams with tiny labels all readable |
| Charts + Dashboards | Excel/Power BI screenshots with small numbers — Opus 4.7 reads them clearly |
ERP opportunity: Opus 4.7 + Vision 3.3x = much better AI integration with accounting document work. Example: drop a delivery-slip screenshot → AI extracts as JSON → auto-posts a goods receipt in ERP.
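The delivery-slip flow above can be sketched as a vision request: attach the image as base64 and ask for structured JSON. The image content-block shape follows the standard Messages API vision format; the extraction prompt and field list are made-up examples, and the ERP posting step is out of scope here.

```python
# Sketch: build a vision request asking Opus 4.7 to extract invoice
# fields as JSON. Content-block shape follows the Messages API image
# format; the schema in the prompt is a hypothetical example.
import base64

def build_invoice_request(image_bytes: bytes,
                          media_type: str = "image/png") -> dict:
    """Return a request body with one image block and one text block."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64",
                            "media_type": media_type,
                            "data": encoded}},
                {"type": "text",
                 "text": "Extract vendor, date, line items, and total "
                         "as JSON."},
            ],
        }],
    }
```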
Pricing + Token Consumption Warning
Per-Token Price Unchanged
Opus 4.7 costs $5/$25 per 1M tokens (input/output) — identical to Opus 4.6. Anthropic chose not to raise prices to stay competitive with GPT-5.4 and Gemini 3.1, which are similarly priced.
But Beware — Token Usage Rises 0-35%
The catch users must watch: Opus 4.7 consumes 1.0-1.35x the tokens of Opus 4.6 for the same query, because:
- It performs self-verification on long tasks (runs-checks-fixes itself)
- Deeper reasoning at xhigh/max levels
- Vision 3.3x = bigger images = more tokens
Warning: If an organization has existing Opus 4.6 prompts and doesn't adjust them — costs may rise up to 35% even though per-token pricing is unchanged. Mitigation: test the same prompts on 4.6 vs 4.7 and compare token counts before switching production.
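A back-of-envelope check of the warning above: per-token prices are unchanged, but multiply by the 1.0–1.35x usage factor. The $5/$25 prices are from the article; the traffic volumes below are placeholder numbers for illustration.

```python
# Sketch: estimate monthly spend under the 0-35% token-usage increase.
# Prices ($5 in / $25 out per 1M tokens) are from the article; the
# token volumes used in the example are placeholders.
IN_PRICE, OUT_PRICE = 5.0, 25.0  # USD per 1M tokens

def monthly_cost(in_tokens: float, out_tokens: float,
                 multiplier: float = 1.0) -> float:
    """Estimated monthly USD spend for a given token multiplier."""
    return (in_tokens * IN_PRICE + out_tokens * OUT_PRICE) \
        * multiplier / 1_000_000

baseline = monthly_cost(100_000_000, 20_000_000)          # Opus 4.6 usage
worst_case = monthly_cost(100_000_000, 20_000_000, 1.35)  # 4.7 upper bound
print(baseline, worst_case)  # same prompts, up to 35% higher bill
```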
Should You Upgrade from Opus 4.6?
| Workload Type | Recommendation | Reason |
|---|---|---|
| Complex coding | Upgrade now | SWE-bench Pro +10.9pp, Rakuten-SWE 3x tasks |
| Long agentic tasks | Upgrade now | Self-verification cuts human round-trips |
| Vision (documents + charts) | Upgrade now | 2,576px / 3.75MP opens entire new use cases |
| Code review | Upgrade now | Get /ultrareview as bonus |
| General chat / Q&A | No rush | 4.6 is fine; Sonnet 4.6 is even cheaper (~1/5 the price) |
| High-volume batch | Test tokens first | If cost rises 35% = monthly bill balloons |
Rule of thumb: Already using Opus → upgrade to 4.7 (free new features). Already using Sonnet → stay (no Sonnet 4.7 yet). Lighter workloads — Sonnet 4.6 is ~5x cheaper than Opus 4.7.
Impact on Organizations + ERP
Organizations already using Claude (via Claude.ai, API, or any major cloud) get Opus 4.7 benefits right away, especially in 4 dimensions:
| Dimension | Opportunity from Opus 4.7 | What to Do |
|---|---|---|
| ERP document workflow | Vision 3.3x → accurate reading of invoices, POs, receipts | Test AI → ERP integration (e.g. goods receipt) |
| IT / dev teams building ERP | Coding +13% + /ultrareview → faster PR reviews | Let developers try /ultrareview — 3 free sessions/mo |
| Finance / accounting | Better vision for bank statements, reports + deeper reasoning | Test AI + accounting workflows on Opus 4.7 |
| Cost control | xhigh effort level gives finer cost-per-task tuning | Pick effort level by use case — easy work → low, complex → xhigh |
ERP Perspective: Opus 4.7 + a solid ERP foundation = competitive advantage. While competitors still use Excel + manual data entry, organizations that connect AI to a well-designed ERP will process documents, reconcile books, and audit many times faster — the opportunity Saeree ERP is actively developing.
Claude Opus 4.7 isn't just an "upgrade" from its predecessor — it's a clear signal that Anthropic has taken parts of the Mythos model (held back from GA) and distilled them into something everyone can use. Organizations that learn it fast have the advantage — especially those with an ERP foundation ready to plug AI into.
- Sureeraya Limpaibul, Managing Director, Grand Linux Solution
Further Reading
- 10 Days of AI You Missed (9-18 April 2026) — overview of AI news during Songkran
- Claude Mythos Preview — the "sibling" model of Opus 4.7
- Claude Opus 4.6 & Sonnet 4.6 — the previous generation
- Claude Code Review — to use /ultrareview
- ChatGPT vs Claude vs Gemini — 3-way AI comparison
- What Is MCP — Model Context Protocol used in Opus 4.7
- AI vs Human — perspectives for organizations
