The Implicator's LLM Meter for April 26 ranked Anthropic's Claude first at 86, down two points from the previous week, while Google's Gemini rose three points to 84 after Cloud Next '26 announcements and a reported Siri integration. OpenAI's ChatGPT moved to 81 after the GPT-5.5 release, and Mistral, Grok and DeepSeek also posted gains. The weekly scorecard evaluates large language models from an enterprise-buyer perspective, using product launches, reliability, pricing, distribution and compliance signals.
Key Takeaways
- Claude remained first at 86, but fell two points after Anthropic's Claude Code postmortem.
- Gemini rose three points to 84 after Cloud Next '26 and a reported Siri role.
- ChatGPT reached 81 after GPT-5.5 launched across paid ChatGPT, Codex and the API.
- DeepSeek V4 increased pricing pressure, with V4-Pro listed at $1.74 input and $3.48 output per million tokens.
AI-generated summary, reviewed by an editor. More on our AI guidelines.
Claude stays first after a reliability setback
Anthropic remained in the lead, but the score fell after the company said in an April 24 engineering postmortem that three product-layer changes had degraded Claude Code, Claude Agent SDK and Claude Cowork during March and April. The company said the API was not affected and reset usage limits for all subscribers.
The incidents included a March 4 reduction in Claude Code's default reasoning effort, a March 26 caching bug and an April 16 system-prompt change that Anthropic said reduced one coding-evaluation score by about three percent. The scorecard also noted Claude.ai errors on April 25, a structured-outputs incident on April 22 and April 23, and file-upload problems on April 20.
Gemini gains on cloud distribution
Gemini rose to 84 after Google used Cloud Next '26 to announce a broader enterprise-agent package. Google said the event included the Gemini Enterprise Agent Platform, Agent Designer, agent activity inboxes, long-running agents, Skills, Projects, Agentic Data Cloud and an eighth-generation TPU.
The scorecard also cited Google Cloud CEO Thomas Kurian's statement that Gemini will power the next generation of Siri, plus an April 22 public preview for Deep Research and Deep Research Max agents. Gemini 3.1 Pro continued to roll out across Vertex AI, Gemini Enterprise, Gemini CLI and Android Studio with a one-million-token context window.
OpenAI and DeepSeek change the price discussion
OpenAI's ChatGPT score rose to 81 after the company shipped GPT-5.5 on April 23 to ChatGPT Plus, Pro, Business and Enterprise, along with Codex. API availability followed April 24 at $5 per million input tokens and $30 per million output tokens, with a one-million-token context window. Implicator's GPT-5.5 coverage said the model still trails Anthropic's Mythos on key coding benchmarks.
DeepSeek moved to 16 after releasing V4-Pro and V4-Flash on April 24. The scorecard listed V4-Pro pricing at $1.74 per million input tokens and $3.48 per million output tokens, below GPT-5.5 on posted API rates. DeepSeek's Western enterprise score remained low because of unresolved compliance, data-residency and procurement issues.
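Taken at face value, the posted rates imply a wide per-workload cost gap. A minimal sketch of the arithmetic (the monthly workload sizes below are illustrative assumptions, not figures from the scorecard):

```python
# Posted API rates in USD per million tokens, as listed in the April 26 scorecard.
RATES = {
    "GPT-5.5": {"input": 5.00, "output": 30.00},
    "DeepSeek V4-Pro": {"input": 1.74, "output": 3.48},
}

def workload_cost(model: str, input_mtok: float, output_mtok: float) -> float:
    """Cost in USD for a workload, with token counts given in millions."""
    r = RATES[model]
    return r["input"] * input_mtok + r["output"] * output_mtok

# Hypothetical monthly workload: 200M input tokens, 50M output tokens.
for model in RATES:
    print(f"{model}: ${workload_cost(model, 200, 50):,.2f}")
```

On that assumed workload the posted rates work out to $2,500 for GPT-5.5 versus $522 for V4-Pro, which is why the scorecard treats list price and procurement risk as separate questions.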
Smaller challengers also improved
Mistral rose one point to 70 after reports that xAI had discussed partnerships with Mistral and Cursor. Grok rose three points to 38 after xAI's $20 billion Series E, its Colossus 2 infrastructure claims and the availability of Grok 4.20 Multi-Agent Beta in the enterprise API.
The April 26 table left Claude in first place, but the weekly movement narrowed the gap between Anthropic and Google. It also showed a broader shift in buyer considerations. Model quality still matters, but April's changes came from product reliability, enterprise distribution, token pricing and the ability to pass procurement review.
Frequently Asked Questions
What is The Implicator's LLM Meter?
It is a weekly enterprise-buyer scorecard for major large language model families. The April 26 edition ranked Claude, Gemini, ChatGPT, Mistral, Grok and DeepSeek.
Which model ranked first on April 26?
Claude ranked first with a score of 86. That was down two points from the previous week after Anthropic disclosed three Claude Code-related product issues.
Why did Gemini gain ground?
Gemini rose to 84 after Google announced enterprise-agent products at Cloud Next '26 and cited a role powering the next generation of Siri.
What changed for ChatGPT?
ChatGPT rose to 81 after OpenAI released GPT-5.5 across paid ChatGPT tiers, Codex and the API. The listed API price was $5 input and $30 output per million tokens.
Why did DeepSeek still rank low?
DeepSeek V4 improved the pricing and benchmark picture, but the scorecard cited unresolved Western compliance, data-residency, certification and procurement questions.