Google has restricted accounts of AI Ultra subscribers who accessed Gemini models through OpenClaw, a third-party OAuth client, according to a growing thread on the Google AI Developer Forum. The restrictions arrived without warning or explanation, cutting off users paying $249.99 per month from Gemini 2.5 Pro and, in some cases, threatening access to Gmail, Workspace, and other linked services. The enforcement comes two days after Anthropic updated its legal terms to explicitly ban OAuth token usage in third-party tools, including OpenClaw.
Two of the three largest AI model providers locked down third-party access in the same week. Same calculation, same conclusion.
Key Takeaways
- Google restricted AI Ultra subscribers ($249.99/month) who used OpenClaw OAuth, with no warning or explanation
- Anthropic banned third-party OAuth token usage two days earlier, citing token arbitrage economics
- OpenClaw faces 21,639 exposed instances, infostealers targeting config files, and supply chain attacks
- OpenAI hired OpenClaw's creator and endorsed third-party harness usage, widening the competitive gap
$249.99 a month buys you access, not flexibility
The forum thread that started this carries a title worth reading in full: "Account Restricted Without Warning, Google AI Ultra, OAuth via OpenClaw." The original poster described losing access to Gemini 2.5 Pro after connecting through OpenClaw, an open-source AI agent framework that routes existing subscriptions through alternative interfaces. No terms of service violation was cited. No explanation followed.
Other users arrived with matching stories. Some submitted appeals and received only automated replies. Others couldn't find a human representative at all. One user wrote that they tried creating a fresh Google account, only to discover that one restricted too. "No warnings, no nothing, just a ban after being a customer for decades," the user wrote, threatening to cancel YouTube, Google Ultra, and every other Google product. Another called the situation what it felt like from the inside: "I'd have to sue a trillion-dollar company just to get the measly fee I paid."
Nearly three thousand dollars a year, and the best Google could offer was a community manager promising to "share this with our internal teams." No timeline. No specifics. The thread kept growing.
And here is where Google's architecture turns an annoyance into something worse. Google's AI products sit on top of the broader Google account infrastructure. A restriction triggered by Gemini usage can cascade into Gmail, Workspace, and cloud storage. All tied to the same credentials. Several forum users flagged this exact concern. For a developer whose entire business runs on Google services, getting your AI subscription flagged doesn't just cut off one product. It puts everything at risk.
Anthropic wrote the rules first
Google's crackdown didn't arrive in a vacuum. Anthropic, two days earlier on February 20, had revised its Consumer Terms of Service to spell out what had been loosely implied since February 2024: OAuth tokens from Claude Free, Pro, and Max accounts are only permitted in Claude Code and Claude.ai. Using them anywhere else, including OpenClaw, violates the terms.
What makes the timing notable is that Anthropic's contractual language in Section 3.7 had already forbidden unauthorized third-party access since at least February 2024. For two years, that clause sat in the terms while tools like OpenClaw and OpenCode quietly let users supply Claude subscription keys anyway. Anthropic had tolerated the practice, or at least failed to enforce against it, until the economics forced its hand.
"Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service, including the Agent SDK, is not permitted and constitutes a violation of the Consumer Terms of Service," the updated compliance page states.
Anthropic engineer Thariq Shihipar provided the business reasoning in a January social media thread. Third-party harnesses create "unusual traffic patterns without any of the usual telemetry that the Claude Code harness provides," he wrote. That makes debugging impossible when users hit rate limits or account bans. "They don't have any other avenue for this support."
The Register's Thomas Claburn framed the economics bluntly. Anthropic sells subscription tokens at a discount compared to API pricing. An all-you-can-eat buffet priced with certain usage expectations. Third-party harnesses broke those expectations by letting subscribers extract more value than the subscription model assumed. Token arbitrage. Pay the flat rate, route tokens through a tool that burns through them faster than anyone planned for.
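The arbitrage math is easy to sketch. The figures below are hypothetical illustrations, not Anthropic's actual prices, but they show how a flat subscription priced for human-paced chat usage breaks down under automated agent workloads:

```python
# Illustrative token-arbitrage math. All prices and usage figures
# are hypothetical, not actual Anthropic pricing.

SUBSCRIPTION_PRICE = 200.00    # flat monthly subscription fee ($)
API_PRICE_PER_MTOK = 15.00     # metered API price per million tokens ($)

# Break-even: the monthly usage level the flat price implicitly assumes.
break_even_mtok = SUBSCRIPTION_PRICE / API_PRICE_PER_MTOK
print(f"Break-even: {break_even_mtok:.1f}M tokens/month")

# A human typing into a chat UI might use a few million tokens a month.
# An agent harness running around the clock can consume far more.
agent_usage_mtok = 500          # hypothetical heavy agent workload
metered_cost = agent_usage_mtok * API_PRICE_PER_MTOK
subsidy = metered_cost - SUBSCRIPTION_PRICE

print(f"Metered equivalent: ${metered_cost:,.2f}/month")
print(f"Provider subsidy:   ${subsidy:,.2f}/month")
```

At these made-up rates, the flat fee covers roughly 13 million tokens a month; an agent burning 500 million tokens leaves the provider eating thousands of dollars of compute per subscriber. That gap is the arbitrage window the enforcement is closing.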
On Thursday, OpenCode pushed a commit removing support for Claude Pro and Max account keys. The commit message cited "anthropic legal requests." Policy language turned into enforced code in less than a week.
OpenAI, emboldened, leaned into the gap. Thibault Sottiaux from OpenAI pointedly endorsed using Codex subscriptions with third-party harnesses. Competitive positioning dressed up as developer friendliness.
OpenClaw sits at the center of all of it
OpenClaw keeps generating headlines, though not the kind its community wants. The open-source agent framework, formerly known as Clawdbot and then Moltbot, has amassed over 200,000 GitHub stars since launching in November 2025. Sam Altman announced on February 15 that OpenClaw founder Peter Steinberger would be joining OpenAI. OpenClaw would transition into a foundation structure with OpenAI backing.
But popularity and security have run in opposite directions. Censys identified 21,639 exposed OpenClaw instances sitting on the public internet as of January 31. SecurityScorecard's STRIKE team found hundreds of thousands more carrying potential remote code execution risks. Five CVEs were patched between January 25 and January 30, including a one-click RCE vulnerability that needed two attempts to fix properly before version 2026.1.30 closed the remaining gaps.
Infostealers have already adapted. Hudson Rock disclosed in mid-February that a Vidar variant had exfiltrated an OpenClaw user's full configuration, including gateway tokens, cryptographic keys, and the agent's soul.md file containing its operational instructions. Alon Gal, Hudson Rock's CTO, told The Hacker News the malware wasn't even targeting OpenClaw specifically. A broad file-grabbing routine "inadvertently struck gold by capturing the entire operational context of the user's AI assistant."
Then came the supply chain attack. ClawHavoc, discovered by Koi Security in late January, used professional-looking skills uploaded to ClawHub, OpenClaw's plugin marketplace. The skills instructed users to install a "helper agent" that actually deployed the Atomic Stealer infostealer. Full remote control over the victim's OpenClaw instance and every service it touched.
Alex Polyakov, founder of AI red teaming firm Adversa AI, built SecureClaw in response. An open-source audit tool running 55 hardening checks mapped to OWASP and MITRE frameworks. Polyakov doesn't oversell it. "We don't claim to 'solve' prompt injection. But we do make it significantly harder through multi-layer defense."
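To make "hardening check" concrete, here is a minimal sketch of the kind of checks such an audit tool might run. The rules, file paths, and config keys below are hypothetical illustrations, not SecureClaw's actual checks:

```python
# Sketch of two hardening checks of the kind an audit tool like
# SecureClaw might run. The rules and config layout are hypothetical
# illustrations, not SecureClaw's actual implementation.
import os
import stat


def check_config_permissions(path: str) -> tuple[bool, str]:
    """Flag config files (gateway tokens, keys) readable by other users."""
    if not os.path.exists(path):
        return True, f"{path}: not present, nothing to leak"
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        return False, f"{path}: readable by group/other (chmod 600 recommended)"
    return True, f"{path}: owner-only permissions"


def check_bind_address(config: dict) -> tuple[bool, str]:
    """Flag an agent gateway bound to all interfaces instead of localhost."""
    addr = config.get("gateway", {}).get("bind", "127.0.0.1")
    if addr in ("0.0.0.0", "::"):
        return False, "gateway bound to all interfaces; restrict to 127.0.0.1"
    return True, f"gateway bound to {addr}"


if __name__ == "__main__":
    for ok, msg in (
        check_config_permissions(os.path.expanduser("~/.openclaw/config")),
        check_bind_address({"gateway": {"bind": "0.0.0.0"}}),
    ):
        print(("PASS" if ok else "FAIL"), msg)
```

The bind-address check alone would have flagged the configuration behind most of the 21,639 exposed instances Censys found: an agent gateway listening on the public internet rather than loopback.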
The buffet is closing
Strip away the vendor names and the pattern is visible. Somewhere inside Google and Anthropic, finance teams stared at usage dashboards and saw the same problem. AI model providers had priced subscriptions assuming users would interact through first-party interfaces, at a human pace, with predictable usage curves. OpenClaw and tools like it shattered that assumption by letting subscribers route tokens through automated, high-throughput agent workflows. The math stopped working for providers who had been subsidizing access.
Google and Anthropic landed in the same place, but Anthropic at least told people what was happening. Updated the legal terms. Had an engineer post the reasoning on social media. Gave tool makers a few days to adjust. OpenCode removed Claude support by Thursday. Messy, but legible.
Google just started restricting accounts. No policy update appeared. No public statement went out. No explanation reached users paying $249.99 a month. Google's product engineering team confirmed to at least one forum user that their account "was suspended from using our Antigravity service," Google's developer-facing branding for its Gemini AI platform, but offered no details on what triggered the enforcement or how to avoid it in the future.
The developer forum thread, still growing, is the only public record of what happened. For a company that built OAuth and champions it across every other product line, from Gmail to YouTube to Google Drive, the silence on why OAuth access to Gemini triggers account restrictions is baffling.
What developers are actually facing
Short-term advice in developer communities has been blunt: stop using third-party OAuth clients with AI subscriptions immediately. Revert to native interfaces. Consider API key access, which carries per-token pricing but avoids the automated enforcement. Some developers have started migrating to local models entirely, running Kimi K2.5 or Qwen 3.5 on their own hardware to eliminate provider dependency altogether. One widely circulated Medium post detailed a migration from a $200-per-month Claude Max setup through OpenClaw to a $15-per-month dual-VPS configuration running open-source models.
The trust damage runs deeper than any single workaround can fix. Developers choosing an AI platform now have to weigh a new variable: the risk that a provider will retroactively restrict how you access what you've already paid for. Anthropic at least telegraphed the change. Google blindsided its own paying customers.
But the more consequential question sits behind all of these workarounds. If subscription pricing was always subsidized, and providers are now enforcing the implicit limits that made those subsidies viable, then the real cost of AI model access is higher than what anyone has been paying. The arbitrage window that OpenClaw and similar tools exploited didn't create artificial demand. It exposed the gap between what providers charge for subscriptions and what the underlying compute actually costs to run.
OpenAI is the outlier for now, welcoming third-party harness usage and hiring OpenClaw's creator. That looks generous. It might also be a bet that owning the developer relationship and the agent infrastructure is worth more than protecting subscription margins in the short term. Whether that holds depends on how fast OpenAI's own costs grow as agent workloads scale.
None of that helps Google's AI Ultra subscribers today. They paid for premium access to the most capable Gemini models. They connected through a standard authentication protocol that Google itself designed and promotes everywhere else. And they got locked out. The forum thread has dozens of replies now. Google has not issued a public statement, and no affected user has reported a restored account.
Frequently Asked Questions
What is OpenClaw and why does it matter here?
OpenClaw is an open-source AI agent framework with over 200,000 GitHub stars that lets users route existing AI subscriptions through alternative interfaces. Its OAuth-based access to Google and Anthropic models triggered account restrictions from both providers within the same week.
Why is Google restricting AI Ultra accounts?
Google has not issued a public explanation. Forum evidence suggests accounts are flagged when users access Gemini models through third-party OAuth clients like OpenClaw. No specific terms of service violation has been cited to affected users.
What did Anthropic change about its OAuth policy?
On February 20, Anthropic explicitly banned OAuth token usage from Claude Free, Pro, and Max accounts in any third-party tool. The underlying rule existed since February 2024 in Section 3.7 but was not enforced until token arbitrage made the economics unsustainable.
Can a Google AI restriction affect Gmail and other services?
Potentially yes. Google's AI products share the same account infrastructure as Gmail, Workspace, and cloud storage. Forum users reported concerns that a Gemini-triggered restriction could cascade into their broader Google account and business services.
What alternatives exist if third-party OAuth access is blocked?
Developers can switch to API key access with per-token pricing, use native provider interfaces, or migrate to local open-source models like Kimi K2.5 or Qwen 3.5. One documented setup replaced a $200/month subscription with a $15/month dual-VPS configuration.