Sunak takes Microsoft and Anthropic adviser roles under watchdog curbs
Rishi Sunak will advise rival AI giants Microsoft and Anthropic under ringfencing rules approved by a watchdog closing in days. The appointments test whether political expertise compounds across competitors as market warnings mount and enforcement weakens.
Former UK prime minister Rishi Sunak will advise both Microsoft and Anthropic while remaining a backbench MP—posts cleared with a two-year lobbying ban and strict limits on UK policy work, according to ACOBA’s formal advice letter. The timing lands as the Bank of England warns AI-driven valuations look stretched, raising the temperature around Big Tech’s political ties and market risk.
Sunak’s remit at both companies is pitched as high-level strategy on macroeconomic and geopolitical trends. Microsoft also gets him on stage at its annual summit. Anthropic gets counsel from the host of the UK’s 2023 AI Safety Summit. Both firms say the roles are “internally focused” and ringfenced from lobbying. Sunak says he’ll donate the pay to The Richmond Project, the numeracy charity he founded with his wife, Akshata Murty.
Key Takeaways
• Sunak cleared to advise Microsoft and Anthropic with two-year lobbying ban, approved by watchdog closing October 13 after being called "next to useless"
• Bank of England warns AI valuations stretched to dotcom levels; top 5 S&P firms hit 50-year concentration high at 30% of index
• AI lobbying participation surged 41% as OpenAI, Anthropic, and Cohere spent $2.71M combined in 2024, hiring Biden officials and former PMs as regulatory infrastructure
• Ringfencing rules face stress test: political value flows through strategic perspective on regulatory thinking, not direct government contact
What’s actually new
This isn’t one politician taking one post. It’s AI companies formalizing a political bench—hiring people who understand how rules get written and how they are enforced. Anthropic has already brought on Biden-era policy hands such as Tarun Chhabra and Elizabeth Kelly. Sunak extends that approach internationally: a Conservative leader with firsthand experience launching the AI Safety Institute and convening governments and labs. That network travels well. So does the playbook.
The watchdog conditions are boilerplate but notable. Sunak can’t lobby UK officials for two years from leaving office, can’t draw on privileged information, and can’t advise on UK-specific policy. The committee also flagged his one-to-one meetings with Bill Gates while in No. 10 and his role in shepherding Microsoft’s UK investment announcements. Anthropic, it noted, was already a government stakeholder on public-service AI.
The ringfencing problem
“Internally focused” sounds clean on paper. Reality is messier.
Sunak’s edge is not a phone call to Whitehall. It’s judgment—what arguments land in Brussels, which trade-off regulators will accept, where compliance can become a moat, and how sovereign AI plans reshape procurement. That knowledge doesn’t require outreach to be valuable. It shapes roadmaps, partnerships, and messaging. Firewalls block emails, not intuition.
For Anthropic, the hire underwrites its safety-first brand. The company sells governance as part of the product, and senior political advisers signal exactly that to risk-averse buyers. It’s credibility by résumé. For Microsoft, the prize is wider. The company straddles consumer AI, enterprise software, and a hyperscale cloud exposed to diverging national rules. A seasoned political operator helps navigate that patchwork. No lobbying needed.
Advising rivals, in practice
On paper, the conflicts are manageable. Microsoft backs OpenAI; Anthropic is OpenAI’s fiercest enterprise rival; both labs run on clouds that include Azure. These roles are scoped to geopolitics and macro trends, not model choices or deal strategy. That is the theory.
In practice, context leaks. Understanding one lab’s constraints—compute, capital, compliance—inevitably informs how you frame risks and timelines at another. The separation exists in job descriptions, not in cognitive compartments. It’s a line to watch.
Valuations, risk, and timing
The backdrop is frothy. On October 8, the Bank of England’s Financial Policy Committee warned that AI-linked equity valuations “appear stretched,” and that a sharp correction would hit the broader economy. Concentration risk is high. Expectations are higher. One disappointment can ripple across the complex.
Sunak’s appointments landed days later. Anthropic’s latest private-market valuation has soared—The Times puts it around $183 billion. Microsoft’s market cap bakes in substantial AI optionality. None of this implies imminent trouble. It does mean shareholders and regulators will scrutinize how companies wield political access while markets price perfection. Volatility will test the narrative.
The new post-office pipeline
Tech has long recruited ex-officials. The pace and purpose feel different now. AI companies are assembling policy shops in-house, competing for the same ex-ministers and national-security aides they’ll soon brief across the table. It’s a structural response to regulatory heat and a bet that institutional memory is a moat.
Sunak’s trajectory—Goldman Sachs adviser in July, now dual AI roles—fits the mold. The novelty is the speed with which AI labs and platforms have professionalized their political operations. They are hiring like think tanks.
Three tests ahead
First, enforcement. ACOBA’s rules matter only if companies treat them as hard constraints when outcomes get tight. Second, market discipline. If AI’s premium deflates, the appetite for marquee advisers may cool quickly. Third, rulemaking. Governments could tighten the revolving-door code as AI lobbying intensifies. Those choices will set the norm for who can advise whom, and on what.
Sunak’s dual posts are an early case study. They probe how far “ringfenced” can stretch, whether political capital compounds across rivals, and how fast ethics rules written for an earlier tech era can adapt. That’s the real story.
Why this matters
• AI firms are building policy muscle at speed, turning institutional know-how into a competitive asset as rules, procurement, and subsidies shape winners.
• Ringfencing curbs direct lobbying, but not judgment; political insight can still influence product, partnerships, and go-to-market in ways that matter.
❓ Frequently Asked Questions
Q: What is ACOBA and why does its closure matter?
A: The Advisory Committee on Business Appointments oversees UK ministers' post-government jobs. It closes on October 13, 2025, after 50 years. Its former chair called it "dead in the water, next to useless" because it has no enforcement power—only voluntary compliance. Functions transfer to the Civil Service Commission and the Independent Adviser on Ministerial Standards, but whether they'll gain actual enforcement teeth remains unclear.
Q: How can Sunak advise competing AI companies without conflicts?
A: Microsoft backs OpenAI while Anthropic competes against it for enterprise deals. The roles are scoped to macroeconomic and geopolitical trends, not deal strategy. Both companies use Azure infrastructure, which theoretically reduces direct conflict. However, understanding one lab's constraints—on compute, capital, or compliance—inevitably shapes how you assess the other. The separation exists in job descriptions, not cognitive compartments.
Q: What happens if Sunak violates the two-year lobbying ban?
A: Consequences remain unclear. The system depends on voluntary compliance and transparency—there's no monitoring mechanism. Financial penalties would require proving a breach of the ministerial code, and ACOBA had no investigatory capacity. Its successor bodies inherit the same constraints unless the government grants them statutory authority. Former chair Lord Pickles wrote in 2022 that without a "recognisable compliance regime including sanctions," the rules lack credibility.
Q: How do Anthropic and OpenAI differ in their political hiring strategies?
A: Anthropic hired multiple Biden-era officials including Tarun Chhabra and Elizabeth Kelly, plus Sunak's former chief of staff Liam Booth-Smith. The roster signals governance credibility and regulatory expertise. OpenAI pursued closer ties to Trump's circle instead. Anthropic positions itself as the safety-focused alternative, using personnel to reinforce that brand with risk-averse enterprise buyers. Different networks reflect different market positioning strategies.
Q: Why are AI companies hiring politicians faster than previous tech waves?
A: AI lobbying participation jumped 41% year-over-year (458 to 648 companies). Combined spending by OpenAI, Anthropic, and Cohere hit $2.71 million in 2024—four times their 2023 total. Companies recognized that institutional knowledge compounds as regulation tightens. Former officials provide strategic context on regulatory thinking that informs product roadmaps and partnerships without requiring direct government contact. It's infrastructure building before rules harden.