As Europe Sets the Pace on AI Regulation, Tech Giants Choose Diverging Paths
Microsoft will sign Europe's new AI code. Meta refuses, calling it overreach. This split reveals how tech giants plan to handle the global wave of AI regulation—and one approach will win.
👉 Microsoft will sign the EU's voluntary AI code while Meta refuses, calling it regulatory overreach that will stunt innovation.
📊 Companies face fines up to €35 million or 7% of global revenue for violating the AI Act starting August 2, 2025.
🏭 Over 40 European companies including Airbus and Mercedes-Benz asked the EU to delay implementation by two years.
🌍 Europe's rules could become global standards through the "Brussels Effect" as companies apply EU requirements worldwide.
🚀 This split shows tech companies choosing different regulatory strategies—cooperation versus confrontation—as AI oversight spreads globally.
Microsoft will probably sign Europe's new AI code of practice. Meta definitely won't. That split tells you everything about how differently tech companies plan to handle the coming wave of AI regulation.
The divide became clear this week when Microsoft President Brad Smith told Reuters his company will "likely" sign the European Union's voluntary AI guidelines. Meanwhile, Meta's global affairs chief Joel Kaplan announced on LinkedIn that his company is taking a hard pass.
"Europe is heading down the wrong path on AI," Kaplan wrote, calling the code "regulatory overreach" that will "stunt" innovation. Microsoft took the opposite approach, with Smith emphasizing cooperation: "Our goal is to find a way to be supportive."
What Europe Actually Wants
The EU published its code of practice on July 10. The 13-page document outlines how companies should comply with the broader AI Act before mandatory rules kick in next month. Think of it as a voluntary test run before the real enforcement begins.
The code covers three main areas. First, transparency requirements force companies to document their AI models and training data. Second, copyright compliance requires clear internal policies for how training data is obtained and used under EU law. Third, the most advanced AI models face additional safety and security obligations.
Companies that sign get a streamlined compliance path. Those that don't face more regulatory scrutiny when enforcement starts. The EU calls it "reduced administrative burden and increased legal certainty" for signatories.
The Stakes Are Real
The voluntary label doesn't mean weak enforcement. The AI Act behind this code has teeth. The gravest violations draw fines up to €35 million or 7% of global annual revenue, whichever is higher. For providers of general-purpose AI models, penalties can hit €15 million or 3% of worldwide turnover.
These fines caught companies' attention. Google, Meta, OpenAI, Anthropic, Mistral, and thousands of other firms operating in Europe must follow these rules. Compliance starts August 2, 2025.
The schedule is tight. Firms that put AI models on the market before August 2, 2025 get until August 2027 to comply. New models face the requirements immediately.
Industry Pushback
Meta isn't fighting alone. Over 40 European companies wrote to the EU last month asking for a two-year delay. Big names signed on: Airbus, Mercedes-Benz, Philips, and ASML Holding.
They make the same argument as Meta. The rules create uncertainty and will slow innovation. European companies fear they'll lose ground to competitors in countries with looser rules.
The timing stings because it runs against the US trend. The Trump administration is rolling back AI regulations while Europe adds more oversight. This divergence forces global companies to pick sides.
Early Adopters vs Holdouts
Some companies decided cooperation beats confrontation. OpenAI announced its commitment the day after the code was published, stating it "reflects our commitment to providing capable, accessible and secure AI models for Europeans."
Mistral also signed early, joining what industry observers call a small but growing group of voluntary adopters. These companies bet that working with regulators now will pay off when mandatory enforcement begins.
The split shows different theories about regulatory strategy. Microsoft and OpenAI are betting collaboration builds goodwill and influence. Meta is betting resistance forces Europe to reconsider its approach.
Global Implications
Europe's rules won't stay in Europe. The EU often sets global standards through the "Brussels Effect." Companies that need EU market access frequently apply European rules worldwide instead of running separate systems.
The AI code connects to other international efforts like the G7 Hiroshima AI Process and national AI strategies. European rules could become global benchmarks.
That's why the stakes are high. Companies aren't just picking how to handle European rules. They're picking sides for a future where AI oversight might be everywhere.
What Comes Next
The EU won't budge on timing despite industry pressure. The European Commission has insisted the framework will proceed as scheduled, with no pause or grace period. EU authorities will review the code's adequacy and formally endorse it by August 2, with implementation following immediately.
Companies face a stark choice. Sign the voluntary code and get predictable compliance requirements. Skip it and face case-by-case regulatory scrutiny when mandatory rules take effect.
The split shows different regulatory strategies emerging among AI leaders. Microsoft chose cooperation. Meta chose confrontation. One approach will prove smarter than the other.
Why this matters:
This split shows how AI companies will handle regulation globally—Microsoft's cooperative bet might pay off if European rules spread worldwide.
Meta's pushback reflects tech industry fears that Europe's rules will become the global standard, slowing innovation everywhere to meet the toughest requirements.
❓ Frequently Asked Questions
Q: What exactly do companies have to do under this AI code?
A: Companies must publish summaries of their training data, adopt internal policies for EU copyright compliance, and document their AI models. The most advanced models face extra safety requirements, including risk assessments and governance frameworks. In short: transparency, copyright, and safety obligations.
Q: When do these rules actually start being enforced?
A: The voluntary code is active now, but mandatory enforcement begins August 2, 2025. Companies that put AI models on the market before that date get until August 2027 to comply. New models launched after August 2025 must comply immediately.
Q: How big are these fines really?
A: Up to €35 million or 7% of global annual revenue, whichever is higher. For providers of general-purpose AI models, fines can reach €15 million or 3% of worldwide turnover. For a company like Meta (2023 revenue: $134 billion), 7% would be about $9.4 billion.
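For anyone who wants to check that arithmetic, here is a minimal Python sketch of the cap formula. The function name and defaults are illustrative, and using Meta's USD revenue as a stand-in for a euro turnover figure is a rough approximation, not how regulators would actually calculate a fine.

```python
def max_ai_act_fine(global_revenue: float,
                    flat_cap: float = 35_000_000,   # EUR 35M tier
                    revenue_share: float = 0.07) -> float:
    """Return the AI Act's fine ceiling: the higher of a flat cap
    or a share of worldwide annual turnover. Swap in 15_000_000
    and 0.03 for the general-purpose model tier."""
    return max(flat_cap, revenue_share * global_revenue)

# Meta's 2023 revenue, ~$134B, used as a rough stand-in for turnover:
print(f"${max_ai_act_fine(134e9):,.0f}")  # -> $9,380,000,000, i.e. ~$9.4B
```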
Q: Besides Microsoft and Meta, which companies have taken sides?
A: OpenAI and Mistral signed the voluntary code early. Over 40 European companies including Airbus, Mercedes-Benz, Philips, and ASML asked for a two-year delay. Google, Anthropic, and others haven't announced their positions yet.
Q: What happens if you skip the voluntary code?
A: Companies face "case-by-case regulatory scrutiny" when mandatory rules start. Those who sign get a "simplified compliance path" with predictable requirements. Basically, signers get an easier audit process while holdouts get individual investigations.
Q: Why is Meta so strongly against this?
A: Meta calls it "regulatory overreach" that goes beyond the original AI Act's scope. They worry it creates legal uncertainties and will "throttle development" of advanced AI in Europe, hurting European companies that depend on these models.
Q: What is the "Brussels Effect" mentioned in the article?
A: When companies apply EU rules globally instead of maintaining separate systems for different markets. It happened with GDPR privacy rules. If companies find it easier to use one standard worldwide, EU rules become the global standard.
Q: How does this compare to what's happening in the US?
A: Complete opposite direction. The Trump administration removed AI regulations while Europe adds more oversight. This regulatory split forces global companies to choose between cooperation with European rules or resistance hoping for change.