Google Bows to EU AI Rules as Meta Refuses to Blink

Google will sign the EU's AI rules while Meta refuses, creating a strategic split in Big Tech. The divide reveals two different approaches to Europe's regulatory push as new AI rules take effect August 2.

💡 TL;DR - The 30-Second Version

👉 Google will sign the EU's AI code of practice while Meta refuses, splitting Big Tech's approach to Europe's new AI regulations taking effect August 2.

📊 The economic stakes are massive: Google estimates AI could boost Europe's economy by €1.4 trillion annually by 2034 if deployed properly.

🏭 Companies have until August 2 to prepare for initial requirements, then two years to achieve full compliance with the EU's AI Act.

🌍 The split reflects broader US-EU tensions over digital regulation, with Brussels treating its AI rules as a "red line" despite White House pressure.

🚀 This divide could reshape competitive dynamics if Brussels rewards early adopters while punishing holdouts during enforcement.

Google will sign the European Union's AI code of practice just days before new regulations take effect, while Meta refuses to budge. The split reveals two very different strategies for handling Europe's aggressive push to regulate artificial intelligence.

The voluntary code helps companies comply with the EU's AI Act, whose first obligations take effect August 2. Google joins OpenAI and French company Mistral in signing, while Meta stands alone among major players in outright refusal. Microsoft will likely sign too, according to company president Brad Smith.

But Google isn't exactly thrilled about it. Kent Walker, Google's global affairs president, said the company will sign "with the hope" the code promotes European access to AI tools. In other words: Google doesn't want to do this, but sees no choice.

The Rules That Split Big Tech

The EU's AI Act bans what it calls "unacceptable risk" uses like social scoring and cognitive manipulation. It also sets requirements for "high-risk" applications including facial recognition and AI used in education or employment. Companies must register their systems and meet quality management standards.

The code of practice gets more specific. Companies can't train AI on pirated content. They must provide updated documentation about their tools. When content owners ask them not to use their works, they have to comply.

Google expressed concerns that departures from EU copyright law and requirements exposing trade secrets "could chill European model development and deployment." The company worries about slowing approvals and harming Europe's competitiveness.

Meta's Hard Line

Meta took a harder stance. Joel Kaplan, the company's chief legal officer, called the code "overreach" that would "throttle the development and deployment of frontier AI models in Europe." He said it introduces legal uncertainties and goes beyond what the AI Act requires.

"Europe is heading down the wrong path on AI," Kaplan wrote. The company sees the rules as stunting European innovation rather than promoting it.

This isn't just corporate posturing. The economic stakes are massive. Google estimates AI could boost Europe's economy by €1.4 trillion annually by 2034 - if deployed properly.

The Bigger Battle

The AI code sits within a broader US-EU fight over digital regulation. The White House recently said the two economies "intend to address unjustified digital trade barriers" - diplomatic speak for "we want you to ease up."

The EU isn't budging. A commission spokesperson said this week that Brussels won't move on "our right to regulate autonomously in the digital space." It's treating digital rules as a red line that won't be crossed regardless of US pressure.

This puts companies in an awkward spot. They need to maintain relationships with both Brussels and Washington while navigating conflicting regulatory demands.

Different Strategies, Same Problem

Google's approach amounts to reluctant compliance - sign the code but voice concerns loudly. The company submitted feedback highlighting potential problems while agreeing to follow the rules. It's betting that engagement beats confrontation in the long run.

Meta chose defiance. The company calculates that standing firm sends a stronger message about regulatory overreach. Meta thinks other companies might join it and that Brussels will back down.

Each approach has downsides. Google might end up legitimizing regulations it considers problematic. Meta might find itself frozen out of European markets or facing tougher enforcement.

Microsoft seems to be hedging, with Smith saying the company will "likely" sign without making firm commitments either way.

What Companies Actually Do

The code is voluntary, which sounds meaningless but isn't. Companies that sign commit to specific practices around transparency, safety and copyright compliance. Those that don't risk being seen as uncooperative when regulators start enforcing the AI Act in earnest.

The real test comes when enforcement begins. Will Brussels treat signatories more favorably? Will non-signatories face stricter scrutiny? The answers will determine whether Google's diplomatic approach or Meta's defiance proves smarter.

Major European companies are already pushing back. CEOs from Airbus, BNP Paribas and other big firms want Brussels to pause implementation for two years. They say the overlapping rules create too much confusion.

The Clock Runs Out

August 2 is the deadline for the first wave of requirements. After that, companies get two years to reach full compliance. The timeline forces quick decisions about regulatory strategy.

Google's decision to sign while voicing concerns gives it a seat at the table for future discussions. Meta's refusal keeps its options open but might limit its influence on how rules get interpreted.

The split also reflects different business models and regulatory histories. Google has extensive experience managing European regulators through antitrust battles and privacy rules. Meta has faced consistent EU pressure over content moderation and data practices.

Why this matters:

• This isn't just about one code - it's a preview of how US tech companies will handle Europe's growing regulatory ambitions across AI, data, and digital markets

• The split strategy could backfire if Brussels punishes holdouts while rewarding early adopters, potentially reshaping competitive dynamics in European AI markets

❓ Frequently Asked Questions

Q: What exactly does signing this code require companies to do?

A: Companies must document their AI training data, stop using pirated content, respect content owners' opt-out requests, conduct safety testing, and report system capabilities to EU regulators. They also need to implement cybersecurity measures and provide transparency about how their models work.

Q: Which companies have signed and which haven't?

A: Google, OpenAI, and France's Mistral have signed. Microsoft will "likely" sign according to president Brad Smith. Meta is the only major AI company that has outright refused. The code was developed by 13 independent experts appointed by the European Commission.

Q: What happens to companies that don't sign the voluntary code?

A: Nothing immediately, since it's voluntary. But non-signatories risk being seen as uncooperative when EU regulators start enforcing the mandatory AI Act. They might face stricter scrutiny or tougher penalties if they violate the actual law later.

Q: When do these rules actually start affecting companies?

A: August 2, 2025 for companies with "systemic risk" AI models. Full AI Act compliance is required within two years. The code of practice is voluntary and can be signed anytime, but companies want to show good faith before enforcement begins.

Q: How does EU AI regulation compare to other countries?

A: The EU's AI Act is considered the world's strictest AI regulation. The US has voluntary guidelines and executive orders but no comprehensive law. China regulates AI but focuses more on content control than safety requirements.

Q: Can companies change their minds about signing later?

A: Yes, but it would look bad politically. Companies that refuse now could sign later if pressure mounts. Those who sign could theoretically withdraw, but that would anger EU regulators right before enforcement begins.

Q: Why is Meta taking such a different approach than Google?

A: Meta has faced more EU regulatory pressure over content moderation and privacy violations, making it more resistant to new rules. Google has more experience working with European regulators through antitrust cases and has broader business interests to protect.

Q: What penalties could companies face under the full AI Act?

A: For banned AI uses, fines reach up to €35 million or 7% of global annual revenue, whichever is higher. For other violations, fines can reach €15 million or 3% of global annual revenue.

