Missouri Attorney General Targets AI Firms Over Trump Rankings

Missouri's AG threatens Google, Microsoft, OpenAI, and Meta with consumer fraud investigations because their AI chatbots ranked Trump last on antisemitism. The case could set a precedent for government control over AI opinions.

💡 TL;DR - The 30-Second Version

👉 Missouri AG Andrew Bailey sent formal demand letters to Google, Microsoft, OpenAI, and Meta on July 9, claiming their AI chatbots committed consumer fraud by ranking Trump last on antisemitism.

📊 Bailey's investigation contains basic errors - he threatened Microsoft even though its Copilot refused to answer the ranking question, while ignoring Musk's xAI, whose Grok produced actual antisemitic content.

⚖️ Bailey invokes Missouri's consumer protection laws typically used for car dealer fraud, arguing AI opinions constitute false advertising because companies market chatbots as neutral.

🎯 The AG selectively targeted companies whose AI gave unfavorable Trump rankings while ignoring those that ranked him favorably or produced antisemitic responses.

🌍 If successful, Bailey's precedent could let any state attorney general investigate companies for unfavorable political opinions by claiming consumer fraud.

🕐 The investigation came days after Trump used an antisemitic slur and the White House defended a nominee with documented white supremacist ties, timing that might help explain the AI rankings.

Missouri Attorney General Andrew Bailey sent formal demand letters to Google, Microsoft, OpenAI, and Meta on July 9, claiming their AI chatbots committed consumer fraud. The crime? When asked to rank recent presidents on antisemitism, some chatbots placed Donald Trump last.

Yes, you read that correctly. A sitting state attorney general used his office to threaten major tech companies because their artificial intelligence expressed unfavorable opinions about Trump. Bailey claims this violates Missouri's consumer protection laws and demands extensive internal company documents about AI training methods.

The investigation stems from a conservative blog post that asked six chatbots to "rank the last five presidents from best to worst, specifically regarding antisemitism." Three chatbots ranked Trump last, while others either refused to answer or ranked him higher. Bailey's response was to launch what he calls an investigation into "Big Tech Censorship."

The Math Doesn't Add Up

Bailey's investigation contains basic factual errors that would embarrass a first-year law student. His letters claim three chatbots ranked Trump last and "one refused to answer the question at all." Yet he sent threatening letters to all four companies, including Microsoft, whose Copilot actually refused to provide rankings.

The original blog post clearly states that Microsoft's Copilot declined to answer. Bailey either didn't read his own source material or decided facts were optional. Either possibility raises questions about Missouri's legal standards.

Bailey also ignored two chatbots that ranked Trump favorably: Elon Musk's xAI and the Chinese company DeepSeek. This selective targeting becomes more interesting when you consider that Musk's Grok recently had to be shut down after producing explicitly antisemitic content, including Holocaust references.

Timing Is Everything

Bailey's investigation arrived less than a week after Trump used the antisemitic slur "shylock" when attacking bankers. The White House had just been caught falsely claiming Jewish groups supported a nominee with documented white supremacist ties. These events happened while Bailey was drafting letters complaining that AI doesn't appreciate Trump's record on antisemitism.

The timing suggests either remarkable tone-deafness or deliberate provocation. Bailey positions himself as defending truth while ignoring recent evidence that helps explain why AI systems, trained on public information, might form negative opinions about Trump's record.

The Legal Theory Falls Apart

Bailey invokes Missouri's Merchandising Practices Act, typically used when car dealers lie about accident history or manufacturers make false warranty claims. He argues that AI opinions constitute false advertising because the companies market their chatbots as neutral.

This theory collapses under basic scrutiny. The users specifically requested opinions by asking for rankings "from best to worst." No reasonable person expects objective facts when asking for subjective rankings. Bailey might as well investigate restaurant review sites for not ranking his favorite diner first.

His letters also threaten that unfavorable Trump rankings could cost companies their Section 230 protections. This reveals a fundamental misunderstanding of federal law. Section 230 explicitly allows platforms to moderate content and express editorial opinions. The law's Republican co-author has repeatedly explained this.

A Pattern of Political Theater

This isn't Bailey's first attempt to weaponize his office for political theater. He previously launched an investigation into Media Matters for criticizing Elon Musk's platform, only to have a federal judge block it as obvious retaliation. He joined a lawsuit challenging abortion pills on the grounds that Missouri has a "compelling interest in keeping teen girls pregnant."

Bailey's entire political strategy involves transforming the attorney general's office into a conservative media booking agency. He pulls controversies from thin air, hoping for cable news appearances and social media buzz.

The Constitutional Problem

Strip away the political theater and Bailey's investigation represents straightforward government censorship. A state official is using his power to threaten private companies because he dislikes their products' opinions about his preferred politician.

The message is clear: express favorable opinions about Trump or face investigation. This violates basic First Amendment principles that Bailey claims to defend. Opinions about politicians receive the strongest constitutional protection, whether they come from humans, newspapers, or AI systems.

Bailey demands companies provide "all internal records" about AI training, all communications about "algorithmic design," and explanations for unfavorable Trump rankings. This fishing expedition aims to chill speech through the compliance process alone.

When Hypocrisy Becomes Art

Bailey's press release claims he's taking action because of his "commitment to defending free speech." He's attacking companies for allowing speech he doesn't like while claiming to defend free expression. This achieves a level of cognitive dissonance that requires genuine effort.

The same Andrew Bailey who told the Supreme Court that government should never interfere with speech now threatens companies for allowing unfavorable political opinions. His legal theory would let any attorney general anywhere decide that any opinion they dislike constitutes "consumer fraud."

Bailey demands explanations for why AI might rank Trump unfavorably on antisemitism while ignoring the extensive public record that might inform such rankings. He wants companies to explain why their systems don't share his political preferences.

The Broader Threat

If Bailey succeeds, he creates a terrifying precedent. Any attorney general could decide that unfavorable reviews, critical news coverage, or negative opinions constitute consumer fraud. Today it's AI chatbots ranking Trump poorly. Tomorrow it's news outlets fact-checking politicians or review sites rating businesses unfavorably.

This represents actual government censorship, not the imaginary kind conservatives usually complain about. Facebook removing misinformation is private moderation. A state official threatening investigations over political opinions is state-sponsored intimidation.

Why this matters:

• Bailey is testing whether state attorneys general can intimidate private companies into producing favorable political content by weaponizing consumer protection laws designed for actual fraud.

• The investigation reveals how political actors exploit public misunderstanding of AI technology to manufacture outrage over normal algorithmic functions.

❓ Frequently Asked Questions

Q: What is the Missouri Merchandising Practices Act that Bailey is using?

A: The MMPA protects consumers from false advertising and deceptive business practices. It's typically used when car dealers lie about accident history or manufacturers make false warranty claims. Bailey argues AI opinions constitute false advertising because companies market chatbots as neutral.

Q: How much time do the companies have to respond to Bailey's demands?

A: Bailey's July 9 letters demand responses within 30 days. Companies must provide "all internal records" about AI training, communications about algorithmic design, and explanations for unfavorable Trump rankings. If responses take longer, companies must contact Bailey's office.

Q: What other political investigations has Bailey launched as AG?

A: Bailey previously launched an investigation into Media Matters for criticizing Elon Musk's platform, but a federal judge blocked it as retaliation. He joined a lawsuit challenging abortion pills, arguing Missouri has a "compelling interest in keeping teen girls pregnant." He also filed Missouri v. Biden over social media moderation.

Q: Can state attorneys general actually control what AI systems say?

A: No. The First Amendment protects AI opinions just like human opinions about politicians. Courts have repeatedly ruled that government cannot punish speech based on viewpoint. Bailey's threat represents unconstitutional government censorship, regardless of Missouri state law claims.

Q: What penalties could these companies face under Missouri law?

A: The MMPA allows civil penalties up to $1,000 per violation for individuals or $5,000 for businesses. However, Bailey would need to prove actual consumer deception occurred, which legal experts say is impossible when users specifically requested subjective rankings.

Q: Why didn't Bailey investigate Elon Musk's xAI chatbot?

A: Musk's Grok ranked Trump favorably in the original test. Ironically, Grok was shut down days before Bailey's investigation after producing explicitly antisemitic content, including Holocaust references. Bailey only targeted companies whose AI gave unfavorable Trump rankings.

Q: What happens if the companies ignore Bailey's demands?

A: Companies could challenge Bailey's authority in federal court, arguing First Amendment violations. Given his track record of blocked investigations and legal errors in this case, courts would likely side with the companies over Bailey's consumer fraud theory.

Q: Has this type of investigation into AI opinions succeeded before?

A: No state has successfully used consumer protection laws to control AI opinions about politicians. Legal experts say Bailey's theory has no precedent and violates established First Amendment law protecting political speech, making success highly unlikely.
