In February 2020, Apple's head of fraud engineering texted a colleague about child sexual abuse material on iCloud. Eric Friedman didn't hedge. He called iCloud "the greatest platform for distributing child porn" and said Apple had "chosen to not know in enough places where we really cannot say."
Five years later, a West Virginia attorney general put that message in a lawsuit.
This all landed in the same week. Zuckerberg showed up to a Los Angeles courtroom on Wednesday, combative according to NBC News, fielding questions about what Instagram does to kids. The next day, West Virginia sued Apple over child abuse material in iCloud. And in New Mexico, filings from Meta's ongoing case showed employees warning that expanding end-to-end encryption would sharply reduce the company's ability to detect child abuse.
Zuckerberg told the LA jury he cared about "the wellbeing of teens and kids" using Meta's platforms. He insisted Instagram existed to be "useful," not to maximize engagement. The internal documents that surfaced this week read differently.
Two of the most valuable companies on Earth made the same calculated bet. Privacy branding and encryption defaults mattered more than the ability to detect child exploitation. Their own employees said so. They shipped it anyway.
The Breakdown
- Apple filed 267 CSAM reports to NCMEC in 2023. Google filed 1.47 million. Meta filed 30.6 million.
- Internal documents from both companies show employees warned that encryption would blind them to child abuse.
- Courts in West Virginia, New Mexico, and Los Angeles are simultaneously testing corporate liability for child safety failures.
- Apple built and killed its own CSAM scanner (NeuralHash). Meta's encryption rollout blinded its detection systems.
The gap you can measure
267.
That was Apple's child abuse report count for 2023, sent to the National Center for Missing and Exploited Children. The entire year, all products, every country. 267. Google? 1.47 million. Meta? 30.6 million. All three companies operate cloud storage and messaging services. All three handle encrypted data. Only one filed fewer CSAM reports than a mid-size school district.
The tools to close that gap have existed for over a decade. Microsoft developed PhotoDNA in 2009 with Dartmouth College. The system compares mathematical fingerprints of images against a registry of confirmed abuse material. If a match hits, it flags automatically. Microsoft provides the technology free to qualified organizations. Google adopted it. So did Reddit, Snap, and Dropbox.
Apple did not.
Apple tried building its own version in 2021. NeuralHash would scan iCloud photos against a database of known abuse images. Privacy groups pushed back hard. The same technology, they said, could hand governments a surveillance backdoor. Apple killed the whole project by late 2022. Craig Federighi, who runs Apple's software division, told The Wall Street Journal the company would shift to preventing abuse "before it occurs."
The replacement? Communication Safety. It blurs nudity on children's devices. It works in Messages, FaceTime, AirDrop. Enabled by default for child accounts in family configurations. And it does nothing about adults storing and sharing CSAM through iCloud.
The distinction matters more than you might think. Communication Safety tells kids when something harmful shows up on their screen. That's where it stops. The content stays put. Nobody gets notified. Apple built a warning label for the viewer and left the library wide open.
The rug and the rocks
Meta began rolling out default end-to-end encryption on Messenger in December 2023. One Meta employee's reaction was accidentally precise.
"There goes our CSER numbers next year," the employee wrote, according to CNBC's reporting on the New Mexico filings. CSER is Meta's Community Standards Enforcement Report, its internal scorecard. Same employee, same thread. Meta had "put a big rug down to cover the rocks."
The rocks were substantial. Court filings in the New Mexico case indicate Meta's systems flagged millions of child abuse reports annually before the Messenger encryption change. That volume represented what the company could catch because its safety systems could still read message content. Encryption erased that capability.
Meta's staff flagged the consequences years before the rollout. A senior staffer in Global Affairs wrote in a February 2019 memo that "without robust mitigations, E2EE on Messenger will mean we are significantly less able to prevent harm against children." Another internal document from June 2019 went further. "We will never find all of the potential harm we do today on Messenger when our security systems can see the messages themselves."
The encryption plan went public in spring 2019. It took Zuckerberg's team nearly five years to ship it. Staff memos warned the whole time this was a terrible idea. Didn't matter.
Apple's path ran parallel. Friedman's 2020 text about choosing "not to know" predated the NeuralHash announcement by a year and a half. The company diagnosed the problem and built a partial fix. Then killed it under pressure. West Virginia's AG alleges Apple "knowingly and intentionally designed its products with deliberate indifference to the highly preventable harms."
Same destination for both. Willful ignorance with a product launch attached.
Privacy doesn't require blindness
The defense that encryption and child safety are incompatible collapses under a single comparison. Google files 1.47 million CSAM reports per year. Google also encrypts its messaging products end to end.
How? Because scanning stored images for known CSAM fingerprints and encrypting message transit are separate operations. PhotoDNA doesn't read your messages. It matches image hashes against a registry of confirmed abuse material. The way Apple and Meta frame the encryption debate folds two separate problems into one excuse. A convenient one.
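To make the distinction concrete, here is a minimal sketch of what hash-based detection of stored images looks like, kept entirely separate from anything happening to messages in transit. The function names and the registry are hypothetical, and a cryptographic hash stands in for PhotoDNA's perceptual fingerprinting, which tolerates resizing and re-encoding and whose implementation is not public. The point is only that the scan operates on data at rest.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of fingerprints for known, confirmed abuse imagery.
# Clearinghouses distribute these as hash lists; the set here is a placeholder.
KNOWN_BAD_HASHES: set[str] = set()


def fingerprint(image_path: Path) -> str:
    """Fingerprint a stored image file.

    A SHA-256 of the raw bytes keeps this sketch self-contained; a real
    system would use a perceptual hash so near-duplicates still match.
    """
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def scan_stored_library(storage_dir: Path) -> list[Path]:
    """Flag stored images whose fingerprints appear in the registry.

    This runs against data at rest (a cloud photo library), so it is
    independent of whether messages in transit are end-to-end encrypted.
    """
    return [
        path
        for path in storage_dir.rglob("*")
        if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".heic"}
        and fingerprint(path) in KNOWN_BAD_HASHES
    ]
```

Whether that matching runs server-side on unencrypted cloud photos or on the device before upload, the shape NeuralHash took, is a question of where the scan happens, not a forced choice between privacy and detection.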
The Electronic Frontier Foundation backed Apple's encryption decisions, and the argument carries weight. Encryption is "the best method we have to protect privacy online," EFF's Thorin Klosowski wrote, "which is especially important for young people." The group fought scanning mandates too. Automated image detection, the EFF argued, leads to false positives and unwarranted investigations.
We know this because governments have weaponized surveillance tech against their own citizens, more than once, and they'll keep doing it. But the choice between encryption and nothing? That's Apple and Meta's framing, not reality. You can protect messages in transit without refusing to hash stored images against known CSAM databases. Reddit scans for CSAM. Snap scanned for it while offering disappearing messages. The companies that settled their lawsuits did more detection work than the ones still in court. Not a coincidence.

Signal encrypts everything and stores nothing on its servers. That's one end. Google encrypts messages in transit but still scans what users store for CSAM. That's the other. Apple and Meta occupy the strangest position on that spectrum. They store billions of images on their servers and refuse to scan them. The refusal, they insist, is principled.
Consider what Apple already built. Communication Safety scans images on children's devices locally, without sending data to Apple's servers. The company proved on-device detection works when it wants to ship it. For iCloud, most users haven't enabled Advanced Data Protection, the optional setting that encrypts stored photos end to end. Without it, images sit on Apple's infrastructure in a form the company could theoretically scan. Apple passed. The engineering was available. They chose the marketing.
And the privacy defense stumbles on its own evidence. When your fraud engineering lead texts that your platform is "the greatest" for distributing child exploitation material, and your response is to encrypt the evidence rather than address the underlying problem, the privacy is protecting the company from liability. Not the person holding the phone.
Both Apple and Meta had the engineering resources to protect privacy and detect abuse at the same time. They picked the path that demanded less work and generated fewer uncomfortable numbers. You can guess which priority won.
Who pays when the verdicts come
Courts in three states are now testing whether "we chose not to know" counts as a business decision or an admission of liability. The outcomes could reshape how encryption and abuse detection coexist across platforms serving billions.
West Virginia wants injunctive relief forcing Apple to implement effective CSAM detection on iCloud. Not fines that vanish into quarterly earnings. Mandated scanning technology on a platform used by more than a billion people. New Mexico's AG wants Meta held accountable for what he calls misleading statements about platform safety. The LA trial could set precedent that Instagram's design choices are legally responsible for harm to minors.
McCuskey, the West Virginia AG, told reporters he expects other states to follow. The political math makes that likely. Child safety prosecutions against major tech companies generate bipartisan applause in ways antitrust cases never will. TikTok and Snap already settled with the plaintiff in the LA case before trial. That leaves Meta and YouTube as the remaining defendants, both cornered by their own internal records.
Zuckerberg's testimony offered a preview of the corporate defense playbook. He claimed Meta had moved away from engagement metrics toward "utility." He accused the plaintiffs' lawyers of mischaracterizing his prior statements more than a dozen times, according to The New York Times. Combative and defensive is not the posture of a company that feels secure about its record. The judge in LA had to warn spectators not to record proceedings with AI glasses after members of Zuckerberg's entourage were spotted wearing Meta's Ray-Bans inside the courthouse. Even the courtroom became a product demo.
For Apple, the exposure stings differently. The company built its identity around privacy. "What happens on your iPhone stays on your iPhone," read the billboard campaign. If a court rules that this commitment created a haven for child exploitation material, the brand damage dwarfs whatever financial penalties follow. Apple's response this week, pointing reporters toward Communication Safety and parental controls, revealed a company anxious to explain away a 267-to-1.47-million gap with feature announcements.
Meta can point to 30.6 million CSAM reports as evidence of past effort. But those pre-encryption numbers have become prosecution exhibits. They prove Meta knew exactly how much abuse its systems could catch, then chose an architecture that couldn't.
And the West Virginia suit didn't emerge from nowhere. By 2024, some 2,680 child sexual abuse survivors had already sued Apple in a California class action, arguing that Apple's decision to kill its CSAM scanner gave abusers a green light. A separate North Carolina case was filed on behalf of a nine-year-old. McCuskey is the latest AG to pile on.
The tell is in the timestamps
Return to that December 2023 message from the Meta employee. "There goes our CSER numbers." Written the same month Meta publicly celebrated default encryption on Messenger. The blog post and the internal admission arrived simultaneously. The company knew what it was losing and announced the loss as a feature.
Friedman sent that text in February 2020. A year and a half later, Apple announced NeuralHash. By late 2022 it was dead. Two years building a fix for a problem the company's own fraud chief had already diagnosed. Then they threw it out.
The pattern keeps repeating. Both companies treated child safety like a messaging concern rather than an engineering obligation. They chose not to see, then dressed up the blindness as a principled stand. Attorneys general in three states are now asking courts to decide what "we chose not to know" really means.
The next AGs to file won't even need new arguments. Apple and Meta already wrote them.
Frequently Asked Questions
What is CSAM and why are tech companies required to report it?
CSAM stands for child sexual abuse material. Under US federal law, tech companies must report detected instances to the National Center for Missing and Exploited Children. Companies use tools like Microsoft's PhotoDNA to match images against a database of confirmed abuse material. Apple filed 267 such reports in 2023, compared to Google's 1.47 million and Meta's 30.6 million.
What was Apple's NeuralHash and why did the company cancel it?
NeuralHash was Apple's system announced in 2021 to scan iCloud photos for known child abuse images. Privacy advocates argued the technology could become a government surveillance backdoor. Apple shelved the project by late 2022 and replaced it with Communication Safety, which warns children about nudity on their devices but does not detect, report, or remove CSAM from iCloud.
How does end-to-end encryption affect child abuse detection?
End-to-end encryption scrambles messages so the platform itself cannot read them. When Meta began rolling out default encryption on Messenger in December 2023, it lost the ability to scan message content for abuse material. Court filings in the New Mexico case indicate Meta's pre-encryption systems caught millions of abuse reports annually from Messenger.
What is PhotoDNA and which companies use it?
PhotoDNA is a hashing tool developed by Microsoft and Dartmouth College in 2009 that identifies known CSAM by comparing image fingerprints against a registry of confirmed abuse material. Microsoft provides it free. Google, Reddit, Snap, and Dropbox all use it. Apple never adopted it, and has not deployed a comparable alternative since canceling NeuralHash.
What could courts force Apple and Meta to change?
West Virginia is seeking injunctive relief requiring Apple to implement CSAM detection on iCloud, potentially mandating scanning for over a billion users. New Mexico wants Meta held accountable for misleading safety claims. The LA trial could establish that Instagram's design choices are legally responsible for harming minors. Other state attorneys general are expected to file similar suits.



