Zuckerberg Overruled 18 Experts on Beauty Filters at Trial

Zuckerberg Called Child Safety 'Paternalistic.' This Time, a Jury Heard It.

Two members of Mark Zuckerberg's entourage, both wearing Meta Ray-Ban glasses, the AI-equipped frames that can quietly record video of everything in view, walked him into Los Angeles Superior Court on Wednesday. Zuckerberg himself wore a navy suit and gray tie. No glasses. The judge immediately threatened contempt for anyone caught using smart glasses to record inside her courtroom.

Nice to see this judge has as much contempt for these bullshit meta glasses as I do! You record me without my permission, I am punching those glasses off your face. I don’t support your surveillance state or the invasion of my privacy! www.cnbc.com/2026/02/18/m...

— The Resistance CO (@theresistanceco.bsky.social) February 18, 2026 at 11:24 AM

Zuckerberg was there to defend his company against charges of building products designed to addict children. His team showed up wearing the company's newest surveillance product. The irony wrote itself.

Then Mark Lanier, the plaintiff's attorney, projected a frowning stick figure on a screen. Simple drawing, brutal question. Should a reasonable company help a vulnerable child, ignore them, or prey on them?

"I think a reasonable company should try to help the people that use its services," Zuckerberg replied.

Four hours of testimony chewed through that answer. Lanier walked the jury through internal emails, engagement targets, an expert panel the company paid for and then ignored, and age verification systems that didn't exist for Instagram's first decade. Zuckerberg had one move. The same five words, worn thin by repetition. "You're mischaracterizing what I'm saying." Reporters lost count somewhere past a dozen. By late morning it had stopped working as a defense and started working as evidence.

The documents spoke for themselves. This trial, the first of some 1,600 consolidated cases to reach a jury, is being compared to Big Tobacco's courtroom reckoning in the 1990s. TikTok and Snap settled before opening statements. Meta and YouTube chose to fight. On Wednesday, the jury heard directly from the man who built the machine.

The Argument

  • Zuckerberg overruled 18 unanimous experts who found beauty filters harm teenage girls, calling the ban 'paternalistic'
  • Internal documents show Meta set teen engagement as a top priority while 4 million children under 13 used Instagram without age checks
  • The trial sidesteps Section 230 by framing Instagram as a defective product, borrowing from tobacco-era product liability law
  • More than 1,600 plaintiffs are watching this bellwether case after TikTok and Snap settled before opening statements

The 18 experts nobody listened to

The moment that may define this trial for the jury had nothing to do with algorithms or infinite scroll. It centered on beauty filters.

Meta actually banned beauty filters on Instagram back in 2019 after the evidence got hard to ignore. Girls were using the filters to distort their own faces. So the company paid for a formal review. Eighteen experts evaluated the feature. Every one of them concluded that beauty filters cause harm to teenage girls. Not a split decision. Not a close call. Unanimous.

Meta lifted the ban anyway.

When Lanier pressed Zuckerberg on why the company overruled every expert on its own panel, the answer came wrapped in the language of liberty. The decision came down to free expression, Zuckerberg told the jury. Banning the filters felt "paternalistic."

"It sounds like something I would say and something I feel," he testified. "It feels a little overbearing."

Eighteen researchers. Zero dissent. A formal finding that the feature causes measurable harm to adolescent girls. And the CEO of the company that paid for the study characterized protecting those girls as overbearing. At Meta, a unanimous expert panel carried less weight than a gut feeling about freedom.

Meta did add limited guardrails after lifting the ban. It stopped creating beauty filters internally and stopped recommending third-party filters to Instagram users. But the feature stayed live. Anyone could still apply them, including children who had lied about their age to create accounts.

Zuckerberg claimed other experts had considered the ban a suppression of free speech. He did not name them. What the jury saw instead was a pattern that runs through Meta's entire defense: every question about product safety gets reframed as a question about personal freedom. A filter of its own kind.

The corporate filter

Meta filters more than selfies. For years the company has run its own decisions through a kind of corporate beauty filter. Harmful product choices go in. Principled-sounding language comes out.

The internal record that Lanier placed before the jury tells a story that Meta's public statements do not. A 2015 email from Zuckerberg to senior executives listed his goals for the year: time spent on Meta's apps should increase by 12%, and the "teen trend" should be reversed. A separate 2017 memo from a senior executive was blunter. "Mark has decided the top priority for the company is teens."

On the stand, Zuckerberg acknowledged that he once gave teams engagement targets. He insisted the company no longer operates that way. But the internal record doesn't stop in 2017. Documents presented during earlier testimony showed that Instagram chief Adam Mosseri set specific daily usage goals: 40 minutes per user in 2023, climbing to 46 minutes by 2026. Mosseri testified last week that Instagram is not "clinically addictive." He compared heavy usage to watching too much television. Sixteen hours in a single day, he told the jury, was not addiction. Just "problematic use."

Meta's own 2015 data estimated that roughly 4 million children under 13 were using Instagram in the United States, nearly a third of all American kids aged 10 to 12. Instagram did not begin requiring a birthday at signup until late 2019. For four years after executives circulated that internal estimate, the company's age restrictions existed on paper and essentially nowhere else.

Nick Clegg ran Meta's global affairs at the time. Before joining the company, he served as the UK's deputy prime minister. His internal email was blunt. The company's age limits were "unenforced," making it "difficult to claim we're doing all we can." That message surfaced through legal discovery. It was not the kind of thing Meta puts in press releases.

Internal documents shown to the jury revealed that Zuckerberg's own comms team had coached him on how to look human. The advice, pulled from internal slides, was to come across as "authentic, direct, human, insightful, real" and avoid seeming "fake, robotic, corporate, cheesy." A 2017 note told him to show more empathy on child safety. Zuckerberg didn't fight any of that. "I'm actually well known to be sort of bad at this," he told the courtroom.

The defensiveness inside Menlo Park is visible even in the courtroom strategy. Meta's public statement about the trial focuses on one narrow question: whether Instagram was "a substantial factor" in the mental health struggles of the plaintiff, a young woman identified in court records as KGM. The company's attorneys told the jury her problems predated social media. Her difficult home life, they argued, was the real cause. It is the posture of a company that feels cornered but refuses to say so. Narrow the frame. Blame the user. Hope the jury skips the emails.

Why a courtroom changes the math

Zuckerberg has sat in chairs like this before, or chairs that looked similar. Congress called him in twice. The 2024 session got ugly when senators forced him to stand and look at parents holding framed photos of dead teenagers. He apologized. Then everybody went home and nothing changed. Five-minute turns per senator. No follow-up questions. The news cycle burned through it by the weekend.

Juries don't work like that.

Lanier has no clock. What he did on Wednesday was slow, deliberate, and hard to watch if you were sitting at the defense table. A 2015 email went up on the screen. Then a 2017 memo. Then a 2019 reversal. Each time Lanier circled back with the question phrased a little differently. That kind of grinding wears through rehearsed answers. He has weeks of testimony left.

On Joe Rogan's podcast, Zuckerberg once bragged that he could fire his own board and put himself right back in the chair. Millions of listeners heard it as confidence. The jury heard it as something else. A man telling you, under oath, that nobody at his company can stop him from doing whatever he wants. On Rogan he was loose and expansive. In court he gave clipped answers and kept reaching for that "mischaracterizing" line. Both versions are now in the record.

But the structural shift in this trial matters more than any single exchange. The plaintiffs' legal team sidestepped Section 230, the 1996 law that has shielded tech companies from content-based liability. Their argument is not about what users posted on Instagram. It is about the product itself. The algorithm, the infinite scroll, the autoplay, the beauty filters. All of it, they contend, was engineered to exploit vulnerabilities in developing brains.

This is the legal theory that has Silicon Valley's attorneys genuinely concerned. For 30 years, tech companies pointed to Section 230 as a near-absolute shield. Users made the content. Platforms merely hosted it. That defense collapses when the accusation is about scroll mechanics, autoplay loops, notification patterns, and beauty filters designed to keep a thumb moving. Product features, not user posts. The legal framework borrows from product liability, the same playbook that brought down tobacco manufacturers. If you design a feature knowing it harms users and ship it anyway, you own the harm.

More than 1,600 plaintiffs are watching this case. Over 350 families. Over 250 school districts. TikTok and Snap already settled. If this jury sides with KGM, the settlement pressure on every remaining case becomes massive. Meta posted nearly $60 billion in revenue last quarter. The company can afford to write checks. Whether it can absorb the design changes a verdict might force is a different question entirely.

Fifty feet of filtered selfies

At the end of his questioning, Lanier did something the jury will carry with them long after the legal arguments fade. He and six other people unrolled a 50-foot collage of selfies that KGM had posted on Instagram over the years. Many of the images used beauty filters. The same beauty filters that 18 experts said caused harm. The ones Zuckerberg decided to keep because removing them felt "overbearing."

KGM, now 20, sat in the courtroom during parts of the testimony, directly across from the man who runs the platform she joined at nine years old. Meta's own data showed millions of children her age on Instagram. The company did not ask users for their birthdate for another four years.

Lanier asked Zuckerberg whether Meta had ever investigated KGM's account for signs of unhealthy behavior. Zuckerberg did not answer.

Bereaved parents packed the gallery, same as every morning since the trial started. Joann Bogard was one of them. Seven a.m., courtroom lottery, morning session. Her boy was 15. A choking challenge on social media killed him in 2019. At the break she stepped into the hallway. Heavy, she said. A lot of emotions in that room.

The company that gives users tools to reshape their faces also reshapes the language around its own decisions. Safety becomes paternalism. Addiction becomes problematic use. A unanimous expert finding becomes a matter of free expression. Those filters have worked before, in front of cameras and senators and podcasters. They work less well before a jury that has spent weeks reading the internal emails. In Judge Carolyn Kuhl's courtroom, you can't even wear the glasses.

Frequently Asked Questions

What is the Zuckerberg social media addiction trial about?

A 20-year-old woman identified as KGM alleges that Instagram and YouTube were intentionally designed to be addictive, worsening her depression and suicidal thoughts after she started using the apps at age nine. The case is the first of some 1,600 consolidated lawsuits to reach a jury.

Why did Meta lift the ban on beauty filters despite expert findings?

Zuckerberg testified that banning the filters felt 'paternalistic' and 'overbearing.' He said the decision came down to free expression, even though all 18 experts on Meta's own review panel concluded the feature causes harm to teenage girls.

How does this trial get around Section 230 protections?

The plaintiffs argue Instagram is a defective product rather than a platform hosting harmful content. By focusing on design features like infinite scroll, autoplay, and beauty filters, the case uses product liability law instead of content-based claims that Section 230 would shield.

What internal documents were shown during Zuckerberg's testimony?

Lanier presented a 2015 email where Zuckerberg set goals to increase user time by 12% and reverse a declining teen trend. A 2017 memo stated teens were the company's top priority. Internal data showed 4 million children under 13 on Instagram, and a Nick Clegg email called age limits 'unenforced.'

What happens if the jury rules against Meta in this case?

A verdict for the plaintiff could trigger massive settlement pressure across 1,600 pending cases from families and school districts. Beyond financial payouts, Meta could face court-ordered design changes to features like infinite scroll, autoplay, and beauty filters.
