OpenAI flipped Sora's copyright policy from opt-out to opt-in within 72 hours of launch. The reversal—plus a new revenue-sharing model—reveals the collision between AI companies' burn rates, Hollywood's legal firepower, and the race to monetize generative video.
🔄 OpenAI reversed Sora's copyright policy within 72 hours—from opt-out to opt-in control for rightsholders, plus a new revenue-sharing model for studios that permit character use.
📊 The company burns $2.5 billion annually while generating $4.3 billion in H1 2025 revenue, forcing monetization decisions as users create far more video content than projected.
🎬 Disney opted out immediately after launch, while Sora's feed filled with SpongeBob, Pokémon, and Star Wars content—triggering the rapid policy shift.
💰 OpenAI hired Meta's Fidji Simo, who built a $55 billion ad business, signaling that advertising infrastructure is coming even though it goes unmentioned in the current announcement.
⚖️ The reversal shows "move fast and break things" fails when the broken things are IP rights—product liability for generated content lands immediately, unlike training data litigation.
🌍 Revenue-sharing for AI-generated content could establish cross-industry norms, transforming studios from litigants into participants in AI remix culture if the model works.
A viral launch met a legal wall. After debuting Sora 2 with an opt-out approach to copyrighted characters on Tuesday, OpenAI reversed course by Friday: rightsholders will now get opt-in control with added knobs, and a revenue-share is on the table, per Sam Altman’s Sora update. The app still rocketed to No. 1 on Apple’s charts despite being invite-only. The sequence tells its own story.
What changed—and why it matters
OpenAI first treated copyrighted characters like training data—available unless blocked. Sora’s feed filled with SpongeBob, Pokémon, and Star Wars riffs within hours. Disney opted out immediately. Others waited, watched, and calculated. Then came the pivot.
Altman said rightsholders liked the promise of “interactive fan fiction” but wanted control—up to and including “not at all.” The new policy mirrors Sora’s likeness rules: opt-in, with more granular constraints. OpenAI also floated paying those who allow character use. It’s a notable shift. And fast.
The speed wasn’t accidental. Neither was the message.
The burn-rate math
OpenAI’s finances add pressure. The company generated roughly $4.3 billion in the first half of 2025, while burning about $2.5 billion annually, largely on R&D. It just ran a $6.6 billion employee tender at a $500 billion valuation. Those numbers force choices.
Altman was blunt: video generation must pay for itself. Users are producing far more content than forecast, often for tiny audiences. That strains compute budgets. A revenue-share serves two goals—legal insulation and a path to monetize creation without throttling it. The math bites.
Hiring Fidji Simo, who helped build Meta’s ad juggernaut, hints at the next lever. Advertising isn’t in the blog post. But the scaffolding—social feed, behavioral signals, brand-safe partnerships—is.
Permission versus forgiveness at scale
“Move fast and break things” works until the thing is someone else’s IP. OpenAI’s opt-out test looked like web-scrape logic applied to generation: let content appear unless a studio blocks it. That era is over.
The distinction is crucial. Training data litigation crawls through courts; product liability for generated content lands immediately. Sora’s safety rails already block explicit violence, self-harm, and unverified celebrity likeness. Blocking copyrighted characters requires either a rights database or conservative generation rules that curtail creativity. Competitors are looser today. OpenAI led with permissiveness, then flinched when the risk crystallized. It had to.
The disinformation angle heightened stakes. In three days, users fabricated ballot-stuffing, immigration raids, and bomb scenes that never happened. Experts warned of the “liar’s dividend,” where real footage is dismissed as fake because convincing fakes are ubiquitous. Watermarks can be edited out. Trust erodes. Quickly.
The revenue-sharing bet
Turning antagonists into partners is the point. If studios can earn when fans legally generate with their characters, takedowns become licensing. That’s the Content ID lesson, ported to generative video. Precedent matters.
Success requires plumbing. OpenAI needs accurate, dynamic rights registries, enforcement tools, and UI that nudges users into licensed lanes without killing spontaneity. It also needs pricing that covers compute while leaving margin for payouts. None of that is trivial. But it’s more scalable than whack-a-mole lawsuits. Incentives beat warnings.
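To make that plumbing concrete, here is a minimal sketch of what an opt-in gate with per-character rules and a payout split could look like. Everything below is hypothetical: OpenAI has not published a registry schema, character identifiers, or revenue percentages, so the names and numbers are placeholders for illustration only.

```python
from dataclasses import dataclass

@dataclass
class RightsPolicy:
    opted_in: bool               # rightsholder has explicitly allowed character use
    revenue_share: float         # fraction of attributable revenue paid out (hypothetical)
    blocked_contexts: frozenset  # granular limits, e.g. {"violence", "political"}

# Hypothetical registry; a real system would be a dynamic, studio-managed service.
REGISTRY: dict[str, RightsPolicy] = {
    "character:example_hero": RightsPolicy(
        opted_in=True,
        revenue_share=0.20,
        blocked_contexts=frozenset({"violence"}),
    ),
}

def can_generate(character_id: str, context_tags: set[str]) -> bool:
    """Opt-in gate: default deny unless the rightsholder has allowed this use."""
    policy = REGISTRY.get(character_id)
    if policy is None or not policy.opted_in:
        return False             # opt-in model: a missing entry means no
    return not (context_tags & policy.blocked_contexts)

def payout(character_id: str, attributable_revenue: float) -> float:
    """Revenue owed to the rightsholder for a permitted generation."""
    policy = REGISTRY.get(character_id)
    if policy is None or not policy.opted_in:
        return 0.0
    return attributable_revenue * policy.revenue_share
```

The default-deny branch is the policy reversal in miniature: under the old opt-out approach, a missing registry entry would have meant "go ahead."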
The bet extends beyond Sora. If revenue-sharing stabilizes AI-assisted remix culture, it could reshape creator economics across platforms. If it fails, courts will decide the rules instead—and slowly.
Competitive pressure, nakedly stated
OpenAI’s media chief, Varun Shetty, has said the team didn’t want “too many guardrails” that would dampen creativity or cede ground to rivals. Translation: Google’s Veo and Meta’s Vibes are running similar plays. Market share matters before norms harden. That’s the calculus.
The reversal acknowledges a second reality: Hollywood’s lawyers move faster than AI safety reviews. Studios can issue demands in hours. Safety regimes evolve in weeks. OpenAI chose to iterate in public. It got burned, then adapted. That is now the operating model.
The AGI detour question
OpenAI brands itself as an AGI lab. Sora’s social feed—where users drop Sam Altman into Grand Theft Auto set pieces—can look like a detour. Leadership argues video is a wedge into “virtual world-building,” aligned with longer-term goals. Maybe.
The tension is real. Researchers attracted by grand science may resist funneling breakthroughs into viral feeds. Yet revenue today funds ambition tomorrow. Every frontier lab is converging on the same compromise: consumer stickiness now, research later. It’s not hypocrisy. It’s the cost of scale.
What to watch next
Three things will show whether this pivot sticks. First, how many major studios opt in—and at what price. Second, whether OpenAI can build rights infrastructure that’s both permissive and safe. Third, how quickly competitors match the policy and the payouts. Watch the pipes, not the memes.
Why this matters
The pace of copyright adaptation will decide whether “permissionless innovation” survives contact with AI video—or collapses under injunctions and compute costs.
A workable revenue-share for generated content could set cross-industry norms, shifting creators and studios from litigants to participants in AI remix culture.
❓ Frequently Asked Questions
Q: What is Sora 2 and how does it work?
A: Sora 2 is OpenAI's AI video generator that creates up to 10-second clips from text prompts. Users type descriptions like "a cat riding a skateboard" and the tool produces realistic video with synchronized audio. The app includes a social feed where users can remix others' videos and insert their own likeness into scenes—a feature called "cameos."
Q: Why does video generation cost OpenAI so much money?
A: Video requires massively more compute than text or images. Generating a 10-second Sora clip means synthesizing hundreds of frames with consistent physics and synchronized audio. OpenAI burns $2.5 billion annually, largely on R&D and compute infrastructure. Users are creating far more videos than projected, often for tiny audiences, which strains costs without generating proportional revenue.
Q: How would the revenue-sharing model actually work?
A: OpenAI hasn't released specifics, but the model likely mirrors YouTube's Content ID system: studios opt in and specify usage rules for their characters. When users generate videos featuring those characters, OpenAI shares subscription or potential advertising revenue with rightsholders. Altman said implementation requires "trial and error" but will start soon.
Q: What are Google and Meta doing differently with copyright?
A: Google's Veo 3 and Meta's Vibes currently allow similar generations without formal opt-in requirements for copyrighted characters. OpenAI's head of media partnerships said they avoided "too many guardrails" initially to stay competitive. The industry hasn't standardized copyright policies yet, but OpenAI's reversal may pressure competitors to adopt similar restrictions.
Q: What's the "liar's dividend" mentioned in the article?
A: The liar's dividend is when realistic fake content becomes so common that people dismiss authentic footage as AI-generated. If convincing fabrications are everywhere, bad actors can claim real evidence against them is fake. UC Berkeley professor Hany Farid said video was the "final bastion" of trustworthy evidence—Sora erodes that.