OpenAI's Sora bet: copyright first, permission later

OpenAI's Sora 2 demands permission for your face but lets Spider-Man appear unless studios opt out. The split policy arrives as a year-end conversion deadline looms and competitors launch rival video feeds, testing fair use through product, not precedent.


💡 TL;DR - The 30-Second Version

👉 OpenAI released Sora 2 Tuesday with asymmetric IP rules: explicit permission required for any person's likeness, but copyrighted characters appear unless studios opt out after videos are made.

📱 The iOS social app creates 10-second AI videos with verified "cameo" avatars friends can use with permission. The invite-only launch in the US and Canada gives each user four additional invites.

📊 Studios and talent agencies learned last week their copyrighted material appears in Sora by default. No blanket opt-outs—rights holders report violations individually.

⚔️ Competitive pressure intensifies as Google integrates Veo 3 into YouTube and Meta launches Vibes, while OpenAI faces year-end deadlines where investors can claw back funding if restructuring stalls.

🎯 Sora 2 adds synchronized audio and better physics, but the social feed gambles on unproven demand—neither OpenAI nor Meta can show users want AI-generated content from friends.

⚖️ The split policy tests fair-use boundaries courts are still defining, positioning OpenAI as responsible on likeness while aggressive on copyright—a stance that anticipates litigation.

OpenAI released Sora 2 on Tuesday with an asymmetric policy choice that exposes its strategic priorities: your face needs explicit permission, but Spider-Man doesn't. The company's new video generator and accompanying iOS social app treat individual likeness as sacred while making copyright holders opt out after the fact—a stance that reflects both competitive desperation and calculated risk tolerance as OpenAI races toward a year-end corporate structure deadline.

The copyright approach mirrors what OpenAI launched for image generation in April, when ChatGPT promptly flooded the internet with Studio Ghibli-style memes. Studios and talent agencies received notice over the past week that their copyrighted characters will appear in Sora-generated videos unless they explicitly request removal. No blanket opt-outs allowed—rights holders report violations one by one.

What's actually new

Sora 2 adds audio synchronized with the video, including dialogue in multiple languages, plus what OpenAI claims is meaningfully better physics simulation—a person doing a backflip on a paddleboard, complete with proper fluid dynamics. The model generates multi-shot sequences automatically rather than requiring manual editing. It's positioned as the "GPT-3.5 moment for video," jumping from the February 2024 research preview (the "GPT-1 moment") to something ready for mass adoption.

The social app resembles TikTok's vertical feed but centers on a "cameo" feature: users record a short video to create a verified digital avatar that friends can insert into AI-generated clips—with permission. Each cameo use notifies the likeness owner, who can delete the video or revoke access. The app launches invite-only in the US and Canada, with each user receiving four additional invites. Videos max out at 10 seconds. An Android version sits in the "eventually" category.

Three controls shape the experience: you decide who can use your cameo (just friends or everyone), you see all videos featuring your likeness including drafts, and you're a "co-owner" with deletion rights. Sora won't generate a specific person from a prompt or photo unless they've submitted a verified cameo first. Public figures face the same barrier—they must upload their own cameo to appear in any Sora video.
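To make the consent model concrete, here is a minimal, hypothetical Python sketch of the three controls described above. None of these names come from OpenAI's API; they simply illustrate the friends-or-everyone audience setting, revocation, and co-owner deletion rights.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical model of Sora's cameo consent rules as described in the article.
# Class and function names are illustrative, not OpenAI's actual API.

class CameoAudience(Enum):
    FRIENDS_ONLY = "friends_only"
    EVERYONE = "everyone"

@dataclass
class Cameo:
    owner_id: str
    audience: CameoAudience = CameoAudience.FRIENDS_ONLY
    revoked: bool = False

@dataclass
class Video:
    creator_id: str
    cameo: Cameo
    is_draft: bool = False

def can_use_cameo(creator_id: str, cameo: Cameo, owner_friends: set[str]) -> bool:
    """Check before inserting a cameo into a generated clip."""
    if cameo.revoked:
        return False  # owner revoked access; no new videos allowed
    if cameo.audience is CameoAudience.EVERYONE:
        return True
    return creator_id in owner_friends  # friends-only is the default

def can_delete(video: Video, user_id: str) -> bool:
    """The likeness owner is a co-owner: either party can delete the video."""
    return user_id in (video.creator_id, video.cameo.owner_id)
```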


OpenAI's IP stance follows a "forgiveness over permission" playbook that Georgetown Law's Kristelia García calls predictable given competitive intensity. Google connected its Veo 3 video generator to YouTube recently, giving it distribution through the platform's massive user base for short-form videos. Meta launched Vibes last week, its own AI video social feed. OpenAI faces pressure on two fronts: catch YouTube's network effects and prove demand before competitors do.

The company distinguishes copyright from likeness with clarity that suggests legal advice shaped the policy. Chief Strategy Officer Jason Kwon's statement—"Our general approach has been to treat likeness and copyright distinctly"—lands as corporate positioning for inevitable litigation. This summer, judges sided with Meta and Anthropic in separate fair-use cases when training data was transformed into something meaningfully different. OpenAI appears to be testing whether generation follows the same logic as training.

Hollywood's reaction will determine if the bet works. Disney and Universal sued Midjourney in June for allegedly stealing copyrighted work to train its image generator. Trump signaled support for AI fair use over the summer, saying systems can learn from articles without contract negotiations, but 400-plus actors, directors, and musicians pushed back with an open letter. OpenAI lobbied the Trump administration this spring alongside Google to declare training on copyrighted material fair use—a move that strained relationships the company needs for Hollywood adoption.

VP of Media Partnerships Varun Shetty's line—"If there are folks that do not want to be part of this ecosystem, we can work with them"—translates to reactive cleanup, not proactive consent. The company has agreements with some studios to block their characters upon request, but the default posture is inclusion unless explicitly told otherwise.

Social ambition vs. unproven demand

OpenAI is making its biggest product bet on a behavioral assumption: people want feeds of AI-generated content from friends, not just human-created videos. Thomas Dimson, a software engineer on the team, admitted internal skepticism about an AI-generated feed until the cameo feature reframed it as connection-focused. The pitch emphasizes "strengthen and form new connections" through "fun, magical Cameo flows," positioning against what Dimson calls other platforms "drifting away from this idea of connections and friends."

That's the theory. The evidence is absent. Meta and OpenAI are both betting on AI video social, but neither can point to user behavior that validates the model. Snap's stock dropped 9.2% Tuesday on the news, reflecting investor belief that Sora represents a competitive threat—but Snap was already under pressure from TikTok and Instagram, losing $400 million in the first half of the year with single-digit growth and massive stock-based compensation. The market priced in displacement risk before anyone proved demand exists.

The app's design choices reveal tension between aspiration and reality. OpenAI prioritizes "creativity and active participation, not passive scrolling"—but builds continuous scroll that parents must manually disable for teens. The feed ships with "steerable ranking" so users can "tell the algorithm exactly what you're in the mood for," treating preference control as a feature rather than admitting recommendation algorithms struggle with AI content signals.

The conversion deadline pressure

Sora 2 arrives while OpenAI awaits approval from California and Delaware attorneys general for its shift toward traditional for-profit structure. If the conversion doesn't complete by year-end, some investors can claw back their promised investment. That timeline explains why OpenAI is launching invite-only rather than waiting for broader access—the company needs usage data and market validation before the deadline, even if the product isn't ready for full release.

Competitive timing compounds the pressure. OpenAI first showed the original Sora as a research preview in February 2024 and launched it with limited access in December. Nine months later, Google integrated Veo 3 into YouTube and Meta launched Vibes. The gap between capability demonstration and product execution gave competitors time to move. Bill Peebles, who leads the Sora team, framed the release as catching up to the "ChatGPT moment for video generation"—a comparison that acknowledges OpenAI hasn't yet achieved for video what it managed for text.

The safety architecture—C2PA metadata, visible watermarking, reverse-image search tools, prompt filtering across multiple video frames, transcript scanning for policy violations—builds on systems from ChatGPT image generation and Sora 1. OpenAI disabled screen recording to control video distribution and requires verification before generating anyone's likeness. These controls position the company as responsible in likeness protection while maintaining aggressive copyright posture—a split that anticipates regulatory scrutiny.
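For anyone who wants to inspect that provenance layer directly, the sketch below shells out to the open-source c2patool CLI from the Content Authenticity Initiative to read whatever C2PA manifest a downloaded clip carries. It assumes c2patool is installed and prints the manifest as JSON when given a bare file path; flags and output details vary by version, so treat it as a starting point rather than a verified recipe.

```python
import json
import subprocess

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the file's C2PA manifest store as a dict, or None if absent.

    Assumes the open-source `c2patool` CLI is on PATH and that invoking it
    with a file path prints the manifest as JSON (check your version's docs).
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no embedded manifest, unsupported format, or tool error
    try:
        return json.loads(result.stdout)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    manifest = read_c2pa_manifest("sora_clip.mp4")  # hypothetical filename
    if manifest is None:
        print("No readable C2PA provenance metadata found.")
    else:
        # The manifest's claim generator identifies the tool that produced
        # or last edited the asset (e.g., an AI video generator).
        print(json.dumps(manifest, indent=2))
```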

Why this matters:

• OpenAI's asymmetric IP policy—permission for people, opt-out for fictional characters—tests fair-use boundaries courts are still defining, setting precedent through product launch rather than legal clarity.

• The social video bet assumes demand exists without evidence, risking significant development resources on behavioral change that may not materialize even as competition intensifies and corporate structure deadlines loom.

❓ Frequently Asked Questions

Q: What's the year-end deadline OpenAI is racing against?

A: OpenAI is waiting for approval from California and Delaware attorneys general to convert from its current structure to a traditional for-profit company. If the conversion doesn't complete by December 31, 2025, some investors can claw back their promised investment. This pressure explains the invite-only launch rather than waiting for broader readiness.

Q: How does the cameo feature actually work?

A: You record a short video following on-screen instructions to verify your identity and capture your likeness. This creates a digital avatar that friends can insert into their AI-generated videos—but only with your permission. You're notified each time someone uses your cameo, see all videos featuring it (including drafts), and can delete any video or revoke access anytime.

Q: Can I access Sora 2 right now?

A: The iOS app is invite-only in the US and Canada. OpenAI prioritizes access for heavy users of the original Sora model, then ChatGPT Pro subscribers, Plus and Team plan users, and eventually free users. Each person who gets in receives four additional invites to share. The Android release timeline is unspecified.

Q: What happens if a studio doesn't opt out of having their characters appear?

A: Their copyrighted characters will appear in Sora-generated videos by default. Studios must report violations individually—OpenAI won't accept blanket opt-outs across all of a studio's work. This mirrors the approach OpenAI took with ChatGPT image generation in April, which promptly generated Studio Ghibli-style content until asked to stop.

Q: How is Sora 2 different from Meta's Vibes or Google's Veo 3?

A: Google integrated Veo 3 directly into YouTube, giving it distribution through an existing massive user base for short-form videos. Meta's Vibes launched last week as a standalone AI video feed. Sora 2 bets on the cameo feature—letting friends insert verified avatars into each other's videos—as its differentiator, but all three are testing whether users want AI-generated social feeds.
