OpenAI’s Sora 2 launches into copyright chaos

OpenAI's new Sora 2 app hit #1 in 24 hours—flooded with copyrighted characters and deepfakes. The company is inverting copyright norms: rightsholders must opt out, not opt in. Disney did. Most didn't. Now the legal questions multiply.

OpenAI's Sora 2 launched Tuesday. By Wednesday morning, the app's feed featured SpongeBob as Hitler, Pikachu stealing from CVS, and Sam Altman shoplifting GPUs from Target. The company's newest AI video generator—now paired with a TikTok-style social app—hit number one in the iOS App Store's photo and video category within 24 hours. It also revealed how far OpenAI is willing to go in its copyright stance: unless rightsholders explicitly opt out, their work can appear in user-generated videos.

The approach inverts standard copyright practice. Disney opted out immediately. Most others didn't respond or haven't yet acted. The result: a feed dominated by familiar characters the model was clearly trained on, despite OpenAI's stated guardrails.

What changed—and why it matters

Sora 2 represents a technical leap over the February 2024 original. The model now generates synchronized audio, handles complex physics more reliably (though still imperfectly), and maintains continuity across multiple shots. Videos cap at 60 seconds. The most significant addition: "Cameo," which lets users insert verified likenesses—their own or others who've given permission—into AI-generated scenes.

The standalone Sora app packages these capabilities into an invite-only social feed. Users create 10-second clips from text prompts or photos, remix others' videos, and scroll through AI-generated content. The experience mirrors TikTok's addictive scroll, except every frame is synthetic.

What's actually new isn't just technical capability—Runway, Google's Veo, and others have been advancing in parallel. The shift is strategic. OpenAI paired a more capable model with a social distribution system and an aggressive copyright posture, then made it simple enough for mass adoption.

OpenAI's opt-out model places the enforcement burden on rightsholders. The Wall Street Journal reported the company informed studios they'd need to opt out of having their content appear in Sora outputs. Disney did. Warner Bros. and Sony Music didn't respond to media requests for comment.

The legal distinction matters. Mark McKenna, a UCLA law professor who directs the Institute for Technology, Law, and Policy, draws a sharp line between training inputs and generated outputs. "Training AI models on legitimately acquired copyright material can be considered fair use," he told NBC News. "Outputting visual material is a harder copyright question."

OpenAI faces existing copyright litigation from authors including Ta-Nehisi Coates and newspapers including The New York Times. Competitor Anthropic recently settled similar claims for $1.5 billion. The outputs from Sora 2—pixel-accurate Rick and Morty scenes, Nintendo characters in countless scenarios—suggest the training data included substantial copyrighted material.

The company's response through a spokesperson: "We're working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora."

McKenna characterizes the approach as calculated risk. "The opt-out is clearly a 'move fast and break things' mindset," he said.

The deepfake infrastructure

Altman made his verified likeness available to all Sora users. The feed responded predictably. Videos show him stealing GPUs, serving Pikachu at Starbucks, asking pigs if they're "enjoying their slop." Some critique OpenAI's copyright stance through the medium itself—Pikachu and SpongeBob characters begging Altman to stop training on them.

The Cameo feature requires one-time biometric capture: users record themselves reading numbers, then turning their head through multiple angles. OpenAI claims "tons of validation" prevents impersonation. Users control who can generate videos using their likeness through four settings: only me, people I approve, mutuals, or everyone.

The safeguards face obvious challenges. Watermarks can be cropped. Metadata indicating AI generation—which OpenAI acknowledges isn't a "silver bullet"—disappears when videos migrate to other platforms. Users can't delete exported copies of videos featuring their likeness, only versions within Sora itself.

TechCrunch reporter Amanda Silberling tested the feature. When she recorded her first attempt wearing a tank top, the app rejected it as violating guidelines—bare shoulders apparently too risqué. After changing to a t-shirt, the system approved her biometric data. The generated video of her "discussing baseball" inferred she was a Phillies fan from her Philadelphia IP address and ChatGPT history, speaking in a voice unlike hers but in a bedroom matching hers exactly.

"Every day I wake up to new horrors beyond my comprehension," a commenter wrote when she shared the result.

The productization advantage

Multiple observers noted OpenAI's consistent edge in turning AI capabilities into viral products. Meta launched "Vibes," a similar AI video feed, days before Sora 2. M.G. Siegler, writing at Spyglass, called it "half-baked and obtuse to use" compared to Sora's "stupidly simple" interface.

The simplicity drives engagement. Siegler described getting "sucked in for at least a half hour" each time he opened the app, "remixing everything and anything that pops into my head." He compared the moment to Vine's early days—another short-form video platform that demonstrated unexpected creative potential before Twitter shuttered it.

The addictiveness is by design. OpenAI includes cooldown periods for teen accounts after extended scrolling. Adult accounts receive "nudges" to take breaks. The app periodically prompts users: "How does using Sora impact your mood?"

What comes next

The technical trajectory is clear: more realistic physics, longer clips, better consistency across shots. The legal trajectory is less certain. Hollywood's response remains muted so far. Predictions that Sora 2 means "the end of Hollywood" are premature—60-second caps and multi-shot inconsistency make feature-length narratives impractical. Short-form social content and ads represent more immediate use cases.

The regulatory response will likely focus on deepfakes and disinformation rather than copyright. Political deepfakes aren't new—President Trump recently shared a racist deepfake of Democratic congressmen. But Sora democratizes the capability. When the app opens beyond invite-only access, these tools reach everyone.

OpenAI secured a $500 billion valuation. The product demonstrates why: superior productization, aggressive legal positioning, and technology that's slightly better than predecessors. The cost—as 404 Media's Jason Koebler frames it—includes "nearly all of the intellectual property ever created by our species, the general concept of the nature of truth, the devaluation of art through endless flooding of the zone, and the knock-on environmental, energy, and negative labor costs of this entire endeavor."

Why this matters:

  • The opt-out copyright model shifts enforcement burden to rightsholders while OpenAI benefits from training on protected works—legal precedent hasn't caught up to this inversion
  • Deepfake accessibility at this quality level, paired with social distribution, fundamentally changes information verification dynamics across platforms regardless of OpenAI's internal safeguards

❓ Frequently Asked Questions

Q: Can anyone use Sora 2 right now?

A: No. The Sora app is invite-only as of October 2025. Once you receive an invite, you can access Sora 2 through the iOS app or sora.com. ChatGPT Pro subscribers will get access to a higher-quality "Sora 2 Pro" model. OpenAI hasn't announced when the app will open to the general public.

Q: How much does Sora 2 cost to use?

A: Sora 2 is initially free to encourage adoption. ChatGPT Pro users (who pay for premium ChatGPT access) get access to the higher-quality Sora 2 Pro model. OpenAI hasn't announced pricing for when the free period ends or what the general public will pay once the app opens beyond invite-only access.

Q: How do copyright holders opt out of Sora?

A: The Wall Street Journal reported that OpenAI contacted studios to inform them they must opt out if they don't want their content appearing in Sora videos. However, blanket opt-outs aren't available—rightsholders must submit specific examples of offending content. Disney opted out. Warner Bros. and Sony Music didn't respond to media inquiries about their plans.

Q: What happens if someone creates a deepfake of me without permission?

A: Users control who can generate videos using their "Cameo" through four settings: only me, people I approve, mutuals, or everyone. You can see any video using your likeness and revoke access or remove videos. The problem: you can't delete exported copies, only versions within Sora. Watermarks can be cropped out.

Q: How realistic are Sora 2 videos compared to real footage?

A: Sora 2 handles physics better than earlier models—basketballs bounce realistically, water behaves naturally. Videos cap at 60 seconds. OpenAI admits the physics remain "imperfect." Deepfakes of real people are convincingly realistic in static shots but often fail with complex movements. One reporter noted the AI voice didn't match hers, but the bedroom setting was exact.
