OpenAI's new Sora 2 app hit #1 in 24 hours—flooded with copyrighted characters and deepfakes. The company's inverting copyright norms: rightsholders must opt out, not opt in. Disney did. Most didn't. Now the legal questions multiply.
Deepfakes and Pikachu flood new AI app. Move-fast strategy tests legal boundaries
OpenAI's Sora 2 launched Tuesday. By Wednesday morning, the app's feed featured SpongeBob as Hitler, Pikachu stealing from CVS, and Sam Altman shoplifting GPUs from Target. The company's newest AI video generator—now paired with a TikTok-style social app—hit number one in the iOS App Store's photo and video category within 24 hours. It also revealed how far OpenAI is willing to go on copyright: unless rightsholders explicitly opt out, their work can appear in user-generated videos.
The approach inverts standard copyright practice. Disney opted out immediately. Most others didn't respond or haven't yet acted. The result: a feed dominated by familiar characters the model was clearly trained on, despite OpenAI's stated guardrails.
What changed—and why it matters
Sora 2 represents a technical leap over the February 2024 original. The model now generates synchronized audio, handles complex physics more reliably (though still imperfectly), and maintains continuity across multiple shots. Videos cap at 60 seconds. The most significant addition: "Cameo," which lets users insert verified likenesses—their own or others who've given permission—into AI-generated scenes.
The standalone Sora app packages these capabilities into an invite-only social feed. Users create 10-second clips from text prompts or photos, remix others' videos, and scroll through AI-generated content. The experience mirrors TikTok's addictive scroll, except every frame is synthetic.
What's actually new isn't just technical capability—Runway, Google's Veo, and others have been advancing in parallel. The shift is strategic. OpenAI paired a more capable model with a social distribution system and an aggressive copyright posture, then made it simple enough for mass adoption.
The copyright calculation
OpenAI's opt-out model places the enforcement burden on rightsholders. The Wall Street Journal reported the company informed studios they'd need to opt out of having their content appear in Sora outputs. Disney did. Warner Bros. and Sony Music didn't respond to media requests for comment.
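The asymmetry is easy to see in code. Under an opt-in regime the generator blocks by default and the platform carries the burden of securing permission; under opt-out it permits by default, and rightsholders must find the system and object. A minimal sketch, with all names and sets invented for illustration (neither function reflects OpenAI's actual systems):

```python
# Hypothetical illustration of opt-in vs. opt-out defaults.
OPTED_IN = {"studio_that_signed_a_license"}   # explicit permission granted
OPTED_OUT = {"disney"}                        # explicit objection filed

def allowed_under_opt_in(rightsholder: str) -> bool:
    # Opt-in: blocked unless permission exists. Burden on the platform.
    return rightsholder in OPTED_IN

def allowed_under_opt_out(rightsholder: str) -> bool:
    # Opt-out: allowed unless an objection exists. Burden on the rightsholder.
    return rightsholder not in OPTED_OUT
```

The same silence that blocks a use under opt-in permits it under opt-out, which is why studios that simply didn't respond still appear in the feed.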
The legal distinction matters. Mark McKenna, a UCLA law professor who directs the Institute for Technology, Law, and Policy, draws a sharp line between training inputs and generated outputs. "Training AI models on legitimately acquired copyright material can be considered fair use," he told NBC News. "Outputting visual material is a harder copyright question."
OpenAI faces existing copyright litigation from authors including Ta-Nehisi Coates and newspapers including The New York Times. Competitor Anthropic recently settled similar claims for $1.5 billion. The outputs from Sora 2—pixel-accurate Rick and Morty scenes, Nintendo characters in countless scenarios—suggest the training data included substantial copyrighted material.
The company's response through a spokesperson: "We're working with rightsholders to understand their preferences for how their content appears across our ecosystem, including Sora."
McKenna characterizes the approach as calculated risk. "The opt-out is clearly a 'move fast and break things' mindset," he said.
The deepfake infrastructure
Altman made his verified likeness available to all Sora users. The feed responded predictably. Videos show him stealing GPUs, serving Pikachu at Starbucks, asking pigs if they're "enjoying their slop." Some critique OpenAI's copyright stance through the medium itself—Pikachu and SpongeBob characters begging Altman to stop training on them.
The Cameo feature requires one-time biometric capture: users record themselves reading numbers, then turning their head through multiple angles. OpenAI claims "tons of validation" prevents impersonation. Users control who can generate videos using their likeness through four settings: only me, people I approve, mutuals, or everyone.
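The four-tier visibility model described above amounts to a simple access check. The sketch below is a hypothetical rendering for clarity, not OpenAI's implementation; the names (`CameoVisibility`, `can_generate_with_likeness`) are invented:

```python
from enum import Enum

class CameoVisibility(Enum):
    ONLY_ME = "only_me"
    APPROVED = "approved"   # people the owner has explicitly approved
    MUTUALS = "mutuals"     # accounts that follow each other
    EVERYONE = "everyone"

def can_generate_with_likeness(owner, requester, setting,
                               approved_ids, mutual_ids):
    """Return True if `requester` may put `owner`'s cameo in a video."""
    if requester == owner:
        return True  # owners can always use their own likeness
    if setting == CameoVisibility.ONLY_ME:
        return False
    if setting == CameoVisibility.APPROVED:
        return requester in approved_ids
    if setting == CameoVisibility.MUTUALS:
        return requester in mutual_ids
    return setting == CameoVisibility.EVERYONE
```

Altman's feed-wide cameos correspond to the "everyone" setting: any requester passes the check, which is how thousands of strangers put him in a Starbucks apron.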
The safeguards face obvious challenges. Watermarks can be cropped. Metadata indicating AI generation—which OpenAI acknowledges isn't a "silver bullet"—disappears when videos migrate to other platforms. Users can't delete exported copies of videos featuring their likeness, only versions within Sora itself.
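The reason exported copies escape control is mechanical: provenance travels as metadata alongside the pixels, and transcoding pipelines on other platforms typically keep only the pixels. A toy sketch of that fragility; the dict layout and `reencode` function are invented for illustration:

```python
# A video as a container: pixel data plus provenance metadata.
original = {
    "pixels": b"...",
    "metadata": {"ai_generated": True, "tool": "sora"},
}

def reencode(video):
    # Many platforms transcode uploads and write a fresh container.
    # Only the pixel data survives the round trip; side-channel
    # provenance metadata does not.
    return {"pixels": video["pixels"], "metadata": {}}
```

After one hop through a transcoder, nothing machine-readable marks the clip as synthetic, which is why OpenAI itself calls metadata no "silver bullet."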
TechCrunch reporter Amanda Silberling tested the feature. When she recorded her first attempt wearing a tank top, the app rejected it as violating guidelines—bare shoulders apparently too risqué. After changing to a t-shirt, the system approved her biometric data. The generated video of her "discussing baseball" inferred she was a Phillies fan from her Philadelphia IP address and ChatGPT history, speaking in a voice unlike hers but in a bedroom matching hers exactly.
"Every day I wake up to new horrors beyond my comprehension," a commenter wrote when she shared the result.
The simplicity drives engagement. Tech commentator M.G. Siegler described getting "sucked in for at least a half hour" each time he opened the app, "remixing everything and anything that pops into my head." He compared the moment to Vine's early days—another short-form video platform that demonstrated unexpected creative potential before Twitter shuttered it.
The addictiveness is by design. OpenAI includes cooldown periods for teen accounts after extended scrolling. Adult accounts receive "nudges" to take breaks. The app periodically prompts users: "How does using Sora impact your mood?"
What comes next
The technical trajectory is clear: more realistic physics, longer clips, better consistency across shots. The legal trajectory is less certain. Hollywood's response remains muted so far. Predictions that Sora 2 means "the end of Hollywood" are premature—60-second caps and multi-shot inconsistency make feature-length narratives impractical. Short-form social content and ads represent more immediate use cases.
The regulatory response will likely focus on deepfakes and disinformation rather than copyright. Political deepfakes aren't new—President Trump recently shared a racist deepfake of Democratic congressmen. But Sora democratizes the capability. When the app opens beyond invite-only access, these tools reach everyone.
OpenAI secured a $500 billion valuation. The product demonstrates why: superior productization, aggressive legal positioning, and technology that's slightly better than predecessors. The cost—as 404 Media's Jason Koebler frames it—includes "nearly all of the intellectual property ever created by our species, the general concept of the nature of truth, the devaluation of art through endless flooding of the zone, and the knock-on environmental, energy, and negative labor costs of this entire endeavor."
Why this matters:
- The opt-out copyright model shifts enforcement burden to rightsholders while OpenAI benefits from training on protected works—legal precedent hasn't caught up to this inversion.
- Deepfake accessibility at this quality level, paired with social distribution, fundamentally changes information verification dynamics across platforms, regardless of OpenAI's internal safeguards.
❓ Frequently Asked Questions
Q: Can anyone use Sora 2 right now?
A: No. The Sora app is invite-only as of October 2025. Once you receive an invite, you can access Sora 2 through the iOS app or sora.com. ChatGPT Pro subscribers will get access to a higher-quality "Sora 2 Pro" model. OpenAI hasn't announced when the app will open to the general public.
Q: How much does Sora 2 cost to use?
A: Sora 2 is initially free to encourage adoption. ChatGPT Pro users (who pay for premium ChatGPT access) get access to the higher-quality Sora 2 Pro model. OpenAI hasn't announced pricing for when the free period ends or what the general public will pay once the app opens beyond invite-only access.
Q: How do copyright holders opt out of Sora?
A: The Wall Street Journal reported that OpenAI contacted studios to inform them they must opt out if they don't want their content appearing in Sora videos. However, blanket opt-outs aren't available—rightsholders must submit specific examples of offending content. Disney opted out. Warner Bros. and Sony Music didn't respond to media inquiries about their plans.
Q: What happens if someone creates a deepfake of me without permission?
A: Users control who can generate videos using their "Cameo" through four settings: only me, people I approve, mutuals, or everyone. You can see any video using your likeness and revoke access or remove videos. The problem: you can't delete exported copies, only versions within Sora. Watermarks can be cropped out.
Q: How realistic are Sora 2 videos compared to real footage?
A: Sora 2 handles physics better than earlier models—basketballs bounce realistically, water behaves naturally. Videos cap at 60 seconds. OpenAI admits the physics remain "imperfect." Deepfakes of real people are convincingly realistic in static shots but often fail with complex movements. One reporter noted the AI voice didn't match hers, but the bedroom setting was exact.