Stanford researchers found that Meta's newest AI model can reproduce 42% of Harry Potter word-for-word—ten times more than earlier versions. The findings complicate copyright lawsuits and reveal a troubling trend in AI development.
Anthropic says multiple AI agents working together beat single models by 90%. The catch? They use 15x more computing power. This trade-off between performance and cost might reshape how we build AI systems for complex tasks.
AI models typically learn by memorizing patterns, then researchers bolt on reasoning as an afterthought. A new method called Reinforcement Pre-Training flips this approach—teaching models to think during basic training instead.
Hollywood studios are quietly running AI experiments while publicly supporting union deals that limit the technology. A CAA agent at an AI party admitted what insiders know: "Everyone's using it. They just don't talk about it."
The math is simple. Lionsgate can make movies for $50 million with AI instead of $100 million without it. Studios face a financial crisis with fewer films, smaller audiences, and ballooning budgets.
But there's a problem. AI models trained on copyrighted content without permission face more than 35 lawsuits. One producer's fear: "You'll make a blockbuster and sit in litigation for 30 years."
The resistance is real too. Filmmaker Justine Bateman calls AI "one of the worst ideas society has ever come up with."
Stay curious,
Marcus Schuler
Hollywood Studios Secretly Test AI Despite Union Restrictions
Hollywood studios are using artificial intelligence across their operations while hiding these efforts from talent and the public. This happens despite union contracts that limit AI use.
Industry insiders report widespread but secret AI adoption throughout major studios. Executives use the technology for script development and visual effects while publicly staying cautious about AI's role in filmmaking.
Studios hide their AI work because they fear backlash. When writers and actors struck in 2023, AI restrictions became central to negotiations. The contracts banned AI-written scripts and required consent for digital cloning. Yet studios continue testing AI tools behind closed doors.
Money Talks Louder Than Principles
"Everyone's using it. They just don't talk about it," a CAA agent said at an AI studio launch. This reflects an industry caught between financial pressure and creative resistance.
Studios face serious money problems. Fewer movies get made, theater attendance drops, and production costs rise. AI offers a solution by cutting expenses. James Cameron recently said if audiences want more blockbusters, "we've got to figure out how to cut the cost of that in half."
The Creative Underground
The technology now handles tasks across production. Studios use AI to create concept art, digital environments, and entire scenes. One executive described how AI could deliver a battlefield sequence for $10,000 that would otherwise cost $10 million.
The secrecy shows deep divisions within Hollywood. Many creatives view AI as a threat to jobs and artistic integrity. Filmmaker Justine Bateman calls generative AI "one of the worst ideas society has ever come up with."
Yet money forces change, even as the secrecy persists. One Netflix producer hid her attendance at an AI event from colleagues because a director had threatened to quit if her company used AI.
Legal Landmines Ahead
Copyright issues compound the risk. Most AI models train on copyrighted material without permission. More than 35 lawsuits challenge this practice, creating legal uncertainty for any AI-assisted production.
Union contracts protect some workers but don't prevent AI use in areas like pre-production planning. The gap between public statements and private practice shows an industry in transition. Studios know they need AI to compete but fear embracing it openly.
Why this matters:
Hollywood's secret AI adoption could reshape filmmaking faster than public discussions suggest, catching audiences and workers off guard
The gap between union protections and actual studio practices shows how quickly technology can outpace labor agreements
Prompt: cool cyberpunk woman portrait, polaroid transfer
Reddit Takes Anthropic to Court Over Data Theft
Reddit sued Anthropic this week for scraping millions of user posts without permission. The company behind Claude AI accessed Reddit content over 100,000 times since July, even after claiming it had stopped.
The lawsuit exposes a divide in the AI industry. Google reportedly pays Reddit about $60 million a year for data access, and OpenAI struck its own licensing deal. Anthropic refused to negotiate a similar agreement and took the content anyway.
Reddit's complaint includes a smoking gun. When users asked Claude directly, the chatbot admitted it trained on Reddit data. It couldn't confirm whether that included deleted posts users thought they had erased.
The Ethics Paradox
Anthropic markets itself as the ethical AI company. The startup emphasizes safety and user consent in its public materials. But Reddit's filing calls this image hollow. The company acts like "the white knight of the AI industry" while ignoring basic data rules.
Reddit tried multiple times to negotiate a licensing deal. Anthropic refused to engage. The company preferred free scraping over paid partnerships.
What's Really at Stake
This case could reshape how AI companies acquire training data. Reddit's 20-year archive of authentic human conversations helps AI models sound natural. That data has clear commercial value—hence the $60 million annual deals with other companies.
Anthropic recently raised funding at a $61.5 billion valuation. The company reports $3 billion in annual revenue. Those numbers suggest it can afford proper licensing fees.
Reddit seeks damages and an injunction to stop future scraping. The platform argues Anthropic enriched itself by billions using stolen content.
Why this matters:
This lawsuit could force all AI companies to pay for training data, ending the era of free content scraping that built the industry
The outcome determines whether users control how their deleted posts train AI systems that compete with human expertise
Can Supernormal Make Manual Meeting Notes Obsolete?
Supernormal is an AI meeting assistant that joins your calls to take notes and track action items automatically.
How It Works
1. Invite Norma to Your Meeting: Install the Chrome extension or schedule Norma through the dashboard. She joins Google Meet, Zoom, or Microsoft Teams calls.
2. Let Norma Listen: During your meeting, Norma records audio and creates real-time transcripts. She stays quiet unless you ask questions.
3. Get Auto-Generated Notes: After the call, Norma produces organized notes with summaries, action items, and key decisions. No more manual note-taking.
Key Features
Smart Notes: Auto-generated summaries with action items assigned to specific people
Real-time Q&A: Ask Norma questions during meetings for instant answers
Cross-platform: Works with all major video platforms
60+ Languages: Supports international teams
Security: SOC 2 certified with enterprise-grade encryption
Is Every Chat You’ve Ever Had with ChatGPT Now Evidence?
OpenAI is battling a court order that forces the company to save every ChatGPT conversation—even the ones users delete. The order came from a copyright lawsuit where news organizations claim people use ChatGPT to dodge paywalls.
Judge Ona Wang issued the May 13 order after The New York Times and other publishers argued that ChatGPT users delete their conversations to hide evidence of copyright violations. The judge worried that without intervention, OpenAI would keep destroying potential evidence.
OpenAI calls the order a "privacy nightmare" that affects hundreds of millions of users worldwide. The company argues it violates promises users rely on to control their personal data.
What changed for users
Before the order, ChatGPT users could delete specific conversations or use temporary chats that disappeared when closed. Users could also delete their entire accounts, and OpenAI would purge all conversation history within 30 days.
Now OpenAI must keep everything. The order covers ChatGPT Free, Plus, and Pro users, plus sensitive business data from companies using OpenAI's API.
Users panic on social media
Tech workers flooded LinkedIn and X with warnings about the order. One consultant advised clients to avoid sharing sensitive data "with ChatGPT or through OpenAI's API for now," warning that "your outputs could eventually be read by others."
A cybersecurity professional called the mandatory retention "an unacceptable security risk." Others recommended switching to alternatives like Mistral AI or Google Gemini.
OpenAI's defense
The company denies destroying evidence or helping users circumvent paywalls. OpenAI argues the news organizations offer no proof that people actually delete chats to hide copyright violations.
"They have not identified any evidence that anyone has attempted to obtain their content from ChatGPT," OpenAI stated in court filings.
The company says complying with the order requires months of engineering work and substantial costs. It also risks breaching contracts and global privacy laws.
Why this matters:
Your ChatGPT conversations—from wedding vows to financial data—now get stored permanently whether you want it or not
The fight shows how copyright battles can override user privacy, setting a precedent that could affect other AI services
New Meta glasses pack eye tracking and heart sensors
Meta unveiled details about Aria Gen 2, experimental smart glasses that track eye movement, detect blinks, and monitor heart rate through sensors in the nosepad. The glasses weigh 75 grams and fold for the first time, giving researchers a platform to test future AR features before they reach consumers.
Apple and Samsung cut growth forecasts as tariff fears bite
Apple preps AirPods camera trigger and sleep detection
Apple plans to add camera control to AirPods, letting users snap photos by tapping the stem, according to leaked features expected at Monday's WWDC keynote. The company also developed sleep detection that automatically pauses audio when users doze off while wearing their earbuds.
Meta CTO says Silicon Valley embraces defense work again
Meta's chief technology officer said Silicon Valley has dropped its resistance to defense projects and now welcomes military partnerships. Andrew Bosworth claimed a "silent majority" always wanted to pursue defense work and that the industry is returning to its military-founded roots. Meta announced a partnership with defense contractor Anduril last week.
23andMe DNA data auction restarts with $305 million bid
Amazon forms robot AI team in secretive hardware lab
Amazon created a new artificial intelligence group within Lab126, its hardware research unit that developed the Kindle and Echo devices. The team will build AI agents that can control robots using voice commands, turning warehouse machines into flexible assistants that understand natural language instructions.
AI helps hackers write malware faster than ever
AI chatbots now generate malicious code when hackers pose as security researchers, lowering the barrier for inexperienced attackers. The bigger threat comes from skilled hackers using AI to compress operations that once took days into 30-minute tasks, opening the door to waves of simultaneous cyberattacks.
Ola's billion-dollar AI startup faces user exodus
Krutrim, India's first AI unicorn valued at $1 billion, can't keep customers as startups abandon its cloud services for Google and Amazon. The company suffers from poor documentation, login errors, and AI models that take 41 seconds to respond compared to ChatGPT's under-10-second performance.
🚀 AI Profiles: The Companies Defining Tomorrow
PostHog: The Analytics Rebel Eating Big Tech's Lunch
PostHog cracked the code on product analytics by going open-source when everyone else went proprietary. This scrappy London startup now powers 100,000+ companies that refuse to send user data to third parties. 🦔
The Founders
James Hawkins (ex-fintech VP) and Tim Glaser (Dutch engineer) founded PostHog in 2020 after pivoting six times in six months
65 employees across 10+ countries, fully remote from day one
Built from frustration: "We got tired of sending user data to third parties just to understand our own product"
Headquartered in San Francisco, founders still hop between London and Cambridge
Self-hosted or cloud options - companies keep full data control
Auto-captures every click without manual tracking code
Open-source core (26,000+ GitHub stars) with enterprise add-ons
Replaces 3-5 separate tools most teams cobble together
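That "auto-captures every click" pitch is easiest to picture as event delegation: one root listener records every interaction, so product teams never write per-element tracking calls. Here's a minimal sketch of the idea; the function and field names are illustrative, not PostHog's actual internals (its browser SDK does the wiring against the real DOM).

```javascript
// Sketch of the event-delegation idea behind autocapture: a single
// delegated handler records every interaction, so no per-element
// tracking code is needed. Names are illustrative, not PostHog's
// real internals.
function createAutocapture(send) {
  return function handle(event) {
    send({
      event: "$autocapture",         // PostHog's autocapture event name
      type: event.type,              // "click", "submit", ...
      elementTag: event.targetTag,   // e.g. "button"
      elementText: event.targetText, // visible label (truncated in practice)
    });
  };
}

// Simulated usage: in a browser this handler would be attached once via
// document.addEventListener("click", handle, true).
const sent = [];
const handle = createAutocapture((e) => sent.push(e));
handle({ type: "click", targetTag: "button", targetText: "Sign up" });
handle({ type: "submit", targetTag: "form", targetText: "" });
console.log(sent.length); // 2
```

In the real SDK, captured elements are also scrubbed (password fields, masked inputs) before anything is sent, which is part of the data-control pitch.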
The Competition
Battles Mixpanel (the old guard), Amplitude (a $4B public company), and Heap
Undercuts pricing by 50-80% while offering more features
Wins through bottom-up adoption: over half of Y Combinator's latest batch chose PostHog
0.5% of top 1M websites now run PostHog vs. Amplitude's 1% - remarkable for a 4-year-old company
Financing
$27M total raised from GV (Google Ventures), Y Combinator, notable angels like GitHub's CTO
Series B came early due to "getting swamped with demand"
Revenue jumped 6x in 2022, then 4x in 2023 with barely any headcount growth
Approaching profitability while maintaining rapid expansion
The Future ⭐⭐⭐⭐⭐ PostHog owns the developer mindset shift toward data sovereignty and integrated toolchains. The open-source moat grows stronger as privacy regulations tighten globally. Next stop: expanding beyond analytics into the full startup software stack. 🚀