Billionaires Brawl, Courts Intrude—And Everyone Else Pays
Good Morning from San Francisco,
Big Tech just served its AI coding darlings an eviction notice. 📧 Anthropic yanked Windsurf's privileged access to Claude models. No warning. No apology. Just business.
The timing wasn't coincidental. 🎯 OpenAI reportedly bought Windsurf for $3 billion. That made the startup a threat to Anthropic's new Claude Code tool. Build on our models, sure. Just don't get too successful. 📈
The entire AI coding boom now faces its original sin. 💀 These startups rent their brains from the same companies that want to kill them. Cursor burns cash at a $10 billion valuation. 🔥 Windsurf hits $100 million in revenue but bleeds money on inference costs.
The landlords are evicting their tenants. 🏠
Stay curious,
Marcus Schuler
The AI coding boom just hit its first major speed bump. Anthropic cut Windsurf's direct access to its Claude models with little warning. The move forces the startup to use expensive workarounds while competitors like Cursor keep their preferred access.
This isn't random housekeeping. OpenAI reportedly bought Windsurf for $3 billion, making it a direct threat to Anthropic's new Claude Code tool. The message is clear: build your business on our models, but don't get too successful.
The irony runs deep across the entire sector. Coding startups raised billions by promising to revolutionize software development using foundation models. Cursor hit a $10 billion valuation. Windsurf reached $100 million in revenue. Both operate at losses because they pay more for AI inference than they collect from customers.
Most coding startups built their entire business on someone else's brain. They rent access to OpenAI's or Anthropic's models, add a coding interface, and hope the math works out. It rarely does. When your core technology comes from a competitor, you're renting your own disruption.
Some startups try building their own models. Poolside raised $600 million for this exact goal. Magic Dev promised investors a frontier coding model last summer. Neither has shipped a product. Training competitive AI models costs hundreds of millions and requires talent these companies can't always attract.
Meanwhile, the foundation companies launched their own coding tools. Anthropic released Claude Code. OpenAI integrates coding directly into ChatGPT. Microsoft owns GitHub Copilot, which generates over $500 million annually. Google generates 30% of its code using AI.
The new security risks don't help. OpenAI just added internet access to its Codex agent, creating fresh attack vectors for code theft and system compromise. One malicious prompt can exfiltrate entire codebases to external servers.
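One standard mitigation for this class of attack is an egress allowlist: the agent's sandbox only lets outbound requests through to hosts the operator has approved. A minimal sketch of the idea — the helper name and the allowlist are ours, not OpenAI's:

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist: an agent sandbox only permits
# outbound requests to operator-approved hosts.
ALLOWED_HOSTS = {"pypi.org", "files.pythonhosted.org"}

def egress_allowed(url: str) -> bool:
    """Return True only if the URL targets an approved host over HTTPS."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS

# A legitimate package fetch passes; a prompt-injected
# exfiltration attempt to an attacker's server does not.
print(egress_allowed("https://pypi.org/simple/requests/"))        # True
print(egress_allowed("https://evil.example.com/exfil?src=repo"))  # False
```

An allowlist doesn't stop a model from being tricked, but it shrinks what a tricked model can reach.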
Why this matters:
Read on, my dear:
Chinese AI company DeepSeek released a new reasoning model last week. It performs well on math and coding tests. The problem? Researchers say the company trained it using stolen data from Google's Gemini AI.
Sam Paech, a Melbourne developer, found evidence that DeepSeek's R1-0528 model learned from Gemini outputs. The model favors the same words and expressions that Google's system does. Another developer found that the model's internal "thoughts" read like Gemini reasoning traces.
This is the second time DeepSeek has faced such accusations. In December, developers noticed the company's V3 model often identified itself as ChatGPT. Microsoft detected large data theft through OpenAI accounts in late 2024, which OpenAI believes connects to DeepSeek operations.
Training AI models from scratch costs $63-200 million. Data theft through distillation costs just $1-2 million. The math is simple.
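The distillation alleged here means training a student model on a teacher's outputs; the textbook objective is to minimize the gap between the two output distributions. A toy illustration of that core loss, in pure Python:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(teacher_probs, student_probs):
    """KL(teacher || student): the distillation loss to minimize."""
    return sum(t * math.log(t / s)
               for t, s in zip(teacher_probs, student_probs) if t > 0)

teacher = softmax([4.0, 1.0, 0.5], temperature=2.0)       # soft labels from the big model
good_student = softmax([3.9, 1.1, 0.4], temperature=2.0)  # nearly matches the teacher
bad_student = softmax([0.5, 4.0, 1.0], temperature=2.0)   # disagrees with the teacher

# Matching the teacher's distribution drives the loss toward zero.
print(kl_divergence(teacher, good_student) < kl_divergence(teacher, bad_student))  # True
```

The numbers are made up; the point is that a few million API calls can stand in for the training data that cost the teacher's owner hundreds of millions to produce.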
AI companies now race to protect their data. OpenAI requires government ID verification before accessing advanced models. China isn't on the approved list. Google started hiding model traces to prevent copying.
The accusations highlight a bigger issue. AI-generated content now floods the internet. Content farms use AI to create clickbait. Bots spam social media with AI-written posts. This contamination makes it hard to find clean training data.
As models generate more content, they risk learning from their own outputs. This creates a feedback loop that degrades quality over time. Researchers call this "model collapse."
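The degradation shows up even in a toy experiment: repeatedly refit a distribution to a small sample of its own output, and the estimated spread decays toward zero. An illustrative sketch, not the setup from the model-collapse papers:

```python
import random
import statistics

random.seed(42)

def collapse_demo(generations=200, sample_size=8):
    """Each generation refits a Gaussian to a small sample drawn from
    the previous generation's fit; the estimated spread decays."""
    mu, sigma = 0.0, 1.0
    history = [sigma]
    for _ in range(generations):
        sample = [random.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(sample)
        sigma = statistics.pstdev(sample)  # biased low: fuels the collapse
        history.append(sigma)
    return history

history = collapse_demo()
# The fitted spread shrinks far below the original 1.0.
print(history[0], "->", round(history[-1], 6))
```

Each generation loses a little of the tails it never sampled, which is the same mechanism suspected when models train on web text written by earlier models.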
DeepSeek's latest model also shows increased censorship. It refuses to discuss 85% of Chinese government taboo topics, making it the most restricted version yet.
Why this matters:
Read on, my dear:
Orby AI watches you work. It learns your patterns. Then it does the work for you. Think of it as having a digital assistant that actually pays attention.
You do your normal work. Orby silently watches and records your actions. No setup required. No coding needed.
Orby's Large Action Model (LAM) identifies repetitive patterns in your workflow. It understands context, not just clicks.
Orby suggests automations. You approve them. The platform handles the rest.
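Orby's actual model is proprietary, but the core idea — surfacing action sequences that repeat often enough to be worth automating — can be sketched as an n-gram count over an event log. All names below are made up for illustration:

```python
from collections import Counter

def frequent_patterns(actions, length=3, min_count=2):
    """Count every consecutive run of `length` actions and keep the
    ones that repeat - candidates for an automation suggestion."""
    windows = [tuple(actions[i:i + length])
               for i in range(len(actions) - length + 1)]
    counts = Counter(windows)
    return [(pattern, n) for pattern, n in counts.most_common() if n >= min_count]

# A fabricated log of invoice-processing clicks:
log = ["open_email", "download_pdf", "enter_erp",
       "open_email", "download_pdf", "enter_erp",
       "open_email", "download_pdf", "enter_erp", "send_reply"]

# The open_email -> download_pdf -> enter_erp run surfaces 3 times.
print(frequent_patterns(log))
```

A real system would also need to generalize over differing details (invoice numbers, file names), which is where the "understands context, not just clicks" claim does the heavy lifting.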
The Problem: Sarah from accounting spends 15 minutes processing each vendor invoice. She gets hundreds per week.
The Traditional Solution: Hire more people or work weekends.
The Orby Solution:
Result: Sarah reclaims 10+ hours weekly. Error rates drop. She focuses on vendor negotiations instead.
Over 260 state lawmakers from both parties sent a letter opposing a federal budget provision that would ban state and local AI regulation for 10 years. The lawmakers say the moratorium would stop them from protecting residents from deepfake scams, job discrimination, and other AI harms while Congress fails to pass comprehensive federal rules.
AI companies are bleeding billions while claiming revolutionary breakthroughs, according to a major new report that reveals how the industry weaponizes hype to grab power and public resources. The AI Now Institute found that despite sky-high valuations, no profitable AI use cases exist, with companies like Anthropic burning $5.6 billion and OpenAI losing $5 billion annually.
About 1,000 people have left America's top cybersecurity agency since Trump took office, cutting the workforce by nearly a third. The departures happened as the agency faces a proposed 17% budget cut and Trump plans more offensive cyber operations against China.
Google paused its AI-powered Ask Photos feature after admitting it wasn't working well enough. The feature, meant to answer questions about your photo library, struggled with speed, accuracy, and basic user experience according to a Google product manager.
Reddit rolled out privacy controls that let users hide posts and comments from their public profiles. Users can now selectively show content from specific communities while hiding others, or hide everything entirely.
The Rundown AI and Superhuman AI each reached 1 million subscribers and generate seven-figure annual revenue, proving that curating AI news pays big money. Over 3,000 AI newsletters launched in two years, but only five consistently top 100,000 subscribers by targeting busy professionals with quick daily reads.
Brookfield Asset Management plans to spend $9.9 billion building an AI data center in Strangnas, Sweden. The project would create over 1,000 permanent jobs and 2,000 construction jobs over 10-15 years, with the municipality selling land only if conditions are met.
Private equity firm Regent LP shut down TechCrunch's European operations in April, laying off journalists who had spent decades covering the region. The publication discovered billion-dollar companies like Revolut and Wise before anyone else and gave European startups direct access to US investors - investors who now have no clear path to discovering the next breakthrough.
A Tesla Model Y using Full Self-Driving struck and killed a 71-year-old grandmother in Arizona after failing to detect her in sun glare conditions. Federal regulators are investigating whether Tesla's driving system poses safety risks as the company prepares to launch driverless taxis in Austin this month.
Google DeepMind's CEO believes artificial intelligence will cure humanity's selfish streak. Not through therapy or meditation. Through sheer abundance.
Speaking to Wired, the executive outlined a future where AGI solves what he calls "root-node problems." Think curing diseases, extending lifespans, and discovering new energy sources. His timeline? This transformation begins in 2030.
The logic follows a simple path. Humans act selfishly because resources feel scarce. Make everything abundant, and cooperation becomes easier. Take water scarcity. Desalination works but costs too much energy. AI discovers cheap fusion power. Suddenly, everyone gets clean water without fighting over it.
The CEO acknowledges current skepticism. We already have abundance in the West but distribute it poorly. Climate change solutions exist, but we lack the will to implement them. His response? AI-generated abundance will make sacrifice unnecessary.
This vision assumes human nature bends to economic conditions. Make the pie bigger, and people stop fighting over slices. Whether Silicon Valley can engineer away millennia of human behavior remains an open question.
Why this matters:
Read on, my dear:
Ex-Docker engineers built the simplest way to run AI models locally. Their tool turns laptops into private AI servers.
The Founders Michael Chiang and Jeffrey Morgan launched Ollama in Palo Alto in 2023. Both are Docker veterans: Chiang co-founded Kitematic (acquired by Docker) and led Docker Desktop to 20 million users; Morgan engineered at Twitter and Google. The Y Combinator alumni saw open-source AI models exploding while local deployment stayed clunky. Small team, big ambitions.
The Product Command-line tool that downloads and runs large language models locally. Type "ollama run llama2" and chat with AI on your machine - no cloud, no data leaks. Models packaged like Docker containers. REST API turns any laptop into an AI server. Supports 100+ models from Meta's Llama to Mistral. Works offline, runs on consumer hardware through smart compression. Privacy-first design means your data never leaves your device.
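That REST API is what makes the "laptop as AI server" claim concrete. A minimal sketch against Ollama's documented /api/generate endpoint, using only the standard library (the helper name is ours; running it for real requires a local Ollama server with the model pulled):

```python
import json
from urllib import request

def build_request(prompt, model="llama2", host="http://localhost:11434"):
    """Build an HTTP request for Ollama's local /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# To actually query a running Ollama server (`ollama serve`):
#   with request.urlopen(build_request("Why is the sky blue?")) as resp:
#       print(json.loads(resp.read())["response"])
```

Because everything talks to localhost, the prompt and the response never cross the network - the privacy pitch in one line of config.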
The Competition GPT4All leads with 250k monthly users and friendly GUI. LM Studio offers polished ChatGPT-like interface. Jan provides Electron-based simplicity. LocalAI mimics OpenAI's API locally. Ollama stands out with developer-centric CLI/API approach and 100k GitHub stars. The space resembles early browser wars - healthy competition driving rapid innovation. 🚀
Financing Raised ~$625k total through Y Combinator and micro-VCs like Essence Venture Capital. Added $125k in April 2025. Lean funding strategy focused on community over cash. GitHub stars (100k) outpaced fundraising - classic open-source playbook.
The Future ⭐⭐⭐⭐ Strong prospects riding the local AI wave. Active community and rapid development cycle position them well. Revenue model unclear but enterprise support likely. Could become acquisition target for cloud giants wanting local AI capabilities.
Get tomorrow's intel today. Join the clever kids' club. Free forever.