Good Morning from San Francisco, 🌉
A federal judge just handed AI companies their first major copyright win. The catch? It only applies to books you actually buy. 💰
Judge William Alsup ruled Anthropic can train Claude on legally purchased books. He called the process "spectacularly transformative" - like how humans learn to write by reading. 🧠
But Anthropic still faces trial for using 7 million pirated books from LibGen and Books3. The company kept stolen copies "forever" even after deciding not to train on them. 🏴☠️
Meanwhile, Databricks co-founder Andy Konwinski pledged $100 million to help universities compete with big tech. His Laude Institute will fund open-source AI research at Berkeley and beyond. 🎓💡
The message is clear: Buy books legally and train away. Steal them and face up to $150,000 per work in damages. ⚖️💸
Stay curious, 🔍
Marcus Schuler
A federal judge handed AI companies their first major copyright victory. But the win comes with a catch that could cost millions.
Judge William Alsup ruled that Anthropic can train its Claude AI models on books the company legally bought. The court called the training "spectacularly transformative" and compared it to how humans learn to write by reading.
But Anthropic still faces trial for using pirated books. The company downloaded over 7 million stolen copies from sites like LibGen and Books3. The judge rejected fair use protection for this piracy.
The three-part ruling
The court split Anthropic's book use into three categories. Training AI models on legally purchased books counts as fair use. Converting print books to digital files for storage also passes the test.
Using millions of pirated books does not. Anthropic kept these stolen copies "forever" for "general purpose" use, even after deciding not to train on them. The judge called this inexcusable.
Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic last year. They claimed the company trained Claude on their pirated works without permission.
The ruling gives AI companies a roadmap. Buy books legally and you can probably train on them. Steal books and face statutory damages of up to $150,000 per work.
Why the training argument worked
Judge Alsup drew parallels to human learning. People read books, absorb their lessons, and write new works. They pay once to buy the book but don't owe royalties each time they recall what they learned.
"Everyone reads texts, too, then writes new texts," the judge wrote. Making people pay for each use would be "unthinkable."
The court noted that Claude doesn't reproduce the original books. Users can't extract copies of the training material from the AI's responses.
Anthropic even hired Google's former book-scanning chief to buy "all the books in the world." The company spent millions purchasing and scanning print books, destroying the originals to create digital copies.
The piracy problem
The fair use defense collapsed for stolen content. Anthropic could have bought the books legally but chose to pirate them to avoid "legal/practice/business slog."
The company downloaded books from three major pirate libraries. It kept these copies in a central library alongside legitimately purchased books. Some pirated works were never used for training at all.
The judge saw no justification for this approach. Anthropic will face trial for damages on the pirated content. With millions of books potentially at stake, the penalties could reach billions of dollars.
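For a rough sense of scale, here is a back-of-envelope sketch (our own illustration, not figures from the ruling) that multiplies the roughly 7 million pirated books cited in the case by the US statutory damages range of $750 to $150,000 per willfully infringed work:

```python
# Back-of-envelope estimate of statutory damages exposure (illustrative only).
# Assumes ~7 million pirated works, as cited in the case, and the US statutory
# range of $750 (minimum) to $150,000 (maximum for willful infringement) per work.
PIRATED_WORKS = 7_000_000
MIN_PER_WORK = 750
MAX_PER_WORK = 150_000

low = PIRATED_WORKS * MIN_PER_WORK    # $5.25 billion
high = PIRATED_WORKS * MAX_PER_WORK   # $1.05 trillion

print(f"Low end:  ${low:,}")
print(f"High end: ${high:,}")
```

Even at the statutory minimum, and even if only a fraction of those works qualify, the exposure lands well into the billions.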
Why this matters:
Read on, my dear:
Prompt:
A calm female model in soft beige Celine fashion steps barefoot across a light wood runway. Behind her, a towering pale deer lowers its head with branching antlers wide, legs tensed like it's about to charge. The scene is serene yet electric, sunlight filtering through translucent curtains.
Companies face an avalanche of job applications. The culprit? Artificial intelligence.
LinkedIn processes 11,000 applications per minute. That's a 45 percent jump from last year. ChatGPT makes it easy to pump out resumes stuffed with keywords from job descriptions. Some candidates pay for AI agents that hunt for jobs and apply automatically.
The result is chaos. Katie Tanner, a Utah HR consultant, posted one remote tech job. Within 12 hours, she had 400 applications. By day two, 600. After hitting 1,200 applications, she pulled the post. Three months later, she's still sorting through candidates.
The arms race begins
Companies fight back with their own AI weapons. Chipotle's chatbot "Ava Cado" screens candidates and cuts hiring time by 75 percent. HireVue offers AI-powered video interviews that rank applicants automatically.
But candidates adapt. They use AI to ace video interviews. Companies add more automated tests. The hiring process becomes machines talking to machines while humans get lost in the noise.
The problem runs deeper than volume. Fake candidates pose as real people. North Korean IT workers used false identities to land remote jobs at US companies. Research firm Gartner predicts one in four job applicants could be fraudulent by 2028.
When resumes become meaningless
The traditional resume is dying. When anyone can generate hundreds of tailored applications with a few prompts, the document that once showed effort and genuine interest becomes worthless spam.
Some companies abandon resumes entirely. They turn to live problem-solving sessions, portfolio reviews, and trial work periods. These methods resist AI manipulation better than written applications.
The future of hiring may require proving candidates exist before checking if they're qualified. Identity verification companies like Persona blocked 75 million deepfake attempts in 2024 alone.
Why this matters:
Read on, my dear:
* NYT: Employers Are Buried in A.I.-Generated Résumés
Maxar Intelligence, the satellite company whose images disputed claims that Iran's nuclear sites were completely destroyed, launched an AI service called Sentry that monitors global developments automatically, without human intervention. The system can flag anomalies like foreign ships in unexpected waters or aircraft off course. The launch comes as Maxar faces potential 30% budget cuts from the Trump administration, despite proving crucial for documenting military strikes and natural disasters.
OpenRouter raised $40 million to help developers navigate the exploding number of AI models flooding the market. The startup, led by OpenSea co-founder Alex Atallah, calls itself a "one-stop shop" that lets developers access multiple AI models through a single platform instead of juggling dozens of different services.
Scale AI exposed confidential AI training documents from Meta, Google, and xAI through public Google Docs that anyone could access with the right link. The data labeling company used unsecured documents to coordinate work across its 240,000 contractors, leaving sensitive project details and contractor information visible to the public.
Google launched Imagen 4, its latest text-to-image AI model, through the Gemini API with a focus on better text rendering. The company offers two versions: standard Imagen 4 at $0.04 per image and Imagen 4 Ultra at $0.06 per image for more precise prompt following, both targeting the competitive AI art generation market dominated by models like Midjourney and DALL-E.
An AI tool called Xbow became the first artificial intelligence to top HackerOne's US leaderboard for finding software vulnerabilities, outranking human security researchers. The year-old startup behind Xbow raised $75 million from Altimeter Capital and Sequoia Capital to automate penetration testing, which typically costs companies $18,000 and takes weeks per system test.
CareerBuilder + Monster, the merged company formed from two former job board leaders, filed for bankruptcy and plans to sell its businesses to multiple buyers. The company agreed to sell its main job board operations to JobGet, a gig worker app, while its government software services go to Canadian firm Valsoft and its Military.com and Fastweb.com sites head to media company Valnet.
Andy Konwinski made billions from AI companies. Now he wants to give university researchers a fighting chance.
The Databricks and Perplexity co-founder just pledged $100 million of his own money to launch Laude Institute. The nonprofit aims to bridge the gap between academic AI research and real-world products.
The timing matters. AI development costs have exploded. OpenAI burns through $28 billion annually. Meanwhile, university labs struggle to turn promising research papers into working systems. Most breakthroughs stay locked in academic journals.
Fighting the funding gap
Konwinski's $100 million sounds impressive until you compare it to commercial AI investments. OpenAI raised $6.6 billion in its latest round. The funding gap between academia and industry keeps growing.
Laude offers two types of grants. "Slingshots" provide fast funding for early-stage projects. "Moonshots" fund multi-year research labs tackling big problems like healthcare delivery and scientific discovery.
The institute's first major grant goes to UC Berkeley. Konwinski's alma mater gets $3 million annually for five years to build a new AI Systems Lab. The lab opens in 2027.
Open source vs closed doors
Laude requires all funded research to stay open source. This puts it at odds with the tech industry's shift toward secretive AI development.
The institute assembled a star-studded board. Google's Jeff Dean serves alongside former Meta AI chief Joëlle Pineau and Turing Award winner Dave Patterson. Their involvement signals industry support for open research.
Konwinski calls this his "most personal project." He wants to recreate the Berkeley lab experience that launched his career. As a PhD student, he helped develop Apache Spark, which became the foundation for Databricks.
The university advantage
Academic labs offer something big tech companies cannot: independence from commercial pressure. Researchers can pursue risky projects that might not show profits for years.
Universities also provide multidisciplinary perspectives. Berkeley's model brings together experts from different fields to tackle complex problems. This approach led to breakthrough technologies like Apache Spark.
The institute plans annual summits to build community among researchers. The first event in San Francisco brought together 70 handpicked academics from major universities.
Why this matters:
Read on, my dear:
Synthflow AI turns your phone into an AI powerhouse that actually sounds human. The Berlin startup builds voice agents that handle customer calls without the robotic awkwardness that makes people hang up. 🤖📞
The Founders
Founded in 2023 in Berlin by CEO Hakob Astabatsyan and co-founders Albert Astabatsyan and Sassun Mirzakhan-Saky. Team of 35+ employees. The founders started it because they were fed up with terrible phone customer service - endless hold times and clunky menus that nobody wanted to navigate.
The Product
No-code platform that creates AI phone agents in days, not months. Key strengths: sub-500ms response time (no awkward pauses), handles interruptions naturally, integrates with 200+ business tools, supports multiple languages. Can schedule appointments, look up orders, transfer calls - basically everything a human receptionist does, minus the bathroom breaks.
The Competition
Crowded field with deep pockets. Sierra ($4.5B valuation, $285M raised) and Bland AI ($50M+) dominate headlines. PolyAI holds down Europe with $120M+ funding. But Synthflow carved out the SMB niche while others chase enterprise whales - smart positioning when everyone's shouting for attention.
Financing
$30M total raised. Pre-seed $1.7M (Atlantic Labs), seed $7.4M (Singular VC), Series A $20M (Accel leading). Growing 15x annually with 5M+ calls monthly. Retention above 90% - customers stick around because it actually works.
The Future ⭐⭐⭐⭐⭐
Expanding to the US market with fresh Series A cash. Voice AI has hit its iPhone moment - the technology finally matches the hype. SMBs are desperate for 24/7 customer service without hiring armies of humans. Synthflow has nailed the execution speed that trips up most startups as they scale.
Get tomorrow's intel today. Join the clever kids' club. Free forever.