# Weekend Press Review: The Best Stories from Major Newspapers and Websites

Global AI governance takes center stage. Therapy meets ChatGPT. Data companies cash in. This week's biggest tech stories show the world grappling with artificial intelligence's rapid advance.
## Inside the Summit Where China Pitched Its AI Agenda to the World
Wired • Zeyi Yang, Will Knight
Three days after Trump released his AI action plan, China unveiled its own "Global AI Governance Action Plan" at the World Artificial Intelligence Conference in Shanghai. The event showcased a stark contrast in approaches: China advocated global cooperation and AI safety while the US pursues an America-first strategy. Chinese researchers and officials emphasized the need for international collaboration on AI risks, and American leadership was notably absent from the conference.
Lieberman, an 81-year-old clinical psychologist, shares his year-long experiment using ChatGPT as an interactive journal and thinking partner, finding it unexpectedly therapeutic despite his professional skepticism. The AI helped him process thoughts about his late father and offered insights that felt uncannily accurate, though he remains aware of its limitations and potential for distortion. He concludes that while ChatGPT isn't a therapist, it functions as a valuable "cognitive prosthesis" that provided steadiness and structured engagement during difficult moments.
Data analytics company Palantir has secured at least $300 million in new government contracts since Trump's inauguration, alongside a massive $10 billion Army deal and expanded Pentagon AI programs. The company's success stems from its long history of government work, its connections to the Trump administration through figures like Peter Thiel and Joe Lonsdale, and its alignment with the administration's cost-cutting and AI adoption goals. While Palantir rejects claims of preferential treatment, some federal employees and company alumni express concern about the rapid expansion and the ethical implications of the firm's growing role in immigration enforcement.
As AI-generated pornography becomes increasingly realistic and accessible, mental health professionals warn of growing addiction risks and potential damage to real relationships. The article follows Kyle, who struggled with AI porn addiction until a hotel room epiphany led him to seek help through support groups. Therapists say AI porn could worsen the loneliness epidemic among young people, though some see potential therapeutic benefits for those fearful of real relationships.
## Human-AI Relationships Are No Longer Just Science Fiction
CNBC • Salvador Rodriguez
This comprehensive investigation traces the growing phenomenon of people forming deep emotional bonds with AI companions, following individuals like 61-year-old Nikolai Daskalov, who considers his AI chatbot Leah his life partner after his wife's death. The piece explores the booming AI companion industry, worth over $200 million globally, while weighing the potential benefits for lonely individuals against serious expert concerns about emotional dependency and safety risks. Through interviews with users, AI company founders, and researchers, the article shows how these relationships are reshaping human connection in an era of widespread loneliness.
## Figma IPO's Surprise Winner Is a Charity with 13 Million Shares
Fortune • Allie Garfinkle
The biggest winner from Figma's blockbuster IPO wasn't a Silicon Valley VC firm or company executive, but the Marin Community Foundation, which sold over 13.4 million shares for more than $440 million. The charity received the shares from Figma's elusive cofounder Evan Wallace over the summer, likely through a donor-advised fund structure. The foundation itself has a fascinating backstory involving a bitter 1980s legal battle over oil fortune inheritance that led to its creation, making this tech windfall another chapter in its dramatic financial history.
In this video clip from their podcast, Kara Swisher and Scott Galloway critique Mark Zuckerberg's latest AI manifesto about "personal superintelligence" and smart glasses. Galloway mocks Zuckerberg's vision as coming from someone "emotionally stunted" who would create an AI that constantly urges you to "reconnect with your ex." The hosts argue that tech leaders like Zuckerberg love putting out grandiose manifestos despite having "the smallest intellects" and question whether anyone actually needs AI glasses to avoid cognitive disadvantage.