Mark Zuckerberg just paid $14.3 billion to hire one person. Meta's desperate shopping spree for AI talent reveals how far behind the company has fallen—and creates an awkward problem for competitors who must now choose sides.
Programming computers in English sounds impossible. But Andrej Karpathy built working apps without knowing code, using only natural language prompts. He calls it Software 3.0. These AI systems think like humans, complete with superhuman memory and distinctly human mistakes.
Amazon, Google, Microsoft and Meta are pushing Congress to ban states from regulating AI for 10 years. California just released a detailed framework for AI oversight. The clash could determine who controls AI rules nationwide.
OpenAI's head of model behavior, Joanne Jang, overhauled the company's image generation rules just in time for its servers to melt down. The new features proved so popular that CEO Sam Altman had to slam the brakes.
The scene echoes Twitter's infamous growing pains of 2008. Back then, a cute blue whale apologized when servers buckled under viral growth. Today, OpenAI faces its own version: GPUs pushed to their limits by enthusiastic users creating AI art.
"Our GPUs are melting," Altman posted on X today. His team scrambled to add rate limits as demand overwhelmed their infrastructure. Free users will soon get three images per day - once the servers stop smoking.
Sam Altman on X
The crisis emerged from Jang's bold new vision for AI safety. She scrapped the old "block everything risky" rulebook for a system that trusts users while targeting specific harms. The change unleashed a flood of creativity - and an avalanche of processing requests.
Past launches locked down features behind thick walls of caution. The AI refused many reasonable requests, fearing misuse. Jang's team watched users bump into unnecessary restrictions. Even obvious use cases got blocked.
"AI lab employees should not be the arbiters of what people should and shouldn't be allowed to create," Jang writes. Her team learned humility the hard way. Users kept discovering valuable applications they'd never imagined.
The new approach tackles thorny issues head-on. Take public figures - instead of universal restrictions, anyone can opt out of being depicted. Cultural symbols like swastikas? Permitted in educational contexts, blocked when spreading hate.
Jang's team also rewrote rules around human diversity. The old system nervously rejected requests to make people look "more Asian" or "heavier," accidentally treating these traits as problematic. The new approach celebrates human variation.
One quote captures Jang's philosophy: "Ships are safest in harbor, but that's not what ships are for." She argues that excessive caution carries its own risks. When fear blocks innovation, good ideas die unseen.
Consider memes. The old thinking questioned whether better meme-making tools justified potential misuse. Jang flipped this logic. Small moments of delight and connection improve lives. Why sacrifice real benefits to prevent hypothetical problems?
The changes might look like lowered standards to outside observers. Jang disagrees. Her team spent months researching and debating each policy shift. The new rules reflect greater sophistication, not less care.
OpenAI now positions itself to learn from real-world use. When policies need updates, they'll change them. Jang sees this flexibility as a strength, not a weakness. Perfect rules don't exist - but adaptable ones do.
The company's GPT-4o model powers these improved images. It renders text better and creates more realistic scenes than previous versions. The Verge called it a "step change" improvement. If only the hardware could keep up.
Why this matters:
- OpenAI's success threatens to outrun its infrastructure. Even tech giants can't predict user demand.
- The company traded "block everything" for "enable creativity" - and users responded with such enthusiasm they broke the system. Talk about a task failed successfully.
ChatGPT's crawlers abandon slow websites before they can respond, generating HTTP 499 timeout errors that cost sites visibility in AI search results. The shift forces a return to server-side rendering as JavaScript becomes invisible to AI bots.
OpenAI executives discussed accusing Microsoft of antitrust violations as their $300 billion AI partnership crumbles over control and money. The breakup would reshape the entire industry and leave enterprise customers scrambling.
TikTok just gave marketers AI tools that turn photos into video ads in seconds. WPP and Adobe jumped on board while TikTok faces a June ban deadline. The race to automate creativity is changing who controls content creation.