Silicon Valley's biggest tech companies spent months lobbying for a 10-year ban on state AI regulation. The Senate voted 99-1 to kill it. Meta, Microsoft, and venture capital firms learned that money can't buy everything in Washington.
Grammarly bought email app Superhuman for an undisclosed sum, part of its plan to build an AI productivity empire. With $1 billion in fresh funding, the grammar company wants to put AI agents at the center of your workday.
Chinese AI lab DeepSeek just dropped its most ambitious creation yet. The DeepSeek V3 model tips the scales at a staggering 641GB, pushing the boundaries of what consumer hardware can handle.
MLX developer Awni Hannun wasted no time. Within hours of release, he had the model purring along at over 20 tokens per second. The catch? You'll need Apple's M3 Ultra Mac Studio - a $9,499 piece of "consumer" hardware that costs more than some cars.
The model marks a significant shift in DeepSeek's approach. They've abandoned their previous custom license in favor of the MIT license, opening the door for broader experimentation and development. The empty README suggests they're letting the code speak for itself.
For those watching their storage space, there's hope. A 4-bit quantization reduces the model to a mere 352GB - still massive, but more manageable for serious enthusiasts. The files come split into 163 chunks, each a hefty piece of the AI puzzle.
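As a sanity check on those numbers, here's a back-of-envelope sketch. The 685-billion parameter count is our assumption based on the Hugging Face listing, and real checkpoints mix precisions and add per-group scale metadata, so published sizes won't match the idealized arithmetic exactly:

```python
# Rough on-disk size estimate: parameters x bits per parameter.
# The 685B figure is an assumption, not from the article; actual
# quantized files carry extra overhead for scale factors.
def estimate_size_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate size in gigabytes (1 GB = 1e9 bytes)."""
    return n_params * bits_per_param / 8 / 1e9

full = estimate_size_gb(685e9, 8)   # ~685 GB at 8 bits per weight
quant = estimate_size_gb(685e9, 4)  # ~342 GB at 4 bits, before overhead
```

Halving the bits per weight halves the footprint, which is why the 4-bit build lands in the same ballpark as the published 352GB figure.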
OpenRouter integration brings accessibility to those without deep pockets. Free API keys work smoothly, though some users report mysterious "policy" errors when trying to use paid keys with certain privacy settings.
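For those going the direct-API route, OpenRouter speaks the familiar OpenAI chat-completions dialect. A minimal sketch that builds the request without sending it; the free-tier model slug `deepseek/deepseek-chat-v3-0324:free` is our assumption, so check OpenRouter's model list before relying on it:

```python
import json

# OpenRouter's OpenAI-compatible chat completions endpoint.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, api_key: str) -> tuple[dict, bytes]:
    """Assemble headers and JSON body for a single-turn chat request.

    The model slug below is an assumed free-tier identifier.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "deepseek/deepseek-chat-v3-0324:free",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return headers, body

# To actually send it (requires a valid key and network access):
#   import urllib.request
#   headers, body = build_request("Tell me about pelicans", "YOUR_KEY")
#   req = urllib.request.Request(API_URL, data=body, headers=headers)
#   print(json.load(urllib.request.urlopen(req)))
```

Keeping the payload construction separate from the network call makes it easy to swap in a paid key, or different privacy settings, without touching the request shape.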
DeepSeek's decision to bake the release date into the model name (DeepSeek-V3-0324) suggests confidence in their rapid development cycle. They're not just building models; they're marking their territory in the AI timeline.
The model handles everything from basic chat functions to creative tasks. It can even generate SVG images of pelicans riding bicycles - though whether that justifies the 641GB footprint remains debatable.
For developers ready to dive in, the setup process is straightforward. A few command-line instructions get you rolling with the llm-mlx plugin, assuming your hardware can handle it. The OpenRouter plugin offers an alternative path, complete with API key management.
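A sketch of both paths, assuming the model identifiers `mlx-community/DeepSeek-V3-0324-4bit` and `openrouter/deepseek/deepseek-chat-v3-0324:free` (verify both names before running; the MLX route also needs a machine with enough unified memory for the 352GB download):

```shell
# Local MLX route (Apple Silicon with very large unified memory):
llm install llm-mlx
llm mlx download-model mlx-community/DeepSeek-V3-0324-4bit  # ~352GB
llm chat -m mlx-community/DeepSeek-V3-0324-4bit

# Hosted OpenRouter route (no local hardware requirements):
llm install llm-openrouter
llm keys set openrouter   # paste your OpenRouter API key
llm -m openrouter/deepseek/deepseek-chat-v3-0324:free \
  'Generate an SVG of a pelican riding a bicycle'
```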
The pelican-drawing capabilities might seem frivolous, but they demonstrate the model's versatility. When prompted about pelican facts, it delivers detailed, structured responses complete with markdown formatting and emoji flair.
Why this matters:
- We're witnessing the collision of consumer computing and industrial-scale AI - your desktop can now run models that would have required a data center just months ago
- The shift to MIT licensing signals a potential sea change in how major AI labs approach intellectual property, possibly leading to more open, collaborative development
Cloudflare now blocks AI bots by default and lets publishers charge per scrape. With AI companies taking 17,000+ crawls per referral while Google takes just 14, the internet's biggest traffic handler is reshaping data collection rules.
Apple considers ditching its own AI technology for Siri, eyeing Anthropic's Claude or OpenAI's ChatGPT instead. The potential reversal exposes Apple's struggle in the AI race and internal talent exodus.
Meta hired four more OpenAI researchers this week, escalating Zuckerberg's talent war with $100M packages. The exodus follows Meta's disappointing Llama 4 launch as the CEO personally hunts AI stars to close the innovation gap.
Meta poaches three OpenAI researchers with $100 million signing bonuses as Zuckerberg builds a "superintelligence" team. Sam Altman dismisses the blitz, but departures suggest money talks in AI's talent war.