Google just launched Gemma 3, the latest version of its "open" AI model. This new release packs a serious punch: it can analyze text, images, and videos while running on devices as small as your phone.
The tech giant claims Gemma 3 outsmarts rivals like Meta's Llama and OpenAI's models when running on a single GPU. It's like fitting a supercomputer into a matchbox, only this one speaks 35 languages fluently.
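For developers who want to kick the tires, the smaller variants are meant to run on modest hardware. A minimal sketch, assuming the instruction-tuned 1B text model is published on Hugging Face under the ID "google/gemma-3-1b-it", that you have a recent transformers release with chat-template support, and that you've accepted the Gemma license on Hugging Face:

```python
# Minimal sketch: running a small Gemma 3 variant locally with Hugging Face
# transformers. The model ID and dtype are assumptions; check the official
# model card for exact identifiers, hardware needs, and license terms.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",   # assumed instruction-tuned 1B text variant
    torch_dtype=torch.bfloat16,     # halves memory; drop on CPU-only machines
    device_map="auto",              # uses a GPU if present, otherwise CPU
)

messages = [{"role": "user", "content": "Explain Gemma 3 in one sentence."}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```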
Google didn't skimp on safety features either. The new ShieldGemma 2 acts like an overzealous bouncer, screening images for sexually explicit, dangerous, or violent content before they crash the party.
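That bouncer stands in front of the model rather than inside it. Here's a hedged sketch of the wiring, with a hypothetical classify_image helper standing in for the real ShieldGemma 2 call; only the three policy names come from Google's description, everything else is assumed:

```python
# Hypothetical pre-generation safety gate. `classify_image` is a placeholder
# for an actual ShieldGemma 2 inference call; the policy names mirror
# Google's description, but the function and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class SafetyVerdict:
    sexually_explicit: float
    dangerous: float
    violent: float

def classify_image(image_bytes: bytes) -> SafetyVerdict:
    """Placeholder: wire up the real image-safety model here."""
    raise NotImplementedError

def gate_image(image_bytes: bytes, threshold: float = 0.5) -> bool:
    """Admit an image only if every policy score stays under the threshold."""
    verdict = classify_image(image_bytes)
    scores = (verdict.sexually_explicit, verdict.dangerous, verdict.violent)
    return all(score < threshold for score in scores)
```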
The company's marketing folks are doing backflips over Gemma's previous success: developers have apparently downloaded it over 100 million times. That's a lot of artificial intelligence floating around in the wild.
Academics haven't been left out in the cold. Google's throwing $10,000 worth of cloud credits at researchers who want to tinker with its new toy. It's like a scholarship program for robots.
Meanwhile, debates rage on about what makes an AI model truly "open." Google's license still keeps a tight leash on what users can do with Gemma. Some might say it's about as open as a speakeasy during prohibition.
Why this matters:
Google just proved you don't need a warehouse full of computers to run sophisticated AI
The race for "democratic AI" continues, even if Google's version of democracy comes with an asterisk