Stanford researchers found that Meta's newest AI model can reproduce 42% of Harry Potter word-for-word—ten times more than earlier versions. The findings complicate copyright lawsuits and reveal a troubling trend in AI development.
Anthropic says multiple AI agents working together beat single models by 90%. The catch? They use 15x more computing power. This trade-off between performance and cost might reshape how we build AI systems for complex tasks.
AI models typically learn by memorizing patterns, then researchers bolt on reasoning as an afterthought. A new method called Reinforcement Pre-Training flips this approach—teaching models to think during basic training instead.
Microsoft just made its AI coding tools open source - a move that could reshape how developers work. The decision opens up GitHub Copilot's core features and adds an autonomous coding agent that writes and fixes code on its own. Here's what developers need to know.
Microsoft is transforming VS Code into an open source AI editor. The company announced at Build 2025 that it will release the code behind GitHub Copilot Chat under the MIT license and integrate AI capabilities directly into VS Code's core.
The move marks a shift from treating AI as an optional add-on to making it a fundamental part of the code editor. It's like VS Code finally admitting that AI is more than just a fancy party trick - it's becoming part of the family.
For developers, this means unprecedented access to VS Code's AI internals. They'll be able to inspect, modify, and enhance the same AI features they use daily. The change affects everything from code completion to the new autonomous coding agent that can tackle bugs and add features while developers grab coffee.
Meet your new AI teammate
This autonomous agent is particularly interesting. It works by spinning up a virtual machine, cloning your repository, and getting to work - sort of like hiring a very efficient intern who never needs sleep or snacks. When it's done, it tags you for review and stands ready to address any feedback.
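The agent's loop described above - provision an environment, clone the repo, make changes, then hand off for review - can be sketched in a few lines. This is a hedged simulation, not GitHub's actual implementation; the `AgentRun` class and its method names are illustrative stand-ins for the real service:

```python
# Illustrative sketch of the autonomous coding agent's workflow.
# Class and method names are hypothetical, not GitHub's API.
from dataclasses import dataclass, field

@dataclass
class AgentRun:
    repo_url: str
    task: str
    log: list = field(default_factory=list)  # records each workflow step

    def provision_vm(self):
        self.log.append("provisioned VM")

    def clone_repo(self):
        self.log.append(f"cloned {self.repo_url}")

    def work(self):
        # In the real agent this is the edit/build/test loop.
        self.log.append(f"edited code for task: {self.task}")
        self.log.append("ran test suite")

    def request_review(self, reviewer: str):
        self.log.append(f"opened PR, tagged @{reviewer} for review")

run = AgentRun("https://github.com/example/app", "fix login bug")
run.provision_vm()
run.clone_repo()
run.work()
run.request_review("dev")
```

The key design point is the last step: the agent never merges on its own - a human reviewer stays in the loop.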
The timing isn't random. Microsoft says several factors pushed it toward openness. Large language models have improved so much that keeping prompting strategies secret no longer makes sense. It's like holding onto a recipe for boiling water - the technique is now common knowledge.
Security through transparency
Security played a role too. With AI tools increasingly targeted by bad actors, Microsoft is betting that sunlight is the best disinfectant. The open source community has a track record of spotting and squashing bugs faster than closed teams.
The move also responds to growing questions about data collection in AI coding tools. By opening the source code, Microsoft lets developers see exactly what data these tools gather. No more wondering if your code comments about your boss are being secretly archived somewhere.
Extension developers get full access
Extension developers stand to benefit particularly. Currently, building AI-powered VS Code extensions is like trying to dance with a partner while blindfolded - you can do it, but it's awkward and you'll probably step on some toes. Access to the Copilot Chat source code should make this process smoother.
Microsoft isn't just dumping code onto GitHub and calling it a day. They're also releasing their prompt testing infrastructure, making it easier for contributors to test AI features. Given how unpredictable large language models can be, this is like providing a safety net for the coding trapeze act.
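Why does testing AI features need special infrastructure? Because LLM output varies between runs, exact-match assertions break; prompt tests instead check properties of the response. A minimal sketch of that idea, with `call_model` as a hypothetical stub for whatever endpoint a real harness would hit:

```python
# Property-style prompt test: assert on the shape of the reply,
# not its exact wording, since LLM output is nondeterministic.
import json

def call_model(prompt: str) -> str:
    # Hypothetical stub so the sketch runs; a real harness calls an LLM API.
    return '{"language": "python", "fix": "add missing import"}'

def test_reply_is_valid_json_with_expected_keys():
    reply = call_model("Summarize this bug report as JSON with keys: language, fix")
    data = json.loads(reply)                    # property 1: parseable JSON
    assert {"language", "fix"} <= data.keys()   # property 2: required keys present

test_reply_is_valid_json_with_expected_keys()
```

Releasing a harness like this alongside the code means outside contributors can change a prompt and get a quick signal on whether they broke anything.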
Part of broader Copilot expansion
The announcement comes alongside other GitHub Copilot updates. The tool is getting PostgreSQL support, expanded agent capabilities, and new features for modernizing Java and .NET applications. It's currently used by 15 million developers, suggesting AI coding assistants have moved well beyond the experimental phase.
Why this matters:
This shifts AI coding tools from being proprietary black boxes to community-driven projects. Developers can now peek under the hood of tools they use daily, potentially leading to faster improvements and better security.
Microsoft is betting that community collaboration beats secrecy in AI development. It's a notable stance from a company that once considered open source the enemy - kind of like Darth Vader deciding to join the Rebel Alliance.
Meta users think they're chatting privately with AI. Instead, they're broadcasting medical questions, legal troubles, and relationship problems to the world through a public feed that many don't realize exists.
Disney and Universal sued AI company Midjourney for using their characters without permission to train image generators. It's the first major Hollywood lawsuit against generative AI, testing whether copyright law protects creators in the age of artificial intelligence.
OpenAI cut its o3 model prices 80% while launching o3-pro—a reasoning AI that takes over 3 minutes to respond but outperforms rivals on complex tasks. The move intensifies AI pricing wars and splits the market between fast chat models and slow thinking ones.