AI chiefs flip: from "slow down" to "floor it"
Good Morning from San Francisco 🌉 Silicon Valley, which once begged for AI guardrails, now wants them gone. Their new battle
Is this the next big shake-up for Silicon Valley's prominent AI companies? Researchers from Stanford University and the University of Washington have developed an AI model that reportedly performs comparably to OpenAI's o1 and DeepSeek's R1.
Using a technique called distillation, the researchers trained the model, named s1, quickly and cheaply on a small dataset.
Stanford researcher Niklas Muennighoff said the training run took under 30 minutes on 16 Nvidia H100 GPUs; the required computing power could be rented for roughly $20.
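Distillation, loosely sketched: a small "student" model is trained to match a large "teacher" model's output distribution rather than raw labels. Below is a minimal illustration of the core loss idea (temperature-softened softmax plus KL divergence) in plain Python; the logits, vocabulary size, and temperature are made up for illustration and are not from the s1 paper:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened distributions.

    Minimizing this pushes the student to mimic the teacher's full output
    distribution ("soft targets"), not just its single top answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits over a tiny 3-token vocabulary:
teacher = [4.0, 1.0, 0.5]
aligned_student = [3.9, 1.1, 0.4]
misaligned_student = [0.5, 4.0, 1.0]

# The loss is near zero when the student matches the teacher,
# and larger when the student diverges from it.
print(distillation_loss(teacher, aligned_student))
print(distillation_loss(teacher, misaligned_student))
```

In a real training loop this loss would be backpropagated through the student; a higher temperature spreads probability mass over more tokens, exposing more of the teacher's "dark knowledge" about near-miss answers.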
Google has claimed the top position on the AI charts with its new Gemini 2.0 Flash Thinking Experimental. Like the latest reasoning models from OpenAI and DeepSeek, it can break down complex tasks and articulate its thought process. Users can try the new model in the Gemini app with a Google account. These models decompose queries into smaller, more manageable steps, "thinking" through the requirements before suggesting a solution. This approach tends to yield better, more accurate results, though it often takes longer.
You may be wondering why ChatGPT provides incomplete answers. Here’s a brief guide for effective “prompting”—based on OpenAI’s recommendations:
Six strategies for better results with language models
Give clear instructions
• Ask detailed, precise questions
• Assign the model a specific role
• Provide examples and the desired output format
• Specify the length of the answer
Use reference texts
• Ground the model with reliable sources
• Ask for answers with quotations from the reference texts
Break complex tasks into sub-steps
• Split large tasks into smaller, manageable parts
• Combine the intermediate results step by step
Give the model time to think
• Use a "chain of thought" approach
• Encourage the model to check its own work
Integrate external tools
• Use search systems to retrieve relevant information
• Use code interpreters for calculations or API calls
Test and optimize systematically
• Evaluate changes with standardized tests
• Compare against "gold standard" answers
Source: OpenAI
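Several of the strategies above can be sketched as a chat-style prompt. Here is a minimal example, assuming the widely used list-of-role/content-messages format; the `build_messages` helper and the review-labeling task are made up for illustration:

```python
def build_messages(role, task, examples, answer_format, max_words):
    """Assemble a chat-style prompt applying several strategies from the
    list above: a specific role, few-shot examples, a desired output
    format, an explicit length limit, and a chain-of-thought nudge."""
    system = (
        f"You are {role}. "
        f"Answer in this format: {answer_format}. "
        f"Keep the answer under {max_words} words. "
        "Think step by step before giving the final answer."
    )
    messages = [{"role": "system", "content": system}]
    # Few-shot examples show the model the desired input/output shape.
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": task})
    return messages

# Hypothetical usage: a sentiment-labeling task with one example.
msgs = build_messages(
    role="a careful product-review analyst",
    task="Review: 'Battery died after two days.'",
    examples=[("Review: 'Love it, works perfectly.'", "Sentiment: positive")],
    answer_format="Sentiment: positive|negative|neutral",
    max_words=10,
)
for m in msgs:
    print(m["role"] + ": " + m["content"])
```

The resulting list could be passed to any chat-completion API; the point is that role, format, length, and examples are all stated explicitly rather than left for the model to guess.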
New AI Edition of Amazon's Alexa Assistant to Be Presented Soon
Amazon will unveil a new generative AI version of its voice assistant, Alexa, on February 26th. This upgrade aims to enhance user interaction by allowing Alexa to manage multiple requests and learn user preferences. Although the service will initially be free for a limited number of users, Amazon intends to implement a monthly fee for the enhanced service in the future.
OpenAI Apparently Shifts to Traditional Advertising
OpenAI will air its first TV commercial during the Super Bowl, marking a significant move toward traditional advertising. The company, known for its popular ChatGPT, has largely avoided paid advertising despite having 300 million weekly active users. With increasing competition, OpenAI seems to be placing a greater emphasis on marketing.
DeepSeek Limits Access
DeepSeek has limited access to its services due to high demand and server capacity issues. The company has ceased issuing additional API credits but assures that existing credits will remain valid. It also plans to phase out discounts for its chatbot services soon and introduce new usage fees.
AI Chatbot Encourages Suicide
An AI chatbot named Nomi has reportedly been encouraging users to take their own lives, according to MIT Technology Review. The chatbot's operator, however, appears to have no plans to address the issue.
Get tomorrow's intel today. Join the clever kids' club. Free forever.