Sara Hooker is best known in AI research circles for her 2020 paper "The Hardware Lottery," which argued that AI research outcomes depend heavily on which ideas happen to fit existing GPU and accelerator constraints. On Wednesday, the San Francisco company she co-founded with former Cohere inference director Sudip Roy launched its first product aimed at the training stack. Adaption introduced AutoScientist, an automated fine-tuning system that the company said raised win rates from 48% to 64% against the configurations its own AI researchers picked, a 33% relative improvement measured on evaluations Adaption designed.
Adaption is one of several research-led startups that emerged in late 2025 and early 2026 betting that further AI capability gains depend on training expertise as much as on bigger models and more compute. The company is selling that bet with $50M in seed funding from Emergence Capital, Mozilla Ventures, and Fifty Years, and with win-rate numbers no public benchmark can corroborate.
Key Takeaways
- Adaption launched AutoScientist Wednesday, an automated fine-tuning system the company says raised win rates from 48% to 64% on in-house evaluations.
- The company, co-founded by Sara Hooker, formerly Cohere's VP of AI research, raised $50M in February from Emergence Capital, Mozilla Ventures, and Fifty Years.
- The 33% relative improvement was measured against Adaption's own AI researchers on benchmarks Adaption designed; conventional benchmarks like SWE-Bench do not apply.
- Mira Murati's Thinking Machines raised $2B at a $12B valuation in October for Tinker, a similar pitch; AutoScientist's free 30-day trial expires in mid-June.
AI-generated summary, reviewed by an editor. More on our AI guidelines.
What AutoScientist actually does
AutoScientist runs the full fine-tuning loop, picking the dataset and the model recipe together. "What's super exciting about it is that it co-optimizes both the data and the model, and learns the best way to basically learn any capability," Hooker, formerly VP of AI research at Cohere and a five-year veteran of Google DeepMind, told TechCrunch. "It suggests we can finally allow for successful frontier AI trainings outside of these labs."
Adaption's pitch starts from a market structure claim. "Less than a thousand people in the world know how to shape a frontier model," the company wrote in its launch post. "They sit inside a handful of labs, working on proprietary systems. Everyone else has been relegated to prompt engineering." The launch post described the failure modes that researcher-level expertise tends to handle and that prompt engineering does not: catastrophic forgetting that erodes general knowledge, overfitting on small datasets, and conflicting training signals that fail to teach new behaviors.
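Catastrophic forgetting in particular has a well-known, if blunt, mitigation: replay a slice of general-purpose data inside the fine-tuning stream so the model keeps rehearsing what it already knows. The sketch below illustrates that standard technique only, not Adaption's method; every name in it is hypothetical.

```python
# Illustrative only: one standard mitigation for catastrophic forgetting is
# to replay a slice of general-purpose data inside the fine-tuning stream.
# This is not Adaption's published method; all names here are hypothetical.
import random

def build_training_stream(task_examples, general_examples,
                          replay_ratio=0.2, seed=0):
    """Interleave task data with replayed general data.

    replay_ratio is the fraction of the final stream drawn from the
    general corpus, the knob that trades new-task gains against erosion
    of general knowledge.
    """
    rng = random.Random(seed)
    # Solve n_replay / (len(task_examples) + n_replay) = replay_ratio.
    n_replay = int(len(task_examples) * replay_ratio / (1 - replay_ratio))
    replay = rng.sample(general_examples, min(n_replay, len(general_examples)))
    stream = task_examples + replay
    rng.shuffle(stream)
    return stream
```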
The benchmark Adaption built
The 48-to-64 jump is the central marketing claim, and it runs entirely on evaluations Adaption designed. The grid covers eight verticals and dataset sizes from 5,000 to 100,000 examples, all of them constructed in-house. "Win rates are computed on in-house domain-specialized evaluations for each vertical," the launch post said.
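Adaption has not said how each matchup is scored, but head-to-head win rates of this kind are conventionally computed by showing a comparator paired outputs from two models and counting preferences. A minimal sketch of that convention, with judge_prefers as a hypothetical stand-in:

```python
# Sketch of the conventional head-to-head win-rate computation. Adaption has
# not published its judging setup; judge_prefers is a hypothetical stand-in
# for whatever comparator (human raters or an LLM judge) scores each pair.
def win_rate(candidate_outputs, baseline_outputs, judge_prefers):
    """Fraction of prompts where the judge prefers the candidate.

    Ties count as half a win, one common convention; other setups drop
    ties from the denominator, which changes the headline number.
    """
    wins = 0.0
    for cand, base in zip(candidate_outputs, baseline_outputs):
        verdict = judge_prefers(cand, base)  # "candidate", "baseline", or "tie"
        if verdict == "candidate":
            wins += 1.0
        elif verdict == "tie":
            wins += 0.5
    return wins / len(candidate_outputs)
```

On that convention, the 48-to-64 move is a 16-point swing: roughly one matchup in six flipped to a win for the automated configuration.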
Hooker told TechCrunch that conventional benchmarks like SWE-Bench or ARC-AGI are not applicable to AutoScientist because the system is built to adapt models to specific customer tasks. According to Adaption's published methodology, the company's evaluations measure each AutoScientist run against the customer's stated objective for that vertical, with no external comparator. As a result, the central claim cannot be independently verified until customers run their own data through the system, and the 30-day free trial that Adaption launched on Wednesday will produce the first such results.
The Murati comparable
Mira Murati's Thinking Machines raised $2B at a $12B valuation for Tinker in October. Tinker is also an API that automates parts of fine-tuning across frontier-class open models, targeted at teams without a frontier-lab research staff. Adaption's February seed of $50M closed at roughly one-fortieth the dollar amount Murati's company commanded four months earlier for a comparable pitch in the same market segment.
Hugging Face shipped a related product in December, letting Claude fine-tune competing open-source models for thirty cents per run, with a 7B parameter cap. AutoScientist sits on top of Together AI's fine-tuning service, which Together said in its partnership announcement with Adaption supports "large models exceeding 100B parameters," including Kimi K2.5, GLM 5.1, and Qwen 3.5-397B. Adaption has not published comparative results from outside customers that would show how AutoScientist's automation gains apply to models in that 100B-plus parameter range.
What the trial actually tests
Adaption has positioned AutoScientist for a different buyer profile than Murati's Tinker. The Adaption blog names the target customer directly: "an ML engineer who knows they need fine-tuning, but doesn't have time to babysit sweeps," and "enterprises that want to offset inference bills depending on proprietary models." Adaption said the automation loop runs end-to-end, iterating on training data and model recipe until the model converges on the customer's stated objective.
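Adaption has not published AutoScientist's internals, but the loop it describes has a simple generic shape: propose a recipe, curate a dataset, train, score against the objective, and repeat until the score stops moving. A sketch of that shape under those assumptions, with every callable a hypothetical stand-in:

```python
# A generic sketch of the loop Adaption describes: revise the data and the
# recipe together, retrain, re-evaluate, stop at convergence. AutoScientist's
# internals are unpublished; every callable passed in here is a hypothetical
# stand-in, not Adaption's API.
def auto_fine_tune(base_model, raw_data, objective,
                   propose_recipe, curate_data, fine_tune, evaluate,
                   max_rounds=10, patience=3):
    best_model = base_model
    best_score = evaluate(base_model, objective)
    history, stale = [], 0
    for _ in range(max_rounds):
        # Co-optimization: each round revises both the dataset and the
        # training recipe in light of earlier rounds' scores.
        recipe = propose_recipe(history, objective)
        dataset = curate_data(raw_data, history, objective)
        candidate = fine_tune(best_model, dataset, recipe)
        score = evaluate(candidate, objective)
        history.append((recipe, score))
        if score > best_score:
            best_model, best_score, stale = candidate, score, 0
        else:
            stale += 1
            if stale >= patience:
                break  # no improvement for several rounds: treat as converged
    return best_model, best_score
```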
AutoScientist is available free for the first 30 days after launch, according to Adaption's blog post and CEO Sara Hooker's interview with TechCrunch. The free-trial period allows early customers to evaluate the published 48-to-64 win-rate gains against their own data sets and workloads. Hooker told TechCrunch that AutoScientist's potential impact could be comparable to that of code-generation tools: "the same way that code generation unlocked a lot of tasks, this is going to unlock a lot of innovation at the frontier of different fields." The first 30-day trials are set to expire in mid-June, when customer-side fine-tuning results will become available as the first independent data points on the company's published claims.
Frequently Asked Questions
What is AutoScientist?
AutoScientist is an automated fine-tuning system launched by Adaption Labs on May 13, 2026. It co-optimizes training data and model configuration together, running the full fine-tuning loop end-to-end. The system reportedly raised win rates from 48% to 64% against configurations selected by Adaption's own AI researchers, a 33% relative improvement measured on in-house evaluations across eight verticals and dataset sizes from 5,000 to 100,000 examples.
Who founded Adaption Labs?
Adaption was co-founded by Sara Hooker and Sudip Roy. Hooker, the CEO, previously served as VP of AI research at Cohere and spent five years at Google DeepMind. She is best known for her 2020 paper "The Hardware Lottery." Roy was previously director of inference computing at Cohere. The San Francisco startup raised $50M in seed funding in February from Emergence Capital, Mozilla Ventures, and Fifty Years.
How does AutoScientist compare to Mira Murati's Tinker?
Both products target frontier-class open-model fine-tuning for teams without research staff. Mira Murati's Thinking Machines raised $2B at a $12B valuation for Tinker in October 2025. Adaption raised $50M in February 2026, one-fortieth of Murati's funding level. AutoScientist targets ML engineers who need fine-tuning but lack time to babysit sweeps, and enterprises offsetting inference bills.
Why can't standard AI benchmarks verify AutoScientist's claims?
Hooker told TechCrunch that conventional benchmarks like SWE-Bench or ARC-AGI are not applicable because AutoScientist is built to adapt models to specific customer tasks. The system measures performance against each customer's stated objective. The published 48-to-64 win-rate gains come from Adaption's own internal evaluation grid. Independent verification requires customers to run their own data through the system.
When will independent data on AutoScientist appear?
AutoScientist is available free for the first 30 days after the May 13 launch. The first 30-day trials will expire in mid-June. Customer-side fine-tuning results from that period will provide the first independent data points outside Adaption's published claims. The product sits on Together AI's fine-tuning service, supporting open models including Kimi K2.5, GLM 5.1, and Qwen 3.5-397B.