Humans& Raises $480 Million Seed at a $4.48 Billion Valuation to Build AI for Human Collaboration
Anthropic and xAI refugees raised $480M at a $4.5B valuation for a startup that rejects autonomous AI. Humans& bets the future isn't bots working alone. It's bots helping people work together.

September 2025. Andi Peng sat in a conference room at Anthropic, fluorescent lights humming, watching a demo that should have made her proud. Claude churned through a coding task for eight hours straight. No human touched it. Her colleagues cheered. Peng sat there with her arms crossed.
She had spent months on reinforcement learning and post-training for Claude 3.5 through 4.5, teaching the model to reason better, code faster, work longer. The stated goal was safety. The unstated direction was autonomy. Anthropic loved to highlight how its models could churn for 24 hours, even 50 hours, completing tasks on their own. That framing troubled her.
Two months later, Peng walked out. She co-founded Humans&, a startup built on a premise that sounds almost heretical in Silicon Valley. The idea: AI should help people work together. Not replace them.
January 20, 2026. Humans& dropped its funding announcement. Seed round: $480 million. The valuation they stuck on it: $4.48 billion. The company has maybe 20 employees. No product. Three months old. Nvidia wrote a check. So did Jeff Bezos. SV Angel led the round alongside Humans& co-founder Georges Harik, employee number seven at Google back when the company fit in a garage.
Those numbers sound absurd. They're not an outlier. This is what happens now. Researchers bail from the big labs, hang out a shingle, and watch the money pile up. Thinking Machines Lab pulled in $2 billion last July. Mira Murati, former CTO at OpenAI, runs it. Unconventional AI pulled in $475 million in December. Safe Superintelligence, co-founded by Ilya Sutskever, hit a $32 billion valuation before shipping anything.
Humans& sits at an unusual angle to this trend. The other breakaway labs chase the same prize as their former employers. More capable, more autonomous models. Humans& claims it wants something different. If you've watched this space, you've heard that claim before. Every new lab promises a fresh approach until the compute bills arrive. The question is whether Humans& is building genuinely different machinery or just painting the same machine a friendlier color.
The pitch is simple. AI labs have spent billions training systems to work alone. Humans& wants to train systems that help humans work together.
Eric Zelikman, the CEO, puts it in terms of interaction design. "Chatbots are designed to answer questions," he told the New York Times. "They're not good at asking them." His point: current AI doesn't work hard enough to figure out what you actually need. It finishes prompts. That's not the same as collaboration.
The company website leans hard on the language of connection. AI as "deeper connective tissue that strengthens organizations and communities." Strip the buzzwords and you get something like Slack with a brain. A system that sits in your group chat, helps with research, handles coordination, but stays in the background. Less assistant, more teammate.
Building that requires different training methods. The technical focus: long-horizon reinforcement learning. Multi-agent systems. Memory. Something the company calls "user understanding," which sounds like marketing until you realize current chatbots remember nothing. Humans& wants AI that asks clarifying questions, holds onto context, and learns what specific teams actually need.
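None of this training code is public, so any illustration is guesswork. But the behavioral contrast Zelikman describes is easy to sketch. Here is a minimal toy in Python; every class, method, and heuristic is hypothetical, a stand-in for what would actually be a learned policy:

```python
# Toy sketch only: Humans& has published no code, so every name here is
# invented. It illustrates the contrast between a chatbot that answers
# immediately and one that decides whether to ask a clarifying question
# and folds the answer back into persistent memory.

from dataclasses import dataclass, field


@dataclass
class Turn:
    speaker: str
    text: str


@dataclass
class CollaborativeAssistant:
    # Accumulated context about this team; persists across turns.
    memory: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def respond(self, prompt: str) -> str:
        self.history.append(Turn("user", prompt))
        # A transactional chatbot would answer right away. Here we first
        # check whether the request is underspecified relative to what we
        # already know about the team.
        missing = self._missing_context(prompt)
        if missing:
            question = f"Before I start: {missing}"
            self.history.append(Turn("assistant", question))
            return question
        answer = f"Drafting that now, using stored preferences: {self.memory}"
        self.history.append(Turn("assistant", answer))
        return answer

    def _missing_context(self, prompt: str) -> str | None:
        # Crude keyword heuristic; a real system would learn when to ask.
        if "report" in prompt and "audience" not in self.memory:
            return "who is the audience for this report?"
        return None

    def learn(self, key: str, value: str) -> None:
        # Clarifying answers feed back into memory instead of vanishing.
        self.memory[key] = value


assistant = CollaborativeAssistant()
print(assistant.respond("Write the quarterly report"))  # asks, doesn't answer
assistant.learn("audience", "the board")
print(assistant.respond("Write the quarterly report"))  # now answers
```

The hard part, of course, is not the loop. It's training a model to know which question to ask and when asking is worth the interruption. That is where the long-horizon reinforcement learning comes in.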
"No one really accomplishes anything alone," Harik said. Teams build things. That's his pitch, anyway.
The framing is deliberate. Anthropic, OpenAI, and xAI have all moved toward "agentic" AI, systems that can plan, execute, and iterate on complex tasks with minimal human oversight. That direction makes economic sense: if AI can do eight hours of work while you sleep, the value proposition is obvious. But it also means AI is competing with workers, not complementing them.
Peng left Anthropic because she saw where that road led. "Anthropic is training its model to work autonomously," she said. "That was never my motivation. I think of machines and humans as complementary."
The founding team reads like a roll call of AI lab alumni who decided the frontier wasn't heading somewhere they wanted to go.
Peng worked on Claude's post-training, the phase where models learn to follow instructions and behave in desired ways. Zelikman and Yuchen He both came from xAI, where they helped build Grok, Elon Musk's chatbot. Harik built Google's first advertising systems decades ago and has since become an investor. Noah Goodman is a Stanford professor of psychology and computer science who previously worked at Google DeepMind.
The company's 20-odd employees include alumni from OpenAI, Meta, Reflection, AI2, and MIT. The concentration of talent from competing labs is itself a signal. These are people who had front-row seats to the most advanced AI development on the planet and chose to leave.
Some of that movement is standard startup economics. Lab salaries are generous, but equity in a unicorn is more generous still. The stated reasons, though, cluster around something else: discomfort with the direction.
Zelikman was among the first employees at xAI. He helped build a chatbot that now generates images, answers questions, and, according to recent reports, produces content that has drawn regulatory scrutiny. He left to build something different.
The pattern echoes across the industry. Murati left OpenAI after the company's messy leadership crisis and launched Thinking Machines Lab, though her stated vision is more capability-focused than Humans&'s. Sutskever left OpenAI after the Altman boardroom drama to start Safe Superintelligence, which is explicitly focused on building advanced AI safely. The exodus suggests that working inside the major labs comes with a growing sense of misalignment between personal values and corporate direction.
The term sheet closed in a converted warehouse in SoMa, the kind of space that signals "startup" without the overhead of Mission Bay biotech towers. Humans& doesn't have a permanent office yet. The founders work from apartments and borrowed conference rooms. The contrast with Anthropic's sleek headquarters a few blocks away is deliberate.
SV Angel led the round. The firm was founded by Ron Conway, a legendary early-stage investor who backed Google, Facebook, and Airbnb before they became household names. Conway doesn't typically lead seed rounds at $4.5 billion valuations. This is an exception.
The cap table reads like a who's who of tech money. Nvidia invested and will partner on hardware. Jeff Bezos invested, though he's also backing his own AI venture, Project Prometheus, where he serves as co-CEO. GV, Emerson Collective (Laurene Powell Jobs's firm), Forerunner, DCVC, Human Capital, Liquid 2, Felicis, and CRV all participated. The individual investors include Anne Wojcicki, Marissa Mayer, James Hong, and Thomas Wolf.
"A lot of our investors are human, and they care where humanity is going," Harik said. He's joking. Sort of. The line lands because it's half-serious. The people signing these checks are not backing a faster horse. They're backing the idea that the race itself might be going in the wrong direction.
The valuation demands scrutiny. At $4.48 billion with $480 million raised, the company is valued at roughly 9x the capital invested. For a three-month-old startup with no product and no revenue, that multiple prices in extraordinary execution. If you think that's normal, compare it to traditional software. A SaaS company at this valuation would need $50 million in ARR growing 100% year-over-year. Humans& has zero. The investors are betting that the founders can build technology the major labs have either ignored or failed to crack, and that the market for a different kind of AI is large enough to justify paying a decade ahead.
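The arithmetic is simple enough to check. A quick back-of-envelope, using only the figures above:

```python
# Back-of-envelope check on the round's pricing. All inputs come from the
# reported figures; the SaaS comparison is the article's own benchmark.

valuation = 4.48e9   # post-money valuation, USD
raised    = 4.80e8   # seed round, USD

print(f"Multiple on capital invested: {valuation / raised:.1f}x")  # ~9.3x

# The SaaS comparison: at this valuation, a traditional software company
# with $50M in ARR would be trading at roughly a 90x revenue multiple,
# a price only hypergrowth could justify.
arr = 50e6
print(f"Implied ARR multiple: {valuation / arr:.0f}x")  # ~90x
```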
If you think this is just another well-funded AI lab with a nicer story, look at the resource gap.
Anthropic raised $7.3 billion in 2024 alone. OpenAI is reportedly raising at a $150 billion valuation. xAI raised $6 billion last year and is building one of the largest GPU clusters on the planet, 100,000 Nvidia H100s humming in a Memphis data center. Google, Meta, and Microsoft have AI budgets that dwarf the entire venture capital market.
Humans& has $480 million and 20 people. That's the math.
The company's thesis requires building frontier AI, the kind of large-scale model training that costs hundreds of millions of dollars. The Nvidia partnership helps, but compute is just one input. The major labs have accumulated years of training data, reinforcement learning expertise, and deployment infrastructure. They've locked in talent with compensation packages that can reach eight figures for top researchers. They have fleets of GPUs already spinning.
Humans& is betting it can compete by training models differently. Maybe human-centric AI needs fundamentally different architectures. If so, Humans& could carve out real territory. If it's mostly a product layer on top of similar foundation models, the major labs can copy the approach and outspend Humans& on distribution. The difference between those two futures is everything.
The other constraint is demand. You might assume enterprise buyers want collaboration tools. But the autonomous agent paradigm is popular because it promises leverage: hire one AI, get the output of many humans. The human-centric paradigm promises something subtler: better teamwork, deeper understanding. That value proposition is harder to put on a slide deck.
If Anthropic's Claude can complete a task in eight hours with no human oversight, and Humans&'s product requires involvement throughout, the procurement math tilts toward Anthropic. Humans& has to convince customers that the collaboration premium is worth paying, either because the output is better or because the autonomous alternative carries risks they'd rather avoid.
Zelikman works from a rented desk in a co-working space near Potrero Hill. No corner office. No standing desk with three monitors. When he explains the company's wager, he does it in terms of machinery, not mission statements.
The dominant labs are building engines optimized for solo performance. Anthropic, OpenAI, and xAI train models to complete tasks independently, to reason through problems without human intervention, to run for hours or days. That's the design goal. The founders of Humans& believe those engines will hit walls. Technical walls, when autonomous systems make errors that humans would catch. Social walls, when workers resist tools built to replace them. Regulatory walls, when governments decide that AI autonomy needs limits.
Humans& is building different machinery. Where other labs optimize for autonomy, Humans& optimizes for interaction. Where other labs design stateless systems that start fresh each conversation, Humans& wants memory. Where other labs train models to answer questions, Humans& trains them to ask.
The technical bet centers on accumulation. Current chatbots are transactional. Each prompt is an isolated event. Humans& wants AI that learns the preferences and patterns of the people it works with, that builds a model of the team over time, that can ask the right question at the right moment instead of waiting to be prompted.
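What "accumulation" means in practice is, again, not something Humans& has shown. But the shape of the idea can be sketched: a memory layer that survives the session, so each prompt stops being an isolated event. A toy version in Python, with the file name and schema invented for illustration:

```python
# Purely illustrative: a persistent memory layer that accumulates a model
# of a team across sessions. The store, schema, and heuristic threshold
# are all invented for this sketch.

import json
from collections import Counter
from pathlib import Path

STORE = Path("team_memory.json")  # hypothetical on-disk store


def load_profile(team: str) -> Counter:
    """Load what we've learned about a team so far (empty on first run)."""
    if STORE.exists():
        data = json.loads(STORE.read_text())
        return Counter(data.get(team, {}))
    return Counter()


def record_interaction(team: str, trait: str) -> None:
    """Fold one observed preference into the team's long-lived profile."""
    data = json.loads(STORE.read_text()) if STORE.exists() else {}
    profile = Counter(data.get(team, {}))
    profile[trait] += 1
    data[team] = dict(profile)
    STORE.write_text(json.dumps(data, indent=2))


# Session 1: the team keeps asking for terse summaries; the system notices.
record_interaction("growth-team", "prefers_bullet_points")
record_interaction("growth-team", "prefers_bullet_points")

# Session 2, days later: the profile survives the restart, so the model can
# condition its output on it instead of starting from a blank slate.
profile = load_profile("growth-team")
if profile["prefers_bullet_points"] >= 2:
    print("Formatting the answer as bullet points by default.")
```

A dictionary on disk is trivially easy. Getting a language model to use that context well, without drifting into stale assumptions about what the team wants, is the unsolved part.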
That vision has been tried. Personal AI assistants have been a graveyard in consumer tech. Siri promised to learn your preferences. Cortana promised to anticipate your needs. Google Assistant promised both. All three flopped. The question now is whether large language models change the equation or whether the core problem, getting AI to genuinely understand human context, stays unsolved.
The emergence of Humans& reveals a fracture in the AI industry that venture capital cannot paper over.
For a decade, the major labs ran on one assumption: more capability is better. Bigger models. More data. Better benchmarks. That logic moved billions in investment and produced real breakthroughs. GPT-4 writes code. Claude analyzes documents. Gemini reasons through problems that would have seemed like science fiction in 2020.
But capability without direction is just horsepower. It doesn't tell you where to drive. The labs optimized for what models could do, not for what humans needed them to do. The result is a generation of AI systems that are impressively capable and frequently misaligned with the way people actually work.
Humans& is one response to that misalignment. The founders looked at the agentic future and decided it wasn't the future they wanted to build. Whether their alternative vision can compete with the resource advantages of the major labs is an open question. But the fact that top researchers are leaving those labs to try suggests that capability alone isn't enough to retain the best talent.
The investor interest tells a similar story. The people writing $480 million checks into a three-month-old company are not naive. They see the same market dynamics as everyone else. They're betting that the current AI race will produce a backlash, and that when it does, Humans& will be positioned to capture the demand for something different.
Humans& has not shipped a product. The company has articulated a philosophy and raised capital, but philosophy doesn't compile.
The test comes when the machinery meets the market. Can the founders build AI that actually improves collaboration? Train models that remember context, ask good questions, adapt to how a specific team works? Do all that while competing with labs running 10 to 100 times more compute?
The timeline matters. Thinking Machines Lab raised $2 billion last July and has already lost half its founding team to departures. The AI startup graveyard is filling with companies that raised big rounds and failed to execute. Capital buys time, but it doesn't buy product-market fit.
By January 2027, Humans& will either have a working product or it won't. That's the only test that matters. The philosophy is interesting. The pedigree is impressive. The funding is real. Twelve months from now, though, the machinery either works or it doesn't. These five walked away from Anthropic, xAI, Google. They bet their careers on a different direction. Now comes the hard part.
Q: What does Humans& actually build?
A: The company is building AI tools for team collaboration. Think Slack with a brain sitting in your group chat, helping with research and remembering what you talked about last week. No product yet. Just the pitch and $480 million.
Q: Who founded Humans& and where did they come from?
A: Five co-founders. Andi Peng trained Claude at Anthropic. Eric Zelikman and Yuchen He built Grok at xAI. Georges Harik was Google employee #7. Noah Goodman teaches at Stanford and did time at DeepMind. The rest of the 20-person crew? OpenAI, Meta, MIT.
Q: How does Humans&'s approach differ from Anthropic or OpenAI?
A: The major labs optimize for autonomous AI that can work independently for hours. Humans& trains models to ask questions, remember user preferences, and stay embedded in human workflows. The distinction is philosophical: augmentation versus replacement.
Q: Is this valuation justified for a company with no product?
A: At $4.48 billion on $480 million raised, Humans& is valued at 9x the capital invested. That's steep for a three-month-old startup, but similar to other AI lab spinouts. Thinking Machines Lab hit $12 billion; Safe Superintelligence reached $32 billion. Investors are pricing in execution, not current revenue.
Q: Who invested in the $480 million seed round?
A: SV Angel led alongside co-founder Georges Harik. Nvidia invested and will partner on hardware. Other backers include Jeff Bezos, GV (Google Ventures), Emerson Collective, Forerunner, DCVC, Felicis, and CRV. Individual investors include Anne Wojcicki and Marissa Mayer.