World Labs Raises $1 Billion From Nvidia, AMD, and Autodesk for Spatial AI

Fei-Fei Li's World Labs raised $1 billion from Nvidia, AMD, and Autodesk for spatial AI. The startup opened its 3D generator Marble to the public.

World Labs, Fei-Fei Li's spatial AI startup, raised $1 billion in a round backed by Nvidia, AMD, Autodesk, and Andreessen Horowitz, the company said on Wednesday. Autodesk put in $200 million, the single largest commitment in the round, and will serve as a strategic advisor to the startup. Bloomberg had previously reported the firm was talking to investors at roughly a $5 billion valuation. World Labs wouldn't confirm the number.

That puts World Labs among the best-funded startups chasing what the industry calls world models. Emerson Collective, Fidelity Management & Research Company, and Sea also participated, according to a blog post by the company.

You've probably used ChatGPT or Gemini. Those are language models; they read and write text. World models are after something else. They're trying to build AI that reasons about physics, about how objects sit in a room and what happens when someone shoves one off a table. The kind of intelligence needed to train a robot, or to test a building design before the foundation gets poured.

The Breakdown

  • World Labs raised $1 billion from Nvidia, AMD, Autodesk, and Andreessen Horowitz for spatial AI development.
  • Autodesk invested $200 million, the largest single commitment, and will advise on integrating world models into professional 3D workflows.
  • The startup opened Marble to the public, a tool that generates navigable 3D environments from text, images, video, or hand-drawn sketches.
  • World Labs is competing with Runway, Google DeepMind, and Yann LeCun's AMI Labs to build AI that reasons about physics and three-dimensional space.

From ImageNet to world models

Li helped build one of the foundations of modern AI. She co-created ImageNet. Nobody expected that dataset to matter as much as it did, but deep learning researchers found a testing ground in it and the field broke open. By 2012, labs were throwing neural networks at those millions of tagged images. The models crushed every previous benchmark in object recognition. A competition grew around the dataset and ran for seven years, dragging the whole field forward with it. Most computer vision products on the market today owe something to that work.

World Labs picks up from there. ImageNet taught machines to spot objects in flat pictures. What Li is chasing now is harder: machines that grasp how objects exist in three-dimensional space, that figure out what happens when someone walks through a room or knocks something off a shelf.

She started the company in early 2024 with Justin Johnson, Christoph Lassner, and Ben Mildenhall. All three came out of generative AI and computer vision research. World Labs went public that September with $230 million in funding and a $1 billion valuation. Seventeen months in, the paper number has roughly quintupled.

Marble goes live

Alongside the funding news, World Labs opened Marble to the public. The product had been in closed beta since last November. Marble generates 3D worlds. Text works. So do photographs, video clips, rough 3D sketches.

Ask for a Mediterranean courtyard, crumbling stone walls, terracotta pots, and the model builds it. Not a flat image. A 3D environment, walkable, rotatable, exportable.

The input options are broad. One photo becomes a navigable space. Feed in several shots of the same place from different angles, and the geometry tightens up. Video clips work too. A new tool called Chisel takes another route: users sketch the shape of a world by hand, pick a visual direction (retro sci-fi, brutalist concrete, whatever), and let the model fill in everything else.

Export matters more than generation for the commercial story. Gaussian splats, the particle-cloud format that VFX studios already use, are one option. Standard triangle meshes come in high-detail and physics-ready flavors. Video rendering with camera control is there too.

Users can expand existing worlds by filling in blank areas, or stitch separate worlds together into larger spaces. World Labs launched a companion site, Marble Labs, packed with tutorials. Use cases range from game prototyping to robot training.

Why Autodesk wrote the biggest check

Autodesk's $200 million is the most telling commitment in the round. Architects use Autodesk. Animators use Autodesk. Mechanical engineers use Autodesk. The company's software runs most of the professional 3D work being done today. World models are a natural extension of what Autodesk already sells. That check carries a defensive edge. If a startup owns the world model layer for professional 3D, Autodesk's grip on the workflow weakens.

"If AI is to be truly useful, it must understand worlds, not just words," Li said in a statement. "Worlds are governed by geometry, physics, and dynamics, and reconciling the semantic, spatial, and physical is the next great frontier of AI."

Daron Green, Autodesk's chief scientist, told TechCrunch the partnership is early-stage. "You could anticipate us consuming their models or them consuming our models in different settings," he said. One scenario he described: a customer starts with a world-model-generated sketch of an office layout inside World Labs, then switches to Autodesk's platform to refine specific elements like furniture or structural supports.

Data sharing is not part of the agreement. Green said the collaboration will happen at the research and model level. Both companies will explore how their AI systems feed into each other without pooling proprietary datasets.

Autodesk has its own generative AI effort for 3D work, a system it calls "neural CAD." Trained on geometric data, it generates working 3D models that account for how components would function in the real world. World Labs' technology could extend that capability beyond individual design files to full-scene simulation, the kind of holistic spatial reasoning that robotics firms and production studios need but Autodesk's current tools don't provide on their own.

Gaming and media will be the starting point. Autodesk already works with most major production studios and has trained animation models that respond to physical constraints like gravity and terrain. Green called these "close to world models" and sees a clear fit with World Labs' scene generation. "You're not just animating the dog," he said. "You're giving it a world within which it can now interact."

A crowded race with serious backing

The billion-dollar round arrives at a moment when investors are growing nervous about returns on language models. Hundreds of billions have gone into chatbot infrastructure over the past three years. For most companies outside the model providers themselves, the payoff remains fuzzy. Spatial AI represents a different thesis: that the next wave of value will come from AI tied to physical outcomes (robot performance, building efficiency, production speed) rather than from subscription revenue on text generation.

World Labs is not the only company chasing that thesis. Yann LeCun's world models startup, AMI Labs, has drawn heavy investor interest. Runway, the AI video company, raised $315 million earlier this month at a $5.3 billion valuation and is pivoting aggressively from video generation into world models. Its latest foundation model, Gen-4.5, already leads independent text-to-video benchmarks. The pivot is about making those outputs spatially consistent, turning generated clips into environments with persistent geometry.

Google DeepMind is running its own world-generation research, including a project called Genie. Nvidia's Omniverse platform handles physically accurate simulation at industrial scale.

What separates these players is the bet on where revenue arrives first. Runway approaches the problem from video, trying to make its models understand physics well enough to go beyond pretty clips. DeepMind is exploring agents that learn to operate inside generated worlds. World Labs is positioning spatial intelligence as infrastructure, a platform layer beneath applications in robotics, architecture, film production, and scientific research.

Having Nvidia and AMD as investors carries weight beyond the dollar amount on the term sheet. World model training eats enormous quantities of GPU compute. When every AI lab in the world needs the same hardware, the investor list starts to double as a procurement advantage.

What sits on the other side

Li has been pushing the phrase "spatial intelligence" since before World Labs had a name. Sounds clean on a whiteboard. In practice, it's brutal. AI that looks at a room, knows what's in it, and can predict what happens when something moves. Then builds new rooms that follow those same rules. Four problems rolled into one.

A robotics company trying to deploy a warehouse picker needs all of that. The robot maps the space. Spots boxes on shelves. Plans a route around obstacles. Predicts what happens if its arm clips something. Simulation handles the training. Cheaper than the real floor. Safer, too. The catch is fidelity. If the simulated warehouse drifts too far from the real one, the robot shows up and can't do the job. That's the promise of world models, and the open question.

Architecture runs on a version of the same idea, simulating how light falls through a building across the seasons before anyone breaks ground. So does pharmaceutical research, where molecular interactions happen in three dimensions and bad models mean wasted years. These are not consumer applications. They are industrial plumbing, expensive and invisible. The kind of work that justifies a billion-dollar round.

World Labs said its next focus is interactivity, making world models that let humans and AI agents act within generated environments rather than just observe them. That's where the largest commercial opportunity probably sits. Scene generation for designers is a sideshow. The simulation layer underneath robotics, autonomous vehicles, and scientific computing is the real target.

One billion dollars. A product that went live today. A roster of investors who manufacture the chips these models depend on. Li is building the translation layer between flat AI and the physical world. Whether that becomes standard infrastructure or an expensive experiment will depend on whether world models can close the distance between impressive demos and reliable production tools. That distance has swallowed well-funded AI companies before.

Frequently Asked Questions

What is a world model and how does it differ from a language model?

A language model reads and generates text. A world model tries to understand three-dimensional space, physics, and how objects interact. Think of it as the difference between describing a room and actually knowing what happens when you knock something off a shelf. World models aim to simulate physical environments for robotics, architecture, and scientific research.

What is Marble and what can it do?

Marble is World Labs' first commercial product, now publicly available after a closed beta. It generates 3D environments from text prompts, photographs, video clips, or rough 3D sketches. Users can walk through, rotate, and export these environments as Gaussian splats, triangle meshes, or rendered video with camera control.

Why did Autodesk invest $200 million in World Labs?

Autodesk's software underpins most professional 3D work in architecture, animation, and engineering. World models extend that capability toward full-scene simulation. The investment carries a defensive logic: if another company owns the world model layer, Autodesk's position in the workflow weakens. The two companies plan to start with gaming and media use cases.

Who else is building world models?

Runway raised $315 million at a $5.3 billion valuation and is pivoting from video generation into spatially consistent environments. Yann LeCun's AMI Labs has drawn heavy investor interest. Google DeepMind is running its own research through Project Genie. Nvidia's Omniverse handles physically accurate simulation at industrial scale.

What are the commercial applications for spatial AI?

The near-term targets are gaming and media production, where studios already work with 3D tools. Longer-term applications include robotics training through simulated warehouses, architectural simulation testing light and airflow before construction, and pharmaceutical research modeling molecular interactions in three dimensions.
