Google on Wednesday made Canvas available to every user in the United States through AI Mode in Search, the company said in a blog post. The workspace feature, which generates documents, working app prototypes, and interactive tools from plain-language prompts, no longer requires a Google Labs opt-in. It now supports creative writing and coding tasks alongside the trip planning and study guides that launched last year.

The move puts a Gemini-powered creation environment inside the world's most-used search bar. AI Mode had reached 75 million daily active users as of Alphabet's Q3 2025 earnings disclosure. Canvas slots into that audience without requiring anyone to download a new app or subscribe to anything.

What Changed

From answers to artifacts

Canvas operates as a persistent side panel next to AI Mode's chat interface. You click the "+" icon, select Canvas, and describe what you want built. Gemini pulls real-time data from the web and Google's Knowledge Graph, then generates a working prototype in the panel. Code is visible, editable, and refinable through follow-up conversation.

PCWorld's Ben Patterson tested this by asking AI Mode to build a subway tracking dashboard. Within seconds, he had a working prototype pulling live MTA data, complete with line indicators and station readouts. A T-shirt commerce site took even less time. Neither was production-ready, but both were functional enough to copy into a proper development environment and keep building.
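To make the "station readouts" concrete, here is a minimal sketch of the kind of display logic such a prototype might contain. Everything in it is hypothetical: the feed entries, the station name, and the function names are invented for illustration, and a real integration would parse the MTA's GTFS-Realtime feed rather than hand-built dictionaries.

```python
from datetime import datetime, timedelta, timezone

def minutes_away(eta: datetime, now: datetime) -> int:
    """Whole minutes until an arrival, floored at zero for trains already due."""
    return max(0, int((eta - now).total_seconds() // 60))

def station_readout(station: str, arrivals: list, now: datetime) -> str:
    """Render one dashboard row, e.g. 'Union Sq  [L] 3 min  [N] 9 min'."""
    cells = [
        f"[{a['line']}] {minutes_away(a['eta'], now)} min"
        for a in sorted(arrivals, key=lambda a: a["eta"])  # soonest train first
    ]
    return f"{station}  " + "  ".join(cells)

if __name__ == "__main__":
    now = datetime(2026, 3, 4, 9, 0, tzinfo=timezone.utc)
    feed = [  # stand-in for parsed real-time feed entries
        {"line": "N", "eta": now + timedelta(minutes=9)},
        {"line": "L", "eta": now + timedelta(minutes=3)},
    ]
    print(station_readout("Union Sq", feed, now))
```

The point of tools like Canvas is that this scaffolding, plus the data plumbing around it, is generated from a one-sentence prompt rather than written by hand.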

All of this existed before, but only inside the standalone Gemini app, where Google AI Pro and Ultra subscribers get access to Gemini 3 and a one-million-token context window. Canvas in Search reaches people who have never opened a standalone chatbot and probably never will. Billions already start their day on google.com; Google's play is to meet them there.

The distribution play

Canvas started life as a Labs experiment in July 2025, scoped to study planning on desktop browsers. Google bolted on travel itineraries a few months later, then image uploads for destination matching by early 2026. Every version stayed behind the Labs gate.

Wednesday's announcement tears that gate down. General availability means any US user searching in English can build a scholarship tracking dashboard, draft a business proposal, or prototype a calculator, all from google.com. No waitlist, no subscription, no separate product to learn.

Distribution is the real story here. OpenAI's ChatGPT triggers its Canvas workspace automatically when a query warrants it. Anthropic's Claude requires explicit activation, similar to Google's approach. But neither competitor sits inside a product used by billions of people every day. Google doesn't need users to discover Canvas. They'll trip over it.

And the timing matters. AI Mode itself went live for all US users in May 2025, expanded to over 200 countries by October, and has been creeping toward becoming the default search interface. A Google product manager signaled that trajectory in September 2025. Canvas is another brick in that wall. Search used to return links. Now it returns working prototypes.


What this changes for publishers

Publishers should feel nervous. Canvas doesn't surface links. It synthesizes information into structured outputs (dashboards, documents, app prototypes) grounded in web data that the user never clicks through to read. When someone builds a scholarship tracker through Canvas, the underlying source websites contribute data but receive no visit.

That pattern has haunted the publishing industry since AI Overviews launched, and Canvas makes it worse. At least AI Overviews showed a link. Canvas shows a working app.

Google argues the grounding is a feature. Canvas pulls from real-time web sources and Knowledge Graph entities simultaneously, which lets it generate outputs rooted in current, verified information. That's an honest differentiation from ChatGPT, which lacks the same tight integration with live search data. But for the site operator watching referral traffic flatten on a Google Analytics dashboard, the distinction between "we cited you" and "we sent you traffic" keeps widening.

The bigger product puzzle

Canvas also arrived alongside a separate announcement for NotebookLM, which gained Cinematic Video Overviews. Those use Gemini 3, Nano Banana Pro, and Veo 3 to produce AI-generated videos from uploaded sources, available to Google AI Ultra subscribers. The two products overlap in concept. Both turn source material into new formats. But NotebookLM sits behind a paywall while Canvas in Search is free.

The split reveals Google's tiering strategy. Free users get Canvas in AI Mode with whatever model powers Search. Paying subscribers get the full Gemini app with Gemini 3, the million-token window, and NotebookLM's premium features. The free version is a funnel. Search catches the mass market, and the premium tier catches whoever wants more.

For now, Canvas in AI Mode is US-only, English-only, with no timeline for international expansion. Google hasn't specified which Gemini model version powers the Search-embedded experience versus the subscription tier. Those gaps will matter as competitors sharpen their own workspace products and publishers track where their content ends up.

A year ago, you typed a question into Google and got ten blue links. Now you type a question and get a working app. Google is gambling that nobody will notice the difference, or care.

Frequently Asked Questions

What is Canvas in AI Mode?

A workspace side panel inside Google Search's AI Mode that generates documents, app prototypes, and interactive tools from plain-language prompts. It uses Gemini to pull real-time web data and Google's Knowledge Graph, then produces working code or content in a panel alongside your chat.

How do you access Canvas in AI Mode?

Open AI Mode in Google Search, click the "+" icon in the chat window, and select Canvas. Describe what you want to build, and Gemini generates a working prototype in a side panel. Available to all US users in English as of March 4, 2026.

Is Canvas in AI Mode free?

Yes. All US users can access Canvas in AI Mode with no subscription, no Labs opt-in, and no waitlist. Google AI Pro and Ultra subscribers get additional features through the standalone Gemini app, including Gemini 3 and a one-million-token context window.

How does Google Canvas differ from ChatGPT Canvas?

ChatGPT's Canvas triggers automatically based on the query. Google's version requires manual activation from the tool menu. Google's key differentiator is real-time grounding: Canvas pulls live web data and Knowledge Graph entities simultaneously, a level of integration with live search data that ChatGPT's Canvas lacks.

What kinds of projects can Canvas build?

Working prototypes of web apps, dashboards with live data integration, commerce sites, documents, study guides, travel itineraries, and coding projects. All are refinable through follow-up conversation with Gemini. Users can toggle between preview mode and the underlying code.


New Delhi

Freelance correspondent reporting on the India-U.S.-Europe AI corridor and how AI models, capital, and policy decisions move across borders. Covers enterprise adoption, supply chains, and AI infrastructure deployment. Based in New Delhi.