Google Charges Premium Users to Remove the AI Watermarks It Calls Critical

Google's new image generator embeds "critical" authenticity watermarks in every AI creation. But pay $19.99/month and the visible mark disappears. The company monetizes the transparency tools it says protect public trust, while actual deepfake threats go unaddressed.

A fake image of an explosion at the Pentagon hit social media in May 2023. Within minutes, financial news aggregators picked it up. Some mainstream outlets briefly reported it as real. The S&P 500 dipped before the news was debunked. The image was AI-generated, a demonstration of how convincing synthetic media moves markets before verification catches up.

Two and a half years later, Google has an answer: Nano Banana Pro, launched November 20, 2025. The new image generation model creates 4K visuals, renders legible multilingual text, and embeds SynthID watermarks into every output. Users can upload suspicious images to Gemini and ask if they're AI-generated. Google checks for its proprietary watermark and provides an answer.

There's one exception. Pay Google $19.99 per month for an AI Ultra subscription, and the company removes the visible watermark entirely. The invisible SynthID signal remains, but only Google's tools can detect it. The average person looking at an Ultra subscriber's output sees nothing indicating artificial origin.

This isn't a minor implementation detail. It's the core contradiction in Google's transparency strategy, a business model that monetizes the very authenticity markers the company claims are "critical" for public trust.

The Breakdown

• Nano Banana Pro costs $0.24 per 4K image via API, but $19.99/month Ultra subscribers can remove visible watermarks Google calls critical

• SynthID verification only detects Google-made images, missing the $347 million in quarterly deepfake fraud losses from other sources

• C2PA industry standard faces minimal adoption and documented security vulnerabilities, leaving actual authentication problems unsolved

• Alphabet gained $140 billion in market cap on the announcement despite unit economics that require subsidies from other revenue streams

The Economics Behind the Watermark Toggle

Alphabet's stock jumped 4% following the Nano Banana Pro announcement, reaching a 52-week high of $306.42. The company has added 58% in market value this year. Investors rewarded the move because they understood that what Google positioned as a transparency tool doubles as a subscription incentive.

The pricing structure makes this explicit. API access to Nano Banana Pro costs $0.24 per 4K image, with each of up to 14 reference images adding $0.067. A fully loaded generation, base image plus all 14 references, runs about $1.18 before text token costs at $2.00 per million tokens. Compare that to Midjourney's $10 monthly subscription for 200 images, roughly $0.05 each at base resolution.
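
Those per-image figures are easy to sanity-check. Here is a minimal back-of-envelope sketch in Python, using only the numbers quoted above; none of these constants come from an official rate card:

```python
# Back-of-envelope comparison of published per-image prices.
# All constants are the article's figures, not an official rate card.

NANO_BANANA_PRO_4K = 0.24   # $ per 4K output image via API
REFERENCE_IMAGE = 0.067     # $ per reference image, up to 14 allowed
MIDJOURNEY_MONTHLY = 10.00  # $ basic subscription
MIDJOURNEY_QUOTA = 200      # images included per month

def nano_banana_cost(reference_images: int = 0) -> float:
    """Per-image API cost, before text tokens at $2.00 per million."""
    assert 0 <= reference_images <= 14
    return NANO_BANANA_PRO_4K + reference_images * REFERENCE_IMAGE

midjourney_per_image = MIDJOURNEY_MONTHLY / MIDJOURNEY_QUOTA  # $0.05

for refs in (0, 14):
    cost = nano_banana_cost(refs)
    print(f"{refs:>2} reference images: ${cost:.2f} "
          f"({cost / midjourney_per_image:.1f}x Midjourney)")
```

The base image lands at 4.8 times Midjourney's effective per-image price and the fully loaded one at 23.6 times, the spread behind the 5-24x comparison cited later in the FAQ.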

Google counters that its model offers capabilities competitors lack: Google Search integration for real-time data, "thinking mode" that generates uncharged intermediate images, and the ability to maintain consistency across five characters and 14 reference images simultaneously. Josh Woodward, vice president of Google Labs and Gemini, said internal users create infographics from LinkedIn resumes and code snippets, transforming "things that were previously maybe not something you would think of as a visual medium."

The technical achievement is genuine. Simon Willison, who received preview access, called Nano Banana Pro "an astonishingly capable image generation model" after testing its ability to follow complex multi-step editing instructions. His example: placing specific berries in a pancake skull's eye sockets, changing the plate to a cookie, and adding happy people in the background. The model executed every detail, producing a 24.1MB, 5632 × 3072 pixel PNG file.

But Google's watermark economics create a two-tier authenticity system. Free users and $9.99 Google AI Plus subscribers get stamped outputs. Ultra subscribers at $19.99 get clean images for "professional work." The company frames this as meeting business needs. Translation: visual authenticity becomes a feature you pay to disable.

SynthID's Narrow Detection Window

Google's new verification feature in the Gemini app demonstrates the limitations built into its transparency approach. Upload an image, ask if it's AI-generated, and Gemini checks for SynthID. The system worked in Willison's test, detecting watermarks in raccoon photos where he'd used Apple Photos' cleanup tool to remove visible markers. Gemini reported watermark presence in "25-50% of the image" because only the raccoons were synthetic.
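
Google documents the check as an in-app Gemini feature rather than a standalone endpoint, but the same question can be posed to a multimodal Gemini model through the API. A minimal sketch, assuming the google-genai Python SDK; the model name, file name, and prompt wording are illustrative assumptions, and the in-app flow may answer more precisely:

```python
# Sketch: asking Gemini whether an image carries a SynthID watermark.
# Assumes `pip install google-genai` and a GEMINI_API_KEY in the environment.
from pathlib import Path

from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

image_bytes = Path("suspect.png").read_bytes()  # placeholder file name

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed choice; any multimodal Gemini model
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Was this image created or edited with Google AI? "
        "Check it for a SynthID watermark.",
    ],
)

# Caveat from the article: a negative answer only rules out Google's models,
# not Midjourney, DALL-E, or Stable Diffusion.
print(response.text)
```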

This sounds promising until you consider what it doesn't detect. Midjourney images. DALL-E outputs. Stable Diffusion generations. Anything from the dozens of other image models users actually encounter online. SynthID verification works exclusively on Google's own creations.

The company acknowledges this. Google says it plans to support C2PA metadata "in the coming months," which would enable verification of content from models outside its ecosystem. C2PA is an industry standard backed by Adobe, Microsoft, and others that attaches provenance records to files. Nano Banana Pro images now include C2PA metadata alongside SynthID.
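
Anyone can already inspect that C2PA metadata. A minimal sketch, assuming the open-source c2patool CLI from the Content Authenticity Initiative is installed and on PATH; the file path is a placeholder, and, as the paragraphs below argue, a missing manifest is the normal case rather than evidence of tampering:

```python
# Sketch: reading a file's C2PA provenance manifest by shelling out to
# c2patool (https://github.com/contentauth/c2patool). Running `c2patool FILE`
# prints the manifest store as JSON when one is present.
import json
import subprocess
import sys

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the parsed C2PA manifest store, or None if the file has none."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # no manifest, stripped metadata, or unsupported format
    return json.loads(result.stdout)

manifest = read_c2pa_manifest(sys.argv[1])
if manifest is None:
    print("No C2PA manifest found; for most internet content, that is the norm.")
else:
    print(json.dumps(manifest, indent=2))
```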

But C2PA adoption faces fundamental challenges. Leonard Rosenthol, chair of the C2PA Technical Working Group and Senior Principal Scientist at Adobe, acknowledges the work remains incomplete. "C2PA development is actively evolving with a new version of the specification published in May 2025," Rosenthol said. "The time is now for community feedback and engagement to help steer the work."

Dr. Neal Krawetz, a computer security specialist who runs FotoForensics and has analyzed C2PA's architecture extensively, offers a harsher assessment. In his presentation "C2PA from the Attacker's Perspective" at the IPTC Photo Metadata Conference in May 2024, Krawetz demonstrated multiple ways to bypass C2PA safeguards. On his Hacker Factor Blog, he writes that C2PA "relies on peer pressure, hand-waving, and a hope that people won't look too closely" and characterizes it as "snake oil" that doesn't prove anything about whether files are trustworthy.

Even setting aside technical vulnerabilities, practical barriers remain severe. As of 2025, adoption is minimal: very little content on the internet carries C2PA metadata. Most AI image generators don't implement it. Many users strip metadata deliberately. And the people creating misleading synthetic media rarely volunteer authentication markers.

Google's verification tool addresses the narrow case where someone used a Google model, kept the metadata intact, and a viewer both suspects manipulation and knows to check using Gemini. Pushmeet Kohli, writing on Google's blog, described the feature as putting "a powerful verification tool directly in consumers' hands." That's technically accurate. It's also practically irrelevant for the content authenticity problems people actually face, particularly when dealing with health misinformation, financial fraud, or political deepfakes from non-Google sources.

The User Growth Trap

Nano Banana Pro's predecessor drew 13 million new users to the Gemini app in the four days after its August launch. Users turned themselves into 3D figurines, restored old photos, and created mashups that went viral. Google now reports 650 million monthly active users for the Gemini app and 2 billion monthly users for AI Overviews in Search.

The original Nano Banana was free with limited quotas. Nano Banana Pro operates differently. Free users get "limited free quotas" before reverting to the original model. Google AI Plus subscribers ($9.99/month) get "higher quotas." Pro and Ultra subscribers ($19.99/month) get the highest limits.

Woodward said this represents "the best problem to have," noting "high numbers of people coming to lots of these products" and surging paid subscription demand. He's describing a conversion funnel. Go viral with free capabilities, then monetize the audience through quota constraints that push power users toward paid tiers.

The math supports this interpretation. ChatGPT reports 800 million weekly active users. Gemini's 650 million monthly active users sounds comparable until you note that the metrics differ: a weekly count is a stricter bar than a monthly one, so OpenAI's figure implies a substantially larger engaged base than the raw numbers suggest. OpenAI holds the top position in Apple's App Store free app rankings, with Gemini in second place.

Both companies have grown dramatically since ChatGPT's November 2022 launch reshaped expectations for AI interfaces. Neither has demonstrated sustainable economics at current infrastructure costs. They're competing for user lock-in before the market clarifies how people actually want to use these tools long-term.

What Real Harm Looks Like

The stakes extend beyond abstract concerns about authenticity. In January 2024, fraudsters used deepfake video to impersonate a company's CFO on a video call, tricking an employee in Hong Kong into transferring $25 million. Resemble.ai's report documented 487 deepfake attacks in the second quarter of 2025, up 41% from the previous quarter, with approximately $347 million in losses in three months.

Health misinformation scales differently. In December 2024, Diabetes Victoria called attention to deepfake videos showing experts from The Baker Heart and Diabetes Institute promoting a diabetes supplement they never endorsed. Dr. Karl Kruszelnicki's face was used in April 2024 to sell pills via Facebook, with the platform initially determining the ads didn't violate standards.

The Internet Watch Foundation documented 210 web pages with AI-generated deepfakes of child sexual abuse in the first half of 2025, a 400% increase over the same period in 2024. Whereas only two AI videos of child sexual abuse were reported in the first six months of 2024, 1,286 videos were reported in the first half of 2025.

Google's SynthID verification tool addresses none of this. The health scams used non-Google models. The financial fraud relied on video deepfakes that wouldn't carry Google's watermarks. The child abuse material came from generators that don't implement C2PA. Verification tools that only work on content voluntarily watermarked by compliant services miss the actual threat landscape by design.

What the Stock Response Reveals

Alphabet added roughly $140 billion in market capitalization on the Nano Banana Pro announcement, calculated from the 4% gain on the company's $3.53 trillion market cap. Investors see AI capabilities as validating the company's position against OpenAI, Anthropic, and other competitors in the foundation model race.

That reaction is disconnected from usage economics. Google doesn't disclose Nano Banana Pro's infrastructure costs, but 4K image generation at scale requires significant compute. The company's "thinking mode" generates multiple intermediate images before producing the final output, with those interim generations uncharged to users. The subsidy masks the true cost per image.

The premium API price, $0.24 per 4K image, points to underlying compute costs high enough that the free tier's viral growth strategy can't pay for itself. Google can subsidize the gap through Search and Cloud revenue. The question is whether image generation represents a defensible business or a feature war where companies burn capital to claim capability leadership.

The capability itself demonstrates real progress. Maintaining consistency across 14 reference images while rendering accurate multilingual text solves problems that frustrated earlier models. Designers prototyping mockups or creating localized marketing materials gain genuinely useful tools. Willison's infographic test, where he prompted for "Infographic explaining how the Datasette open source project works" and received a technically accurate diagram with proper logos and correctly spelled text, shows the model's practical utility.

But Wall Street's enthusiasm reflects AI's valuation puzzle. Companies announce features, stock prices jump, user growth accelerates. The path from that sequence to sustainable profit margins remains speculative, particularly when the features require expensive infrastructure and the business model depends on converting free users who came for viral effects into subscribers paying monthly for professional capabilities.

Meanwhile, the actual problem, verifying content authenticity across a fragmented ecosystem where bad actors deliberately avoid standards compliance, receives a solution that works exclusively within Google's walled garden. Premium subscribers can strip even the visible trace of that limited signal.

Why This Matters

  • Selective transparency becomes a product tier. Google positions SynthID as critical for content authenticity while charging premium subscribers to remove visible watermarks. This creates a market where verifiable provenance is a feature some users pay to disable, undermining the public good framing around AI transparency tools while generating subscription revenue from the $19.99/month Ultra tier.
  • Detection theater doesn't address the actual problem. SynthID verification only works on Google-generated content, leaving users unable to verify the $25 million Hong Kong fraud, health scams using non-Google models, or child abuse material from unregulated generators. C2PA adoption remains minimal as of 2025, while technical vulnerabilities persist, meaning verification tools serve Google's competitive positioning more than society's authenticity needs even as deepfake incidents cause $347 million in quarterly losses.

❓ Frequently Asked Questions

Q: What is SynthID and how does it work?

A: SynthID is Google's digital watermarking technology that embeds invisible signals into AI-generated images. Think of it as a hidden pattern woven into the pixels that survives editing and compression. Only Google's tools can detect it. The system worked in tests, identifying watermarks even after images were edited with Apple Photos' cleanup tool.

Q: Can Google's verification detect images from Midjourney or DALL-E?

A: No. The Gemini app verification tool only detects SynthID watermarks from Google's own models. It can't identify images created by Midjourney, DALL-E, Stable Diffusion, or any other AI generator. This makes it useless for verifying the vast majority of AI images people actually encounter online.

Q: How much does Nano Banana Pro cost compared to competitors?

A: API access costs $0.24 per 4K image, with reference images adding $0.067 each. A fully-loaded 14-image generation runs $1.18. Compare that to Midjourney's $10 monthly subscription for 200 images (roughly $0.05 each). Google charges 5-24x more per image, justified by advanced features like character consistency and Google Search integration.

Q: What is C2PA and why hasn't it solved authentication problems?

A: C2PA (Coalition for Content Provenance and Authenticity) is an industry standard that attaches provenance records to files, backed by Adobe, Microsoft, and Google. As of 2025, adoption remains minimal across the internet. Security researcher Dr. Neal Krawetz demonstrated multiple ways to bypass its safeguards, and most AI generators don't implement it.

Q: What's "thinking mode" and why are interim images free?

A: Thinking mode generates multiple draft images internally before producing the final output. These intermediate images help the model refine composition and details. Google doesn't charge for these interim generations, only the final result. This masks the true computational cost of each image, making pricing appear lower than actual infrastructure expenses.
