In an Associated Press story updated at 7:02 a.m. Pacific on Monday, John Hultquist said the sentence defenders had hoped never to hear: "It's here." The chief analyst at Google Threat Intelligence Group was talking about a criminal crew, an unnamed open-source administration tool, and a Python exploit that could bypass two-factor authentication once a username and password were already in hand. One exploit. A planned mass campaign.

Google's finding turns AI bug hunting from a lab-access debate into a patch-window problem. Anthropic's Mythos showed what happens when a model finds flaws at scale for defenders. Google's report says criminals are starting to run the same race from the other side.

What Changed

AI-generated summary, reviewed by an editor. More on our AI guidelines.

The first trace was noisy

Google's report says the attackers were "prominent cyber crime threat actors" preparing a mass exploitation operation, not a state service running a quiet espionage job. The company worked with the unnamed vendor to patch the flaw before the campaign did damage, according to The New York Times. Google also told reporters it did not believe Gemini or Anthropic's Mythos was used.

Google cited "educational docstrings," a hallucinated CVSS score, detailed help menus, and the clean _C ANSI color class. Rob Joyce, the former NSA cybersecurity director, told the Times that "A.I.-authored code does not announce itself," then called Google's evidence "the closest thing yet to a fingerprint at the crime scene."

The exploit bypassed two-factor authentication. It still required valid credentials.

That is the first trace in miniature: real enough to matter, sloppy enough to expose itself.

Mythos made the clock public

Anthropic had already made the defensive side visible. In April, the company said Mythos found thousands of vulnerabilities across major operating systems and browsers, and Implicator previously covered why Project Glasswing was about time: selected vendors got months to patch before comparable tools spread.

Mozilla told Ars Technica that Mythos helped find 271 Firefox bugs over two months, including 180 marked sec-high, 80 sec-moderate, and 11 sec-low. Brian Grinstead, a Mozilla distinguished engineer, said the reports had "almost no false positives."

Daniel Stenberg, curl's lead developer, posted the colder counterexample Monday: "yes, as in singular one." Mythos scanned 178,000 lines of curl C code, part of a codebase with 660,000 words and 188 published CVEs. Curl's team cut five claimed "confirmed security vulnerabilities" to one low-severity CVE planned for June, three false positives, and one ordinary bug.

One scan. One confirmed vulnerability.

The lesson is not that Mythos failed. Stenberg wrote that AI analyzers are "significantly better" than older tools and that not using them leaves attackers time. The lesson is that the advantage depends on harnesses, targets, and verification, not model mythology.

Criminal speed changes the math

Google's new case matters because criminals do not need perfect scanners. They need enough speed to beat disclosure. Hultquist told AP that criminal hackers have more to gain from AI's "tremendous capability for speed" in finding and weaponizing bugs than slow-moving government spies do.

"There's a race between you and them to stop them before they can essentially get whatever data they need to extort you with, or launch ransomware," he said. He separately told The Register, "For every zero-day we can trace back to AI, there are probably many more out there."

Google's broader tracker shows the same pattern outside the zero-day case. APT45 sent thousands of prompts to analyze CVEs and validate proof-of-concept exploits. PRC-linked actors used expert persona prompts against TP-Link firmware and Odette File Transfer Protocol implementations. Google also observed threat actors experimenting with a GitHub repository containing more than 85,000 vulnerability cases from China's WooYun bug bounty era.

The odd detail is the operational one. Google says an APT27-linked tool had maxHops hardcoded to 3, while normal VPN settings are usually 1 hop. That is not a model benchmark. It is a clue that AI-assisted code work is bleeding into relay networks, malware padding, and account factories.
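Google released the indicator, not the tool's source, so the following is a hedged sketch with invented names. It shows only what the clue itself looks like: a relay hop count hardcoded to 3 in code, where an ordinary VPN client would chain a single hop or leave the value configurable.

```python
# Hypothetical sketch -- Google did not publish the APT27-linked tool's code.
# It illustrates the anomaly: maxHops fixed at 3, versus the usual 1.

MAX_HOPS = 3  # hardcoded; typical VPN configurations use a single hop


def plan_route(relays: list[str]) -> list[str]:
    """Select up to MAX_HOPS relays for a multi-hop circuit."""
    if not relays:
        raise ValueError("no relays available")
    return relays[:MAX_HOPS]


route = plan_route(["relay-a", "relay-b", "relay-c", "relay-d"])
print(route)  # ['relay-a', 'relay-b', 'relay-c']
```

A constant like that is the kind of operational fingerprint defenders can hunt for, independent of any model attribution.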

The zero-day race is no longer only about who finds the bug first. It is about who turns discovery into repeatable work.

The patch window is the product

Google wants the same technology on defense. Its report points to Big Sleep for finding flaws and CodeMender for fixing them. Anthropic wants controlled access through Glasswing. OpenAI, AP reported, has a cybersecurity version of ChatGPT limited to defenders of critical infrastructure. Dean Ball, a former White House tech policy adviser, told AP, "I don't like regulation. I would prefer for things not to be regulated. But I think we need to in this case."

Those arguments now have a criminal fact pattern attached. Google said prominent cybercrime actors likely used AI to support the discovery and weaponization of a 2FA-bypass vulnerability, and a vendor patched before a mass campaign began. No target, attacker group, tool, or model was named; Google said it disrupted the operation before any damage was done.

The missing names are the story.

Since Anthropic's April Mythos rollout, part of the AI-security policy debate has centered on who should get access to restricted cyber-capable models. Google just moved the harder question forward: how much time defenders get before restricted capability becomes ordinary criminal process. On Monday morning, Hultquist said the era was here. The first trace was not a warning label. It was a patch window closing.

Frequently Asked Questions

What did Google say criminals did with AI?

Google said prominent cybercrime actors likely used an AI model to support the discovery and weaponization of a zero-day vulnerability. The flaw could bypass two-factor authentication on an unnamed open-source web administration tool, but it still required valid credentials.

Did Google identify the AI model used?

No. Google said it did not believe Gemini was used, and reporting indicates Anthropic's Mythos is also not believed to have been involved. The specific model remains unnamed.

Was the attack successful?

Google said it worked with the affected vendor and disrupted the operation before any damage was done. The company did not name the vendor, the target, the tool, or the attacker group.

Why does Mythos matter to this story?

Mythos showed that AI systems can help defenders find serious software bugs. Google's report matters because it points to criminals starting to use similar capabilities for exploitation rather than patching.

What is the main policy issue now?

The debate is no longer only who gets access to restricted cyber-capable models. It is also how much time defenders get to patch before similar capability becomes routine in criminal workflows.


Editor-in-Chief and founder of Implicator.ai. Former ARD correspondent and senior broadcast journalist with 10+ years covering tech. Writes daily briefings on policy and market developments. Based in San Francisco. E-mail: [email protected]