Fake Rubio, Real Risk: AI Clone Targets Signal Accounts of Senior Government Leaders

Someone used AI to clone Marco Rubio's voice and contacted foreign ministers, a US governor, and a member of Congress through Signal. The scammer left convincing voicemails targeting high-level officials, exposing gaps in government security.

AI Voice Scammer Impersonates Marco Rubio to Fool Officials

💡 TL;DR - The 30-Second Version

👉 Someone used AI to clone Marco Rubio's voice and, in mid-June, contacted at least five officials through Signal, including three foreign ministers, a US governor, and a member of Congress.

📊 Creating AI voice clones requires just 15-20 seconds of audio from the target and basic internet access - technology accessible to anyone.

🔍 The FBI is investigating this case alongside a similar May incident in which someone impersonated White House Chief of Staff Susie Wiles.

🌍 AI impersonation attacks are spreading globally with recent campaigns reported in Canada, Ukraine, and multiple US government agencies.

🚨 State Department warns diplomats worldwide that voice messages from senior officials should not be assumed authentic without verification.

🛡️ Government officials remain vulnerable because they rely on Signal for sensitive communications - once a target's phone number is obtained, impersonation becomes straightforward.

Someone used artificial intelligence to clone Marco Rubio's voice and contacted at least five high-level officials, including three foreign ministers, a US governor, and a member of Congress. The impersonator left voicemails and sent text messages through Signal, the encrypted messaging app that government officials use for sensitive communications.

The unknown person created a fake Signal account in mid-June using the display name "marco.rubio@state.gov" - not Rubio's actual email address. They left AI-generated voicemails for at least two targets and sent text messages inviting others to communicate on the platform.

US authorities don't know who did this, but they believe the goal was to "manipulate targeted individuals with the goal of gaining access to information or accounts," according to a State Department cable sent to diplomatic posts worldwide on July 3.

The Technology Behind the Scam

Creating convincing AI voice clones has become remarkably simple. Experts say anyone can do it with basic internet access and about 15 to 20 seconds of audio from the target person. For someone like Rubio, who gives public speeches and interviews regularly, getting that audio sample is trivial.

"You just need 15 to 20 seconds of audio of the person, which is easy in Marco Rubio's case," said Hany Farid, a digital forensics professor at UC Berkeley. "You upload it to any number of services, click a button that says 'I have permission to use this person's voice,' and then you type what you want him to say."

The scammer chose voicemails deliberately. Unlike phone calls, voicemails don't require real-time interaction, making them easier to fake convincingly. The recipient can't ask follow-up questions or notice conversational quirks that might expose the fraud.

Part of a Larger Pattern

This incident fits into a troubling trend. In May, someone breached White House Chief of Staff Susie Wiles' phone and began calling senators, governors, and business executives while pretending to be her. The FBI investigated that case, though President Trump dismissed its importance.

The State Department is tracking a separate campaign that began in April involving a Russia-linked actor who posed as a State Department official. This person targeted Gmail accounts of think tank scholars, Eastern European activists, dissidents, journalists, and former State Department officials through sophisticated phishing emails.

"The actor demonstrated extensive knowledge of the Department's naming conventions and internal documentation," the cable noted. Industry partners attributed this campaign to a cyber threat actor associated with the Russian Foreign Intelligence Service.

Why Signal Makes This Easier

The impersonator chose Signal for good reason. Government officials rely heavily on the encrypted messaging app for both personal and professional communications. Its end-to-end encryption makes it popular among officials who handle sensitive information.

Signal's popularity in government circles became clear in March when then-National Security Adviser Mike Waltz accidentally added a journalist to a Signal group chat discussing secret US attack plans in Yemen. The chat included Rubio, Wiles, Defense Secretary Pete Hegseth, Vice President JD Vance, and other top officials. The security breach contributed to Waltz's ouster and curtailed Signal's use for national security meetings.

But officials continue using Signal individually. Once scammers obtain phone numbers linked to an official's Signal account, creating fake accounts becomes straightforward.

The Broader Threat

AI impersonation attacks are spreading globally. In June, Ukraine's Security Service announced that Russian intelligence agents were impersonating the agency to recruit Ukrainian civilians for sabotage missions. The same month, Canadian authorities warned that scammers were using AI to impersonate senior government officials in campaigns designed to steal sensitive information or install malware.

The FBI issued a warning in May about "malicious actors" impersonating senior US officials through "ongoing malicious text and voice messaging campaigns." These attacks use AI-generated voice messages and likely aim to "elicit information or funds."

"If you receive a message claiming to be from a senior US official," the FBI warned, "do not assume it is authentic."

The Security Response

The State Department is investigating the Rubio impersonation and has urged diplomats to warn external partners about fake accounts. Officials can report impersonation attempts to the FBI's Internet Crime Complaint Center, while State Department personnel should alert the Bureau of Diplomatic Security.

"The State Department is aware of this incident and is currently investigating the matter," a senior official said. "The Department takes seriously its responsibility to safeguard its information and continuously takes steps to improve the department's cybersecurity posture to prevent future incidents."

Easy to Execute, Hard to Detect

Security experts warn that these attacks will only become more common as the technology improves and spreads. "It's easy enough that my eight-year-old or my 80-year-old parents can do it without any technical ability," said Ben Colman, CEO of Reality Defender, a deepfake detection company.

"Unlike ransomware or a traditional computer virus, anybody with an internet connection and browser for Google search can make an incredibly entertaining or incredibly dangerous deep-fake of absolutely anybody."

The ease of creation contrasts sharply with the difficulty of detection. Voice cloning technology has advanced faster than the tools designed to identify fake audio. This creates a dangerous gap that malicious actors can exploit.

Government Officials as Targets

High-profile officials make attractive targets for several reasons. They have access to sensitive information, maintain extensive professional networks, and often use their personal devices for work communications. Government officials can also be careless about data security, making them vulnerable to social engineering attacks.

The Rubio impersonation demonstrates how attackers can weaponize officials' own communication preferences against them. By mimicking their voice and targeting their known contacts, scammers can exploit the trust relationships that make government function.

Why this matters:

• Government officials now face a new category of threat where their own voice becomes a weapon against their contacts and networks.

• The technology gap between creating and detecting AI voice clones means we're entering an era where any phone call or voicemail from a public figure should be treated with skepticism.

❓ Frequently Asked Questions

Q: How easy is it to create AI voice clones?

A: Extremely easy. You need just 15-20 seconds of audio from the target person and basic internet access. Experts say an 8-year-old or 80-year-old can do it without technical skills. You upload the audio to available services, click "I have permission," and type what you want them to say.

Q: How can you tell if a voice message is fake?

A: It's becoming nearly impossible with current technology. Detection tools lag behind creation tools. The FBI warns not to assume any message from a senior official is authentic. Your best defense is to verify through a separate communication channel before responding.

Q: What other officials have been targeted by AI impersonators?

A: White House Chief of Staff Susie Wiles was impersonated in May, with the scammer calling senators, governors, and business executives. The FBI is also tracking cases where "malicious actors" impersonated multiple senior US officials in ongoing campaigns targeting government leaders.

Q: Why don't government officials just stop using Signal?

A: Signal's end-to-end encryption makes it valuable for sensitive communications. Officials use it for both personal and professional messages. While the March security breach reduced its use for national security meetings, individual officials continue using it because the value of secure communication outweighs the impersonation risk.

Q: What happens if someone falls for this scam?

A: The scammer gains access to personal accounts or sensitive information. The State Department cable warned that "information shared with a third party could be exposed if targeted individuals are compromised." This could include classified data, personal details, or login credentials.

Q: Is impersonating federal officials illegal?

A: Yes. Impersonating a federal officer or employee to deceive or obtain something of value is a federal crime. The FBI is investigating these cases, and incidents can be reported through its Internet Crime Complaint Center, though prosecuting international actors remains challenging.

Q: How common are these AI impersonation attacks?

A: They're spreading rapidly worldwide. Canada, Ukraine, and the US have all reported recent cases. Security experts report "a huge increase in volume" of these attacks. The FBI issued warnings in May about "ongoing malicious text and voice messaging campaigns" targeting senior officials.

Q: What should you do if you get a suspicious message from an official?

A: Don't respond immediately. Verify through an independent communication channel before taking any action. Government employees should report incidents to the Bureau of Diplomatic Security. Others can report to the FBI's Internet Crime Complaint Center at ic3.gov.
