President Donald Trump on Sunday accused Iran of using artificial intelligence as a "disinformation weapon," claiming Tehran fabricated images of kamikaze boats, a successful strike on the USS Abraham Lincoln, and a 250,000-person rally that "never took place." The New York Times separately identified more than 110 unique AI-generated images and videos about the war in two weeks, a pace that Marc Owen Jones, an associate professor of media analytics at Northwestern University in Qatar, said far exceeds anything seen in prior conflicts. The accusations arrived the same day FCC Chairman Brendan Carr threatened to pull broadcaster licenses over war coverage Trump considered unfair.
Which images, specifically? He never said. Reuters, for its part, reviewed footage shot at Basra's port: explosive-laden Iranian boats ramming two fuel tankers, an attack that killed at least one crew member. Iranian state media did claim a hit on the Abraham Lincoln; Western outlets mostly didn't pick it up.
The Breakdown
- NYT identified 110+ AI-generated war fakes in two weeks, more than any prior conflict
- Iran inflated US casualties 100x; White House mixed Call of Duty clips with real strikes
- X's 90-day demonetization policy sees thin enforcement; Grok misidentified fakes as real
- BBC verified a funeral photo critics called AI-generated; real and fake now share the same feed
Both sides generate fakes
Iran's state media apparatus, emboldened by how far its content travels before anyone can fact-check it, has been pushing fabricated satellite imagery and inflated casualty figures since fighting started on February 28. It has also staged military victories that never happened. Tehran Times, a state-aligned English daily, posted a "before vs. after" satellite image on X claiming to show "completely destroyed" US radar equipment at a base in Qatar. Open-source intelligence researchers traced it to an AI-manipulated Google Earth image of a US base in Bahrain, not Qatar. Same parking lot, same cars. That's the tell. AFP detected a SynthID watermark, Google's invisible tag for AI-generated content. The manipulated photo still reached millions of views across multiple languages.
Iran's IRGC news agency Tasnim went further. On March 3, it published a number: 650 US military personnel killed in the war's first 48 hours. The Pentagon's count at that point stood at six. As of March 13, US Central Command's total stood at 13.
But fabrication runs in both directions. On March 4, the White House posted a video on its official X account merging real missile strike footage with clips from the Call of Duty video game. A choppy voiceover declared, "We're winning this fight." Five days into the war. The following day, another White House video celebrated "justice the American way" with scenes spliced from Braveheart, Breaking Bad, and Gladiator. The war has killed more than 1,300 people in Iran, according to officials there, and 13 US service members, according to Central Command.
"We have reached a level of realism in video, audio, and image deepfakes that for most people, it is not discernible from fact," Rumman Chowdhury, former head of ethics at X, told Rolling Stone.
When real photos become suspect
The volume of fakes has started corroding trust in authentic images, too, creating a fog where real and fabricated sit side by side in the same feed and look identical. Political scientist Steven Feldstein describes a growing category he calls "shallowfakes," content that manipulates subtly rather than fabricating entirely. A real photograph of an Iraqi airport showing smoke over a US military base was altered on March 1 with AI to replace the smoke with a giant fireball. The original was authentic. The alteration was minor. But the impression it left was not.
The New York Times ended up defending one of its own photographs this month. An organization called the Empirical Research and Forecasting Institute accused the paper of digitally manipulating a Tehran crowd image. "This is a genuine image, taken by a journalist in Iran on Monday, March 9, 2026," the Times wrote in a public statement. The accusation, it added, was "fundamentally flawed" and "dishonestly based on a re-posted version."
On March 3, Iranian state media said a strike on a school near a military base killed more than 160 children and staff. State outlets published an aerial photograph of a mass funeral alongside the report. Critics jumped on it, calling the funeral image AI-generated. BBC journalists geolocated the cemetery to a site 3.7 kilometers from the school, matching trees, road layout, and a nearby building against satellite imagery. Freshly dug graves appeared on satellite photos from the day after the funeral. The day before, bare ground.
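Geolocation checks like the BBC's come down to matching visible landmarks against satellite imagery and then measuring the separation between the two confirmed points. That distance falls out of the standard haversine formula. A minimal sketch follows; the coordinates are illustrative placeholders chosen to land near the reported ~3.7 km separation, not the actual school or cemetery locations.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    # Haversine formula; 6371.0 km is the mean Earth radius.
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Placeholder coordinates, NOT the real locations from the BBC report.
school = (35.7000, 51.4000)
cemetery = (35.7175, 51.4350)
print(f"{haversine_km(*school, *cemetery):.1f} km apart")  # prints "3.7 km apart"
```

The formula is accurate to within about 0.5 percent, which is far finer than the resolution of the satellite imagery used in this kind of verification.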
Mahsa Alimardani at the human rights group Witness told the BBC she sees a split screen, and urged holding two truths at once. The Islamic Republic destroyed protest footage in 2022, hid prisoner deaths for years, and scrubbed hospital records after crackdowns. Yet during this war, Tehran has been documenting civilian casualties with real resources. That documentation serves state propaganda. It is not automatically false.
That fog, where nothing looks trustworthy and everything looks the same, is exactly the environment social media platforms were supposed to manage.
X struggles to police its own platform
Elon Musk's X announced last week that creators posting AI-generated content about armed conflict without disclosure would lose revenue-sharing eligibility for 90 days. A defensive move for a platform broadly criticized as permissive toward disinformation since Musk's $44 billion acquisition in 2022.
Enforcement has been thin. "The feeds I monitor are still flooded with AI-generated content about the war," Joe Bodnar of the Institute for Strategic Dialogue told AFP. One premium account shared an AI clip depicting an Iranian nuclear strike on Israel. It gathered more views than the policy announcement itself. By a wide margin.
X's own chatbot compounded the problem. When disinformation analyst Tal Hagin asked Grok to verify a video of Iranian missiles supposedly striking Tel Aviv, originally posted by an Iranian state outlet, the chatbot repeatedly misidentified the location and date. A study by the Digital Democracy Institute of the Americas found that more than 90 percent of X's Community Notes never get published, leaving the platform's primary fact-checking mechanism largely inert.
Premium accounts with purchasable blue checkmarks have spread much of the AI content. One posted an AI video of Dubai's Burj Khalifa engulfed in flames, ignored a direct request from X's head of product to label it, and watched the post accumulate more than two million views.
"X's policy is a reasonable countermeasure," said Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell Tech. "It is unlikely that X will be able to guarantee both high precision and high recall for this policy."
A conflict saturated with fabrication
A Cyabra study of online activity found that the majority of AI-generated war videos push pro-Iranian narratives, often to falsely demonstrate military superiority. But the flood is no longer centrally controlled. Imposter accounts on Instagram, Threads, and X now pose as credible open-source intelligence analysts, posting fabricated satellite imagery with gibberish coordinates and poisoning the verification pipeline that journalists and researchers depend on.
Jones put it plainly. "Even compared to when the Ukraine war broke out, things now are very different," he said. "We're probably seeing far more AI-related content now than we ever have before."
For civilians in Iran trying to determine whether a strike report is genuine or an evacuation order is real, the fog carries weight beyond media criticism. The platforms carrying this content have not earned their trust. Neither have the governments producing it. New fakes land faster than anyone can flag the old ones, and the verified photos sit three posts down, looking exactly the same.
Frequently Asked Questions
What specific AI fakes has Iran's state media produced?
Tehran Times posted a manipulated Google Earth satellite image claiming destroyed US radar in Qatar. It was actually a Bahrain base with AI edits. Iran's IRGC agency Tasnim claimed 650 US troops killed when the Pentagon count stood at six. AFP detected Google's SynthID watermark on some fabricated images.
Has the US government also shared manipulated content?
On March 4, the White House posted a video on X merging real missile strike footage with Call of Duty game clips. The next day it released another video splicing scenes from Braveheart, Breaking Bad, and Gladiator to celebrate "justice the American way."
What are shallowfakes?
Political scientist Steven Feldstein's term for content that manipulates real images subtly rather than fabricating from scratch. A genuine photo of smoke at an Iraqi airport was altered with AI to show a giant fireball. The original was real, but the altered version created a false impression of devastation.
Why did the New York Times have to defend a photograph?
The Empirical Research and Forecasting Institute accused the paper of digitally manipulating a Tehran crowd photo. The Times responded publicly, calling it "a genuine image, taken by a journalist in Iran on Monday, March 9, 2026" and the accusation "fundamentally flawed."
Is X's new AI content policy working?
Researchers say no. Joe Bodnar at the Institute for Strategic Dialogue said feeds remain 'flooded' with AI war content. Over 90% of Community Notes never get published, and Grok has repeatedly misidentified AI fakes as real footage.