A billboard in central Tehran, Iran, depicts named Iranian ballistic missiles in service, with text in Arabic reading "the honest [person's] promise" and text in Persian reading "Israel is weaker than a spider's web," on April 15. Iran attacked Israel over the weekend with missiles, an attack it said was a response to a deadly strike on its consulate building in Damascus, Syria. / AFP via Getty Images

After Iran sent more than 300 drones and ballistic missiles toward Israel on Saturday, Iranian state television showed purported damage on the ground with video of a fire under a hazy, orange sky. But that footage was not from Israel or anywhere close. Fact-checkers identified it as footage from a Chilean wildfire in February.

In reality, nearly all the drones and missiles were intercepted with help from the U.S. and other countries, and no deaths were reported.

As the international community worried that Iran's attacks would turn Israel's war in Gaza into a wider regional conflict, a version of the clip appeared on social media. It was one of many images and videos circulating on social platforms that falsely claimed to show the aftermath of the attack on the ground in Israel. Some were from previous conflicts or wildfires, others were from video games, and some appeared to have been made with generative artificial intelligence. Together they have gained millions of views on X, the platform formerly known as Twitter, according to researchers from the Institute for Strategic Dialogue.

Similar footage surfaced in the aftermath of the Oct. 7 attack on Israel by Hamas and Russia's invasion of Ukraine, said Moustafa Ayad, ISD's executive director for Africa, the Middle East and Asia, in an interview with NPR. He co-authored a blog post for ISD that was published Sunday.

Misinformation can gain a lot of traction during time-bound events like election days and armed conflicts, when people urgently need answers, said Isabelle Frances-Wright, ISD's U.S. director of technology and society. Breaking news events, in which facts are not yet firmly established, are frequently a vector for false and misleading content on social media.

ISD researchers identified over 30 "false, misleading, or AI generated images and videos" on X that garnered over 35 million views in total, according to the blog post. When NPR revisited the posts Monday afternoon U.S. Eastern time, some had been removed, but others continued to gain views. One of the posts came from a satire account, but the poster did not flag that the post was satirical.

An X post that showed computer-generated footage of rapid explosions with the caption "MARK THIS TWEET IN HISTORY WW3 HAS OFFICIALLY STARTED" had over 13 million views when ISD researchers checked on Sunday. By Monday afternoon U.S. Eastern time, the same video's views topped 22 million. Another X post that shared outdated footage of rockets in the region from October had fewer than 200,000 views on Sunday. By Monday, the post had gained over a million views, despite the author acknowledging in a second post that the video was old.

ISD tracked multiple major platforms, including Facebook, Instagram, TikTok and YouTube, and found that the clips that gained the most views were on X, Ayad said.

NPR reached out to X for a comment about the misleading posts on its platform and received an automated response.

Most of the accounts that researchers identified sport a blue check, which used to mean the user had been verified by Twitter but now means the user has an active subscription to X Premium and meets X's eligibility rules. Premium accounts enjoy a wider reach on the platform and stand to gain revenue from their exposure.

It's part of how the platform has changed under the ownership of Elon Musk. Aside from the blue checks, the platform has also dialed back content moderation and relied more on users to flag misleading content through "community notes."

Of the posts that ISD researchers identified, only two had a community note attached on Sunday. By Monday, 10 of the posts had community notes labeling the content as misleading, outdated or computer generated, but those posts continued to draw views.

It's unclear who is behind the accounts or what their motive is, Ayad said. Some have previously posted pro-Iran or pro-Kremlin content and already had large followings. Some purport to be from open-source intelligence researchers or citizen journalists.

Frances-Wright noticed that many users also mistakenly dismissed footage that had been verified and released by credible sources as fake.

"People are really struggling to understand what is true and what is false."

Ayad said not only does the misleading content give viewers a false sense of the conflict, but the posts also spur Islamophobia and antisemitism.

"Users who see this content will often comment about how — 'I hope the Muslims kill the Jews' or 'the Jews die,'" said Ayad.

"We've seen that on Oct. 7 [when Hamas attacked Israel]. We've seen that in the bombardment of Gaza. It's similar in nature. Like it creates a ripple effect."

NPR's Research, Archives & Data Strategy team contributed research to this story.
