AI Fuels Surge in Sophisticated Propaganda Amid Middle East Conflict

Following Iran’s recent missile strikes on Israel, AI-generated videos depicting extensive destruction in Tel Aviv and at Ben Gurion Airport went viral despite being entirely fabricated. Forensic experts confirmed that the highly realistic clips, some created with Google’s latest AI video tools, were deepfakes designed to manipulate public perception.

This trend reflects a new era in information warfare where deepfakes, chatbot misinformation, and repurposed video game footage are increasingly deployed across social media. As tensions escalate in the Middle East, millions seek updates online but often encounter synthetic content shaped for propaganda.

Iranian campaigns on platforms such as TikTok have circulated AI-generated scenes of ordinary neighborhoods transformed into war zones and airports under missile attack. Some clips, such as one showing an El Al airplane engulfed in flames, are fully computer-generated yet convincing enough to mislead viewers.

Advanced video generators such as Kling 2.1 Master, Seedream, and Google’s Veo 3 use image-to-video technology to turn real photographs into realistic animated footage. Open-source tools, including Wan 2.1 with community enhancements, can also produce high-quality fake videos while avoiding content restrictions.
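How accessible this has become can be made concrete. The sketch below, offered purely as an illustration, animates a single still photo with the openly available Stable Video Diffusion model via Hugging Face’s diffusers library; the model ID, file names, and parameters follow that library’s documented usage and are assumptions for this example, not the specific tools (Kling, Seedream, Veo 3, Wan 2.1) named above.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load an open-source image-to-video model (Stable Video Diffusion).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # run on a single consumer GPU

# "input_photo.png" is a placeholder path for any real photograph.
image = load_image("input_photo.png").resize((1024, 576))

# Animate the still image into a short sequence of frames.
generator = torch.manual_seed(42)
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]

# Write the frames out as a shareable video clip.
export_to_video(frames, "generated_clip.mp4", fps=7)
```

A handful of lines like these are enough to turn one photograph into a short, shareable clip, which is why this class of tooling is so easily repurposed for propaganda.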

These political deepfakes attract millions of views on TikTok, Instagram, Facebook, and X, spreading rapidly despite the platforms’ attempts to remove them. One video exaggerating Iran’s attacks on U.S. bases reached more than 3 million views on X, while manipulated photos misrepresenting journalists amassed hundreds of thousands of views.

Propaganda production involves both state actors and partisan groups. Russian operations such as the so-called “Pravda” network publish millions of articles a year in multiple languages, seeding the training data of AI chatbots so that they repeat pro-Russia disinformation to users worldwide.

In the Middle East, AI-generated content is tailored by language to influence regional audiences. Arabic and Farsi videos emphasize regional unity and anti-Israel messages, while Hebrew content aims to apply psychological pressure within Israel. AI is also used to mock officials or exalt certain leaders through exaggerated, symbolic scenarios.

Some deepfakes combine fake imagery with cloned voices, amplifying political messaging. One widely viewed video depicts an Iranian military parade overlaid with a deepfake of Ayatollah Ali Khamenei’s voice threatening retaliation against the U.S. Iran’s state media has further disseminated fake footage, including misleading videos of missile mobilization and images of wildfires abroad passed off as burning Israeli cities.

Israel, meanwhile, has restricted media coverage to control narratives, which experts warn fuels disinformation and dehumanization. While Israel primarily uses AI for military applications, generative AI is also employed by various actors to create fake videos mocking Iranian figures and to build networks of AI bots that amplify propaganda online.

The rise of synthetic personas presents another challenge. These AI-generated identities feature realistic speech and motion, moving beyond fake pictures to fully constructed virtual influencers. Market research projects that the virtual-influencer industry could reach $37.8 billion by 2030, further blurring the line between real and synthetic presence.

Generative AI extends the battlefield onto smartphones and social media feeds, making information warfare personal and pervasive. As powerful actors deploy the technology without apparent consequences, society faces a future in which misinformation touches everyone.