A Pro-Russia Disinformation Campaign Is Using AI to Fuel a ‘Content Explosion’
Wired
07/01/2025
A PRO-RUSSIA DISINFORMATION campaign is leveraging consumer artificial intelligence tools to fuel a “content explosion” focused on exacerbating existing tensions around global elections, Ukraine, and immigration, among other controversial issues, according to new research published last week.
The campaign, known by many names, including Operation Overload and Matryoshka (other researchers have also tied it to Storm-1679), has been operating since 2023, and multiple groups, including Microsoft and the Institute for Strategic Dialogue, have linked it to the Russian government. The campaign disseminates false narratives by impersonating media outlets with the apparent aim of sowing division in democratic countries. While the campaign targets audiences around the world, including in the US, its main target has been Ukraine. Hundreds of AI-manipulated videos from the campaign have sought to fuel pro-Russian narratives.
The researchers said the spike in content was driven by consumer-grade AI tools that are available for free online. This easy access helped fuel the campaign’s tactic of “content amalgamation”: using AI tools to produce multiple pieces of content pushing the same story.
Pro-Russia disinformation groups have long been experimenting with AI tools to supercharge their output. Last year a group dubbed CopyCop, likely linked to the Russian government, was shown to be using large language models, or LLMs, to create fake websites designed to look like legitimate media outlets. While these attempts don’t typically get much traffic, the accompanying social media promotion can attract attention, and in some cases the fake information can end up at the top of Google search results.
A recent report from the American Sunlight Project estimated that Russian disinformation networks were producing at least 3 million AI-generated articles each year, and that this content was poisoning the output of AI-powered chatbots like OpenAI’s ChatGPT and Google’s Gemini.
Researchers have repeatedly shown how disinformation operatives are embracing AI tools, and as it becomes increasingly difficult for people to tell real content from AI-generated content, experts predict that the surge in AI content fueling disinformation campaigns will continue.