US Wants to “Suppress Dissenting Arguments” Using AI Propaganda

Sam Biddle

The Intercept

08/25/2025

OpenAI’s ChatGPT and Google’s Gemini have surged in popularity despite their propensity for factual errors and other erratic outputs. But their ability to immediately churn out text on virtually any subject, written in virtually any tone — from casual trolling to pseudo-academic — could mark a major leap forward for internet propagandists. These tools give users the potential to fine-tune messaging for any number of audiences without the time or cost of human labor.

Whether AI-generated propaganda works remains an open question, but the practice has already been amply documented in the wild. In May 2024, OpenAI issued a report revealing efforts by Iranian, Chinese, and Russian actors to use the company’s tools to engage in covert influence campaigns, but found none had been particularly successful. In comments before the 2023 Senate AI Insight Forum, Jessica Brandt of the Brookings Institution warned that “LLMs could increase the personalization, and therefore the persuasiveness, of information campaigns.” In an online ecosystem filled with AI information warfare campaigns, “skepticism about the existence of objective truth is likely to increase,” she cautioned. A 2024 study published in the academic journal PNAS Nexus found that “language models can generate text that is nearly as persuasive for US audiences as content we sourced from real-world foreign covert propaganda campaigns.”

Unsurprisingly, the national security establishment is now insisting that the threat posed by this technology is at its most dire in the hands of foreign powers, namely Russia and China.

“The Era of A.I. Propaganda Has Arrived, and America Must Act,” warned a recent New York Times opinion essay on GoLaxy, software created by the Chinese firm Beijing Thinker that was originally used to play the board game Go. Co-authors Brett Benson, a political science professor at Vanderbilt University, and Brett Goldstein, a former Department of Defense official, paint a grim picture of GoLaxy as an emerging leader in state-aligned influence campaigns.

GoLaxy, they caution, is able to scan public social media content and produce bespoke propaganda campaigns. “The company privately claims that it can use a new technology to reshape and influence public opinion on behalf of the Chinese government,” according to a companion piece by Times national security reporter Julian Barnes headlined “China Turns to A.I. in Information Warfare.” The news item strikes a similarly stark tone: “GoLaxy can quickly craft responses that reinforce the Chinese government’s views and counter opposing arguments. Once put into use, such posts could drown out organic debate with propaganda.” According to these materials, the Times says, GoLaxy has “undertaken influence campaigns in Hong Kong and Taiwan, and collected data on members of Congress and other influential Americans.”

To respond to this foreign threat, Benson and Goldstein argue a “coordinated response” across government, academia, and the private sector is necessary. They describe this response as defensive in nature: mapping and countering foreign AI propaganda.

That’s not what the document from the Special Operations Forces Acquisition, Technology, and Logistics Center suggests the Pentagon is seeking.

The material shows SOCOM believes it needs technology that closely matches the reported Chinese capabilities, with bots scouring and ingesting large volumes of internet chatter to better persuade a targeted population, or an individual, on any given subject.

SOCOM says it specifically wants “automated systems to scrape the information environment, analyze the situation and respond with messages that are in line with MISO objectives. This technology should be able to respond to post(s), suppress dissenting arguments, and produce source material that can be referenced to support friendly arguments and messages.”

The Pentagon is paying especially close attention to those who might call out its propaganda efforts.

“This program should also be able to access profiles, networks, and systems of individuals or groups that are attempting to counter or discredit our messages,” the document notes. “The capability should utilize information gained to create a more targeted message to influence that specific individual or group.”