They also warned that propagandists could use artificial intelligence (AI) to expose citizens to a flood of articles, increasing the volume of propaganda and making such campaigns harder to detect.
For the study, researchers at Stanford University and Georgetown University in the US identified six English-language articles that, according to investigative journalists and the research community, likely originated from Iranian or Russian state-aligned covert propaganda campaigns.
The researchers explained that these articles made claims about US foreign relations, such as the false claim that Saudi Arabia committed to help fund the US-Mexico border wall or that the US fabricated reports showing that the Syrian government had used chemical weapons.
In December 2021, the researchers presented the actual propaganda articles and AI-generated propaganda articles to 8,221 US adults, recruited through survey company Lucid.
They clarified that, after the study concluded, participants were informed that the articles came from propaganda sources and possibly contained false information.
The team found that reading propaganda created by GPT-3 was almost as effective as reading real propaganda. On average, while a little over 24 per cent of the participants who were not shown an article believed the claims, the figure rose to more than 47 per cent upon reading the original propaganda.
Reading the AI-generated propaganda was nearly as effective: roughly 44 per cent of participants agreed with the claims, suggesting that many AI-written articles were as persuasive as those written by humans, the researchers said.
Further, they cautioned that their estimates might understate the persuasive potential of large language models, as companies have released larger, enhanced models since their study was conducted.
"We expect that these improved models, and others in the pipeline, would produce propaganda at least as persuasive as the text we administered," the researchers said in their study.
Propagandists could therefore use AI to mass-produce convincing propaganda material with minimal effort, they said.
"Regarding risks to society, propagandists are likely already well aware of the capabilities of large language models; historically, propagandists have been quick both to adopt new technologies and incorporate local language speakers into their work," the study said.
Propagandists could also use AI to expose citizens to a large number of articles, increasing the volume of propaganda while making it harder to detect, as varied style and wording could create the impression that the content reflects the views of real people or genuine news sources, they said in their study.
”As a result, the societal benefit of assessing the potential risks outweighs the possibility that our paper would give propagandists new ideas,” the researchers wrote.
Future research could probe strategies to guard against the misuse of language models for propaganda campaigns, they said, adding that work to improve detection of the infrastructure needed to deliver such content to targets will become increasingly important.