
Researchers: Large language models will revolutionize digital propaganda campaigns

Advances in machine learning will make it cheaper and easier to carry out influence operations at scale.
A picture taken on July 5, 2017 shows a souvenir kiosk offering among others a drawing depicting Russian President Vladimir Putin holding a baby with the face of former U.S. President Donald Trump. (MLADEN ANTONOV/AFP via Getty Images)

Artificial intelligence systems such as ChatGPT, which captured public imagination thanks to its uncanny ability to produce human-seeming text, are poised to transform how nations deploy digital propaganda operations to manipulate public opinion, according to a major new report released Wednesday.

By using large language models to generate content quickly and cheaply, the purveyors of online influence and disinformation operations will be able to scale and accelerate their efforts, according to the study from a consortium of prominent researchers.

With the help of more capable and readily available large language models, actors running influence operations will be able to automate social media posts; compose full-length articles to populate fake news sites; and build chatbots to interact with targets on a one-to-one basis, according to the researchers.

In short, language models are likely to reduce the cost of generating the propaganda that makes up a large-scale influence campaign and to produce more impactful content, the researchers conclude.

Advances in large language models have generated immense interest and prompted investors to pour billions into so-called generative AI companies. Microsoft, for instance, is reportedly set to invest $10 billion in OpenAI — whose researchers contributed to Wednesday’s study — a capital injection that would value the company at $29 billion. The company’s advances have spurred a rush of investments in companies that build AI models to generate content and have created a panic at Google, where managers have reportedly declared a “code red” to ensure that OpenAI does not eat into the company’s lucrative search engine business.


So far, large language models have not been deployed as part of influence campaigns, and the authors of the report are urging companies and policymakers to be proactive in building guardrails. “We don’t want to wait until these models are deployed for influence operations at scale before we start to consider mitigations,” said Josh A. Goldstein, one of the lead authors of the report and a researcher at the Center for Security and Emerging Technology.

To carry out an influence operation using a language model, operators need a model to exist, access to that model, the ability to disseminate the content it produces and an audience to consume that content. The authors contemplate a variety of interventions to prevent large language models from being used for influence campaigns.

These include, for example, access controls on AI hardware to make it more difficult to build LLMs in the first place; usage restrictions on existing AI models so that propagandists can’t access the models they need; the adoption of digital provenance standards so that content produced by language models is easier to detect; and media literacy campaigns to make audiences harder to influence.

But Wednesday’s report illustrates the scale of the challenge: The current U.S. approach to disinformation is “fractured” across social media platforms and among researchers, the report observes. Addressing the threat posed by large language models will require a “whole of society” approach with coordination among social media companies, government and civil society — a unity of effort sorely lacking from the fight against false information online.

Wednesday’s report is the result of more than a year-long collaboration between researchers at CSET; Stanford’s Internet Observatory, which has done pioneering research in the study of online influence operations; and researchers at OpenAI, whose groundbreaking advances in machine learning are largely responsible for helping to generate the current wave of intense interest in the field. 


By making it easier to interact with LLMs, OpenAI has helped open the eyes of researchers, technologists and many ordinary users to the potential of these models to handle queries and answer complex questions, but also to their potential for harmful use, such as cheating on exams or writing malicious code.

While there is no evidence that these models have so far been used as part of influence operations, there are documented use cases that illustrate how language models could be used by state actors in pernicious ways.

Last year, an AI researcher trained an existing LLM — a process known as fine-tuning — on a large body of data from 4chan, the toxic online message board. The researcher then let the model run rampant on the forum, making more than 30,000 posts on the site — mostly fooling the site’s users into thinking that it was a legitimate user aping the site’s racist, misogynist discourse. 
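Fine-tuning in this sense is a standard, widely documented procedure: take a pretrained model and continue training it on a narrower corpus so its outputs take on that corpus’s style. Below is a minimal, generic sketch of what that looks like with the Hugging Face transformers and datasets libraries; the base model ("gpt2"), the file path "corpus.txt" and the training settings are illustrative placeholders, not details of the researcher’s actual setup.

```python
# Minimal sketch: fine-tune a small causal language model on a plain-text corpus.
# All names and paths below are placeholders for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a plain-text corpus, one example per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # the resulting weights inherit the style of the training corpus
```

The point of the sketch is how little is involved: with an off-the-shelf library and a body of text, adapting a model’s voice is a routine engineering task rather than a research problem.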

It’s easy to imagine how an LLM might be fine-tuned to perform a similar task on behalf of a favored candidate in a foreign election, but Goldstein cautioned that it is important not to overstate the threat posed by these models in revolutionizing influence operations. 

“I think it’s critical that we don’t engage in threat inflation,” Goldstein said. “Just because we assess that language models will be useful does not mean that all influence operations using language models will automatically have a big effect.”


This week, the journal Nature Communications published a major study examining the effect of Twitter messaging by Russia’s Internet Research Agency during the 2016 U.S. presidential election and whether users exposed to that influence operation changed their beliefs or voting behavior. The study found “no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.”

Studies like this highlight that even as states around the world invest in their capacity to influence online audiences, it remains difficult to actually change minds. “We shouldn’t jump to assume that they have a major impact just because they exist,” Goldstein said.


Written by Elias Groll

Elias Groll is a senior editor at CyberScoop. He has previously worked as a reporter and editor at Foreign Policy, covering technology and national security, and at the Brookings Institution, where he was the managing editor of TechStream and worked as part of the AI and Emerging Technology Initiative. He is a graduate of Harvard University, where he was the managing editor of The Harvard Crimson.
