About a quarter of support expressed on Twitter for political candidates in Arizona and Florida has been generated by “influence agents” ranging from bots to paid mouthpieces, according to a new study, showing the increasingly artificial political conversation in battleground states on the eve of the midterms.
“There are at least thousands of cases of politicians, journalists and thought leaders responding to, and even endorsing, influence agents,” said APCO Worldwide and Morpheus Cyber Security, the companies that produced the research.
The researchers used an analytics platform to study the Twitter traffic patterns of every major primary candidate for governor, the House of Representatives, and Senate in Arizona and Florida. The results show how an injection of artificial voices can significantly alter the political discourse on Twitter.
“Not all influence operations peddle fake news using digital bots, and here we see a case of faking political support, rather than news,” Morpheus Cyber Security co-founder Eran Reshef said. “Influence operations are becoming so sophisticated that many politicians and journalists are manipulated into accepting, and even endorsing, their narratives.”
APCO and Morpheus covered a broad spectrum of influence-peddling Twitter accounts: from fully and semi-automated bots, to “software-assisted influencers” (people aided by the occasional algorithm), to political volunteers “working to game the system,” to paid mouthpieces who don’t disclose their motives.
In one example tracked by researchers, Joan Greene, a Democrat running for Congress in Arizona, amplified a Twitter user’s criticism of Republican social welfare policies. The Twitter user, @PeconicLady, is most likely a semi-automated bot or someone coordinating with other users to inflate the account’s Twitter footprint, according to Reshef.
So true – Pro-birth until the baby starts crying. Not even pre-birth for them since they are against prenatal care.
— Joan Greene (@joangreeneaz) July 4, 2018
“The existence of influence operations in American politics remains profound and very difficult to eradicate,” said Jay Solomon, senior director at APCO Worldwide’s Global Solutions unit.
Morpheus is an Israel-based cybersecurity company that detects, analyzes and mitigates influence operations. The company, currently in stealth mode, was founded by Eran and Roni Reshef, with help from Check Point Software Technologies co-founder Marius Nacht.
In the lead-up to the election, social media companies have been removing accounts tied to influence campaigns. Earlier this week, after receiving information from law enforcement officials, Facebook blocked 30 accounts on its own platform and 85 on Instagram that were “engaged in coordinated inauthentic behavior.”
The research comes as U.S. officials grapple with how to counter influence operations, which can be difficult to detect and stamp out because of evolving tactics and free-speech considerations. On Friday, Homeland Security Secretary Kirstjen Nielsen said that adversaries were adapting to U.S. defenses and that influence operations are “much more difficult to get our hands around” compared to cyberthreats to election infrastructure.