
The worst part about finding Facebook disinformation is finding it again

Fool me once, shame on me.

When Facebook said in August it had removed a network of fake accounts that had been trying to amplify criticism of President Donald Trump, it gave some external researchers a sense of déjà vu.

After all, Facebook had taken intermittent action against accounts, pages and groups that were misrepresenting themselves to promote China’s Communist Party, including specific removals of a campaign known as Spamouflage Dragon. The Spamouflage campaign apparently began in the summer of 2019 as a scheme to denounce pro-democracy protesters in Hong Kong, eventually shifting to demonize critics of Beijing and to praise China’s handling of the coronavirus pandemic.

By August 2020, Facebook, like Twitter and YouTube, was still removing Spamouflage-affiliated accounts under its “coordinated inauthentic behavior” policy, this time for bashing Trump’s inaction on the coronavirus and U.S. scrutiny of TikTok. Other networks of accounts also have managed to return to Facebook after being detected and removed, frustrating outside disinformation specialists who spend resources catching propaganda and reporting their findings to Facebook, only to do it all over again.

“We’re seeing specific actors who are coming offline, and then they make their way back on,” said one researcher, who spoke on the condition of anonymity. “It gives me a sense of why we’re playing catch up. And it’s really f**king frustrating.”


It’s an issue that demonstrates how social media companies, U.S. law enforcement and other entities have failed to effectively deter attackers from using American platforms to try to influence public opinion, both at home and abroad. While Facebook says it’s made progress against information operations — artificial intelligence software removed 6.6 billion fake accounts in 2019, the company boasted — attackers, spammers and spies continue to return to the site after they are blacklisted.

Facebook employees also have spoken internally about a failure to remove accounts quickly, a problem that politicians in Honduras, Azerbaijan and elsewhere have exploited to affect political outcomes, according to a whistleblower complaint obtained by BuzzFeed News.

Under its coordinated inauthentic behavior (CIB) policy, Facebook scrubs accounts, pages and groups based on deceptive behavior, such as whether an account actually belongs to the person it purports to be. (Spamouflage accounts, for instance, would use fabricated names and images generated by artificial intelligence to denigrate Trump.)

Often, when Facebook removes a network of inauthentic accounts, the company adds data about the malicious activity to an automated recidivism tool that aims to block the same efforts in the future.

Other times, a group will entirely abandon the techniques that caught Facebook’s attention, such as bots or artificial engagement tactics, only to “step back, build a new infrastructure or really try to hide themselves better,” said Nathaniel Gleicher, head of cybersecurity policy at Facebook.


“We’ve learned that, particularly for CIB actors, preventing them from doing what they want to do doesn’t mean they’re going to give up,” he said. “They don’t say, ‘Oh, you got me. It’s too hard.’ No, they keep coming back.”

It’s normal for social media firms to expose and remove a network, only for researchers to find more connections between still-active accounts or for the group to try to reassert itself with new techniques, said Graham Brookie, director and managing editor of the Atlantic Council’s Digital Forensic Research Lab. It’s one of the problems that makes manufactured social media activity such a difficult national security challenge.

“Disinformation is designed to seep through the cracks of any policy that we make,” said Brookie, adding that the costs currently imposed against attackers — publicly exposing their work, among others — require more thought.

Still, there’s more room for transparency, said Nina Jankowicz, author of “How to Lose the Information War.” Any effective influence operation uses multiple social media platforms, yet those same sites vary in how much detail they are willing to provide about each finding.

Facebook’s monthly coordinated inauthentic behavior reports, for instance, are far less comprehensive than the sets of data that Twitter shares with researchers after its own takedowns. Twitter, unlike Facebook, publicizes information such as account names, dates of creation, the preferred language of the tweets and other details that help specialists map an account’s interactions and find clues about other, related networks.


“One thing I would criticize about the approach of some of the social media companies is the fact that we have to trust them for their version of events, and we don’t get to see the full picture of the takedown,” Jankowicz said. “I would love to see even more coordination across platforms so we can track cross-platform activity.”

A shortcoming in the approach to stopping cross-platform activity was on display after a recent Facebook action against the Proud Boys, a U.S. organization designated as a hate group by the Southern Poverty Law Center. The company said in July that it had removed 54 accounts, 50 pages and four Instagram accounts belonging to the pro-Trump group, which Facebook initially banned from its services in 2018. The group still exists in various forms on Twitter, where a movement of gay men has co-opted the Proud Boys hashtag to promote LGBTQIA viewpoints.

The Proud Boys network on Facebook was mostly active in 2015 and 2017, then went largely dormant. Facebook learned of the full reach of the network as a result of former Special Counsel Robert Mueller’s investigation into Roger Stone, a Trump associate also affiliated with the Proud Boys.

“The whack-a-troll policy is not a solution in and of itself,” Brookie said. “But it helps build facts and understanding that could lead to broad decisions or policy interventions that could have second, third or fourth-order effects that, frankly, we don’t understand right now.”
