
Why Twitter’s bot problem is a looming security challenge

Last month’s bot attack targeting think tanks and journalists is just the latest incident to call the company’s security practices into question.

The people or groups behind thousands of Twitter bot accounts have realized they can attack users without triggering Twitter’s protective security policies, presenting a rapidly evolving information security challenge for the social media network.

Late last month, bot researchers at ProPublica and the Atlantic Council were attacked by a campaign of Twitter bots that spammed the victims’ accounts with thousands of retweets and likes, causing Twitter to temporarily suspend some of the targeted accounts for unusually high activity.

Ben Nimmo, Information Defense Fellow at the Atlantic Council’s Digital Forensic Research Lab, was personally targeted by the bot campaign and live-tweeted his analysis of the attacks, which included impersonations of Atlantic Council user accounts tweeting fake content, such as a message alleging that Nimmo had died.

“They certainly wanted to intimidate me by faking those accounts. That was about scaring me, rather than me getting blocked,” Nimmo told CyberScoop.


Nimmo noted that he was easily able to manipulate the botnet’s targeting trigger, which told the bot accounts which users to attack (e.g., anyone discussing the alt-right or Russia, or tagging the Atlantic Council’s Twitter account), suggesting that the botnet itself was fairly crude.
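
That description points to targeting logic no more sophisticated than keyword matching. As a minimal, purely hypothetical sketch (the terms, handle and function name are illustrative assumptions, not the botnet’s actual code), the trigger could be as simple as:

```python
# Hypothetical sketch of a crude keyword trigger like the one Nimmo describes;
# the trigger terms, handle and function name are assumptions for illustration.
TRIGGER_KEYWORDS = {"alt-right", "russia"}   # topics the botnet reacted to
TRIGGER_MENTIONS = {"@atlanticcouncil"}      # tagged handle it reacted to

def should_amplify(tweet_text: str) -> bool:
    """Return True if a tweet trips the botnet's crude matching rules."""
    text = tweet_text.lower()
    return (any(kw in text for kw in TRIGGER_KEYWORDS)
            or any(m in text for m in TRIGGER_MENTIONS))
```

Because such a rule is pure string matching, anyone who learns the trigger terms can set it off on demand, which is why a researcher like Nimmo could steer the campaign so easily.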

Regarding the large number of accounts swept up in the campaign, Nimmo theorized, “a lot of people can get caught up in the collateral damage of a dumb bomb.”

Crude botnets like this are relatively easy to make, manipulate and deploy, and even with methods to spot bots, that ease of production makes Twitter’s job much harder.
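
Publicly discussed spotting methods often reduce to simple per-account heuristics. A rough sketch, with thresholds that are illustrative assumptions rather than Twitter’s actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int            # days since registration
    tweets_per_day: float    # average posting rate
    followers: int
    following: int
    has_default_avatar: bool

def crude_bot_score(acct: Account) -> int:
    """Count how many simple red flags an account raises (0-4).

    All thresholds are illustrative assumptions.
    """
    flags = 0
    flags += acct.tweets_per_day > 100                      # superhuman posting rate
    flags += acct.has_default_avatar                        # no profile customization
    flags += acct.followers < 10 and acct.following > 1000  # follow-spam pattern
    flags += acct.age_days < 30                             # freshly registered
    return flags
```

Note that the aged accounts Krebs describes below would sail straight past a registration-age flag like the last one, which is part of what makes them valuable to campaign operators.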

According to Brian Krebs, a prominent journalist and cybersecurity expert, there may be a silver lining for the future of Twitter’s information security. Krebs observed that most of the more than 10,000 bot accounts in the campaign were several years old, suggesting that the campaign organizers either identified and compromised dormant accounts with weak passwords or capitalized on accounts they had stockpiled years earlier.

These older accounts are quite valuable, because the original account registrants didn’t have to undergo Twitter’s new and improved verification processes.


The good news, Krebs says, is that there may be an expiration date on this tactic.

“Several years ago, Twitter, alongside other companies that provide free online information services, decided to make things more expensive and difficult for fraudsters by requiring phone numbers in the account registration process,” Krebs told CyberScoop. That means creating fraudulent accounts in bulk becomes harder every year.

Even with this anticipated decrease of available bot accounts in the future, Twitter faces a rapidly evolving information security challenge.

Some cybersecurity experts think the social media company isn’t doing enough to combat these simpler denial-of-service-style tactics, or isn’t being transparent enough about its countermeasures.

“Twitter’s approach to addressing the bot issue is indirect,” said Jonathan Song, systems administrator for Columbia University’s Digital and Cyber Group. “They address abuse within the context of tweets — trolling, doxing, etc. However, if accounts can get banned for artificially inflating followers, like what Brian Krebs has talked about, then Twitter needs to reconsider its policies about bans and suspensions.”


When reached for comment, Twitter pointed CyberScoop to a blog entry on bots and misinformation, highlighting a passage: “It’s worth noting that in order to respond to this challenge efficiently and to ensure people cannot circumvent these safeguards, we’re unable to share the details of these internal signals in our public API.”

Yet, until Twitter is able to both come up with and implement these magic bullet solutions – and there’s no guarantee that any exist – its information security environment might continue to draw crude campaigns like the one faced by Krebs and Nimmo. Impersonations, the use of previously dormant accounts, and changes in account activity may seem like obvious signs of suspicious behavior, but they don’t always trigger Twitter’s security measures, nor those of other companies chasing the same goals.
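
One such signal is the reactivation pattern itself: a long-dormant account that suddenly produces thousands of retweets in a day. A minimal sketch of that kind of behavioral check (the four-sigma threshold and 30-day minimum history are assumptions, not any company’s production logic):

```python
from statistics import mean, stdev

def reactivation_spike(daily_actions: list[int], min_history: int = 30) -> bool:
    """Flag an account whose latest day of activity is a large outlier.

    daily_actions holds per-day counts of tweets/retweets/likes, oldest
    first; the 4-sigma threshold and 30-day minimum are assumptions.
    """
    if len(daily_actions) <= min_history:
        return False                      # too little history to judge
    history, today = daily_actions[:-1], daily_actions[-1]
    baseline, spread = mean(history), stdev(history)
    # max(...) guards against a zero-variance, fully dormant history
    return today > baseline + 4 * max(spread, 1.0)
```

A dormant account that wakes up with thousands of likes and retweets clears a threshold like this immediately; the harder problem, as Cohen notes below, is running such checks at scale without flagging legitimately returning users.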

“There are a lot of companies trying to figure out how to do proper behavioral data analytics, but few do it successfully yet,” said Natasha Cohen, Director of Cyber Policy and Client Strategy at BlueteamGlobal.

Additionally, any policy aimed at distinguishing suspicious from legitimate user activity may run up against deeply held norms of internet culture.

“Regarding the pressure to tie accounts to people, the Western internet has been built on an idea that you can be (at least somewhat) anonymous on the internet,” Cohen said. “In some ways, that very nature of Twitter has become a threat to the platform and the people using it.”


Written by Nicole Softness

Nicole Softness is a graduate student at Columbia University’s School of International and Public Affairs, studying International Security & Cyber Policy and working as a researcher for Columbia's Initiative on the Future of Cyber Risk. She has published articles relating to cybersecurity, counterterrorism, artificial intelligence and technology law.
