
Would social media collaboration kneecap abusive content? One startup thinks so.

The chief executive at Sentropy details the company's plans for next year.
(Photo: Chainarong Prasertthai / Getty Images)

Social media companies need to band together more to limit the spread of abusive and harmful content online, according to John Redgrave, the co-founder and CEO of abuse detection software company Sentropy.

Social media companies can work all they want to root out harmful content, but if they’re working in silos and not sharing lessons learned, some harmful content will continue to spread unabated, Redgrave said during FedTalks, a virtual event produced by FedScoop.

“Facebook, after the Christchurch shooting, did what I would view, as a technologist, as an admirable job of yanking down the video on their platform. But I can still find the video online,” Redgrave said, referring to the shooting in New Zealand, which was live-streamed on social media last year.

“This is not a Facebook problem, this is not a Twitter problem — this is an internet problem,” Redgrave said. “What we need to see is increased collaboration.”


Sentropy, which offers an API-based and a browser-based interface to help companies make content moderation decisions, emerged from stealth five months ago with $13 million in backing, including from the likes of Alexis Ohanian, the co-founder of Reddit. Sentropy has not disclosed which companies it works with.
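For readers unfamiliar with how an API-based moderation tool is typically wired into a platform, the sketch below shows the general shape of such an integration in Python. It is purely illustrative: the endpoint, request fields, and label names are hypothetical and are not Sentropy's actual API, which the company has not publicly documented in detail.

```python
# Hypothetical illustration only. The endpoint, fields, and labels below are
# invented to show what an API-based moderation check generally looks like;
# they do not describe Sentropy's real API.
import requests

API_URL = "https://api.example-moderation.com/v1/classify"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"  # placeholder credential

def classify_message(text: str) -> dict:
    """Send a user-generated message to a moderation API and return its abuse scores."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    # Assumed response shape (hypothetical):
    # {"labels": {"self_harm": 0.02, "physical_violence": 0.91, "identity_attack": 0.10}}
    return response.json()

if __name__ == "__main__":
    result = classify_message("example user comment to screen")
    # A platform would typically route high-scoring messages to human moderators.
    flagged = {label: score for label, score in result.get("labels", {}).items() if score > 0.8}
    print("Flagged categories:", flagged)
```

In practice, a platform would batch such calls and feed the scores into its own moderation queue rather than acting on them automatically.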

What’s in store for 2021

Sentropy currently tracks only text-based harmful content, such as messages about self-harm, threats of physical violence, or white supremacist aggression, but Redgrave told CyberScoop that his company will begin triaging audio- and image-based content in 2021, given the prevalence of those vectors for abuse.

“We will be doing audio and image work in 2021,” Redgrave said. “We recognize the importance of it. You have to be in multiple modalities to be able to address this problem.”

Redgrave’s comments about increased collaboration come after Facebook, Twitter, YouTube, and several other social media companies have for years faced criticism over their attempts, and failures, to keep harmful content from hopping between platforms and cascading into the physical world.


In recent weeks, the social media platforms have faced particular scrutiny for their inability to prevent lies about the U.S. presidential election from spreading, hopping between platforms, and manifesting in the physical world in the form of armed protests.

Social media companies have made some strides in limiting harmful content online, and have recently announced efforts to limit the reach of QAnon conspiracy theories, for instance. But the recent election has introduced a new hurdle for the platforms: handling posts from incoming members of Congress who have either given credence to QAnon conspiracy theories or who actively support them. Redgrave said he thinks blocking these kinds of posts is the right thing to do — but did not commit to creating a Sentropy QAnon classifier.

“This is a tricky question honestly because the platforms have been pretty clear that they are going to remove QAnon content,” Redgrave said. “I think that’s the right stance. Our company and our stance is we will build classifiers if companies find it valuable. Will we build a QAnon classifier at some point in the future? Maybe. I don’t want to commit to it.”

Moving forward, President-elect Joe Biden has said he wants his administration to launch a task force meant to curb online harassment and abuse, which Redgrave said would do well to involve industry experts.

“I’ve spent some time talking with the eSafety office in Australia, which was the first safety office in the world. The promise of that type of office popping up in our own government is really exciting to me,” Redgrave said. “The platforms and the people watching this data flow through their ecosystem every day, they have the best chance of actually building the right framework for this.”

Written by Shannon Vavra

Shannon Vavra covers the NSA, Cyber Command, espionage, and cyber-operations for CyberScoop. She previously worked at Axios as a news reporter, covering breaking political news, foreign policy, and cybersecurity. She has appeared on live national television and radio to discuss her reporting, including on MSNBC, Fox News, Fox Business, CBS, Al Jazeera, NPR, WTOP, as well as on podcasts including Motherboard’s CYBER and The CyberWire’s Caveat. Shannon hails from Chicago and received her bachelor’s degree from Tufts University.