A doctored video of Ukrainian President Volodymyr Zelenskyy went viral this week before social media platforms removed it. Several deepfake video experts called it a poorly executed example of the form, but damaging nonetheless.
Elements of the Zelenskyy deepfake — which purported to show him calling for surrender — made it easy to debunk, they said. But that won’t always be the case.
Soon after Facebook announced Wednesday that it had quickly taken down the doctored video of Zelenskyy calling on Ukrainians to surrender, Shane Huntley, who manages and leads Google’s Threat Analysis Group, tweeted that what the Zelenskyy video “may end up showing is that people are actually pretty aware of how easy it is to fake videos and how quickly they get reported and taken down.”
Some disinformation experts disputed that view, noting that the Zelenskyy video was not representative of the typical deepfake. It was poorly made, involved a high-profile figure and had been “pre-bunked” by Zelenskyy’s administration, making it less effective than many such videos.
“While crude fakes might be easy to detect, debunk, and take down, this doesn’t mean that more sophisticated fakes won’t have a larger impact,” Hany Farid, a University of California at Berkeley professor who specializes in digital image analysis, said via email. “Even crude fakes will remain online for hours or days and can have an impact in a fast moving war.”
Farid also emphasized that even the poorest quality fakes pollute the information ecosystem, “making it easier to cast doubt on real videos and generally making it easier to cast doubt on everything that we see and hear online.”
How ‘pre-bunking’ helped
Other experts said the Zelenskyy deepfake failed not only because of how poorly executed it was, but also because of the Ukrainian president’s effective “pre-bunking” of a likely Russian deepfake purporting to show him calling for surrender — that is, steps Ukraine took to warn in advance that Russia would attempt to pass off a deepfake as reality.
This pre-bunking helped Ukrainians more readily recognize the deepfake as phony, according to Roman Osadchuk, a Ukraine-based Eurasia disinformation researcher with the Atlantic Council’s Digital Forensic Research Lab. He said Ukrainian soldiers warned each other over Telegram to be on guard for a deepfake from Russia suggesting capitulation back on March 2. That same day, the Ukraine government’s Center for Strategic Communication warned citizens to expect a deepfake surrender video.
Osadchuk called the presumably Russian-made deepfake video “basically a pre-record job.” He noted that Zelenskyy’s head is out of place and the subject’s voice is clearly not Zelenskyy’s. He said many Ukrainians had ridiculed the video upon seeing it. In addition to Facebook and YouTube, the clip was also posted on a Ukrainian TV station. The Atlantic Council said it also appeared on Telegram and on a Russian social media channel.
Zelenskyy’s team also showed savvy by quickly shooting and distributing its own video and social media posts calling the surrender deepfake a hoax, Osadchuk said.
The Zelenskyy surrender video is “the best case in terms of detecting a deepfake,” said Sam Gregory, program director at Witness, a nonprofit that helps people use video to protect human rights.
He said the poor quality of the video; the Ukrainian government’s advance warning to citizens to expect a surrender deepfake; Zelenskyy’s prominence and recognizability as a target; and the ease with which social media platforms could identify the video as a deepfake all made it a slam dunk for debunking. Many deepfake videos are harder to spot, Gregory warned.
The Zelenskyy video was an easy call for Facebook to remove, he said, but other less obvious deepfakes can be incredibly destructive, particularly when they play out in contexts where social media companies and journalists “don’t have the full weight of a public figure [like Zelenskyy] behind them to report it.”
The inherent threat
Gregory said what makes deepfakes so threatening is that they pay what he called “the liar’s dividend.”
“This is the idea that you can claim that real footage is false, and put pressure on journalists to prove it,” Gregory said. “Because of the existence of deepfakes, it’s easier to say you can’t trust any footage and that, of course, undermines truthful accounts.”
Christopher Paul, an information warfare expert with the RAND Corp. think tank, said he thinks of the technological evolution of deepfakes and the lack of effective detection software as a “cat and mouse game.” He said it is clear to him that Facebook spotted this deepfake through human intervention rather than automation.
“At the moment, the cat and mouse game favors the aggressor in terms of not being able to spot the fakes in an automated way,” Paul said.
He said deepfake producers have also proven they learn from failure. Once experts began advising the public that human subjects in deepfakes could be spotted because they failed to blink, producers responded by writing code to insert blinking.
“As deepfakes get better, it will be harder,” Paul said. “We may discover it eventually, but how much damage can it do in the hours or days or weeks before it’s disclosed or exposed?”