On March 16, Ukrainians were surprised by a video shared on Facebook and YouTube in which their president, Volodymyr Zelensky, appeared to tell them to surrender to the invading Russians:
A fake video emerged on Facebook and YouTube in which a strangely motionless version of Zelensky asked Ukrainian troops to lay down their weapons in a voice different from his usual tone. The clip was also posted to Telegram and Russian social network VKontakte, according to the US think tank the Atlantic Council. TV Channel Ukraine 24 said hackers defaced its website with a still from the video and inserted a summary of the fake news into a broadcast’s scrolling chyron.
Minutes after the TV station posted about the hack, Zelensky himself posted a Facebook video denying that he had asked Ukrainians to lay down their arms and calling the fake a childish provocation. Nathaniel Gleicher, head of security policy at Facebook’s owner Meta, tweeted that the company had removed the original deepfake clip for violating its policy against misleading manipulated media. A statement provided by Twitter spokesperson Trenton Kennedy said the company was tracking the video and removing it in cases where it breached rules banning deceptive synthetic media. YouTube spokesperson Ivy Choi said it also had removed uploads of the video.

Tom Simonite, “A Zelensky Deepfake Was Quickly Defeated. The Next One Might Not Be” at Wired (March 17, 2022)
A deepfake of Ukrainian President Volodymyr Zelensky calling on his soldiers to lay down their weapons was reportedly uploaded to a hacked Ukrainian news website today, per @Shayan86 pic.twitter.com/tXLrYECGY4
— Mikael Thalen (@MikaelThalen) March 16, 2022
It appears to have been an ill-timed attempt at deepfakery. The possibility of such a strategy had been advertised as early as March 2, so it wasn’t a surprise. In any event, no one expected Zelensky to surrender for no obvious reason. But, as Simonite notes at Wired, Zelensky benefited from his high international profile, clear stance, and quick action. We can hardly hope that every such attempt will be ill-timed, poorly produced, or met with a quick and effective response.
What happens when a savvy tech team produces an advertised deepfake of U.S. President Richard Nixon (1913–1994) addressing a fictional event in which astronauts Neil Armstrong and Buzz Aldrin die on the moon? As it happens, in July 1969, speechwriter William Safire wrote a speech for Nixon to deliver in that event. So MIT media artists Francesca Panetta and Halsey Burgund worked with Canny AI and Respeecher to create an award-winning deepfake built around that theme.
The filmmakers are at pains to reassure us that the footage is a deepfake. But what if the tech savvy of “In Event of Moon Disaster” were applied to a more plausible scenario in an armed conflict than, say, Zelensky suddenly surrendering?
To accomplish the visual part of the fake, Canny AI employed a technique called “video dialogue replacement.” Instead of face swapping, an AI system was trained to transfer facial movements for the “In Event of Moon Disaster” speech to a realistic reconstruction of Nixon’s visage and its surroundings.
For the sound portion, Respeecher processed training audio of an actor reading the speech, as well as recordings of Nixon’s many televised appearances from the White House. The company then used a “voice conversion system” to synthesize Nixon delivering the speech from the actor’s performance. (Both Canny AI’s and Respeecher’s techniques are proprietary, and the companies say they have never been used to make illegal fakes.)

Jeffrey DelViscio, “A Nixon Deepfake, a ‘Moon Disaster’ Speech and an Information Ecosystem at Risk” at Scientific American (July 20, 2020)
The project, which took more than six months, is an effort to “demonstrate such deepfakes’ potential to raise questions about how they might affect our shared history and future experiences.”
What about deepfake detection?
Today, it is not difficult to make a deepfake that would fool the average viewer, and fool other AI systems as well. When Facebook held a detection contest in 2020, the best detection software was accurate less than two-thirds of the time. But, in general, detection is getting better:
But now, in a blog post and a paper posted online, the research team from Facebook and Michigan State say they have created a system that, at 70% accuracy on a key benchmark test, is significantly better than any previous system that ingested whole still images or video frames for examination. The system was also only 1% less accurate than the best previous method overall. But that earlier top-performing system examined images pixel by pixel, instead of frame by frame. As a result, it consumed far more computer power than the system Facebook and Michigan State scientists created.
Critically, unlike many rival deepfake detectors, the researchers said, their approach should be able to uncover deepfakes created using methods its detection algorithm has never even encountered before in its training.

Jeremy Kahn, “Facebook says it’s made a big leap forward in detecting deepfakes” at Fortune (June 16, 2021)
The researchers departed from the usual approach in designing the project: they assumed that every image is a deepfake and then asked the software to reverse engineer it. The software can thus spot the telltale patterns, or fingerprints, left behind by known deepfake programs. At Fortune, Kahn notes, “This is not that dissimilar from how the FBI once used subtle differences in how different mechanical typewriters render text to determine which brand and model of typewriter was used to compose a ransom note.”
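The typewriter analogy can be made concrete. The sketch below is purely illustrative and is not the Facebook/Michigan State system: it assumes each image generator leaves a faint, fixed noise pattern in its output, estimates that pattern (a “fingerprint”) by averaging the high-frequency residuals of images the generator produced, and then attributes a new image to whichever known fingerprint correlates best with its residual.

```python
import numpy as np

def residual(img, k=3):
    """High-frequency residual: the image minus a box-blurred copy.
    Real systems use learned filters; a box blur keeps the sketch simple."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return img - blurred

def fingerprint(images):
    """Estimate a generator's fingerprint: the average residual of its images.
    Image content averages out; the generator's fixed pattern remains."""
    return np.mean([residual(im) for im in images], axis=0)

def attribute(img, fingerprints):
    """Return the generator whose fingerprint best matches the image residual,
    using normalized cross-correlation as the match score."""
    r = residual(img)
    r = (r - r.mean()) / (r.std() + 1e-8)
    best_name, best_score = None, -np.inf
    for name, fp in fingerprints.items():
        f = (fp - fp.mean()) / (fp.std() + 1e-8)
        score = float(np.mean(r * f))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```

In a toy experiment, two simulated “generators” each stamp a distinct fixed noise pattern onto random images; fingerprints estimated from 40 samples per generator are then enough to attribute a fresh image to the generator that made it, much as typewriter quirks identify the machine behind a note.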
The fight against deepfakes is an arms race with a difference: the faker has to fake something before it can be detected, so detectors will always be a bit behind. But as detection grows more sophisticated, detectors will also catch up more quickly.
You may also wish to read: Deepfakes can replicate human voices now — maybe yours. Digitally faked voice tech has already been used to perpetrate a big bank fraud. Even as deepfakes grow more sophisticated, the technology to detect them lags behind the technology to create them.
Deepfake of Queen’s Christmas Message highlights era of fake news. The concept is actually an old one and we are not helpless against such deceptions. The golden rule, of course, is: when in doubt, doubt; and if it sounds unbelievable, don’t believe it.