DeepFaceLive is a live-streaming version of the popular DeepFaceLab software, and is capable of creating alternate video identities in real-time.
From more or less face-on viewpoints, most of the celebrity recreations are quite effective, and some are very convincing even at fairly acute angles – until the facial angle hits a full 90°.

Martin Anderson, “To Uncover a Deepfake Video Call, Ask the Caller to Turn Sideways” at Metaphysic (August 8, 2022)
Then, as his examples at the site show, the possibly convincing face fuzzes out. He thinks that computer security analysts have overlooked this angle (so to speak) because video call deepfakes are a comparatively new issue. The revealing sideways view arises from a limitation in the current deepfake software:
The standard software (Facial Alignment Network) that estimates facial poses in images, in deepfakes packages, does not work reliably at acute angles. In fact, most 2D-based facial alignment algorithms assign only 50-60% of the number of landmarks from a front-on face view to a profile view.

Martin Anderson, “To Uncover a Deepfake Video Call, Ask the Caller to Turn Sideways” at Metaphysic (August 8, 2022)
That’s mainly because, inconveniently for fraudsters, people who haven’t been arrested don’t have many profile shots for the software to train on. Celebrities may be an exception: they are photographed and filmed so often that plenty of profile shots exist. But the technology as such is only part of the problem, and someone will doubtless find a way around the issue.
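The landmark falloff Anderson describes can be pictured with a toy geometric model. This sketch is purely illustrative (the function and its assumptions are invented here, not the Facial Alignment Network's actual method): landmarks spread across the face self-occlude as the head rotates, and roughly half are hidden at a full 90° profile.

```python
import math

# Toy model (not the FAN algorithm): the common 68-point landmark
# scheme, with each point given a rough horizontal position across
# the face, from -1.0 (one edge) to +1.0 (the other edge).

def visible_landmarks(yaw_degrees, n_points=68):
    """Estimate how many of n_points landmarks remain visible when
    the head rotates yaw_degrees away from the camera. Points are
    spread uniformly across the face width on an idealized round
    head; a point is self-occluded once it rotates past the
    silhouette edge."""
    yaw = math.radians(yaw_degrees)
    count = 0
    for i in range(n_points):
        x = -1.0 + 2.0 * i / (n_points - 1)  # position across the face
        # angle of the point around the head, measured from straight ahead
        point_angle = math.asin(x)
        # visible while the rotated point still faces the camera hemisphere
        if abs(point_angle + yaw) <= math.pi / 2:
            count += 1
    return count

for angle in (0, 30, 60, 90):
    n = visible_landmarks(angle)
    print(f"yaw {angle:3d} deg: {n}/68 landmarks visible ({100 * n // 68}%)")
```

In this simplified model the visible count drops to exactly half at 90°, in the same ballpark as the 50-60% figure quoted above; real alignment networks degrade far more irregularly than this clean geometry suggests, which is why the profile view is so revealing.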
Generally, it’s getting harder to detect deepfakes, which affects scientific research as well as consumer safety. One reason for that should be quite familiar. Political scientist Nina Schick, author of Deep Fakes (2020), admits,
“This is always going to be a game of cat and mouse,” Schick said. “Because just as soon as you build a detection model that can detect one kind of deepfake, there will be a generator that will be able to beat that detector.”
Schick likened it to antivirus software that must be continually updated because viruses advance more quickly than the software that finds them.

Brit McCandless Farmer, “The impact of deepfakes: How do you know when a video is real?” at MSN (July 31, 2022)
She thinks a better solution would be to validate true videos, that is, a form of media provenance:
To do this, technology will need to be embedded in hardware and software. For a video, the technology would indicate where the video was shot and keep a record of how it had been manipulated. Think of it, in a way, like a digital watermark imprinted every time a video is edited.

Brit McCandless Farmer, “The impact of deepfakes: How do you know when a video is real?” at MSN (July 31, 2022)
The principle would be similar to blockchain, where an error can be detected by the whole system. The challenge for fraudsters is then not to edit successfully but to avoid detection while doing so.
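The provenance scheme Schick describes could work roughly like a hash chain: each edit appends a record whose hash covers the previous record, so altering any step of the history invalidates everything after it. A minimal sketch (the record fields and function names here are invented for illustration, not any actual provenance standard):

```python
import hashlib
import json

def record_edit(chain, description):
    """Append an edit record whose hash covers the previous record's
    hash, making the whole edit history tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"edit": description, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every link; any altered record breaks the chain."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {"edit": rec["edit"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev_hash or rec["hash"] != digest:
            return False
        prev_hash = rec["hash"]
    return True

chain = []
record_edit(chain, "captured: camera X, 2022-08-08, location embedded")
record_edit(chain, "trimmed to 00:10-00:45")
record_edit(chain, "color correction")
print(verify(chain))               # True: history intact

chain[1]["edit"] = "face swapped"  # a fraudster rewrites the history...
print(verify(chain))               # False: tampering detected
```

The design point is the one the article makes: the fraudster can no longer just edit the video; to escape detection they would have to forge every subsequent record as well, and a watermark embedded at capture time puts the first link out of their reach.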
Meanwhile, MIT’s Media Lab offers eight tips for detecting deepfakes on our own. Here are three:
2. Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes? DeepFakes are often incongruent on some dimensions.
3. Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? DeepFakes often fail to fully represent the natural physics of a scene.
4. Pay attention to the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, DeepFakes often fail to fully represent the natural physics of lighting.

Matt Groh, Affective Computing, “Detect DeepFakes: How to counteract misinformation created by AI” at MIT Media Lab
Fraudsters will get better, of course, so the Media Lab also mentions the recent Kaggle competition for the best technology to spot deepfakes, with $1 million in prizes offered.
Meanwhile, here’s a start at what we know about spotting the fakes:
You may also wish to read: Neuroscientists: Our brains may detect deepfakes when minds don’t. Using electroencephalography, researchers found that brains may be spotting something that minds miss. Given the rapid rise of deepfake technology, they think that the possibility that we have natural defenses against fakes is worth exploring.