
Are We Losing the Battle With Online Fraud via Deepfakes?

Now, the FBI warns, fake “tech employees” are applying for remote jobs that come with access to sensitive corporate data

From Gizmodo this week we learn that the FBI has received a number of complaints about people using “stolen information and deepfaked video and voice to apply to remote tech jobs”:

According to the FBI’s announcement, more companies have been reporting people applying to jobs using video, images, or recordings that are manipulated to look and sound like somebody else. These fakers are also using personal identifiable information from other people—stolen identities—to apply to jobs at IT, programming, database, and software firms. The report noted that many of these open positions had access to sensitive customer or employee data, as well as financial and proprietary company info, implying the imposters could have a desire to steal sensitive information as well as a bent to cash a fraudulent paycheck.

Kyle Barr, “FBI Says People Are Using Deepfakes to Apply to Remote Jobs” at Gizmodo (June 28, 2022)

And what else can we use deepfakes for (besides defaming political figures and celebrities)? Here are a few actual uses, some useful, some interesting, and some criminal:

  • Deliver a message in many languages:

The video, created by the Malaria Must Die campaign, features legendary soccer player David Beckham appearing to speak in nine different languages. Well, actually, the voice changes—even to female—yet Beckham’s mouth seems to be perfectly in sync with the words. With the help of artificial intelligence (AI), deepfake technology was used to manipulate his facial movements, thereby creating the visual illusion that he’s saying the words in each language.

Max Kalmykov, “Positive Applications for Deepfake Technology” at Hackernoon (November 12, 2019)

  • De-age or digitally resurrect actors:

Digital de-aging, for instance, is particularly popular: Samuel L. Jackson and Clark Gregg have been digitally de-aged by approximately 25 years in Captain Marvel.

Although with less success, it is often possible to ‘digitally resurrect’ late actors by reconstructing their appearance from scratch – this was the case of Peter Cushing, who appeared in the 2017 Star Wars installment Rogue One, despite having passed away in 1994. – Alan Zucconi, lecturer in AI and independent software developer

Bitesize, “Deepfakes: What are they and why would I make one?” at BBC (July 2019)
  • Create an “if only” history:

In 1963, President John F. Kennedy was on his way to deliver a speech in Dallas when he was assassinated. The beloved politician didn’t get to bring those words to the world on that day, but thanks to modern technologies and innovative techniques, we can hear it now. To bring this idea to fruition, the team at CereProc analyzed the recordings of 831 JFK speeches to “build his voice” by separating the audio into 116,177 phonetic units. The incredibly challenging task was made even more difficult by the fact that the recordings were made on various types of equipment at different times. CereProc used AI to successfully create a very realistic audio speech derived entirely from data.

Max Kalmykov, “Positive Applications for Deepfake Technology” at Hackernoon (November 12, 2019)

Kalmykov appears to see this technology as a good thing for education. Indeed, this short film makes clear that the speech was never given and provides useful information on how the technicians created an audio file of how the speech might have sounded had it been delivered. But will everyone be so careful? Students need the qualifiers.
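
For readers curious what “separating the audio into 116,177 phonetic units” involves, the general family of techniques is concatenative (unit-selection) speech synthesis: cut archival recordings into small labeled pieces, then stitch them back together in a new order. The sketch below is only an illustration of that idea; the toy data, function names, and crossfade details are assumptions for the example and are not CereProc’s actual method.

```python
# Minimal sketch of concatenative ("unit selection") speech synthesis.
# Everything here is illustrative: real systems choose among many candidate
# units per phoneme, match pitch and duration, and smooth the joins carefully.
import numpy as np

# Toy database: each phoneme label maps to an audio snippet (an array of
# samples) cut from archival speeches. Random noise stands in for real audio.
PHONE_UNITS = {
    "HH": np.random.randn(800),
    "EH": np.random.randn(1600),
    "L":  np.random.randn(1200),
    "OW": np.random.randn(2000),
}

def crossfade(a, b, overlap=160):
    """Join two units with a short linear crossfade to hide the seam."""
    fade = np.linspace(0.0, 1.0, overlap)
    return np.concatenate([
        a[:-overlap],
        a[-overlap:] * (1 - fade) + b[:overlap] * fade,
        b[overlap:],
    ])

def synthesize(phonemes):
    """Concatenate one stored unit per phoneme into a single waveform."""
    out = PHONE_UNITS[phonemes[0]]
    for p in phonemes[1:]:
        out = crossfade(out, PHONE_UNITS[p])
    return out

# Reassemble stored units into a word the speaker was never recorded saying.
audio = synthesize(["HH", "EH", "L", "OW"])
print(len(audio), "samples")
```

The point is simply that, given enough labeled snippets of a person’s voice, software can rearrange them into sentences the person never spoke, which is why the qualifiers that accompany the JFK recording matter.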

  • Deepfakes can add a new dimension to phishing (pretending to be a bank or some other legitimate enterprise to get hold of customer access numbers). That new dimension is deepfaked voices, used in a scam known as “vishing” (voice phishing):

In early 2020, a bank manager was tricked into transferring $35 million to an attacker-controlled account.

The transfer was performed based on a phone call. The attackers used deepfake technology to duplicate the voice of a company’s director whom the bank manager had previously spoken with. The claim was that the $35 million was needed to close an acquisition, and the bank manager had received emails from the director and a lawyer hired to close the acquisition.

Howard Poston, “Deepfake phishing: Can you trust that call from the CEO?” at Infosec (May 31, 2022)

That’s not the only time it’s happened. One reason the security industry thinks vishing poses an ongoing risk is that people are more apt to trust phone calls “when they think they can recognize the caller’s voice.” Indeed. A criminal enterprise might have put a lot of work into that…

  • Forge identity documents:

Forged passports with a deepfake photograph will be difficult to detect. These could then be used to facilitate multiple other crimes, from identity theft and trafficking to illegal immigration and terrorist travel.

Kevin Townsend, “Deepfakes Are a Growing Threat to Cybersecurity and Society: Europol” at Security Week (May 3, 2022)

What’s being done to help prevent deepfakes from being used in crime or deception?

In mid-2020, Facebook released 100,000 deepfakes to help train AI detection programs because current methods of detection were not accurate enough. The winning model in a deepfake detection contest that drew over 2,000 entrants was only 65% accurate.

Facebook does not plan to use any of the winning models on its site. For one thing, 65% accuracy is not yet good enough to be useful. Some models achieved more than 80% accuracy with the training data, but this dropped when pitted against unseen clips. Generalizing to new videos, which can include different faces swapped in using different techniques, is the hardest part of the challenge, says Seferbekov.

Will Douglas Heaven, “Facebook just released a database of 100,000 deepfakes to teach AI how to spot them” at Technology Review (June 12, 2020)
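
The accuracy figures quoted here reflect a familiar pattern in machine-learning evaluation: a detector scores well on clips resembling its training data, then loses accuracy on deepfakes made with new faces and new swapping techniques. The sketch below only illustrates how such a gap is measured; the stand-in classifier and synthetic “clips” are assumptions for the example and have nothing to do with the challenge’s actual models or data.

```python
# Hypothetical illustration of the generalization gap described above.
# predict() stands in for any trained deepfake detector; the "clips" are
# random feature vectors, not real video.
import numpy as np

rng = np.random.default_rng(0)

def predict(clip_features):
    """Stand-in detector: flag a clip as fake (1) when its features sum > 0."""
    return (clip_features.sum(axis=1) > 0).astype(int)

def accuracy(features, labels):
    return float((predict(features) == labels).mean())

# Deepfakes similar to those the detector was tuned on: its simple rule
# still fires correctly most of the time.
familiar_fakes = rng.normal(loc=0.5, scale=1.0, size=(1000, 16))

# Deepfakes made with unseen faces and techniques: the feature distribution
# shifts, and the same rule is right far less often.
novel_fakes = rng.normal(loc=0.0, scale=1.0, size=(1000, 16))

labels = np.ones(1000, dtype=int)  # every clip in this toy set is a deepfake

print("accuracy on familiar deepfakes:", accuracy(familiar_fakes, labels))
print("accuracy on novel deepfakes:   ", accuracy(novel_fakes, labels))
```

Run as written, the first number comes out near 98 percent and the second near 50 percent, which is the shape of the problem Seferbekov describes: the detector has not changed, but the data has.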

Security Week warns, “The simple reality is that deepfake production technology is currently improving faster than deepfake detection technology.” In the meantime, we must make do in a world where seeing isn’t necessarily believing when a lot of money or power is involved.


You may also wish to read: The threat that deepfakes pose to science journals. Image manipulation has been a problem for decades, but convincing deepfakes could magnify the problem considerably. When Xiamen University researchers tested their deepfakes on experts, two of three fared poorly by eyesight alone; one scored 60%. Not reassuring.


Mind Matters News

Breaking and noteworthy news from the exciting world of natural and artificial intelligence at MindMatters.ai.
