Mark was having a typical workday when he played back his voicemails. One was from his boss Tom, who was on vacation and asked him to transfer company funds to another account. An email thread corroborated the audio, so Mark performed the transfer. But the voicemail turned out to be a "deepfake" vishing call from a cybercriminal, audio almost indistinguishable from Tom's real voice.
It can be hard to tell what is or is not real in the news, and the internet has democratized fakery. Anyone can now use machine learning (ML) and artificial intelligence (AI) to create deepfakes: videos or phone calls that use a person's actual image, audio, or video to make them appear to say or do things they never said or did. Creating these videos now takes days rather than the weeks or years it once required, and it can be done with readily available apps. Famous people are frequent targets of this misleading content: the Kardashians, Nancy Pelosi, Donald Trump, Barack Obama, Kim Jong-un, Vladimir Putin, Tom Cruise, and Volodymyr Zelenskyy have all appeared in videos saying or doing things they did not say or do.
Cybercriminals and scammers can easily manipulate visual and audio content to create fake child abuse material, celebrity and revenge porn, fake news, hoaxes, and fraud (Wikipedia). A study in the Netherlands found that 96% of deepfakes online involve the non-consensual digital placement of people in pornographic videos. Governments and the IT industry have been playing whack-a-mole with this never-ending stream of content. With the next election approaching, staying on top of deepfakes has become a national security issue.
As the barrier to entry in creating this material has decreased, Google, Facebook, and Microsoft are taking active roles in analyzing these media to improve detection. Twitter has placed warnings on deepfake media, and Facebook/Meta has taken measures to decrease fake media on its platform. In 2019, Facebook/Meta hosted the “Deepfake Detection Challenge (DFDC),” which had over 2,000 participants and 3,500 paid actors, to “accelerate the development of new ways to detect deepfake videos” (Meta).
The Problem Appears to be Growing
Recently, deepfake technologies have even been used to apply for remote technical jobs. Cybercriminals and scammers combine stolen personally identifiable information (PII) with other Business Email Compromise (BEC) tactics, spying on email accounts and threads to learn about the individuals they impersonate. These methods are actively exploited in the wild: in 2019, a cyber scammer used a deepfaked voice to trick the CEO of a U.K.-based energy firm into transferring €220,000 into a Hungarian bank account belonging to the scammer.
The FBI has alerted companies to these problems: “we will continue to investigate any violations of federal law and actors that may use them for nefarious acts.” The UN and Europol are also looking into how to combat the threat of these media (Al Jazeera).
As is always the case, evaluating a new technology comes down to weighing its benefits against its potential for misuse or abuse. Here, the technology appears more prone to misuse than to morally neutral use. In December 2021, Today presented the opposite view: that the benefits will ultimately outweigh the misuse. Some software developers now take on only deepfake projects they consider ethical.
How Can You Tell?
How can you expose deepfakes? Though the technology is improving, audio that does not sync with the video, with the mouth or facial expressions of the speaker, has exposed the ruse. From the FBI: "At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually." Also watch for problems with video resolution, blurring around the ears or hairline, and ghosting around the face (Al Jazeera).
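The blurring cue above can even be scored crudely in software. One common sharpness heuristic is the variance of the image Laplacian: over-smoothed, synthesized patches (for example around the hairline) tend to score low. The sketch below is a minimal, illustrative version in pure Python; it is not a deepfake detector, and the patch selection and any threshold you would apply are assumptions left to the reader.

```python
def laplacian_variance(img):
    """Sharpness score: variance of the 4-neighbor discrete Laplacian.

    img is a 2D list of grayscale intensities (0-255).
    Blurry (over-smoothed) regions score low; sharp detail scores high.
    """
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor Laplacian at (x, y)
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Toy patches: a high-contrast checkerboard is "sharp",
# a uniform gray patch is maximally "blurry".
sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
flat = [[128] * 8 for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In practice, a real pipeline would run a metric like this on face regions extracted by a detector and compare scores against the rest of the frame; a face patch that is markedly softer than its surroundings is one possible warning sign.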
So how could these developments affect your business? Cybercriminals and scammers could use your likeness to extort (blackmail) you with a video or audio clip showing you, or another target at your organization, saying or doing something objectionable. They could also damage your reputation with fake pornography made from your likeness, or threaten to release it publicly. Destructive deepfakes could spread through social media and internet memes. And an artificial voice could ask an employee to transfer funds (BEC).
We Can Help Protect You
As always, staying on top of threats with threat intelligence, verifying email senders, and vetting new applicants are just some of the precautions you should take. Human resources may have their hands full identifying legitimate applicants, and should recognize that crooks and scammers can steal enough information to pass the background checks run on prospective employees. ZDNet offers one tip for recognizing deepfakes: ask the subject to turn to a profile view, where the quality of a deepfake video often falls apart.
At Tech Kahunas, we know the fakes, and we keep our finger on the pulse of the dark web to monitor for these exploits.
We’ll stay on top of the threats.
We’ll watch your data.
We’ll review your risks.
We’ve got years of this.