Deepfake cyberthreats – The next evolution

This blog was written by an independent guest blogger.

In 2019, we published an article about deepfakes and the technology behind them. At the time, the potential criminal applications of this technology were limited. Since then, research published in Crime Science has delved into the topic in depth.

The study identified a range of potential crimes that AI could enable, deepfakes among them. Of the categories examined, the following were deemed the highest risk:

  • Audio/video impersonation
  • Tailored phishing
  • Blackmail
  • Driverless vehicles being used as weapons
  • Disrupting AI-based systems
  • Fake news created by AI

This list sparked the idea for this article. Considering that, by some estimates, ransomware claims a new victim every 14 seconds, we decided to explore the topic of deepfake ransomware.

Is that a real thing? You may never have heard the terms together before, but they’ll certainly play a large role in cybercrimes of the future.

How are criminals leveraging this technology?

Technically, they aren’t yet, but criminals are an innovative bunch. We had a taste of what they can do with deepfakes in 2019, when the CEO of a British energy firm received a call that appeared to come from the head of his parent company, asking him to transfer $243,000.

He did so but later became suspicious when he received a second call requesting another transfer. This was a modern take on the email whaling attack. In this case, however, the victim believed he had verified the caller’s identity because he recognized the voice.

Experts believe that AI made it possible to spoof the company head’s voice and intonation. While we may never know whether the CEO was speaking to a bot, the incident shows that criminals can already leverage AI-based technology.

How does ransomware come into the equation?

Ransomware essentially holds your computer, or your data, hostage. But how can two seemingly unrelated technologies work together? To understand that, we may have to broaden our definition of ransomware, and the easiest way to do that is to walk through a plausible scenario.

Imagine you received a video message from your CEO asking you to complete an online form. You know the CEO’s face and voice, and both are right there on the screen. The idea that the video is fake doesn’t enter your mind, so you click through to the link.

Bam! Your computer is infected with ransomware. It might be a traditional form of this malicious threat or a more modern variant.

Say, for example, you’ve used your work computer to check your Facebook page or store photos. The malware is now able to sniff out pictures and videos of you. Thanks to facial recognition software, this process is automated and simple to complete.
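
To see how little effort that face-finding step would actually take, here is a minimal, purely illustrative sketch in Python. It assumes the open-source OpenCV package (opencv-python) and a hypothetical ~/Pictures folder, and it simply audits a directory for images containing faces, which is the same inventory an attacker’s automated tool would start from.

    import os
    import cv2  # pip install opencv-python

    # Haar cascade for frontal faces, bundled with OpenCV
    cascade_path = os.path.join(cv2.data.haarcascades,
                                "haarcascade_frontalface_default.xml")
    detector = cv2.CascadeClassifier(cascade_path)

    def images_with_faces(root_dir):
        """Yield paths of images under root_dir that contain at least one detected face."""
        for dirpath, _, filenames in os.walk(root_dir):
            for name in filenames:
                if not name.lower().endswith((".jpg", ".jpeg", ".png")):
                    continue
                path = os.path.join(dirpath, name)
                image = cv2.imread(path)
                if image is None:
                    continue  # unreadable or corrupt file
                gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
                faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
                if len(faces) > 0:
                    yield path

    if __name__ == "__main__":
        # Hypothetical folder; point it at wherever you keep personal photos
        for hit in images_with_faces(os.path.expanduser("~/Pictures")):
            print(hit)

A few dozen lines and an off-the-shelf model are enough to locate every face photo on a drive, which is exactly the point: the scanning step is not the hard part.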

The malware in this scenario isn’t just run-of-the-mill software, though. It’s a highly sophisticated program with AI built into it. It can not only detect images but also use them to create new content, and it can sniff out other personal details online and on your computer.

It puts all of these together to create a video of you. The footage makes it look like you did something that would damage your reputation. You’re innocent, but the video seems convincing. If you don’t pay the ransom, it’ll be released. The ransom might be in the form of cash or information about your company or clients.

Perhaps you don’t care about your own reputation. What about that of your family? The idea of ransomware put to this use is scary, but plausible.

Automation makes these attacks more frightening

Spearphishing, and in particular its executive-focused variant known as whaling, requires an intense amount of research. These attacks are typically reserved for high-value targets that justify the investment of resources. That’s changing, though.

Incorporating AI into ransomware changes that calculation. The criminal sets the software loose and steps back; it handles the research, targeting, and content creation on its own, and it learns to do better as it goes along.

Do attackers require special programming skills?

Of even more concern, several programs are readily available that allow almost anyone to create deepfakes. Google and other big players have also been honing platforms that make it easy to build intelligent software.

With almost no coding skill, you can create a simple program incorporating AI and content creation software.

Easy access to this technology makes it a lucrative proposition for criminals. They don’t even have to demand large ransoms to generate a passive income for themselves.

Can you protect yourself?

Your best defense is to be careful who has access to your personal videos and photos. If you’re storing these files on your computer, make sure that they’re encrypted.
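
For individual files, here is a minimal sketch of what that might look like in Python, assuming the open-source cryptography package and hypothetical file paths. For large photo or video libraries, operating-system features such as BitLocker or FileVault are usually the more practical route.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate the key once and store it somewhere safe (password manager,
    # hardware token). Anyone holding this key can decrypt your files.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    def encrypt_file(path):
        """Write an encrypted copy of the file alongside the original."""
        with open(path, "rb") as f:
            ciphertext = fernet.encrypt(f.read())
        with open(path + ".enc", "wb") as f:
            f.write(ciphertext)

    def decrypt_file(path):
        """Restore the plaintext from a .enc file produced above."""
        with open(path, "rb") as f:
            plaintext = fernet.decrypt(f.read())
        with open(path.removesuffix(".enc"), "wb") as f:  # Python 3.9+
            f.write(plaintext)

    # Hypothetical usage:
    # encrypt_file("/home/me/photos/family.jpg")  # produces family.jpg.enc

The key, not the script, is the secret: store it separately from the files it protects, and remove the unencrypted originals once the encrypted copies exist.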

Controlling the information that you share online is slightly more complex. It may well be time to stop posting group pictures, family shots, or selfies. Posting videos in which you speak might also be problematic.

Check your social media accounts and review who you’re connected to. Remove any publicly accessible photos under your control immediately. If someone else posts a photo of you, ask them to take it down or untag yourself.

When posting, share only to a select group of friends and family members. You never know who else might be looking.

Finally, exercise great care when accepting new contacts on social media. Even if you think that you know the person, be wary of clicking on links they send.

A common tactic is to hack a friend’s social media account. Your “friend” then sends you a link to a video they found of you online. They’ll typically send a message asking something like, “Is this really you?” or “Is everything okay?”

Naturally, you’re curious and primed to watch the video. The minute you click on that link, though, you’re taken through to a malicious site.

Final notes

It’s high time to protect yourself from the new wave of cyber threats that deepfakes pose. At this stage, vigilance and guarding your personal privacy are your best defense.


Article Link: https://feeds.feedblitz.com/~/646146066/0/alienvault-blogs~Deepfake-cyberthreats-–-The-next-evolution