It’s been a long time coming. Worry about deepfake technology being used during times of major upheaval has surfaced frequently over the last couple of years. The buildup to the US election was peppered with “any moment now…” style warnings of dramatic and plausible deepfake deployment. In the end, what we got was very little to write home about. Terrible renderings promoted as “look what they can do” declarations failed to impress.
The current situation in Ukraine was inevitably going to lead to some form of deepfake activity. The only real questions were “when?” and “how bad will they be?” As it turned out, deepfake activity began immediately. After the usual breathless punditry warning about what could happen, the “best” mock-up example from February 23 was a frankly terrible clip of Vladimir Putin.
Just how deep is this fake, anyway?
There were plenty of warnings about deepfakes too, but the reports were perhaps a little over the top. “Russia is using deepfakes to spread misinformation against Ukraine, claims report”. Well, that definitely sounds bad. But how were those deepfakes spreading it?
The fake in question wasn’t sophisticated video or audio, which is what people reading across several sources may have believed. What actually happened was that an account spreading misinformation used a fake profile picture, the likes of which anyone can generate endlessly with freely available online tools. I’d argue the misinformation pushed would have been exactly the same whether the profile used an AI-generated avatar or the creator simply stole an image from somewhere else. How much does the profile picture really matter in this example?
I want you all to meet Vladimir Bondarenko.
He’s a blogger from Kiev who really hates the Ukrainian government.
He also doesn’t exist, according to Facebook.
He’s an invention of a Russian troll farm targeting Ukraine. His face was made by AI.
Quick thread: pic.twitter.com/uWslj1Xnx3
— Ben Collins (@oneunderscore__) February 28, 2022
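As an aside, these AI-generated headshots aren’t impossible to spot programmatically. One heuristic from the research literature is that the upsampling layers in GAN generators tend to leave periodic artifacts in an image’s frequency spectrum. The sketch below shows the general idea; the avatar.jpg filename and the threshold are illustrative assumptions on my part, and this is a rough heuristic rather than a reliable detector.

```python
# Rough sketch: GAN upsampling often leaves periodic peaks in the 2D
# frequency spectrum of generated images. "avatar.jpg" and the 3.0
# threshold are illustrative assumptions, not calibrated values.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("avatar.jpg").convert("L"), dtype=np.float64)
img -= img.mean()                                   # drop the DC component
spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

# Mask out the low-frequency centre, then compare the brightest
# remaining bin against the median of the high-frequency region.
h, w = spec.shape
mask = np.ones_like(spec, dtype=bool)
mask[h // 2 - h // 8 : h // 2 + h // 8, w // 2 - w // 8 : w // 2 + w // 8] = False
peak_ratio = spec[mask].max() / np.median(spec[mask])

print(f"off-centre spectral peak ratio: {peak_ratio:.2f}")
if peak_ratio > 3.0:                                # illustrative threshold
    print("periodic spectral peaks present - worth a closer look")
```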
Deepfakes or cheapfakes?
Sure, it’s good that the campaign(s) were shut down. Even so, would those most likely to believe the false information being promoted ever think to check whether the profile picture was real? I’d suggest only those hunting down misinformation would bother to analyse the image in the first place. Grabbing a fake AI picture simply takes less time than picking a stock photo and mirroring it to try to ward off reverse image searches.
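To make that concrete, here’s a minimal sketch of why even a simple horizontal flip can frustrate naive hash-based matching, using the third-party imagehash package (the package choice and the stock_photo.jpg filename are my assumptions). A freshly generated AI face sidesteps the problem entirely, of course, because there’s no original image to match against in the first place.

```python
# Minimal sketch: a mirrored stock photo no longer matches its own
# perceptual hash. Assumes "pip install pillow imagehash" and a local
# file named stock_photo.jpg.
from PIL import Image, ImageOps
import imagehash

original = Image.open("stock_photo.jpg")
mirrored = ImageOps.mirror(original)        # horizontal flip

h_orig = imagehash.phash(original)
h_flip = imagehash.phash(mirrored)

# Subtracting two hashes gives their Hamming distance: 0 means
# identical, while a large distance sails past an exact-match lookup.
print(f"hash distance after mirroring: {h_orig - h_flip}")
```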
For the most part, what we have above is no different to tactics used in the malign interference campaign we covered back in 2019. Fake profile? Check. AI-generated headshots on the fly? Check. The fake headshot simply being present because somebody needed one in a hurry, without it being the primary reason for dubious activity? Almost certainly, check.
Back in 2020, some people argued about whether it was helpful to tag static images as deepfakes, or whether it was more useful to flag them as “cheapfakes” to cleanly separate them from video content. I did wonder if we’d see nothing but fake profiles with lazy image grabs this time around. However, someone has actually created a synthetic video with audio and then dropped it onto a hacked website as a way of exerting sudden influence over activities on the ground.
How well did it do? The answer is “not very”.
Zelenskyy deepfake aims for chaos
Make no mistake, this is it: the first big example I can think of during a crisis where a deepfake video, rather than a mere profile picture, has been used to deceive on a large scale, with potentially severe results.
A broadcast by Ukraine’s 24 TV channel was compromised, and news tickers appeared to display messages from President Zelenskyy urging people to lay down weapons.
Meanwhile, a video of President Zelenskyy giving some sort of news conference was uploaded to at least one compromised website. In it, he appears to call on Ukrainian troops to stop fighting, lay down their weapons, and so on.
Thankfully, it wasn’t very good.
Tearing down a fake
The Zelenskyy fake fell foul of many of the same problems that affected the Putin fake: the flat-looking and virtually unmoving body, the lack of convincing shadows, the head appearing to work independently of the neck, and the erratic angular movement of the head itself.
As a matter of principle, I never post or link to fake or false content. But @MikaelThalen has helpfully whacked a label on this Zelensky one, so here goes.
I've seen some well-made deepfakes. This, however, has to rank among the worst of all time. pic.twitter.com/6OTjGxT28a
— Shayan Sardarizadeh (@Shayan86) March 16, 2022
Pay particular attention to the eyes. Even if nothing else in the video struck you as off, the eyes blink very unnaturally. Barely 3 seconds in, they snap shut and open again in a way that eyelids simply don’t do. The more you look at it, the more the sensation that something isn’t quite right sets in.
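Blink plausibility is actually something you can measure. A common heuristic from the facial-analysis literature is the eye aspect ratio (EAR), which collapses towards zero when the eyelid closes; real blinks keep the eye shut for roughly 100–400 milliseconds, so a one-frame snap is a red flag. A minimal sketch follows, assuming OpenCV and MediaPipe are installed and the clip is saved locally as clip.mp4; the landmark indices are commonly used ones for MediaPipe’s face mesh, and the thresholds are heuristics rather than calibrated values.

```python
# Sketch: flag implausibly short blinks in a clip via the eye aspect
# ratio (EAR). "clip.mp4", the landmark indices, and the thresholds
# are assumptions/heuristics, not calibrated values.
import cv2
import mediapipe as mp
import numpy as np

RIGHT_EYE = [33, 160, 158, 133, 153, 144]   # p1..p6, common face-mesh indices
EAR_CLOSED = 0.20                           # below this, treat the eye as shut
MIN_BLINK_FRAMES = 3                        # ~100 ms at 30 fps; real blinks last longer

def ear(pts: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    v1 = np.linalg.norm(pts[1] - pts[5])
    v2 = np.linalg.norm(pts[2] - pts[4])
    return (v1 + v2) / (2.0 * np.linalg.norm(pts[0] - pts[3]))

cap = cv2.VideoCapture("clip.mp4")
closed_run = 0
with mp.solutions.face_mesh.FaceMesh(refine_landmarks=True) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not res.multi_face_landmarks:
            continue
        lm = res.multi_face_landmarks[0].landmark
        pts = np.array([(lm[i].x, lm[i].y) for i in RIGHT_EYE])
        if ear(pts) < EAR_CLOSED:
            closed_run += 1
        else:
            if 0 < closed_run < MIN_BLINK_FRAMES:
                print(f"suspiciously fast blink: {closed_run} frame(s)")
            closed_run = 0
cap.release()
```

A snap-blink like the one a few seconds into this clip is exactly the pattern a check like this is meant to surface.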
Note that it’s also badly out of step with recent footage of Zelenskyy, in terms of how he records things. In most if not all of his recent announcements, everything is natural, informal, and off the cuff. This video aims for some sort of press release format, which is another indicator that something may not be quite right here.
An early warning pays dividends
Interestingly, Ukraine had warned of the possibility of deepfake videos just days earlier. The fact that the video didn’t appear on any official communication channels almost certainly contributed to the general sense of scepticism.
Researchers quickly deduced that the deepfake is a composite of two separate images: one for the face, and one for the background.
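One quick way to probe a suspected composite yourself (to be clear, not necessarily the method those researchers used) is error level analysis: recompress a frame as JPEG and amplify the difference, since regions pasted in from another source often recompress with a visibly different error texture. A minimal sketch, assuming a locally saved frame.jpg and illustrative quality and amplification settings:

```python
# Error level analysis (ELA) sketch: recompress a frame once, then
# amplify the per-pixel difference. "frame.jpg" and quality=90 are
# illustrative assumptions.
from PIL import Image, ImageChops, ImageEnhance

original = Image.open("frame.jpg").convert("RGB")
original.save("resaved.jpg", quality=90)            # one round of recompression
resaved = Image.open("resaved.jpg")

diff = ImageChops.difference(original, resaved)

# Stretch the difference so it is visible to the eye.
max_diff = max(hi for _, hi in diff.getextrema())   # per-band (min, max) pairs
ImageEnhance.Brightness(diff).enhance(255.0 / max(1, max_diff)).save("ela.png")
# A face spliced in from a second image tends to show a different
# error texture from its background in ela.png.
```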
A measure of success in failure
In terms of how successful this deepfake was? It depends what our baseline for success is, I suppose. The suspicion has been that for something like this to work, it has to be believable enough to spread widely in the short window of opportunity before official sources counter it. On those terms, it was a failure. Handed a big platform on which to spread and perform what could have been exceptionally chaotic activity, it flopped right out of the gate.
In terms of making some sort of impact, however? It caused Zelenskyy, in the middle of an invasion, to have to make a recorded rebuttal. Having him expend any sort of energy on this is quite remarkable, so that’s one thing the fakers may count as a win.