
Deepfake – what is it & is it dangerous?

What are deepfakes and what are the implications of this technology?

A deepfake is a fake photo, video or story made by AI (Artificial Intelligence) neural networks. The idea is that the AI creates images that look almost identical to the subject – usually a human being – even though it is not actually them. Deepfaking is not like Photoshopping; it is far more advanced and complicated. Thanks to AI and machine learning, deepfakes can be so convincing that the average human eye finds it nearly impossible to tell them apart from the real thing. Deepfaking mainly manipulates video, and it can make someone appear to do or say something they never said or did in real life. Below is an example of deepfaking provided by the University of California.

Left – original, Right – deepfake.

How do they work? Programmes that create deepfakes use not one but two AIs working together. The first AI scans many images of the subject and creates new faked images; the second AI then examines these fakes and compares them to real images. If the differences are too noticeable, the second AI marks the image as an obvious fake and informs the first AI. The first AI then reprocesses and adjusts the fakes until the second AI can no longer tell the difference between a real image and a fake.
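This pairing of a "faker" and a "checker" is what machine-learning researchers call a generative adversarial network (GAN). Below is a minimal, hypothetical sketch of that feedback loop in Python using the PyTorch library; the tiny networks, the random placeholder "images" and the training settings are illustrative assumptions only, not how any real deepfake tool is actually built.

```python
# Toy GAN sketch: a generator makes fakes, a discriminator judges them,
# and the generator keeps adjusting until the discriminator is fooled.
import torch
import torch.nn as nn

IMG_SIZE = 64        # a flattened 8x8 "image" stands in for a real photo
NOISE_SIZE = 16      # random input the generator turns into a fake

generator = nn.Sequential(          # "first AI": produces fake images
    nn.Linear(NOISE_SIZE, 128), nn.ReLU(),
    nn.Linear(128, IMG_SIZE), nn.Tanh(),
)
discriminator = nn.Sequential(      # "second AI": decides real or fake
    nn.Linear(IMG_SIZE, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.randn(256, IMG_SIZE)   # placeholder for real photos

for step in range(1000):
    # --- train the discriminator to tell real images from fakes ---
    noise = torch.randn(32, NOISE_SIZE)
    fakes = generator(noise).detach()
    reals = real_images[torch.randint(0, 256, (32,))]
    d_loss = loss_fn(discriminator(reals), torch.ones(32, 1)) + \
             loss_fn(discriminator(fakes), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- train the generator to fool the discriminator ---
    noise = torch.randn(32, NOISE_SIZE)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The key point the sketch shows is the feedback loop: each time the discriminator catches a fake, the generator's next attempt gets a little harder to catch.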

Deepfake technology also has legitimate uses. In advertising it can generate virtual models, which is much cheaper than hiring and paying a real model. It can also assist police work through age progression to help find missing people, and it can upscale old computer games so they look more attractive on modern displays. These are some examples of great uses for deepfakes!

Deepfake video of the President of Ukraine

However, if you have seen stories in the news, you will know there is a much darker side to them and just what kind of destruction they can cause. Above is an example of a deepfake video, courtesy of YouTube, that is not true and could have damaging effects. In this video it appears the President of Ukraine is telling his soldiers to surrender to Russia; the President never made this video or speech. The potential to spread misinformation is extremely serious.

The main concern currently is political deepfakes: as these videos improve, they could convince enough people to believe the agenda of the person or organisation behind them. As we all know, not everyone's agenda is pure and for the good. Cyber-criminals are a prime example of this.

So, how do we spot a deepfake?

  • Search the internet for information about that particular video or picture to find out what has been said about it. You may come across news articles confirming that the public figure did not say those words or do those actions.
  • You can also screengrab the video and run a reverse image search on Google, which can locate the original source of the video or picture. Sometimes you will find the original, real version further down, clearly showing that it was manipulated to produce the deepfake (a short frame-grab sketch follows this list).
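If you want to grab a still frame from a suspect clip without a screenshot tool, something like the following Python sketch (using the OpenCV library) can do it. The file names here are hypothetical placeholders; the saved frame would then be uploaded to Google's reverse image search by hand.

```python
# Save one frame from a suspect video so it can be reverse-image-searched.
# "suspect_clip.mp4" and the output file name are placeholders.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")
ok, frame = cap.read()                 # read the first frame of the clip
if ok:
    cv2.imwrite("frame_for_search.png", frame)
    print("Saved frame_for_search.png - upload it to images.google.com")
else:
    print("Could not read a frame from the video")
cap.release()
```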

You can read more interesting information about cyber-security here in our blog section!

