Every day brings new fake videos. Some are purely entertaining, but others pose a threat to politics and society. There is a clear trend towards identity fraud 2.0, where all it takes is a convincingly faked voice.

Some examples: In June, Berlin Mayor Giffey spent half an hour on the phone with someone posing as Kyiv Mayor Klitschko. And in March, a deepfake video circulated in which Ukrainian President Zelensky supposedly called for surrender.

Whether these videos and video calls are labelled shallowfakes, deepfakes or some other kind of fake: they show that faked moving images – and with them the topic of deepfakes – are taking up more and more space. Their aim is deliberate manipulation and disinformation, mostly spread via social media.

Deepfakes can affect both companies and individuals. In this article, we explain what deepfakes are used for and how you can detect them.

In a nutshell: Deepfake explained

  • Deepfakes can be used by criminals or foreign intelligence services for various purposes
  • There are different types of deepfakes, used among other things for identity fraud: face manipulation, voice manipulation and text forgery
  • Resilience in society can be strengthened through education and greater media and digital literacy
  • Our advice: raise awareness among your employees

What are deepfakes?

The term deep fakes or deepfakes is a neologism made up of deep learning (a machine learning method) and fake. It refers to videos or audio recordings that have been altered with the support of artificial intelligence (using deep neural networks). In this way, people can be imitated with deceptive realism – which opens the door to identity fraud. The special thing about machine learning and artificial neural networks is that the fakes are generated largely autonomously. They can then even be used live, for example during a video call.

Deepfakes exist for video/images, audio and text. Scenes are imitated as closely as possible to an original. This requires as much footage as possible of the person to be imitated. Thanks to social media, that is usually no longer a problem, even for private individuals (one or two minutes are often enough). Software and tutorials are sometimes available free of charge on the Internet, so the barriers to entry are quite low.

How long have deepfakes existed?

The term first appeared in 2017, when a user on the internet forum Reddit put the faces of famous actresses into porn videos and shared them with the community under the username “Deepfake”. The first known case of identity fraud through a deepfake attack took place in 2019: the supposed CEO asked the managing director of a British energy company over the phone to transfer 220,000 euros to a Hungarian account. As it turned out later, the call was made by criminals imitating the CEO's voice.

What types of deepfakes are there?

Face manipulation

  • Face Swapping: The face of one person is replaced by another face. A facial image is created with the same facial expressions, illumination and direction of gaze. This requires only a few minutes of video recordings of the target person; however, they must be of high quality and show as many different facial expressions and perspectives as possible. (A minimal architecture sketch follows below this list.)
  • Face Reenactment: With this method, an already existing video is altered. Facial expressions, head and lip movements are manipulated. In this way, people can be shown making statements they never actually made.
  • Face Synthesis: Faces are generated of people who do not exist at all. So far, this is limited to single images, but already at high resolution.
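
To make the face swapping approach more tangible, here is a deliberately minimal sketch of the architecture many early deepfake tools were built on: a shared encoder that learns a common face representation and one decoder per identity. The PyTorch framework, layer sizes and toy input are our own illustrative assumptions, not a working deepfake generator.

```python
# Minimal, illustrative sketch of the classic face-swap autoencoder idea:
# a shared encoder learns a common face representation, while one decoder
# per identity reconstructs that person's face. Sizes are toy values.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained on faces of person A
decoder_b = Decoder()  # would be trained on faces of person B

# The "swap": encode a frame showing person A, but reconstruct it with B's decoder.
face_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a real video frame
fake_b = decoder_b(encoder(face_of_a))
print(fake_b.shape)                    # torch.Size([1, 3, 64, 64])
```

The point is not the code itself but how compact the core idea is: once such a model has been trained on enough footage, swapping faces is a single forward pass per frame.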

Voice manipulation

  • Voice Conversion (VC): An audio signal is transformed so that it sounds like the voice of the target person.
  • Text to Speech (TTS): The target person's voice is generated from text. The user provides a text, which is then converted into an audio signal in that voice. In this way, humans as well as automated speaker recognition systems can be deceived.

The prerequisite for both types of manipulation is that as much training data as possible is available in high quality.
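
How low the entry barrier has become can be illustrated with freely available tooling. The following sketch assumes the open-source Coqui TTS package (installed via pip install TTS) and its multilingual XTTS model, which can be conditioned on a short reference recording of a speaker; package, model and file names are assumptions and may differ depending on the version you use.

```python
# Illustrative sketch only: text-to-speech conditioned on a reference voice,
# assuming the open-source Coqui TTS package ("pip install TTS") and its
# XTTS v2 model. Names and APIs may differ in your installed version.
from TTS.api import TTS

# Load a multilingual TTS model that accepts a short voice sample as reference.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

tts.tts_to_file(
    text="Please transfer the amount to the account we discussed yesterday.",
    speaker_wav="reference_sample.wav",   # hypothetical few-second recording
    language="en",
    file_path="generated_voice.wav",
)
```

A few seconds of publicly available audio can be enough as a reference, which is exactly why being sparing with voice recordings online is part of prevention.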

Text falsification

AI also makes it possible to generate texts. At first glance, you can no longer tell whether a text originates from a human or a machine. On the basis of a few introductory words, a text can simply be continued. This process is already being used for news items, blog posts or even chat responses.
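
How little input such a generator needs can be demonstrated in a few lines. The sketch below assumes the Hugging Face transformers library and the freely downloadable GPT-2 model; both are merely examples of publicly available tooling, and the prompt is invented.

```python
# Illustrative sketch: continuing a short prompt with a freely available
# language model, assuming the Hugging Face "transformers" package.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A few introductory words are enough; the model invents the continuation.
result = generator("The company announced today that", max_new_tokens=40)
print(result[0]["generated_text"])
```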

What are deepfakes used for?

In the future, deepfakes could increasingly become weapons of attack for criminals or foreign intelligence services. This is because advancing technology is making it easier to imitate people in audio and video recordings. For more in-depth information, we recommend reading the detailed BSI article.

Deepfakes can be used for the following purposes, among others: 

  • Identity fraud
    • Overcoming biometric systems (for example in remote authentication)
    • Social engineering (targeted phishing attacks, so-called “spear phishing”, to obtain information or data; examples are the “CEO fraud” described above and the “grandchild trick”)
  • Disinformation campaigns
  • Propaganda
  • Slander and discrediting
  • Industrial espionage
  • Non-consensual pornography

Deepfake explained: How can you protect yourself?

“The insidious thing about deepfakes is that attackers do not first have to build up trust.” – SPOC – Single Point of Contact N° 1 | 2021/2022 (verfassungsschutz.de)

This quote sums up the issue. Unlike with conventional social engineering, the attacker starts out with your trust: you believe the person you see or hear is real. We rely on the credibility of moving images – and that is exactly what deepfakes exploit.

Does this mean that you are at the mercy of deepfakes? Or are there ways to protect yourself against them? The BSI (German Federal Office for Information Security) recommends both prevention and detection.

Prevention and detection

The higher the level of media and digital literacy in society, the greater the resilience. As a user, you should be sparing with your data in public and act with common sense. Every upload of a video on the Internet should happen consciously.

So far, there is little awareness in society that moving images can also be manipulated. Information campaigns need to change that.

Is it possible to recognize deepfakes as an “amateur”? Those who know the typical artifacts are at a clear advantage. We advise you to always take a critical and vigilant look.

How to detect artifacts

Typical features for face manipulation

With the face swapping procedure there may be visible transitions, especially at the seam around the face. The skin color may look off, and sometimes double eyebrows are visible. Sharp contours are blurred, for example around teeth and eyes. Facial expressions are limited, and the lighting or environment may be inconsistent, so it is always worth paying attention to colors and lighting. Irregularities become especially visible when the face is shown in profile or is partially covered.
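
Detection software builds on exactly these artifacts. As a purely illustrative toy example – not a reliable deepfake detector – the following sketch uses OpenCV to locate a face in a still image and compares how sharp the facial region is relative to the whole frame; an unusually soft face can be one of several warning signs. The file name is hypothetical.

```python
# Toy heuristic, not a real deepfake detector: compare the sharpness
# (variance of the Laplacian) inside the detected face region with the
# sharpness of the whole frame. Requires "pip install opencv-python".
import cv2

def sharpness(gray_image):
    """Variance of the Laplacian: lower values indicate a blurrier region."""
    return cv2.Laplacian(gray_image, cv2.CV_64F).var()

frame = cv2.imread("video_frame.jpg")  # hypothetical still from a video call
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face_sharpness = sharpness(gray[y:y + h, x:x + w])
    frame_sharpness = sharpness(gray)
    # A face that is much softer than its surroundings deserves a second look.
    print(f"face: {face_sharpness:.1f}, whole frame: {frame_sharpness:.1f}")
```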

Video calls should preferably be made on a large screen rather than on a cell phone. That makes it easier to pay attention to little things like blinking, frowning or the “angry vein”; imitating such natural reactions is harder for AI. Distortions can also be spotted if you watch the footage in slow motion.

So far, it is only possible to manipulate visual data, not behavior. Forgeries can therefore quite likely be spotted by people who know the person well.

If you suspect identity fraud, you can politely ask your contact to perform a specific gesture (e.g., tapping their nose).

Typical features for synthetic voices

Some methods produce a “metallic” sound. Words in a foreign language may be pronounced incorrectly, for example if the model was trained on English. A speaker's special features (e.g. accent or intonation) are imitated less convincingly than the timbre of the voice. Temporal delays can also be noticeable.

Check the source

You should always question the authenticity. How was the contact established? How did a video call come about? One can also ask for a callback to verify the video call or video. It is best to accept links to video conferences only from trusted email addresses and to verify the identity of the video channel. 

If a call to the company contains an unusual request, it is also advisable to use the callback option and the four-eyes principle. For remote identification, you should make use of multi-factor authentication.
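
A second factor is most effective when it does not travel over the (possibly faked) audio or video channel. Here is a minimal sketch, assuming the pyotp package: a time-based one-time code generated from a previously shared secret is verified independently of the call before an unusual request is acted upon.

```python
# Minimal sketch of a second factor for remote identification,
# assuming the "pyotp" package ("pip install pyotp").
import pyotp

# The shared secret is exchanged once over a trusted channel (e.g., in person).
shared_secret = pyotp.random_base32()
totp = pyotp.TOTP(shared_secret)

# The caller reads out the code currently shown in their authenticator app;
# the callee verifies it independently of the video or audio channel.
code_from_caller = totp.now()  # stand-in for the value the caller provides
print("verified:", totp.verify(code_from_caller))
```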


Summary

In this article, you have learned what deepfakes are and how to detect them. So far, attacks using this technique are rather uncommon, but that may change. Politicians and industry are currently working to make deepfakes easier to identify. One proposal is to legally require that all material created with deepfake technology be labeled as such. In addition, digital signatures – possibly anchored in a blockchain – could prove that media has not been edited after the fact. Detection software and AI-based programs can check videos and streams for certain anomalies.
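
The idea behind such digital signatures can be sketched in a few lines, here with the Python cryptography package and an Ed25519 key pair: the creator signs a hash of the media file, and any later change makes verification fail. The file name is hypothetical, and real provenance schemes involve considerably more (key distribution, metadata, trusted timestamps).

```python
# Schematic sketch: signing a media file so that later edits can be detected,
# assuming the "cryptography" package ("pip install cryptography").
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path):
    """Return the SHA-256 hash of the file contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# The creator signs the hash of the original video (hypothetical file name).
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(file_digest("original_video.mp4"))

# Anyone with the public key can later check that the file is unchanged.
public_key = private_key.public_key()
try:
    public_key.verify(signature, file_digest("original_video.mp4"))
    print("Signature valid: the file has not been modified since signing.")
except InvalidSignature:
    print("Signature invalid: the file was altered after signing.")
```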

At the same time, it is becoming increasingly difficult to detect fakes manually. Due to the continuous development of AI, creating deepfakes requires less and less effort, and less and less data about the targeted person is needed for a successful identity fraud.

What can you do as a business?

Think about the situations in which you or your employees could be confronted with deepfakes. Raise awareness among your employees. Include the topic in your next awareness training.

Do you need assistance? Feel free to contact us. We can offer you extensive training and awareness methods, and will be happy to accompany you and your team in the process.

Have you perhaps already encountered deepfakes yourself? Then tell us about your experience in a comment.