Deepfakes: When you cannot believe what you see


So-called deepfakes, fabricated videos, are not just an entertaining technological innovation. They can also be a threat to you personally, and not least they can contribute to conflict between groups and countries.

  • What are deepfakes?
  • What can you use deepfakes for?
  • What consequences can they have?
  • What can we do to avoid being tricked?

What would you do if you saw a video on the internet of the Norwegian Prime Minister announcing that nuclear weapons would be used against Norway in the near future, and that everyone had to head to their nearest bomb shelter?

To most people, such a statement seems unrealistic against the backdrop of the current security situation. But such events no longer have to happen in real life for us to see such a video.

Society is developing faster than ever before, but for most of us it is difficult to grasp what consequences this can have in everyday life. Technology offers many possibilities, but also new kinds of challenges. Among other things, it can create situations that could never have happened otherwise.

2: What are deepfakes?

What makes scenarios like the one above possible is called deepfakes: the use of artificial intelligence to produce video, audio and images that can be difficult to detect as fake. The word deepfake comes from the underlying technology, deep learning, a type of artificial intelligence. Deep-learning algorithms teach themselves how to solve problems when fed large amounts of data, and are used, among other things, to swap faces in video so convincingly that the result looks realistic.

Machine-learning techniques have automated what used to be a manual process, making this kind of video editing much faster.

There are several ways to create deepfakes, but the most common relies on deep neural networks with autoencoders that apply a face-swapping technique. You first need a base video to build the deepfake on, and then a collection of video clips of the person you want to insert into it. Nor is this limited to video: fake audio clips can be made in the same way.
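The autoencoder face-swap idea can be sketched in a few lines: one shared encoder compresses any face into a common latent code, and each person gets their own decoder. This is a minimal numpy illustration of the architecture only; the dimensions, names and random (untrained) weights are all assumptions for the sake of the sketch, not a working deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random, untrained weights; a real system learns these from
    # thousands of face images of each person.
    return rng.standard_normal((n_in, n_out)) * 0.1

# One shared encoder maps any face to a common latent code...
W_enc = layer(64, 16)
# ...and each person has a private decoder that reconstructs *their* face.
W_dec_a = layer(16, 64)  # decoder trained on person A's faces
W_dec_b = layer(16, 64)  # decoder trained on person B's faces

def encode(face):
    return np.tanh(face @ W_enc)

def swap_face(face_of_a):
    # The swap itself: encode A's face, but decode with B's decoder,
    # yielding B's face in A's pose and expression.
    return np.tanh(encode(face_of_a) @ W_dec_b)

face = rng.standard_normal(64)  # stand-in for a cropped, flattened face image
fake = swap_face(face)
print(fake.shape)  # (64,)
```

Because the encoder is shared, the latent code captures pose and expression rather than identity, which is what makes decoding with the other person's decoder produce a swap.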

Maybe you’re one of the many who watched the deepfake video of Tom Cruise? The reason there are so far only credible deepfakes of celebrities and politicians is that there is so much data about them on the internet with which to “train” the machines to recognise and recreate their facial features and voices.

At the same time, most of us live large parts of our lives digitally, and the data we share with the outside world can quickly be turned against us in ways we had not foreseen. Individuals can be subjected to blackmail, but the consequences can also be of an entirely different caliber.

3: Major international consequences

Deepfakes can also be directed at countries and have political motives. Norway may not be the biggest target for deepfakes aimed specifically at states, but as a small country in a large, interconnected world we can still be affected by the consequences. The likeliest targets are countries with fragile institutions, inhabitants with low digital competence, and a divided population.

There are many conceivable situations that could have major international consequences.

If Donald Trump, during his presidency of the United States, had appeared in a video mocking the clerical regime in Iran, ridiculing its religion, or deriding other regimes with which the United States has strained relations, the consequences could have been grave, even fatal. Even if the video had quickly been exposed as false, it and its message would already have spread widely.

Over time, the technology behind deepfakes will also become particularly attractive for states to use to manipulate both their own citizens and the citizens of other countries. Deployed just before an election, such videos could also swing public opinion in a short time.

Deepfakes, in short, have the explosive potential to fuel politically motivated unrest, sabotage elections, and destroy diplomatic relations.

Such a scenario may not seem probable at first, but there are so many possible variants of it that it would not be surprising if something similar happened in the future.

As the technology develops and spreads across the world, many of us will gradually be able to make videos and audio clips convincing enough to fool most people. Before long, external tools will be needed to distinguish manipulated videos from real ones.

4: Just the start of «fake news»

For over a hundred years, video and audio have served as a kind of quality assurance of what is true and what is not. There are already people who question events we have clear evidence of, such as the Holocaust, 9/11 and the moon landing. If deepfakes weaken confidence in such evidence even further, the challenges we now see with conspiracy theories and misinformation risk only getting worse.

Deepfakes are part of a larger problem created largely by the massive flow of information through the internet and social media, a flow that is often exploited. The internet as a domain has never had a good way of distinguishing the real from the fake: looking at a picture on a screen, it is hard to tell what is genuine and what is not.

Manipulation is not new, not even politically motivated manipulation. The Soviet Union under Stalin removed dissidents and others from photographs to rewrite history. The emergence of deepfakes nevertheless marks a turning point in the ability to create false content: it is becoming ever harder to detect, and ever faster to produce.

The American law professors Robert Chesney and Danielle Citron have pointed to an effect of ordinary people becoming more aware of deepfakes: more and more of us will come to doubt whether any video at all is real. Paradoxically, this creates an opportunity for anyone caught behaving badly on genuine video, since it becomes easier to dismiss authentic footage as fake, as a deepfake.

5: What can happen?

We can already see the consequences of social media and of disinformation, both in the run-up to elections and in other contexts. It weakens ordinary people's ability to follow political developments, and it can grow into a democratic problem.

We know that algorithms already strongly shape how we find information on the internet, and deepfakes will amplify this even further.

What can one do when one can trust neither what one reads, sees nor hears?

Gradually, states across national borders have spoken out in favor of initiatives to put pressure on the large companies that control much of the information flow in today's society. The UN can and must play a role in this by providing international regulations across member countries.

Fortunately, several of the big companies are taking this more seriously. Facebook, among others, has outlined quality-assurance measures to single out deepfakes meant to mislead its users, and many companies have gradually adopted tools to handle the development.

Source criticism is a key to limiting the consequences deepfakes can have. One approach is active awareness raising, including the use of popular culture to make clear what kind of challenge this represents.

The creators of South Park have, among other things, made the miniseries Sassy Justice, which uses deepfake technology to put celebrities and politicians into fictional scenes fronted by a TV reporter. The series can help more people understand the scope and consequences of the age we live in. We already know how much misinformation circulates on social media, and deepfakes can reinforce an already dangerous trend.

Sabah Jassim, professor at the University of Buckingham, points out in an interview that analyses of deepfake videos show that, for now, fake videos contain less blinking, the sound often fails to match the lip movements, and the faces often struggle to convey emotion.
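The blink cue Jassim describes can be sketched as a simple counter over per-frame eye-openness values. The eye-aspect-ratio input (which would come from a facial-landmark tracker, not shown here), the thresholds, and the function names are all illustrative assumptions, not a real detector.

```python
def count_blinks(ear_values, threshold=0.2):
    # A blink shows up as the eye aspect ratio (EAR) dipping below a
    # threshold; count each dip once by tracking the eyes-closed state.
    blinks, closed = 0, False
    for ear in ear_values:
        if ear < threshold and not closed:
            blinks += 1
            closed = True
        elif ear >= threshold:
            closed = False
    return blinks

def looks_suspicious(ear_values, fps=25, min_blinks_per_minute=5):
    # People blink roughly every few seconds; a clip with an implausibly
    # low blink rate is flagged for closer inspection.
    minutes = len(ear_values) / (fps * 60)
    rate = count_blinks(ear_values) / minutes if minutes else 0
    return rate < min_blinks_per_minute

# Ten seconds of video in which the eyes never close: no blinks at all.
no_blinks = [0.3] * 250
print(looks_suspicious(no_blinks))  # True
```

A heuristic like this is exactly the kind of cue that disappears as generators improve, which is why the article describes detection as an ongoing arms race rather than a solved problem.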

For the time being, it is possible to distinguish fake videos from real ones, but as the mechanisms for detecting fake videos develop, it is natural to expect the quality of deepfakes to improve as well. It will most likely be a lasting arms race, in which the artificial intelligence must be retrained regularly to stay ahead.