The Dilemma of Deepfake

Oli Tate

Widespread distrust, confusion and hysteria: these are just a few of the terms used to describe the impact deepfakes could have on society if they are left unchecked and allowed to grow more sophisticated. A deepfake uses a branch of AI called “deep learning” to produce footage of fake events, most often by superimposing one person’s face onto another’s body or making someone appear to say something they never said. The implications of this technology are profound for truth, democracy and trust.

The Origins of Deepfakes

The term “deepfake” originated in 2017, when a Reddit user began posting deepfake content on the platform, mostly celebrities’ faces superimposed onto pornographic material. The underlying technology, however, can be traced back to 1997 and the creation of the Video Rewrite program, which could re-synchronise footage of a person’s mouth to a different audio track. At present, most deepfake videos are reasonably easy to spot with the naked eye if you know what to look for, as much of the deepfake content online is produced by amateurs. Yet a study in which participants tried to identify well-made deepfakes found that they could correctly identify a fake video in only 50% of cases, no better than chance. The threat deepfakes pose today and in the future is real and present. Governments, businesses and society need to engage fully with one another to understand and tackle this issue.

Political and Security Implications

Deepfakes could be used for political manipulation and foreign influence operations, such as swaying the outcome of another country’s election. In theory, a foreign entity could produce and distribute a deepfake video of a political candidate saying highly damaging or offensive things. If released just before election day, the video would be unlikely to be debunked before people cast their ballots. Conversely, if a damaging but genuine video of a politician surfaced online, they could claim it was a deepfake, undermining accountability. On the security front, in March 2022 a video surfaced of Ukrainian President Volodymyr Zelensky apparently directing Ukrainian forces to surrender to Russia. Although this deepfake was not very convincing, it demonstrates how the technology can sow confusion and deception in high-stakes scenarios.

Social Implications

Scepticism about what you see online is a reasonably healthy attitude; deepfakes, however, have the potential to erode trust in any source of information. Research by the political scientist Cristian Vaccari found that deepfakes can elicit uncertainty that reduces trust in news sources in general. This could easily deepen polarisation and confirmation bias: if nothing seems trustworthy any more, it becomes harder to establish which facts are real. The ramifications for conspiracy theories and disinformation are significant. Events such as 9/11 and the moon landings are already the subject of many conspiracy theories, and deepfakes supporting these narratives could compound scepticism and distrust towards authority.

Policy Recommendations

Tech Company Regulations

Meta already has a policy of deleting deepfake and manipulated videos from its platforms. However, this approach does not go far enough. Videos on social media spread quickly and widely, and by the time content has been identified as a deepfake, many people may already have seen it. Removing these videos is a good start, but what is really needed is to ensure that people who may have been influenced by a video are swiftly and directly informed that it was fake, as there is no guarantee they will learn this by other means.

Accessible Deepfake Detection Software

To avoid relying entirely on tech companies to flag deepfake content, deepfake detection software should be made widely accessible to the public so that individuals can verify for themselves whether a video has been manipulated. The biggest advantage is that people who are sceptical of authority can confirm first-hand whether what they are seeing is real. It also means more niche and obscure videos can be checked for validity, since a far larger pool of people is able to carry out the process. Governments should invest in research to ensure detection software keeps pace with ever more advanced deepfake creation methods.

Banning Deepfake Creation Software

As with most tools and technologies, deepfakes have both good and bad uses. However, their dangers immensely outweigh their benefits. The main legitimate use of deepfakes is in the entertainment industry, for example recreating a young Luke Skywalker in the recent “Mandalorian” TV series. Since the threat to truth and democracy far outweighs the convenience afforded to film producers, banning deepfake creation software outright should generate little controversy.

International Agreement(s)

Underpinning all of these proposals is the need for international consensus on deepfakes. Given the borderless nature of the internet, domestic legislation alone will be insufficient to tackle the proliferation of deepfakes, as it cannot account for content originating abroad. Likewise, banning deepfake creation software will only work if there is a worldwide agreement to do so. This is arguably idealistic, as nations that wish to use deepfakes for foreign influence are unlikely to comply.

It is clear that the deepfake dilemma has no easy solution. Complex and diverse issues surround free speech and the risks of making governments and tech companies the arbiters of truth. Legislation on deepfakes will have to be clear, concise and specific to ensure governments do not overreach or enable the suppression of legitimate content.


Merriam-Webster. “Deepfake.” Accessed 11 November 2022.

Department of Homeland Security. Increasing Threat of Deepfake Identities. 2022.

A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, M. Nießner. FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces. 2018.

Congressional Research Service. Deepfakes and National Security. 2022.

Cristian Vaccari. Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. 2020.
