The Dystopian Triad: Artificial Intelligence, Deepfakes, and Disinformation

In the digital age, a triad of threats to the integrity of our information landscape has emerged: artificial intelligence (AI), deepfakes, and disinformation. These interconnected phenomena pose a looming challenge to our society, politics, and individual privacy: they have the potential to erode our shared reality, deceive people at scale, and disrupt democratic processes.

Deepfakes, a portmanteau of ‘deep learning’ and ‘fake’, are synthetic media (e.g., videos, images, audio clips) in which people appear to say or do things that never happened in the real world. These fabrications can be strikingly realistic, powered by advances in AI, particularly machine learning and deep neural networks.
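To make the underlying mechanism concrete, here is a minimal sketch (in PyTorch) of the shared-encoder, per-identity-decoder autoencoder design that early face-swap tools popularized. The layer sizes, the 64x64 input resolution, and all names are illustrative assumptions, not any particular tool’s implementation.

```python
# Sketch of the autoencoder behind classic face-swap deepfakes: one shared
# encoder learns a common face representation, and a separate decoder is
# trained per identity. All sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# Training (not shown) reconstructs person A through decoder_a and person B
# through decoder_b, both via the same shared encoder. The "swap" is simply
# encoding a frame of person A and decoding it with person B's decoder.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
frame_of_a = torch.rand(1, 3, 64, 64)        # placeholder for a real video frame
swapped = decoder_b(encoder(frame_of_a))     # B's face with A's pose and expression
print(swapped.shape)                         # torch.Size([1, 3, 64, 64])
```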

This sophisticated technology can create digital puppets, which are sometimes used to misrepresent public figures. In one notorious instance, a deepfake video falsely depicted Volodymyr Zelensky, the president of Ukraine, announcing a surrender. In another, entirely fictitious news anchors were generated and used to spread state-aligned disinformation.

Deepfakes are not merely a threat to the individuals depicted in these synthetic videos. They can be weaponized in information operations aimed at manipulating public opinion, degrading trust in media and institutions, discrediting political leaders, and influencing how citizens vote. The technology significantly compounds the dangers posed by traditional disinformation efforts, ushering in a new era of information warfare.

But the news isn’t all doom and gloom. As we grapple with the potential dangers of deepfakes and AI-powered disinformation, a variety of countermeasures have been proposed. These range from legal and regulatory measures to technologies for deepfake detection, content authentication, and disinformation prevention. Other proposed approaches include improving education and training, fostering societal resilience, and driving AI transparency.
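As a concrete illustration of the content-authentication idea, the sketch below signs a media file’s bytes at publication time so that any later alteration can be detected. It is a greatly simplified stand-in for provenance standards such as C2PA, assumes the third-party Python cryptography package, and uses placeholder content and function names.

```python
# Minimal sketch of content authentication by cryptographic signing.
# Assumes the "cryptography" package; content and names are placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_media(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """A publisher (or capture device) signs the media bytes at creation time."""
    return private_key.sign(media_bytes)

def verify_media(media_bytes: bytes, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Anyone holding the public key can check the media was not altered."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    original = b"...raw bytes of a video file..."   # placeholder content
    sig = sign_media(original, key)

    print(verify_media(original, sig, key.public_key()))   # True
    tampered = original + b"one altered byte"
    print(verify_media(tampered, sig, key.public_key()))   # False
```

Real provenance schemes go further, binding signed metadata about who created the content and how it was edited, but the core guarantee is the same: tampering breaks the signature.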

Yet, the relentless pace of technological advancement and the very nature of AI mean that these solutions, though necessary, may never be entirely sufficient. There is an urgent call for the development of effective tools and techniques to identify and counteract deepfakes and AI-powered disinformation, as well as the implementation of robust regulatory frameworks to manage these technologies.

In conclusion, the dangerous triad of AI, deepfakes, and disinformation presents significant challenges to our modern society. However, as we recognize and begin to comprehend the gravity of these issues, we are better equipped to develop and employ strategies to combat them. The fight for the integrity of our information landscape is ongoing, and its outcome will have profound implications for our societies and democratic institutions.

References:

1. How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered – Katarina Kertysova

2. The Emergence of Deepfake Technology: A Review – Mika Westerlund

3. The People Onscreen Are Fake. The Disinformation Is Real. – Adam Satariano and Paul Mozur

4. Dealing with deepfakes – An interdisciplinary examination of the state of research and implications for communication studies – Alexander Godulla, Christian Pieter Hoffmann, Daniel Seibert

5. Deepfakes: Deceptions, mitigations, and opportunities – Mekhail Mustak, Joni Salminen, Matti Mäntymäki, Arafat Rahman, Yogesh K. Dwivedi
