Introduction
Technology is advancing at a rapid pace, and artificial intelligence is among the sophisticated tools now used to create deepfakes. According to iProov, deepfakes themselves are not new, but the tools used to create them are multiplying, and increasingly sophisticated AI will continue to shape both how data is manipulated and how that manipulation is detected.
For example, the growth of e-commerce has been accompanied by a rise in counterfeit products, and online retailers now use AI tools to distinguish genuine products from fakes in order to maintain consumer trust. Worse still, social media contributes enormously to the spread of deepfakes, and the way people share content matters. Research has found that videos and photos are more likely to be shared on Twitter than news articles or online petitions, and that during the US presidential campaign, tweets from Donald Trump and Hillary Clinton that included images and videos received significantly more likes and retweets. Helping the public distinguish real content from deepfakes therefore calls for sustained campaigns and public enlightenment. Because of the trust people place in information on social media, deepfake content (a photo, video or audio clip that has been skillfully edited and altered to falsely portray someone as having done or said something they never did) often causes havoc before it is detected.
In Nigeria, for instance, a purported conversation between the presidential candidate of the Labour Party, Peter Obi, and the founder of Living Faith Worldwide, Bishop David Oyedepo, went viral just weeks before the 2023 general elections. The alleged leaked audio featured a voice implied to be Obi's urging the clergyman to canvass Christian votes for him. The post sharing the audio was viewed over 10.3 million times, according to Twitter statistics. The presidential candidate denied the circulating audio and his party dismissed it as a deepfake; a forensic analysis by a Twitter account named Democracy Watchman likewise found the audio to have been manipulated, although a contrary finding by the Foundation for Investigative Journalism suggested otherwise. Whatever the truth, such viral audio can erode people's trust in their preferred candidates and influence electoral outcomes.
What are Deepfakes?
Deepfakes are synthetic media, whether audio, images or video, generated through artificial intelligence and machine-learning algorithms. They are mostly shared on social media, where they achieve wide virality. According to DeepMedia, about 500,000 video and voice deepfakes were shared on social media worldwide in 2023, a figure expected to reach 8 million by 2025, possibly doubling every six months. Studies conducted by iProov show that only about one-third of global consumers know what deepfakes are, which means such misinformation has the potential to spread rapidly across platforms unrecognised.
How deepfakes influence misinformation
Deepfakes achieve wide virality when shared, and their close resemblance to genuine content undermines trust in authentic material. Their misuse poses a serious threat: they can provoke social unrest, manipulate elections or incite violence by spreading falsehoods that blur the line between reality and propaganda. Deepfakes are powerful tools for disseminating misinformation and pose a real challenge to society (Choraś et al., 2021). In one widely reported case, a deepfake video of Ukrainian President Volodymyr Zelensky calling on his soldiers to surrender their weapons was uploaded to a hacked Ukrainian news website.
Researchers have identified several ways in which deepfakes contribute to the spread of misinformation: authenticity and trust, amplification of false narratives, virality and speed, targeted manipulation, and difficulty of detection.
1. Authenticity and Trust (Godulla et al., 2021):
Because deepfakes closely resemble real content, the two are hard to tell apart, which undermines confidence in authentic material. Misinformation gains credibility with people who cannot spot the difference.
2. Amplification of False Narratives (Chesney & Citron, 2019):
Since deepfakes are often designed to sway public opinion, the likelihood of manipulating people's voices or images to portray them as someone they are not is high. Such false narratives play on viewers' opinions and emotions, amplifying misinformation.
3. Virality and Speed (Hobbs, 2020):
Most deepfakes are produced with a high but deceptive quality that attracts attention. Through social media, they can go viral within a short period, accelerating the spread of misinformation; by the time such content is debunked, the damage may already be irreversible.
4. Targeted Manipulation (Hartmann & Giles, 2020):
Some deepfakes are crafted specifically to attack public figures or other influential people. They can damage how audiences perceive those targets and skew the decisions those audiences make. For example, malicious misinformation about an investment can push people into decisions they would not otherwise take.
5. Difficulty in Detection (Gosse & Burkell, 2020):
Perhaps most troubling, detecting and debunking deepfakes, particularly audio, video and images, often requires even more sophisticated artificial intelligence. Their close resemblance to real content makes authenticity difficult to establish, which in turn feeds the misinformation problem.
The Perils of Disinformation
Despite artificial intelligence's many helpful functions, its malicious use can cause tremendous damage, and the aftermath of disinformation illustrates this vividly. In India, a video went viral on WhatsApp (an instant messaging app) showing what appeared to be CCTV footage of children playing cricket in a street; two men on a motorbike suddenly grab one of the children and speed off. The video sparked widespread confusion and panic, fuelling roughly eight weeks of mob violence in which at least nine innocent people were killed. In fact, the clip had been edited from a public education campaign in Pakistan designed to raise awareness of child abduction. The original video began with the staged kidnapping, after which one of the actors got off the motorbike and held up a sign cautioning viewers to look after their children; the viral version cut that scene, leaving only the shockingly realistic footage of a snatched child (BBC News, 2018). This incident is just one example of how disinformation can harm the public.
Mitigating the threat of deepfakes
According to a Deloitte report on safeguarding against the menace of deepfake technology and the battle against digital manipulation (March 2024), the following measures can help mitigate the threat of deepfakes.
1. Emerging technology: Watermarks can embed identifying information in audio, helping to verify genuine recordings and distinguish them from deepfake content.
2. Continuous monitoring: Organisations and individuals should continuously scrutinise audio, video and image content.
3. User awareness: Awareness campaigns on the effects of deepfakes should encourage people to treat suspicious or malicious information with caution.
4. Analysis of stored audio/facial data: Biometric information should be combined with other factors, such as a password or PIN, so that no single factor alone verifies a transaction.
5. Behaviour analysis: Close examination of deepfake content can reveal abnormalities, such as mismatched tone or lip-sync, that make it detectable.
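To make the watermarking idea in item 1 concrete, here is a minimal sketch of embedding and recovering a hidden mark in raw audio samples. This is a toy least-significant-bit scheme for illustration only; the function names are invented for this example, and production watermarking systems use robust, inaudible techniques that survive compression and re-recording.

```python
def embed_watermark(samples, mark):
    """Hide the bytes of `mark` in the least-significant bits of PCM samples."""
    # Flatten the mark into individual bits, least-significant bit first.
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("audio too short to carry the watermark")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the sample's lowest bit
    return out

def extract_watermark(samples, length):
    """Recover `length` bytes of watermark from the sample LSBs."""
    mark = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        mark.append(byte)
    return bytes(mark)

# Stand-in for 16-bit PCM samples from a recording.
audio = [100, -3, 42, 7] * 20
marked = embed_watermark(audio, b"OK")
assert extract_watermark(marked, 2) == b"OK"
```

Flipping only the lowest bit of each sample changes the amplitude imperceptibly, which is why LSB schemes are a common teaching example, even though they are trivially destroyed by lossy re-encoding.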
Conclusion
This article emphasises the importance of awareness campaigns and advanced technology in combating the threats posed by deepfakes. Deepfake content shapes how information is consumed: many people believe what they see online without pausing to verify its authenticity. To prevent deepfakes from contributing to information disorder through misinformation, disinformation and malinformation, the public needs to be informed about their prevalence.
References
Al-Khazraji, S.H., Saleh H.H., Khalil, A.I., & Mishkal, I. A. (2023). Impact of deepfake technology on social media: Detection, misinformation, and societal implications. The Eurasia Proceedings of Science, Technology, Engineering & Mathematics (EPSTEM), 23, 429-441.
BBC News. (2018, June 11). India WhatsApp "child kidnap" rumours claim two more victims. https://www.bbc.co.uk/news/world-asia-india-44435127
Chesney, B., & Citron, D. (2019). Deep fakes: a looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753.
Choraś, M., Demestichas, K., Giełczyk, A., Herrero, Á., Ksieniewicz, P., Remoundou, K., Urda, D., & Woźniak, M. (2021). Advanced machine learning techniques for fake news (online disinformation) detection: A systematic mapping study. Applied Soft Computing, 101.
Godulla, A., Hoffmann, C.P., & Seibert, D. (2021). Dealing with deepfakes–an interdisciplinary examination of the state of research and implications for communication studies. SCM Studies in Communication and Media, 10(1), 72-96.
Goel, S., Anderson, A., Hofman, J., & Watts, D. J. (2015). The structural virality of online diffusion. Management Science, 62(1), 180–196.
Gosse, C., & Burkell, J. (2020). Politics and porn: How news media characterizes problems presented by deepfakes. Critical Studies in Media Communication, 37(5), 497-511.
Hartmann, K., & Giles, K. (2020). The next generation of cyber-enabled information warfare. 12th International Conference on Cyber Conflict (CyCon),1300, 233-250. IEEE.