“All media exist to invest our lives with artificial perceptions and arbitrary values.”
Marshall McLuhan
The word disinformation derives from the Russian dezinformatsiya, the name of a KGB department devoted to propaganda. The world has come a long way since the Stalin era: the USSR split apart, the KGB was dissolved, the Berlin Wall came down, and Fukuyama announced the End of History. Once the Cold War ended, we thought the worst was over. Except propaganda is a perpetual motion machine.
In the age of AI, propagandists have co-opted advanced technologies such as automated bot accounts, deepfake videos, user profiling and microtargeting to spread disinformation. However, the same AI can be used to counteract propaganda.
In this article, we explore the factors to consider when using AI to tackle disinformation.
Is AI Ready?
Perfecting an algorithm requires a lot of training data. At the same time, debunking disinformation is all about timing: as the saying commonly attributed to Mark Twain goes, “A lie can travel halfway around the world while the truth is still putting on its shoes.” Hence, a major challenge in using AI to tackle disinformation is finding relevant data to train it in time. Large labelled training sets are not easy to obtain at short notice.
Even if one overcomes this challenge, the reliability of the data labels cannot be established in such a short window. Besides, an adversary can always find workarounds to fool the model.
Responding quickly to disinformation requires addressing the twin hurdles of limited data and unreliable or intentionally wrong data labels.
Secondly, an AI model always runs the risk of over-blocking, deleting accurate content along with the fake. AI is still maturing and produces both false positives and false negatives, flagging accurate information as fake and letting fake content through. For instance, AI systems struggle to tell sarcasm from straightforward speech.
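To make the over-blocking risk concrete, here is a toy sketch, with hypothetical scores and labels rather than any platform’s real pipeline, of how a moderation threshold trades false positives against false negatives:

import numpy as np

# Hypothetical model scores (probability a post is disinformation) and true labels.
# 1 = disinformation, 0 = accurate content. Purely illustrative numbers.
scores = np.array([0.95, 0.80, 0.65, 0.55, 0.40, 0.30, 0.20, 0.10])
labels = np.array([1,    1,    0,    1,    0,    0,    1,    0])

for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    false_positives = np.sum(flagged & (labels == 0))   # accurate posts blocked
    false_negatives = np.sum(~flagged & (labels == 1))  # disinformation missed
    print(f"threshold={threshold:.1f}  over-blocked={false_positives}  missed={false_negatives}")

Lowering the threshold catches more disinformation but blocks more legitimate content; raising it does the reverse. This is exactly the false positive/negative tension described above.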
Facebook’s Linformer, an efficient Transformer architecture, uses machine learning to parse the nuances of human language at scale. While the company is making progress, Facebook CTO Mike Schroepfer has acknowledged that deploying and maintaining such models is difficult.
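For context on what Linformer changes under the hood, here is a minimal sketch of its core idea: replacing the quadratic self-attention of a standard Transformer with low-rank projections of keys and values along the sequence dimension. This is a single-head, unmasked illustration with random weights, not Facebook’s production code:

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def linformer_attention(X, Wq, Wk, Wv, E, F):
    # X: (n, d) token representations.
    # E, F: (k, n) learned projections that compress the sequence axis,
    # so the attention matrix is (n, k) instead of (n, n).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                      # (n, d) each
    K_proj, V_proj = E @ K, F @ V                         # (k, d): compressed keys/values
    attn = softmax(Q @ K_proj.T / np.sqrt(Q.shape[-1]))   # (n, k) attention weights
    return attn @ V_proj                                  # (n, d) output

# Toy dimensions: sequence length n, model width d, projection size k much smaller than n.
rng = np.random.default_rng(0)
n, d, k = 512, 64, 32
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
E, F = rng.normal(size=(k, n)), rng.normal(size=(k, n))
print(linformer_attention(X, Wq, Wk, Wv, E, F).shape)     # (512, 64)

The low-rank projection keeps the attention cost roughly linear in sequence length rather than quadratic, which is what makes this family of models cheaper to deploy on content at feed scale.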
Thirdly, automated algorithms tend to pick up human biases and personality traits, either from the model’s creators or from flawed, incomplete or unrepresentative training data. Such models could prove disastrous for individuals and communities at large.
Experts also think there are no easy AI fixes for disinformation. Samuel Woolley, assistant professor at the Moody College of Communication at UT Austin, has said it would be ‘unreasonable’ to expect an AI solution that can quickly and unambiguously identify a rapid disinformation attack any time soon.
Are Platforms Willing?
WhatsApp’s end-to-end encryption prevents messages from being analysed for disinformation and keeps moderators from tracing the source of a piece of content.
The government of India has been at loggerheads with WhatsApp over the issue of tracing the originator of messages on the platform. Last month, it even notified stricter guidelines making it mandatory for platforms such as WhatsApp to aid in identifying the origin of an ‘unlawful’ message.