Using AI To Fight Propaganda

“All media exist to invest our lives with artificial perceptions and arbitrary values.”

Marshall McLuhan

The word disinformation derives from the Russian dezinformatsia, the name of a KGB division devoted to propaganda. The world has come a long way from the Stalin days: the USSR dissolved, the KGB was disbanded, the Berlin Wall came down, and Fukuyama announced the End of History. Once the Cold War ended, we thought the worst was over. Except propaganda is a perpetual motion machine.

In the age of AI, propagandists have co-opted advanced technologies such as bot networks, deepfake videos, user profiling and microtargeting to spread disinformation. However, the same AI techniques can be used to counteract propaganda.

In this article, we explore the factors to consider when using AI to tackle disinformation.

Is AI Ready?

Perfecting an algorithm requires a lot of training data. At the same time, debunking disinformation is all about timing. The saying commonly attributed to Mark Twain rings true here: “A lie can travel halfway around the world while the truth is still putting on its shoes.” Hence, a major challenge in using AI to tackle disinformation is finding relevant data to train it on quickly; large labelled training sets are not easy to obtain.

Even if one overcomes this challenge, the reliability of the data labels cannot be established in such a short period. Besides, an adversary can always find workarounds to fool the model.

Responding quickly to disinformation requires addressing the twin hurdles of limited data and unreliable or intentionally wrong data labels.
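To make the label problem concrete, here is a minimal sketch, using synthetic data rather than real posts, of how flipping a fraction of training labels degrades a simple detector:

```python
# A minimal sketch (not any platform's actual pipeline) of how label noise
# degrades a detector. Synthetic features stand in for real post features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic "posts": 2,000 samples, 20 features, binary fake/real labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise in (0.0, 0.2, 0.4):  # fraction of training labels flipped
    rng = np.random.default_rng(0)
    flip = rng.random(len(y_train)) < noise
    y_noisy = np.where(flip, 1 - y_train, y_train)  # corrupt some labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"label noise {noise:.0%} -> test accuracy {acc:.2f}")
```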

Secondly, an AI model always runs the risk of over-blocking, resulting in the deletion of accurate content. Detection models still generate false positives (flagging accurate information as fake) and false negatives (letting fabricated content through). For instance, AI systems aren’t good at telling sarcasm from ordinary speech.
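A toy simulation makes the trade-off visible. The classifier scores below are simulated rather than taken from any real model, but they show how moving the decision threshold trades over-blocking against missed fakes:

```python
# A toy sketch of the over-blocking trade-off: a lower decision threshold
# catches more fakes (fewer false negatives) but removes more legitimate
# content (more false positives). Scores are simulated, not real model output.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=10000)          # 1 = fake, 0 = legitimate
# Simulated classifier scores: fakes tend to score higher, with overlap.
scores = np.clip(rng.normal(loc=0.35 + 0.3 * y_true, scale=0.2), 0, 1)

for threshold in (0.3, 0.5, 0.7):
    flagged = scores >= threshold
    false_pos = np.mean(flagged & (y_true == 0))   # legit content blocked
    false_neg = np.mean(~flagged & (y_true == 1))  # fakes slipping through
    print(f"threshold {threshold}: over-blocked {false_pos:.1%}, "
          f"missed {false_neg:.1%}")
```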

Facebook uses Linformer, an efficient transformer architecture, to understand the nuances of human language at scale. While the company is making progress, Facebook CTO Mike Schroepfer has said deploying and maintaining such models is difficult.
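Linformer itself is an attention architecture rather than an off-the-shelf moderation tool, but the general technique, scoring text with a fine-tuned transformer classifier, can be sketched with Hugging Face’s pipeline API. The checkpoint below is a generic sentiment model standing in for a purpose-trained disinformation classifier:

```python
# A minimal sketch of transformer-based text classification, the general
# technique behind production models like Linformer. The checkpoint below is
# a generic sentiment model used as a stand-in; a real deployment would
# fine-tune a classifier on labelled disinformation data.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = [
    "Scientists confirm the new vaccine passed all safety trials.",
    "SHOCKING: they don't want you to know this one secret!!!",
]
for post in posts:
    result = classifier(post)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```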

Thirdly, automated algorithms tend to pick up human biases, whether from the model’s creators or from flawed, incomplete or unrepresentative training data. Such models could prove disastrous to individuals and communities at large.

Experts also think there are no easy AI fixes for disinformation. Samuel Woolley, assistant professor at the Moody College of Communication at UT Austin, has said it would be ‘unreasonable’ to expect an AI solution any time soon that can quickly and unambiguously identify a disinformation attack.

Are Platforms Willing?

WhatsApp’s end-to-end encryption prevents messages from being analysed for disinformation and stops moderators from tracing the source of a message.
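WhatsApp actually uses the Signal protocol, which is far more elaborate than what follows; the toy sketch below uses a single shared symmetric key purely to show why a relaying server cannot scan what it carries:

```python
# A toy illustration of why end-to-end encryption blocks server-side analysis.
# WhatsApp really uses the Signal protocol (key agreement plus a double
# ratchet); this sketch uses one shared Fernet key purely to show that the
# relaying server only ever handles ciphertext.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # known only to sender and recipient
sender = Fernet(shared_key)
recipient = Fernet(shared_key)

ciphertext = sender.encrypt(b"Forwarded rumour goes here")

# The platform relays this opaque blob; no moderation model can read it.
print("server sees:", ciphertext[:40], b"...")

# Only an endpoint holding the key can recover the plaintext.
print("recipient reads:", recipient.decrypt(ciphertext))
```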

The government of India has been at loggerheads with WhatsApp over tracing the originator of messages on the platform. Last month, it even notified stricter guidelines making it mandatory for platforms such as WhatsApp to help identify the origin of an ‘unlawful’ message.

Secondly, all major social media platforms have already rolled out AI-enabled tools to detect disinformation. For instance, Facebook has been using SimSearchNet++, an image matching model, to spot near-duplicate variants of images already flagged as misinformation.
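SimSearchNet++ relies on learned image embeddings; the sketch below substitutes a simple perceptual hash, with placeholder file paths, to illustrate the same near-duplicate matching idea:

```python
# A simplified stand-in for image matching: SimSearchNet++ uses learned
# embeddings, while this sketch uses a perceptual hash, but the idea is the
# same; near-duplicate images land close together. File paths are placeholders.
from PIL import Image
import imagehash

flagged_hash = imagehash.phash(Image.open("flagged_misinfo.jpg"))
candidate_hash = imagehash.phash(Image.open("new_upload.jpg"))

# Hamming distance between hashes; small values mean a likely variant
# (re-crop, re-compression, added text) of the flagged image.
distance = flagged_hash - candidate_hash
if distance <= 8:  # illustrative threshold
    print(f"Possible match (distance {distance}): route to fact-checkers")
else:
    print(f"No match (distance {distance})")
```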

Security agencies, such as the UK’s intelligence agency GCHQ, are not allowed to run their own detection tools on Facebook, and the same goes for most governments. But what if the government itself is the propagandist-in-chief? Then it’s a whole different can of worms.

Hence, social media platforms, governments, users and other stakeholders should be involved at a policy level to devise strategies — with the help of technology — to fight disinformation.

Accountability

The most critical question in tackling disinformation on social media is how to define it. Governments have abused their authority to curb freedom of speech under the guise of tackling disinformation.

In India, a 21-year-old activist, Disha Ravi, was recently arrested for ‘spreading a toolkit’. The Indian government has also directed Twitter to block multiple accounts. Later, Twitter said in a blog post: “We do not believe that the actions we have been directed to take are consistent with Indian law, and, in keeping with our principles of defending protected speech and freedom of expression, we have not taken any action on accounts that consist of news media entities, journalists, activists, and politicians. To do so, we believe, would violate their fundamental right to free expression under Indian law.”

Hence, accountability is a huge problem when using AI to curb disinformation.

Wrapping Up

Human fact-checking is time-consuming and cannot keep pace with the vast amount of fake content generated daily. We certainly need AI to tackle disinformation. But before deploying any AI for such housekeeping, it is essential to put robust frameworks in place. Further, AI also needs to overcome its technical limitations to become more effective against the disinformation juggernaut.
