The Rise of AI Fake News

Fake news is becoming more prevalent on the internet as artificial intelligence automates the creation of material that looks like genuine reporting but spreads misleading information about elections, conflicts, and natural disasters.

NewsGuard, a disinformation tracking organization, reports that since May, the number of websites hosting false articles generated by artificial intelligence (AI) has surged by over 1,000%, from 49 to over 600.

In the past, propaganda campaigns built authentic-looking websites by enlisting legions of low-wage labourers or highly organised intelligence services. AI now makes it simple for almost anyone, whether a teenager in their basement or a member of a spy agency, to create these outlets, producing content that is at times difficult to distinguish from real news.

One AI-generated article told a made-up story about Benjamin Netanyahu’s psychiatrist, claiming he had died and left a note implying the Israeli prime minister was involved. Although the claim appears to be untrue, it was picked up by an Iranian TV programme and later repeated by Arabic-, English-, and Indonesian-language media outlets, and users on TikTok, Reddit, and Instagram shared it widely.

Political candidates, military leaders, and humanitarian efforts may suffer as the growing churn of divisive and false content makes it harder to figure out what is true. Misinformation specialists say the rapid expansion of these websites is especially concerning in the lead-up to the 2024 elections.

Some of these websites are producing hundreds or even thousands of articles a day, according to Jack Brewster, the NewsGuard researcher who carried out the investigation, which is why he calls them the next big disinformation superspreader.

An era in which chatbots, image makers, and voice cloners can create content that appears human-made has been brought about by generative artificial intelligence.

Pro-Beijing bot networks are amplifying pro-Chinese propaganda delivered by well-dressed AI-generated news anchors. Days before voters went to the polls in Slovakia, candidates discovered that their voices had been cloned to say divisive things they had never said. And a growing number of websites with names like iBusiness Day or Ireland Top News pose as legitimate outlets, distributing fake news presented as real in dozens of languages, including Arabic and Thai.

The websites can easily trick readers.

The article on Netanyahu’s alleged psychiatrist was published on Global Village Space, which is overflowing with articles on a wide range of significant subjects, including the United States’ sanctions against Russian arms suppliers, the oil giant Saudi Aramco’s investments in Pakistan, and the country’s deteriorating ties with China.

However, Brewster noted that mixed in among these commonplace stories are AI-generated pieces, like the one about Netanyahu’s psychiatrist, which was later relabelled as “satire”. The story appears to have been inspired by a satirical article about the death of an Israeli psychiatrist published in June 2010.

Displaying real and AI-generated news side by side makes the misleading articles more credible. Some readers simply are not media literate enough to recognise that the content is false, said Jeffrey Blevins, a journalism professor and misinformation specialist at the University of Cincinnati. “It is misleading.”

According to media and AI experts, websites like Global Village Space could proliferate during the 2024 election and become an effective means of disseminating false information.

According to Brewster, the websites function in two ways. Some stories are produced manually: users prompt chatbots for articles that support a particular political viewpoint and then post the results to a website. The process can also be fully automated: web scrapers find articles containing specific keywords, and those articles are fed into a large language model that rewrites them to sound original and avoid accusations of plagiarism. The result is then published automatically on the internet.

NewsGuard says it identifies websites built with artificial intelligence by looking for error messages or other telltale language indicating that the content was generated by AI tools and published without proper editing.
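That description suggests a simple heuristic: unedited AI output sometimes leaves behind refusal or boilerplate phrases. Below is a minimal Python sketch of that idea; the phrase list and the URL are illustrative assumptions, not NewsGuard’s actual methodology or criteria.

```python
import re
import urllib.request

# Phrases that unedited AI output sometimes leaves behind.
# Hypothetical examples for this sketch, not NewsGuard's actual criteria.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but i cannot",
    "my knowledge cutoff",
    "as of my last update",
]

def flag_page(url: str) -> list[str]:
    """Fetch a page and return any telltale AI phrases found in its text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    # Crude tag stripping; a real pipeline would use an HTML parser.
    text = re.sub(r"<[^>]+>", " ", html).lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in text]

if __name__ == "__main__":
    # example.com/article is a placeholder URL, not a real outlet.
    hits = flag_page("https://example.com/article")
    if hits:
        print("Possible unedited AI output:", hits)
    else:
        print("No telltale phrases found.")
```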

The reasons behind these websites differ. Some are meant to sow chaos or shift political opinion; others churn out divisive content simply to attract clicks and earn advertising money, Brewster said. But the capacity to massively amplify fraudulent content, he added, poses a serious security risk.

Misinformation has always been fueled by technology. Before the 2020 election, professional propaganda-promoting groups from Eastern Europe known as “troll farms” amassed a sizable following on Facebook by posting offensive material on Christian and Black group pages, reaching 140 million monthly users.

So-called pink-slime journalism websites, named after the meat byproduct, frequently appear in small towns where local news outlets have vanished, producing articles that serve the interests of the financiers funding the venture, according to the media watchdog Poynter.

Those methods require more resources than artificial intelligence does, however, Blevins said. The danger lies in AI’s scope and scale, he added, especially when combined with increasingly complex algorithms; it amounts to an information war on a scale not witnessed before.

Whether intelligence agencies are using AI-generated news to support foreign influence operations is unclear, but it is a serious worry. Brewster said he would not be surprised in the slightest if the technique was used, certainly next year for the elections. It is difficult, he said, not to imagine a politician setting up one of these websites to spread false information about an opponent and puff pieces about themselves.

According to Blevins, readers should watch for “red flags” in articles, like “really odd grammar” or errors in sentence construction. Still, the most powerful tool is raising the media literacy of average readers.

Make people aware that these kinds of websites exist and the damage they can do, he said, and remind them that not every source is reliable: a website claiming to be a news outlet may have no journalists creating its content.

“Regulation is essentially nonexistent,” he continued. Governments may find it difficult to impose strict rules on fake news content out of concern for violating free speech rights, leaving the task to social media companies, which have not performed well enough so far.

There are simply too many of these sites to take down quickly. Blevins compared it to playing whack-a-mole.

“You find one [website], you close it down, and another one is created somewhere else,” he said. “You’ll never catch it in its entirety.”
