Can AI Undermine Democracy?

Could businesses use ChatGPT or other AI language models to induce voters to behave in specific ways?

During a May 16, 2023, U.S. Senate hearing on artificial intelligence, Sen. Josh Hawley put this question to Sam Altman, CEO of OpenAI. Altman replied that he was indeed concerned that some people might use language models to influence, persuade and interact one-on-one with voters.

Altman did not elaborate, but he might have had a scenario like this one in mind. Imagine that political technologists eventually develop a tool called Clogger, a political campaign in a black box. Clogger relentlessly pursues just one objective: maximizing the chances that its candidate, the campaign that hires Clogger Inc.’s services, prevails in an election.

While social media sites like Facebook, Twitter and YouTube use AI to keep users on their platforms longer, Clogger’s AI would have a different goal: to change people’s voting behavior.

How Clogger would work

As political scientists and legal scholars who study the intersection of technology and democracy, we believe that tools like Clogger could use automation to dramatically increase the scale, and potentially the effectiveness, of the behavior-manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers now use your browsing and social media history to individually target commercial and political ads, Clogger would pay attention to you, and hundreds of millions of other voters, individually.

It would offer three advances over the current state-of-the-art algorithmic behavior manipulation. First, its language model would generate messages (texts, social media posts and emails, perhaps including images and videos) tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally, and millions of distinct messages for everyone else, over the course of a campaign.
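To see how that templating step might look in practice, here is a minimal Python sketch. It is a hypothetical illustration, not a description of any real campaign system: `VoterProfile`, its fields and `llm_generate` are invented stand-ins, and `llm_generate` is a stub for whatever text-generation API such a tool would actually call.

```python
# Hypothetical sketch: per-voter prompt templating for a language model.
# `llm_generate` is a stub standing in for a real text-generation API.

from dataclasses import dataclass

@dataclass
class VoterProfile:
    name: str
    top_issue: str          # inferred, e.g., from browsing or social media history
    preferred_channel: str  # e.g., "sms", "email", "social"

def llm_generate(prompt: str) -> str:
    """Stub for a language-model call; a real system would query an API here."""
    return f"[generated text for: {prompt[:60]}...]"

def personalized_message(voter: VoterProfile, candidate: str) -> str:
    # Every voter gets a unique prompt, so the model can emit a unique message.
    prompt = (
        f"Write a short {voter.preferred_channel} message urging {voter.name} "
        f"to vote for {candidate}, framed around their interest in {voter.top_issue}."
    )
    return llm_generate(prompt)

# Looping this over a national voter file is what turns a handful of ad
# templates into millions of distinct, individually tailored messages.
print(personalized_message(VoterProfile("Alex", "school funding", "sms"), "Sam Jones"))
```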

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning approach in which the computer takes actions and gets feedback about which work better, learning by trial and error how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have been developed using reinforcement learning.

How reinforcement learning works.
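To make the trial-and-error loop concrete, the following toy Python sketch implements an epsilon-greedy bandit, one of the simplest reinforcement-learning setups, that learns which of three message framings draws the most engagement. The framings and response rates are invented for illustration; a real system would face a vastly larger and noisier action space.

```python
# Toy epsilon-greedy bandit: learn by trial and error which message
# variant gets the best response. All numbers here are invented.

import random

variants = ["issue appeal", "fear appeal", "social proof"]
true_rates = {"issue appeal": 0.05, "fear appeal": 0.08, "social proof": 0.11}  # hidden from the learner

counts = {v: 0 for v in variants}    # how often each variant was tried
values = {v: 0.0 for v in variants}  # running estimate of each variant's success rate
epsilon = 0.1                        # fraction of the time to explore at random

for _ in range(10_000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        choice = random.choice(variants)
    else:
        choice = max(variants, key=values.get)

    # Feedback: did this (simulated) recipient respond?
    reward = 1.0 if random.random() < true_rates[choice] else 0.0

    # Incrementally update the estimated success rate for the chosen variant.
    counts[choice] += 1
    values[choice] += (reward - values[choice]) / counts[choice]

print("learned best variant:", max(variants, key=values.get))
```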

Third, over the course of a campaign, Clogger’s messages could evolve in response to your reactions to its earlier messages and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you, and with millions of other people, over time. Clogger’s messages would be similar to ads that follow you across different websites and social media.

The character of AI

Three more characteristics—or flaws—need to be mentioned.

First, Clogger’s messages may or may not contain political content. The machine’s sole objective is to increase vote share, and it would likely devise strategies for doing so that no human campaigner would have considered.

One possibility is sending likely opposition voters material about their nonpolitical interests, such as sports or entertainment, to bury the political messaging they receive. Another is sending off-putting messages, such as incontinence advertisements, timed to coincide with the opposition’s messaging. And another is manipulating voters’ social media friend groups to give the impression that their social circles support its candidate.

Second, Clogger has no regard for the truth. Indeed, it has no way of knowing what is true or false. Because the machine’s goal is to change your vote, not to provide accurate information, language-model “hallucinations” are not a problem.

Third, because it is a “black box” form of artificial intelligence, people would have no way of knowing what strategies it uses.

The field of explainable AI aims to reveal the inner workings of many machine-learning models.

Clogocracy

If the Republican presidential campaign were to deploy Clogger in 2024, the Democratic campaign would likely feel compelled to respond in kind, perhaps with a similar machine; call it Dogger. If the campaign managers regarded these machines as effective, the presidential contest might well come down to Clogger vs. Dogger, with the client of the more effective machine winning.

Political scientists and pundits would have much to say about which AI prevailed, but it is unlikely that anyone would really know why. The president would have been elected not because his or her policy proposals or political ideas persuaded more voters, but because he or she had the more potent AI. The content that won the day would have come not from candidates or parties, but from an AI focused solely on victory, with no political ideas of its own.

One of two paths would then be open to the AI-elected president. He or she could advocate Republican or Democratic party policies while in office. But because party ideas may have had little to do with why people voted the way they did (Clogger and Dogger don’t care about policy views), the president’s actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

The other path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. On this path, the president would have no particular platform or agenda beyond maintaining power. The president’s actions, guided by Clogger, would be those most likely to manipulate voters rather than serve their genuine interests, or even the president’s own ideology.

Avoiding Clogocracy

It might be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Indeed, political consultants might well see using these tools as required by their professional duty to help their candidates win. And once one candidate uses such a powerful tool, opponents can hardly be expected to resist by unilaterally disarming.

Enhanced privacy protection would help. Clogger would need access to vast amounts of personal data in order to target individuals, craft messages tailored to persuade or manipulate them, and track and retarget them over the course of a campaign. Every bit of that information that companies or policymakers withhold from the machine would make it less effective.

Strong data privacy regulations may be able to prevent AI from being manipulative.

Another solution lies with election commissions. They could try to ban or severely regulate these machines. There is a fierce debate about whether such “replicant” speech, even if it is political in nature, can be regulated. The United States’ extreme free speech tradition leads many leading scholars to believe it cannot.

However, there is no reason to automatically extend the First Amendment’s protection to the output of these machines. The nation might well choose to give machines rights, but that should be a decision grounded in the challenges of today, not the misplaced assumption that James Madison’s views in 1789 were meant to apply to AI.

Regulators in the European Union are moving in this direction. Policymakers revised the European Parliament’s draft of the Artificial Intelligence Act so that AI systems used in political campaigns are designated “high risk” and subject to regulatory scrutiny.

A less drastic but constitutionally sound step, which California and some European internet regulators have already taken, is to prohibit bots from passing themselves off as people. For example, regulation might require that campaign messages come with disclaimers when their content is generated by a machine rather than a human.

This would be similar to the existing advertising disclaimer requirement (“Paid for by the Sam Jones for Congress Committee”) but modified to reflect the AI origin: “This AI-generated ad was paid for by the Sam Jones for Congress Committee.” A stronger version could require: “This AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%.” At the very least, we believe voters deserve to know when a bot is speaking to them, and they should know why, as well.

The possibility of a system like Clogger shows that the path toward collective human disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants wielding powerful new tools that can effectively push millions of people’s many buttons.
