Addressing Generative AI in 2024

The next American presidential election will be overshadowed by generative artificial intelligence (AI), a new and developing concern alongside the familiar debates over immigration and inflation.

For the first time in American history, the general public is worried about AI’s potential to damage an election. Millions of Democrats and Republicans alike fear that AI-driven misinformation may influence the outcome of the 2024 presidential race; over half of American voters share this concern. It is hardly surprising, then, that Senate Majority Leader Chuck Schumer convened an AI Insight Forum earlier this month, as government authorities and industry executives debate federal legislation to regulate AI.

Silicon Valley is avidly pushing for the disclosure of so-called synthetic content, and corporate America has taken notice. Google, for instance, now requires political advertisers to disclose the use of AI in images and audio clips.

We are entering unknown territory as Americans plunge headfirst into the 2024 election season. As became painfully clear this past May, when a phoney image of a Pentagon explosion jolted social media and the financial markets, AI-generated material can have real-world, real-time repercussions. What if Joe Biden or Donald Trump is the victim next time? Or, even worse, the perpetrator?

People have a right to wonder, but the pertinent questions are these: Can common-sense practices or policies accurately sort through a deluge of AI-generated content and determine what is true and what is fake? And who would be in charge of enforcing such rules?

The following scene is easy to picture: Polls show Candidate A leading and cruising toward an easy victory. Just before the votes are counted, Candidate B’s team releases an untrue but vicious piece of AI-generated content. The scandalous material goes viral, and Candidate A’s electability suffers badly. A week or two after the election is called for Candidate B, it is revealed that the entire affair was a hoax and that Candidate A was cheated out of the election.

Perhaps an even more likely scenario is this: Compromising material about Candidate A comes to light, but the accusation is dismissed with the explanation that the candidate was the victim of an AI-generated plot. Candidate A wins the election, and by the time the accusation turns out to have been accurate, everyone in America has moved on to other pressing concerns, like Netflix’s revival of Suits.

Many regulatory fixes will take effect only after a harmful application of AI has already occurred and likely done significant damage. A proactive strategy is riskier, but unquestionably worthwhile.

One response to the AI quandary is the back-burn solution. Back burning, which entails igniting small fires along a man-made or natural firebreak ahead of a larger fire front, is a common method for containing wildfires. In effect, the controlled flames consume the fuel in the path of the advancing blaze, building a barrier in front of the approaching forest fire.

In a similar vein, we can fight the spreading AI fire with more fire. If AI can be taught to create phoney images and audio recordings, it can also be taught to recognize and report AI-generated content. Detection technologies can already distinguish synthetic content from authentic content and make that distinction visible to the audience. Flagged content can then be labeled for careful reading or listening, queued for review, or removed outright (say, by Facebook or X).
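To make the detection step concrete, here is a minimal sketch in Python, assuming an off-the-shelf Hugging Face image classifier fine-tuned to flag synthetic images. The model name, its label set, and the thresholds are illustrative placeholders rather than a real checkpoint or a production policy.

```python
# Minimal triage sketch: score an image for "synthetic-ness" and map
# the score to a platform action. The checkpoint name and label are
# hypothetical; any classifier exposing a synthetic/authentic label
# pair would slot in the same way.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/ai-image-detector",  # placeholder checkpoint
)

def triage(image_path: str,
           review_threshold: float = 0.5,
           remove_threshold: float = 0.9) -> str:
    """Return a moderation action: 'allow', 'review', or 'remove'."""
    scores = {r["label"]: r["score"] for r in detector(image_path)}
    synthetic = scores.get("artificial", 0.0)  # label name is assumed
    if synthetic >= remove_threshold:
        return "remove"   # high confidence it is AI-generated
    if synthetic >= review_threshold:
        return "review"   # uncertain: queue for human review
    return "allow"        # label it and let it circulate

print(triage("suspect_image.jpg"))
```

The same score-then-act pattern would apply to audio clips, with a different classifier behind the call.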

To prevent old hoaxes from spreading again, a central, verified database of “known offenders” could be developed. This kind of remedy would address the immediate problem of reputational harm while also removing the potential for repeat offences. Combining AI detection with a centralized, real-time check against a database of known falsehoods effectively creates a digital firebreak. The fire is contained.
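As a sketch of how that firebreak might work, the example below indexes debunked images by perceptual hash so that near-duplicate reposts are caught even after re-compression or minor crops. The in-memory list, file names, and distance threshold are assumptions standing in for a shared, verified database.

```python
# "Known offenders" firebreak sketch: debunked images are indexed by
# perceptual hash; new uploads are compared by Hamming distance, which
# tolerates re-encoding and small edits.
import imagehash
from PIL import Image

# Stand-in for a central, verified database of debunked hoaxes.
KNOWN_HOAXES: list[tuple[imagehash.ImageHash, str]] = []

def register_hoax(image_path: str, note: str) -> None:
    """Record a debunked image's perceptual hash and a debunking note."""
    KNOWN_HOAXES.append((imagehash.phash(Image.open(image_path)), note))

def matches_known_hoax(image_path: str, max_distance: int = 6) -> str | None:
    """Return the debunking note if the image is a near-duplicate of a
    known hoax; otherwise None."""
    candidate = imagehash.phash(Image.open(image_path))
    for known_hash, note in KNOWN_HOAXES:
        if candidate - known_hash <= max_distance:  # Hamming distance
            return note
    return None

register_hoax("debunked_hoax.jpg", "Debunked: fabricated explosion image")
print(matches_known_hoax("reposted_copy.jpg"))
```

Paired with the detector above, a platform could check uploads against this index in real time and suppress a hoax the second time it surfaces, which is exactly the containment the back-burn analogy promises.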

As the election year approaches, tech companies should give the back-burn option a serious first look. Who, after all, is more familiar with generative AI than generative AI?

In 2024, there will still be more work to do to stop bad actors from misusing AI, and bad actors are not going away anytime soon. Important free speech issues must be weighed so that government regulators and other organizations do not unnecessarily restrict individual liberty. Any regulation also raises moral and ethical questions about the capacity of generative, or even predictive, AI to recognize “truth,” and those questions should be handled carefully. But one thing is certain: this election season, we need to be ready to put a broad stop to AI-generated falsehoods.

In the short term, as answers to the larger questions emerge, the fighting-fire-with-fire tactic looks like an efficient, effective strategy. Americans can get back to their normal political lives, and to more episodes of Suits, while AI quietly battles itself. Who knows: once the writers’ strike is resolved, perhaps Netflix will release another season of stunning court cases and somber-looking file folders, all without using any artificial intelligence, of course.
