A recording was uploaded to Facebook just two days before Slovakia’s elections. Two voices could be heard on it: Monika Tódová of the daily newspaper Denník N and Michal Šimečka, leader of the liberal Progressive Slovakia party. They appeared to be discussing buying votes from the country’s marginalized Roma minority in order to rig the election.
Šimečka and Denník N immediately denounced the recording as fake. The fact-checking department of the news agency AFP said the audio showed signs of having been manipulated using AI. But the recording was posted during a 48-hour moratorium before polls opened, during which politicians and media organizations are expected to stay silent. Under Slovakia’s election rules, that made the post difficult to widely debunk. And because the post was audio, it exploited a loophole in Meta’s manipulated-media policy, which prohibits only videos that have been altered to make someone say words they never said.
The election was a tight race between two front-runners with opposing visions for Slovakia. On Sunday, it was announced that SMER, which campaigned to end military support for Slovakia’s neighbor Ukraine, had defeated the pro-NATO Progressive Slovakia party.
According to the EU’s digital chief Věra Jourová, Slovakia’s election would serve as a test of how vulnerable European elections are to the “multimillion-euro weapon of mass manipulation” that Moscow uses to meddle in elections. Now that it has taken place, countries around the world will be studying what happened in Slovakia for clues about the challenges they may face. In two weeks, voters go to the polls in neighboring Poland, which a recent EU study found to be especially vulnerable to disinformation. Elections are scheduled next year in the US, the EU, the UK, and India. Fact-checkers in Slovakia who battled false information on social media say their experience shows that AI is already advanced enough to disrupt elections, while they lack the means to fight back.
“We are not as prepared for it as we should be,” says Veronika Hincová Frankovská, project manager at the fact-checking organization Demagog.
Throughout the election, Hincová Frankovská’s team worked long hours, splitting their time between fact-checking claims made in TV debates and monitoring social media platforms. Demagog is a fact-checking partner of Meta, which means it writes fact-check labels for suspected misinformation circulating on platforms such as Facebook.
AI has added a difficult new dimension to their work. Three days before the election, Meta alerted the Demagog team that a recording of Šimečka appearing to propose doubling the price of beer if he won was gaining traction. Šimečka dismissed the recording as fake. But, as Hincová Frankovská notes, fact-checking cannot, of course, rely solely on what politicians say.
Proving the audio had been manipulated was difficult. Hincová Frankovská and her colleagues had heard about AI-generated posts, but they had never had to fact-check one. They traced the recording’s origins and found it was first posted on an anonymous Instagram account. They began calling experts, asking whether they thought the recording was likely faked or altered. Eventually, they ran it through an AI voice classifier developed by the American company ElevenLabs.
Before long, they were ready to state their suspicion that the recording had been manipulated. Readers who come across the post on the Slovak-language version of Facebook can still see their label, which reads: “Independent fact-checkers say that the photo or image has been edited in a way that could mislead people.” Users can then decide whether they still want to view the post.
Both the beer recording and the vote-rigging recording remain on Facebook with fact-check labels attached. According to Meta spokesperson Ben Walter, when content is fact-checked, the company labels it and down-ranks it in the feed so that fewer people see it, which is what happened in both of these cases. Whether a piece of content was created by a person or by AI, it must comply with the Community Standards, and Meta takes action against content that violates them.
This was one of the first major elections to take place since the EU’s Digital Services Act came into force in August. The law, designed to better protect human rights online, introduced new rules meant to force platforms to be more proactive and transparent in their efforts against misinformation.
Richard Kuchta, an analyst at Reset, a research group focused on technology’s impact on democracy, says Slovakia served as a test case for determining what works and where changes are needed. In his view, the new rules increased pressure on platforms to expand their fact-checking and content-moderation capacity. Meta reportedly added extra fact-checkers for the Slovak election, but it remains to be seen whether that was enough.
In addition to the two deepfake audio recordings, Kuchta saw the far-right Republika party post two more videos on social media featuring AI voice impersonations. One imitated Michal Šimečka, the other President Zuzana Čaputová. Those videos carried a disclaimer stating that the voices were fake and that any resemblance to real people was coincidental, but according to Kuchta, it did not appear until 15 seconds into the 20-second video, which he suspected was an attempt to mislead viewers.
The Slovak election was watched closely in Poland. “AI-generated disinformation is something we are obviously very afraid of, because it’s very hard to react to it fast,” says Jakub Śliż, president of the Polish fact-checking organization Pravda Association. Śliż says he is especially worried by the Slovak pattern of disinformation packaged as audio recordings rather than videos or images, because voice cloning is so hard to detect.
Like Hincová Frankovská in Slovakia, Śliż lacks tools he can confidently rely on to determine whether something has been artificially generated or manipulated. The tools that do exist, he says, give you a probability at best. And they suffer from a black-box problem: he does not know how they decide whether a post is likely to be fake. If a tool magically tells him a clip is 87 percent likely to be AI-generated, he asks, how is he supposed to communicate that to his audience?
In Poland, Śliż says, not much AI-generated content has been circulating so far. But some people are invoking the mere possibility of AI generation to discredit legitimate sources. In two weeks, Polish voters will decide whether the conservative Law and Justice party should get an unprecedented third term in office. Over the past weekend, a huge crowd gathered in Warsaw in support of the opposition. According to the opposition-run city council, turnout peaked at 1 million people. But users on X, formerly known as Twitter, claimed that videos of the march had been enhanced with AI to make the crowd appear larger.
Śliż says it is easy to cross-reference multiple sources to verify that kind of content. It would be far harder if, as happened in Slovakia, AI-generated audio recordings started spreading in Poland in the final hours before the vote. As a fact-checking organization, he says, they have no clear plan for how to handle that. So if something similar happens, it is going to hurt.