
Deepfakes from the Gaza war highlight AI’s power to mislead

Amid the photos of Gaza’s destroyed houses and streets, some images stood out for their unimaginable horror: bloodied, abandoned infants.

These photos, which have been viewed millions of times on the internet since the start of the war, are deepfakes made with AI. The telltale signs of digital deception are visible if you look closely: fingers that curl strangely, eyes that shimmer with an unnatural light.

But the fury the pictures were meant to arouse is all too real.

Images from the Israel-Hamas conflict have painfully and graphically demonstrated how AI can be used as a propaganda tool to produce realistic pictures of bloodshed. Digitally altered images have circulated on social media since the start of the war, used to fabricate reports of casualties or to deceive people about atrocities that never occurred.

Even though the majority of the false claims about the war circulating online were produced by traditional means rather than artificial intelligence, the technology is being deployed more frequently and with little regulation. The war has shown how easily AI can be turned into yet another weapon and has offered a preview of what to expect in upcoming wars, elections, and other significant events.

According to Jean-Claude Goldenstein, CEO of CREOpoint, a tech company with offices in Paris and San Francisco that uses AI to assess the veracity of online claims, things will get worse before they get better. The company has compiled a database of the most widely circulated deepfakes from Gaza. Generative AI, he said, will drive images, audio, and video to advance in ways not seen before.

In some cases, images from past wars or natural disasters have been altered and passed off as new. In others, images have been created from scratch with generative AI programmes. One such image, of a baby sobbing amid bombing debris, went viral in the early days of the conflict.

Other AI-generated fakes include videos purportedly showing Israeli missile strikes, tanks rolling through destroyed neighbourhoods, and families searching through rubble for survivors.

In many cases, the fakes appear designed to provoke strong emotions by featuring the bodies of babies, children, or families. During the brutal opening days of the conflict, supporters of both Israel and Hamas seized on deepfake images of crying infants as photographic “evidence” to bolster their claims.

According to Imran Ahmed, CEO of the Centre for Countering Digital Hate, a nonprofit that has tracked misinformation from the war, propagandists adeptly exploit people’s baser instincts and fears when creating such images. The emotional impact on the viewer is the same whether the baby is a deepfake or an actual image of a child from a different conflict.

The more shocking an image is, the more likely people are to remember and share it, inadvertently spreading the misinformation further.

People are being told, “look at this picture of a baby,” Ahmed said. The misinformation aims to compel you to engage with it.

Similarly deceptive AI-generated content began to circulate after Russia invaded Ukraine in 2022. One manipulated video appeared to show Ukrainian President Volodymyr Zelenskyy telling his people to surrender. Such claims have continued to circulate as recently as last week, even after being debunked, showing how persistent misinformation can be.

Disinformation peddlers exploit every new conflict or election season to showcase the newest developments in artificial intelligence. Since several nations, including the United States, Taiwan, Indonesia, Pakistan, India, and Mexico, will be holding significant elections next year, many political scientists and experts in artificial intelligence are cautioning about the potential risks.

Lawmakers of both parties in Washington, DC, are concerned that artificial intelligence and social media could be used to spread false information to American citizens. At a recent hearing on the risks of deepfake technology, U.S. Representative Gerry Connolly, a Virginia Democrat, said the United States must invest in developing AI tools designed to counter other AI.

It is an issue, he said, that the country needs to address.

Many start-up tech companies worldwide are developing new software that can detect deepfakes, add watermarks to photos to show where they came from, or scan text for any false statements that might have been added by artificial intelligence.
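
As a rough illustration of the watermarking idea described above, the sketch below uses Python’s Pillow library to attach a simple provenance note to a PNG image’s metadata and read it back. It is a toy example with placeholder file names, not any particular company’s product; real provenance schemes such as C2PA rely on cryptographically signed manifests rather than a plain text field that anyone could strip or forge.

```python
# Minimal sketch: attach and read a provenance note in PNG metadata.
# Illustrative only; production systems cryptographically sign their manifests.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_provenance(src_path: str, dst_path: str, origin: str) -> None:
    """Copy an image, adding an 'origin' text chunk to the output PNG."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("origin", origin)  # e.g. camera, newsroom, capture date
    image.save(dst_path, pnginfo=metadata)


def read_provenance(path: str):
    """Return the 'origin' note if present, else None."""
    return Image.open(path).text.get("origin")


if __name__ == "__main__":
    # "photo.png" is a placeholder path for this sketch.
    tag_provenance("photo.png", "photo_tagged.png", "example origin: staff photographer")
    print(read_provenance("photo_tagged.png"))
```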

The next big thing in AI will be validating publicly available content: how to identify false information and assess the reliability of text, according to Maria Amelie, co-founder of Factiverse, a Norwegian startup that has developed an AI programme capable of sifting through content to find bias or inaccuracies introduced by other AI programmes.
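
To make the general idea concrete, here is a minimal, hypothetical sketch of one common building block for this kind of tool: scoring a claim against a reference passage with an off-the-shelf natural-language-inference model. This is not Factiverse’s actual system; it assumes the publicly available "facebook/bart-large-mnli" model and the transformers and torch libraries.

```python
# Minimal sketch: does a reference passage support or contradict a claim?
# Uses a public NLI model; illustrative only, not a production fact-checker.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "facebook/bart-large-mnli"  # widely used public NLI model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)


def check_claim(evidence: str, claim: str) -> dict:
    """Return probabilities for contradiction / neutral / entailment."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze().tolist()
    labels = ["contradiction", "neutral", "entailment"]  # this model's label order
    return dict(zip(labels, probs))


if __name__ == "__main__":
    evidence = "The photograph was first published in 2018, years before the current war."
    claim = "The photograph shows an event from the current war."
    print(check_claim(evidence, claim))
```

A high contradiction score would flag the claim for human review; a real tool of the kind described above would also need to retrieve trustworthy evidence, which this sketch omits.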

Such programmes would be of immediate interest to teachers, journalists, financial analysts, and anyone else looking to expose lies, plagiarism, or fraud. Similar programmes are being developed to detect altered images and videos.

While this technology is promising, David Doermann, a computer scientist who oversaw a project at the Defence Advanced Research Projects Agency to address the threats AI-manipulated images posed to national security, says those who use AI to lie are frequently one step ahead.

Doermann, now a professor at the University at Buffalo, said that effectively addressing the political and social challenges posed by AI disinformation will require better technology as well as stronger regulations, voluntary industry standards, and significant funding for digital literacy initiatives that teach internet users how to distinguish fact from fiction.

Every time we release a tool that finds that trace evidence, our enemies can use AI to hide it, according to Doermann. The days of detecting this material and trying to take it down are over; a much more comprehensive solution is required.
