AI is being misused online

Hackers and propagandists are using artificial intelligence (AI) to develop malicious software, craft convincing phishing emails, and disseminate false information online, Canada’s top cybersecurity official told Reuters, in an early indication that the technological revolution sweeping Silicon Valley has also been embraced by cybercriminals.

Sami Khoury, head of the Canadian Centre for Cyber Security, said in an interview this week that his agency had seen AI being used in phishing emails, in emails tailored more precisely to their targets, in malicious code, and in misinformation and disinformation.

Khoury did not provide details or supporting evidence, but his assertion that cybercriminals are already using AI adds an urgent note to the chorus of concern over the exploitation of the new technology by bad actors.

In recent months, a number of cyber-watchdog organizations have published reports warning about the risks of AI, particularly the rapidly developing language processing tools known as large language models (LLMs), which draw on enormous volumes of text to craft convincing-sounding dialogue, documents, and other content.

In a report issued in March, the European police agency Europol said models like OpenAI’s ChatGPT had made it possible to impersonate an organization or individual in a highly realistic manner, even with only a basic grasp of English. The same month, Britain’s National Cyber Security Centre warned in a blog post that there was a risk hackers might use LLMs to carry out cyberattacks beyond their current capabilities.

Cybersecurity researchers, having demonstrated a variety of potentially malicious use cases, say they are now beginning to spot suspected AI-generated content in the wild. Last week, a former hacker said he had discovered an LLM trained on malicious material and asked it to draft a convincing attempt to trick someone into sending money.

The LLM responded with a three-paragraph email asking its target for help with an urgent invoice.

“I understand this may be on short notice,” the LLM wrote, “but this payment is very critical and needs to be completed within the next 24 hours.”

The use of AI to write malicious code is still in its early stages, Khoury said, because it takes considerable work to create a good exploit. The concern, he added, is that AI models are evolving so quickly that it is difficult to assess their malicious potential before they are released into the wild.
