Artificial intelligence (AI)-enabled cyberattacks, which have been relatively limited so far, may become more aggressive in the years to come, according to a recent cyber threat analysis report.
The Finnish Transport and Communications Agency, the Finnish National Emergency Supply Agency, and the Helsinki-based cybersecurity and privacy firm WithSecure worked together on the report.
Although AI-generated content has been used in social engineering schemes, WithSecure intelligence researcher Andy Patel said that AI techniques for managing campaigns, carrying out attack steps, or controlling malware logic have yet to be observed in the wild.
Such techniques, Patel said, “will be developed first by well-funded, highly skilled adversaries, such as nation-state groups.”
The study examined current trends and developments in artificial intelligence, cyberattacks, and the areas where the two overlap, concluding that the early adoption and evolution of preventative measures are essential to fending off these threats.
After being developed by sophisticated adversaries, Patel said, new AI techniques will likely trickle down to less-skilled attackers and become more prevalent across the threat landscape.
The risk over the next five years
According to the authors, AI-enabled attacks are currently very rare and are used primarily for social engineering purposes, though they may also be used in ways that analysts and researchers cannot directly observe.
Most AI systems today fall far short of human intelligence and cannot plan or execute cyberattacks on their own.
Within the next five years, though, attackers will likely develop AI that can autonomously spot vulnerabilities, plan and execute attack campaigns, evade defenses through stealthy tactics, and gather information from compromised systems or open-source intelligence.
The report stated that because AI-enabled attacks apply intelligent automation to tasks that are typically performed manually, they can run faster, target more victims, and find more attack vectors than conventional attacks.
According to the report, new approaches are needed to counter AI-enabled attacks that use fabricated data, spoof biometric authentication systems, and exploit other emerging capabilities.
Deepfakes powered by AI
The report noted that AI-powered attacks will undoubtedly excel at impersonation, a tactic frequently used in phishing and vishing (voice phishing) attacks.
The report’s authors described deepfake-based impersonation as an example of the new capabilities AI brings to social engineering attacks, and they predicted that AI-enabled impersonation will continue to advance.
No previous technology could deceive victims by accurately mimicking a target’s voice, gestures, and appearance.
Many technology experts consider deepfakes to be among the most serious cybersecurity threats.
Deepfake attacks have a good chance of succeeding because biometric authentication has become pervasive, securing everything from phone locks to bank accounts and passports. Security systems that rely primarily on biometrics look increasingly vulnerable given how quickly deepfake technology is advancing.
A data breach study by the Identity Theft Resource Center (ITRC) recorded 1,291 breaches through September 2021.
This figure represents a 17 percent increase over the 1,108 data breaches that occurred in 2020.
ITRC research also found 281 million victims of data compromise in the first nine months of 2021, a significant increase.