According to the UK’s top intelligence agency, threats from malicious cyber activity are expected to rise as nation-states, financially motivated criminals, and amateurs make greater use of artificial intelligence in their operations.
The assessment, from the UK’s Government Communications Headquarters, predicts that ransomware will become a greater danger over the next two years as artificial intelligence (AI) lowers the barrier to entry, producing a surge of new criminal operations. More seasoned threat actors, such as nation-states, the companies that support them, and financially motivated criminal organizations, will likely benefit even more, because AI makes it easier for them to identify vulnerabilities and evade security defenses.
Lindy Cameron, CEO of GCHQ’s National Cyber Security Centre, said the emerging use of AI in cyberattacks is “evolutionary rather than revolutionary,” meaning it enhances existing threats like ransomware but does not transform the risk landscape in the near term. Cameron and other UK intelligence officials said their nation must strengthen its defenses to counter the growing threat.
The assessment, released on Wednesday, focused on AI’s likely impact over the next two years. GCHQ assigned its highest confidence rating, “almost certain,” to the likelihood that AI will escalate the volume and impact of cyberattacks over that period. Other, more specific forecasts also rated almost certain were:
- AI is enhancing social engineering and reconnaissance capabilities, making them more effective and harder to detect.
- Attacks against the UK will become more impactful because threat actors will be able to analyze exfiltrated data faster and more effectively, and use it to train AI models.
- The commoditization of AI capabilities over the next two years will increase the capacity of both state actors and financially motivated parties.
- Ransomware criminals and other threat actors will continue to use AI in 2025 and beyond.
Wednesday’s assessment identified social engineering as the area where AI will have the greatest impact, especially for less-skilled threat actors.
According to intelligence officials, generative AI (GenAI) can already be used to create convincing interactions with victims, including lure documents, without the translation, spelling, and grammar errors that often give phishing away. That use will almost certainly increase over the next two years as models develop and uptake rises.
By 2025, the assessment said, the growth of GenAI and large language models (LLMs) will make it challenging for anyone, regardless of their cybersecurity expertise, to identify phishing, spoofing, or social engineering attempts, or to determine whether an email or password-reset request is legitimate.
Some caveats
Security expert Marcus Hutchins said some sections of the assessment overstate the advantages AI will give to those who engage in hostile cyber activity. Among the exaggerations, he said, is the claim that AI will lower the barrier to entry for novices.
In an interview, he said he believes the best phishing lures will always be those written by humans, and that what AI enables is better scale, not better lures: instead of crafting one flawless lure, an attacker may be able to produce several hundred good ones at once. AI excels at volume, he said, but the models still have significant quality issues.
AI may also improve phishing and other social engineering ploys by analyzing the vast volumes of internal data stolen in prior breaches. By training a large language model on a target’s data, attackers can craft lures that reference specific attributes of that target, such as the suppliers it uses, making the pretext more plausible.
A thread on Mastodon offered a broader sampling of responses to the assessment from security specialists.
The assessment’s “key judgments” were:
- Over the next two years, artificial intelligence (AI) will almost certainly increase the volume and heighten the impact of cyberattacks, though the effect on the cyber threat will be uneven, as the table below illustrates.
- The threat through 2025 comes from the evolution and enhancement of existing tactics, techniques, and procedures (TTPs).
- AI is already being used, to varying degrees, by all types of cyber threat actors, state and non-state, skilled and less skilled.
- Artificial intelligence (AI) boosts the capabilities of social engineering and reconnaissance, making them more likely to be successful, efficient, and difficult to spot.
- More sophisticated applications of AI in cyber operations are likely to be limited to threat actors with access to high-quality training data, extensive skill (in both AI and cyber), and resources. More advanced applications are unlikely to emerge before 2025.
- Because threat actors will be able to analyze exfiltrated material faster and more effectively and use it to train AI models, AI will most likely increase the impact of cyberattacks against the UK.
- AI makes it easier for inexperienced cybercriminals, hackers-for-hire, and hacktivists to conduct effective access and data-collection operations. Over the next two years, this increased access will likely add to the global ransomware problem.
- As AI-enabled capabilities become more widely available in criminal and commercial markets, moving toward 2025 and beyond, it is highly likely that cybercrime groups and state actors will gain access to enhanced capabilities.
The assessment included the following table, which summarizes the advantages, or “uplifts,” AI will provide over the next two years and how they apply to different kinds of threat actors:
| | HIGHLY CAPABLE STATE THREAT ACTORS | CAPABLE STATE ACTORS, COMMERCIAL COMPANIES SELLING TO STATES, ORGANIZED CYBERCRIME GROUPS | LESS-SKILLED HACKERS-FOR-HIRE, OPPORTUNISTIC CYBERCRIMINALS, HACKTIVISTS |
|---|---|---|---|
| Intent | High | High | Opportunistic |
| Capability | Highly skilled in AI and cyber, well-resourced | Skilled in cyber, some resource constraints | Novice cyber skills, limited resources |
| Reconnaissance | Moderate uplift | Moderate uplift | Uplift |
| Social engineering, phishing, passwords | Uplift | Uplift | Significant uplift (from low base) |
| Tools (malware, exploits) | Realistic possibility of uplift | Minimal uplift | Moderate uplift (from low base) |
| Lateral movement | Minimal uplift | Minimal uplift | No uplift |
| Exfiltration | Uplift | Uplift | Uplift |
| Implications | Best placed to harness AI’s potential in advanced cyber operations against networks, for example use in advanced malware generation. | Most capability uplift in reconnaissance, social engineering, and exfiltration. Will proliferate AI-enabled tools to novice cyber actors. | Lower barrier to entry to effective and scalable access operations, increasing the volume of successful compromises of devices and accounts. |
The assessment, titled “The near-term impact of AI on the cyber threat,” came two weeks after NBC News reported that NSA Cybersecurity Director Rob Joyce said the agency expects AI to help threat actors create more convincing phishing documents. Joyce said the National Security Agency had already observed hackers and cybercriminals working for foreign intelligence services using a variety of chatbots to pose as native English speakers.