The anticipated AI revolution has begun. The world of technology has undergone a significant change as a result of OpenAI’s ChatGPT breaking the record for the fastest-growing user base and the wave of generative AI spreading to other platforms.
Additionally, it is significantly altering the threat landscape, and some of these concerns are already becoming a reality.
AI is being used by attackers to enhance fraud and phishing. The release of Meta’s language model with 65 billion parameters will likely result in new and enhanced phishing attempts. Daily prompt injection attacks are what we witness.
Users frequently input critical corporate information into AI/ML-based applications, leaving security teams scurrying to support and regulate their use. As an illustration, Samsung engineers included confidential information in ChatGPT while trying to debug it. According to a Fishbowl survey, 68% of people who use ChatGPT for business purposes don’t let their managers know.
Consumers, corporations, and even the government are becoming more and more concerned about the misuse of AI. In addition to upcoming public evaluations and regulations, the White House announced significant investments in AI research. The AI revolution is advancing quickly and has given rise to four main categories of problems.
Attacker-defender dynamic asymmetry
Attackers will probably develop AI more quickly than defenders, giving them a definite advantage. They will be able to launch sophisticated attacks with low-cost, high-scale AI/ML support.
The initial uses of synthetic text, voice, and images will be in social engineering attacks. Many of these attacks that take some manual labor, such as phishing attempts that pose as IRS or real estate agents and ask victims to deposit money, will eventually become automated.
Using these tools, attackers will be able to produce more advanced malicious programmes and launch new, more successful attacks on a large scale. They’ll be able to quickly produce polymorphic code, for instance, so that malware can avoid being discovered by signature-based security measures.
Geoffrey Hinton, one of the pioneers of AI, recently made headlines when he told  that he regrets the technology he helped create because it is hard to see how you can prevent the bad actors from using it for bad things.
AI and security: further loss of public confidence
Social media has demonstrated how quickly false information can spread. According to a University of Chicago Pearson Institute/AP-NORC poll, 91% of Americans from all political backgrounds believe there is a problem with disinformation, and nearly half are concerned they may have shared it. If a machine is used, social trust can be lost more quickly and cheaply.
As a result of their intrinsic knowledge limitations, the existing large language model (LLM)-based AI/ML systems make up answers when they don’t know how to respond. This is known as “hallucinating,” an unforeseen side effect of this developing technology. Accuracy issues are very problematic while looking for reliable solutions.
Human trust will be violated, and there will be dramatic errors made with dramatic results. For example, an Australian mayor claims he may file a defamation lawsuit against OpenAI because ChatGPT incorrectly said he was imprisoned for bribery when, in fact, he was the case’s whistleblower.
New attacks
We’ll witness a fresh wave of attacks on AI/ML systems over the coming ten years.
Attackers will sway the classifiers systems utilize to skew models and regulate outputs. They’ll produce malicious models that seem exactly like the real models, and depending on how they’re used, they could actually do serious damage. Additionally, attacks involving prompt injection will rise in frequency. The model revealed its internal rules after being persuaded by a Stanford University student the day after Microsoft unveiled Bing Chat.
Attackers will start a data-mining arms race with adversarial tools that manipulate AI systems in different ways, taint the data they utilize, or steal private information from the model.
Attackers may be able to infiltrate apps at scale by taking advantage of inherent vulnerabilities that these systems unintentionally introduced as more of our software code is produced by AI systems.
Externalities of scale
We may not yet be able to forecast the externalities that will result from the costs of developing and maintaining large-scale models since they will lead to monopolies and entry obstacles.
Citizens and customers will ultimately suffer as a result of this. False information will proliferate, and consumers who lack the means to defend themselves will be the target of widespread social engineering attacks.
Although the federal government’s announcement that governance is coming is a fantastic place to start, there is still more to be done to stay ahead of the AI race.
AI and security: What comes next
An open letter published by the nonprofit Future of Life Institute urged a halt to AI development. It received a lot of press coverage, with people like Elon Musk joining the chorus of worried parties, but pressing the pause button isn’t an option. Even Musk is aware of this, as he appears to have adjusted his strategy and launched his own AI business to rival others.
To claim that innovation should be muzzled has always been deceptive. Attackers are not likely to comply with that request. To ensure that AI is applied properly and ethically, we need to innovate more and take more action.
The bright side is that this opens the door for cutting-edge, AI-based security strategies. Threat hunting and behavioral analytics will improve, but these advances will take time and money to implement. There is always a paradigm shift brought about by new technology, and things always get worse before they get better. We’ve had a taste of the dystopian potential that arises when AI is misused by the wrong people, but we must take action right away to give security experts time to plan and respond to major problems.
We are currently utterly unprepared for the future of AI.