Using AI in cybersecurity

Artificial intelligence is fast becoming the saving grace when it comes to cybersecurity. In a recent Statista post, Reliance on AI in response to cyber attacks 2019, by country, Shanhong Liu reported: “As of 2019, around 83% of respondents based in the United States believed their organization would not be able to respond to cyberattacks without AI.”

That startling statistic captured the attention of Martin Banks, who, in his Robotics Tomorrow article, What Security Privileges Should We Give to AI?, asked the following questions:

  • Are there limits to what we should allow AI to control?
  • What security privileges should we entrust to AI?

How much cybersecurity is safe to automate?

Banks said AI is an excellent tool for:

  • Authenticating and authorizing users
  • Detecting threats and potential attack vectors (see the sketch after this list)
  • Taking immediate actions against cyber events
  • Learning new threat profiles and vectors through natural language processing
  • Securing conditional access points
  • Identifying viruses, malware, ransomware and malicious code
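
To make the threat-detection bullet concrete, here is a minimal rule-based sketch in Python. It is not Banks’ method or any vendor’s product; the event fields, the FAILED_LOGIN_LIMIT and TRUSTED_COUNTRIES values, and the scoring weights are illustrative assumptions, and a real AI-driven system would learn such signals from data rather than hard-code them.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- a real deployment would tune (or learn) these from its own data.
FAILED_LOGIN_LIMIT = 5
TRUSTED_COUNTRIES = {"US", "CA"}


@dataclass
class LoginEvent:
    user: str
    source_country: str
    failed_attempts: int
    used_known_device: bool


def threat_score(event: LoginEvent) -> int:
    """Assign a crude risk score to a login event; higher means more suspicious."""
    score = 0
    if event.failed_attempts >= FAILED_LOGIN_LIMIT:
        score += 2  # brute-force pattern
    if event.source_country not in TRUSTED_COUNTRIES:
        score += 1  # unexpected geography
    if not event.used_known_device:
        score += 1  # unrecognized device
    return score


def detect_threats(events: list[LoginEvent], alert_threshold: int = 2) -> list[LoginEvent]:
    """Return only the events that cross the alert threshold."""
    return [e for e in events if threat_score(e) >= alert_threshold]


if __name__ == "__main__":
    events = [
        LoginEvent("alice", "US", failed_attempts=0, used_known_device=True),
        LoginEvent("bob", "RU", failed_attempts=7, used_known_device=False),
    ]
    for suspicious in detect_threats(events):
        print(f"ALERT: suspicious login activity for {suspicious.user}")
```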

The takeaway is that AI can be a potent cybersecurity tool. AI technology has no equal when it comes to real-time monitoring, threat detection and immediate action. “AI security solutions can react faster and with more accuracy than any human,” Banks said. “AI technology also frees up security professionals to focus on mission-critical operations.”

Here’s the tricky part

For AI to be effective, the technology needs access to data, including sensitive internal documents and customer information. Banks said he understands that AI technology is worthless without this access.

That said, Banks expressed a concern. AI technology has limitations that stem from any one of the following: a lack of system resources, insufficient computing power, poorly defined algorithms, poorly implemented algorithms or weak rules and definitions. “Human-designed artificial intelligence also displays various biases, often mimicking its creators, when turned loose on datasets,” he said.

This speaks to Banks’ concern about the security privileges that AI technology is entrusted with. AI technology may approach perfection, but it will never completely reach it. Anomalies are ever-present, and let’s not forget AI-smart cyber criminals and their ability to exploit weaknesses in AI systems.

There are more arguments for giving AI privileges in cybersecurity than against it. The trick, according to Banks, is striking a balance between AI precision and human nuance.

AI with human interaction is the best solution

Banks said critical decisions, especially those regarding users, should be entrusted to a human analyst who has the final say in how to proceed or what to change. “What if a user is legitimate and was flagged as nefarious through a misunderstanding?” Banks asked. “That user could miss an entire day’s work or more depending on how long it takes to identify what happened.”
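
One hedged sketch, in Python, of the “human has the final say” pattern Banks describes: the AI supplies a risk score, but a flagged user is queued for analyst review rather than blocked automatically. The REVIEW_THRESHOLD value, the field names and the ReviewQueue class are hypothetical illustrations, not anything Banks or a specific product prescribes.

```python
from dataclasses import dataclass, field

# Hypothetical cutoff above which no automated block happens without a human decision.
REVIEW_THRESHOLD = 0.8


@dataclass
class Flag:
    user: str
    risk_score: float  # assumed to come from an upstream AI model
    reason: str


@dataclass
class ReviewQueue:
    pending: list[Flag] = field(default_factory=list)

    def submit(self, flag: Flag) -> str:
        """The AI may flag a user, but it never blocks anyone on its own."""
        if flag.risk_score >= REVIEW_THRESHOLD:
            self.pending.append(flag)
            return f"{flag.user}: queued for analyst review ({flag.reason})"
        return f"{flag.user}: logged only, no action taken"

    def analyst_decision(self, flag: Flag, block: bool) -> str:
        """The human analyst makes the final call: approve the block or clear the user."""
        self.pending.remove(flag)
        return f"{flag.user}: {'blocked' if block else 'cleared'} by analyst"


if __name__ == "__main__":
    queue = ReviewQueue()
    flag = Flag("jdoe", risk_score=0.93, reason="login from unrecognized device")
    print(queue.submit(flag))                         # queued, not blocked
    print(queue.analyst_decision(flag, block=False))  # analyst clears a false positive
```

The design point is the split of responsibilities: the model only proposes, and nothing is enforced until the analyst decides, which is one way to keep a misflagged but legitimate user from losing a day’s work.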

The authors of the Immuniweb.com blog post, Top 7 Most Common Errors When Implementing AI and Machine Learning Systems in 2021, agreed. “AI has certain limits,” the authors said. “In general, AI systems can be used as an additional tool or smart assistant, but not as a replacement for experienced cybersecurity specialists who, among other things, also understand the underlying business context.”

To make his case for human intervention and control of AI processes, Banks used a physical-security example: automatic security gates that restrict unauthorized traffic. “Grilles and gates keep unwanted parties out and allow authorized personnel access to a property,” Banks said. “Yet, most high-security locations include human guards as an extra precaution and deterrent.”

“Gate systems can analyze employee and vendor ID badges and make a split-second decision about providing access or not,” he said. “But it’s all data- and algorithm-driven. The human guards stationed nearby can help ensure the system isn’t being exploited or making wrong decisions based on faulty logic.”

Banks’ argument is not about whether AI technology should be deployed. His concern is the foundation on which the technology rests. If that foundation is built correctly, with safeguards, we will all benefit.