Big tech companies' soaring investments in artificial intelligence and chatbots, arriving amid massive layoffs and slowing growth, have left many chief information security officers scrambling.
As generative AI permeates the workplace, chief information security officers must approach the technology with caution and prepare the necessary security safeguards. OpenAI's ChatGPT, Microsoft's Bing AI, Google's Bard, and Elon Musk's plans for his own chatbot are among the technologies making headlines.
The technology underpinning these chatbots is the large language model (LLM); GPTs, or generative pretrained transformers, are LLMs trained to produce human-like conversation. But not every business has its own GPT, so employers need to keep an eye on how workers use the technology.
If people find generative AI valuable for their jobs, they will use it, said Michael Chui, a partner at the McKinsey Global Institute, who compared its adoption to the way employees brought personal computers and mobile phones into the office.
"People find [chatbots] useful even when they are not approved or encouraged by IT," Chui said.
Throughout history, he remarked, we have found technologies so compelling that individuals are willing to pay for them themselves: people were buying mobile phones long before businesses supplied them, the same thing happened with PCs, and now it is happening with generative AI.
As a result, Chui said, businesses need to "catch up" in how they approach security.
Experts say there are places CISOs and businesses can start, whether that means standard practices like monitoring what information is shared on an AI platform or bringing a company-sanctioned GPT into the workplace.
Start with information security’s fundamentals
CISOs already contend with plenty, from potential cybersecurity breaches to growing automation demands. As AI and GPTs enter the workplace, they can start with the fundamentals of information security.
Companies can license access to an existing AI platform, according to Chui, so they can keep an eye on what workers are saying to chatbots and ensure that any shared information is secure.
If you're a corporation, you don't want your employees feeding confidential information to a chatbot that is open to the general public, Chui said. Companies can therefore put technical measures in place, licensing the software under an enforceable legal agreement that governs where their data does and does not travel.
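To make that idea concrete, here is a minimal sketch in Python of the kind of screening gateway a company might put between employees and a licensed chatbot. The endpoint URL, the response shape, and the redaction rules are all assumptions for illustration; a real deployment would use the vendor's actual API and a dedicated data-loss-prevention engine.

```python
import re

import requests  # assumed HTTP client; swap in the vendor's SDK if one exists

# Illustrative patterns only -- a real deployment would use a proper DLP
# engine with rules tuned to the company's own confidential data.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_project": re.compile(r"\bPROJECT-[A-Z0-9]+\b"),
}

# Hypothetical endpoint for the licensed chatbot; the real URL would come
# from the licensing agreement with the AI vendor.
LICENSED_CHAT_ENDPOINT = "https://chat.example-vendor.com/v1/complete"


def redact(prompt: str) -> str:
    """Mask anything matching a sensitive pattern before it leaves the network."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt


def forward_prompt(prompt: str, api_key: str) -> str:
    """Screen an employee's prompt, then relay it to the licensed chatbot."""
    safe_prompt = redact(prompt)
    response = requests.post(
        LICENSED_CHAT_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": safe_prompt},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["completion"]  # response shape is an assumption
```

Routing all chatbot traffic through a gateway like this is what gives the company a single point where it can enforce the terms of its license.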
Licensing software also brings familiar safeguards, Chui noted. Whenever a business licenses software, AI-powered or not, standard practices apply: protecting confidential information, controlling where that information is stored, and setting rules for how employees may use it.
And with a contract in place, a company can audit the software to confirm it is safeguarding data the way the company wants it secured, Chui said.
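As a companion to the gateway sketch above, the snippet below shows one way such an audit trail might be kept on the company's side. The log file name and record fields are assumptions; a production system would forward records to whatever tamper-evident store the contract specifies.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only audit trail; a real deployment would ship these records to a
# tamper-evident store (a SIEM or write-once bucket) named in the contract.
audit_logger = logging.getLogger("chatbot.audit")
audit_logger.addHandler(logging.FileHandler("chatbot_audit.jsonl"))
audit_logger.setLevel(logging.INFO)


def record_exchange(user: str, prompt: str, completion: str) -> None:
    """Write one auditable record per chatbot exchange."""
    audit_logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,          # assumed already redacted upstream
        "completion": completion,
    }))
```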
Most businesses that store information in cloud-based software already follow these practices, Chui said, so getting ahead of the curve and offering employees a company-sanctioned AI platform simply keeps a business in line with existing industry norms.