Biggest generative AI risks

At times it seems as though everyone is experimenting with generative artificial intelligence. Professionals of all kinds, from marketers generating content to developers producing code, are finding ways to work more efficiently with the new technology.

Such is the rush to adopt AI that analyst Gartner predicts more than 80% of organizations will be using generative AI application programming interfaces (APIs), models, and software in production environments by 2026.

It is worth noting, however, that despite the considerable enthusiasm surrounding generative AI, many businesses have yet to experiment with the technology in any official capacity.

The majority of organizations are still in the exploratory phase; Gartner estimates that fewer than 5% are using generative AI in production.

Lily Haake, head of technology and digital executive search at recruitment firm Harvey Nash, says her experience with clients suggests that ambitious AI projects are not yet on the agenda.

She reports seeing some very impressive small pilots, including legal clients using AI to scan caseloads and generate documents to increase productivity. It is all very positive and exciting, but on a modest scale: her clients do not appear to be exploring generative AI in ways that would have a business-wide impact.

Rather than using AI to transform organizational operations and enhance customer service, most digital leaders are experimenting at the periphery of the enterprise before considering how to integrate new generative services into the core.

Nevertheless, the absence of a company policy on generative AI does not stop professionals from using the technology, with or without the boss's approval.

According to research by technology publisher O’Reilly, 44% of IT professionals already use AI in their programming work and a further 34% are experimenting with it. For data analytics, nearly a third (32%) are already using AI, while 38% are experimenting.

The report’s author, Mike Loukides, says O’Reilly was taken aback by the extent of adoption.

However, despite hailing the “explosive” growth of generative AI, Loukides warns that businesses that adopt the technology hastily, without recognizing the attendant risks and dangers, may be heading for an “AI winter”.

Avivah Litan, a distinguished vice president analyst at Gartner, concurs, stating that CIOs and their C-suite counterparts cannot afford to simply wait.

“You must manage the risks before the risks manage you,” she advises in a one-on-one video interview.

In a recent webinar, Litan reported that Gartner surveyed more than 700 executives about the risks of generative AI and found that CIOs are most concerned with data privacy, followed by security and hallucinations.

Let us now examine each of those areas individually.

1. Risks to privacy and data protection

CIOs and other senior executives who adopt an enterprise version of generative AI will most likely be transmitting data to their vendor’s hosted environment.

Litan concedes that this type of arrangement is not novel; organizations have been transferring data to software-as-a-service (SaaS) and cloud providers for at least a decade.

She says CIOs see AI as carrying a distinct type of risk, specifically around how vendors handle and use data, such as when training their own large language models.

According to Litan, Gartner has analyzed a large number of IT service providers offering AI-enabled solutions.

She says that with a third-party foundation model, data protection comes down to trust, because you cannot verify it yourself. You have to believe the vendors are protecting your data well and that it will not leak. Mistakes do happen in cloud systems, and if your private data gets out, the vendor will not be held responsible. You will.

2. Risks of input and output

Companies must look not only at data-protection risks in external processes, but also at how their own employees use data in generative AI models and applications.

Litan says these risks include improper use of data, which can affect decision-making: for example, careless handling of private inputs, false outputs in the form of hallucinations, or infringement of someone else’s intellectual property.

It’s not just input and output risks that business leaders have to deal with. There are also ethical concerns and worries that models may be biased.

Litan says those responsible for introducing generative AI must ensure that no one in the company takes anything for granted.

She says you should use data and generative AI only in ways your organization approves of: don’t hand over the keys to the kingdom, and check the information you get back for errors and hallucinations.
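That advice can be sketched in code. The following is a minimal, illustrative example of two such controls: redacting sensitive input before a prompt leaves the organization, and flagging unapproved sources cited in a model's output. All names, patterns, and the `[source: …]` citation convention are hypothetical, not any vendor's API.

```python
import re

# Hypothetical patterns for data your organization would not approve sending
# to an external model (the labels and regexes are illustrative only).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the prompt
    leaves the organization's boundary."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

def flag_unverified(response: str, approved_sources: set[str]) -> list[str]:
    """Return any cited sources in the response that are not on an approved
    list -- a crude proxy for 'check outputs for errors and hallucinations'."""
    cited = re.findall(r"\[source:\s*([^\]]+)\]", response)
    return [s for s in cited if s.strip() not in approved_sources]
```

For example, `redact_prompt("Contact jane@example.com now")` yields `"Contact [EMAIL REDACTED] now"`. Real guardrails would go far beyond regexes, but the shape of the check — scrub on the way out, verify on the way back — is the point.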

3. New threats to cybersecurity

Every day, organizations confront a variety of cybersecurity threats, including the possibility that hackers will gain access to company information via a system flaw or an employee oversight.

AI, according to Litan, constitutes an entirely different threat vector.

She says these are novel dangers: in addition to prompt injection and vector database attacks, hackers can gain access to model states and parameters.
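Prompt injection is worth illustrating, since it is unlike traditional exploits: the attack arrives as ordinary text, such as content retrieved from a vector database, that contains instructions aimed at the model. The sketch below shows one naive defense: scanning retrieved content for instruction-like phrases and delimiting it as untrusted data. This is illustrative only; the marker list and delimiter scheme are assumptions, and real injection defenses remain an open problem.

```python
# Naive, illustrative markers of text that tries to instruct the model
# rather than inform it. A real attacker would not be this obvious.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def looks_like_injection(retrieved_text: str) -> bool:
    """Flag retrieved content (e.g. from a vector database) that appears
    to contain instructions aimed at the model rather than data."""
    lowered = retrieved_text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_prompt(system: str, retrieved: str, question: str) -> str:
    """Wrap untrusted retrieved content in explicit delimiters so the
    model can be told to treat it as data, not instructions."""
    if looks_like_injection(retrieved):
        raise ValueError("retrieved content rejected: possible prompt injection")
    return (
        f"{system}\n\n"
        f"<untrusted-context>\n{retrieved}\n</untrusted-context>\n\n"
        f"Question: {question}"
    )
```

The design point is that content fetched at query time crosses a trust boundary, so it should be screened and segregated the way web applications treat user input.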

Beyond causing data and financial losses, attackers may alter models, replacing accurate data with inaccurate information.

Because of this new threat vector, Litan says, organizations cannot manage the risks by relying solely on tried-and-tested solutions.

Attackers may poison the model, she says. The model needs security around it, and that security is different: endpoint protection cannot help you protect a data model.

What your business needs to do now

To CIOs and other senior executives, the combined risks of generative AI may appear a formidable challenge.

Nevertheless, Litan says new solutions are emerging just as fast as the risks and opportunities of generative AI.

“There is no need to freeze in fear,” she advises.

A new market is developing: as you might expect, entrepreneurs are keen to turn these challenges into profit.

That is positive news, because it suggests viable solutions are on the way. In the meantime, Litan says, business leaders should prepare while the technology market stabilizes.

Essentially, she advises CIOs to get organized first, then define their acceptable-use policies. “Ensure that you have access control and that your data is classified,” she says.

Create a system where users can submit requests for applications, you can monitor the data they use, the appropriate parties can approve those requests, and you can audit the process twice a year to ensure proper implementation. In short, take each step towards generative AI slowly.
