Big Tech’s bet on AI assistants can be risky

Tech firms have been frantically searching for the technology’s “killer app” ever since the generative AI boom started. Online searches came up with a mixed bag of results at first. AI assistants are currently in. Last week, OpenAI, Meta, and Google introduced new capabilities for their AI chatbots that let them perform web searches and serve as a kind of personal assistant.

OpenAI introduced new ChatGPT features that allow users to converse with the chatbot as if they were on the phone and receive immediate answers to their spoken queries in a lifelike synthetic voice. Additionally, OpenAI disclosed that ChatGPT will have web search functionality.

Bard, a Google competitor bot, is integrated into the majority of the company’s services, including Gmail, Docs, YouTube, and Maps. The concept is that users would be able to ask inquiries about their own information using the chatbot, such as asking it to search through their emails or set up their schedule. Additionally, Bard will be able to quickly access data from Google Search. In a similar spirit, Meta also declared that it is bombarding everything with AI chatbots. On WhatsApp, Messenger, and Instagram, users will be able to ask AI chatbots and celebrity AI avatars questions. The AI model will retrieve the information from the internet using Bing search.

Given the limitations of the technology, this is a dangerous wager. Some of the ongoing issues with AI language models, such their inclination to “hallucinate,” have not been resolved by tech corporations. However, what concerns the most is that they are a security and privacy catastrophe. Millions of people are being given access to this seriously flawed technology by tech corporations, who also give AI models access to their emails, calendars, and private messages. By doing this, they significantly increase all of our susceptibility to fraud, phishing, and hacks.

The serious security issues with AI language models have already been discussed. AI assistants are particularly vulnerable to an attack known as indirect prompt injection now that they can browse the web and have access to personal data. There is no known fix, and it is absurdly simple to carry out.

A third party “alters a website by adding hidden text that is meant to change the AI’s behavior”, in an indirect prompt injection attack. Attackers may utilize email or social media to drive consumers to websites that contain these covert instructions. After that, the AI system might be tricked into allowing the attacker to try to steal people’s credit card information, for instance. Hackers have limitless opportunities with this new generation of AI models integrated into social media and emails.

Regarding AI’s propensity to invent things, a Google spokesman did confirm that the company was releasing Bard as a “experiment” and that users may use Google Search to verify Bard’s assertions. Users are encouraged to give feedback by clicking the thumbs-down button if they encounter a hallucination or something that isn’t accurate. One way Bard will learn and advance is in this method, the spokeswoman stated. Naturally, this method places the onus on the user to catch the error, and people have a propensity to place too much faith in the responses provided by a machine. The question about quick injection was unanswered by Google.

Google said that the issue of prompt injection is still being researched and has not yet been resolved. The company, according to the spokesperson, uses other technologies, such as spam filters, to recognise and block attempted attacks. Additionally, the company engages in adversarial testing and red teaming activities to find ways that hostile actors can target products based on language models. The representative explained that in order to help identify known hazardous outputs and known harmful inputs that breach the rules they set, they are utilizing specifically trained models.

Every time a new product is introduced, there will be some growing pains. However, it says a lot that even the early proponents of AI language model products haven’t been overly thrilled. A New York Times journalist named Kevin Roose discovered that Google’s assistant was effective in summarizing emails and also informed him about emails that weren’t in his inbox.

In spite of the alleged “inevitability” of AI technologies, tech businesses shouldn’t be so smug about it. Ordinary people don’t typically adopt technology that is constantly breaking in irritating and unforeseen ways, and it’s only a matter of time until we see hackers abusing these new AI assistants.

Source link