
Understanding how AI chatbots can ‘hallucinate’

When you hear the word “hallucination,” you might picture hearing sounds that no one else seems to hear, or imagining that your coworker has sprouted a second head while you’re talking to them.

However, hallucination has a slightly different meaning in the context of artificial intelligence.

An artificial intelligence model that experiences hallucinations creates false information in response to a user’s request and presents it as accurate and factual.

Imagine asking an AI chatbot for an essay about the Statue of Liberty. If, instead of saying the monument is in New York, the chatbot claimed it was in California, it would be hallucinating.

But errors aren’t always so clear-cut. Asked to name the designers who worked on the Statue of Liberty, the chatbot might also invent names or claim the monument was completed in the wrong year.

This happens because AI chatbots are built on large language models, which are trained on massive volumes of data to identify patterns and relationships between words and subjects. That training is what lets them read prompts and produce original material, such as writing or images.
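
For illustration, here is a toy Python sketch of the next-word-prediction idea (not a real language model): the system simply returns the statistically likeliest continuation, which can read fluently even when it happens to be false. The prompt and probabilities are invented for the example.

```python
# Toy sketch of next-word prediction (not a real language model).
# The prompt and probabilities below are invented for illustration:
# the system returns the statistically likeliest continuation,
# which reads fluently even when it happens to be false.

next_word_probs = {
    "The Statue of Liberty is located in": {
        "New York": 0.86,    # correct, and most likely in this toy example
        "California": 0.09,  # plausible-sounding but false continuation
        "Paris": 0.05,
    }
}

def continue_prompt(prompt: str) -> str:
    """Return the highest-probability continuation for a known prompt."""
    candidates = next_word_probs[prompt]
    return max(candidates, key=candidates.get)

if __name__ == "__main__":
    prompt = "The Statue of Liberty is located in"
    print(prompt, continue_prompt(prompt))
```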

AI chatbots are growing in popularity, and OpenAI even lets users create personalized chatbots to share with others. As chatbots proliferate across the industry, it’s important to understand how they work and to recognize when they get things wrong.

In fact, Dictionary.com named “hallucinate” its word of the year, saying the term best captures the possible influence AI may have on “the future of language and life.”

According to a post about the word, “hallucinate” seems fitting for a time when new technologies can look like the stuff of fiction or dreams, particularly when they create their own fictions.

How Google and OpenAI handle AI hallucinations

Both OpenAI and Google caution users that their AI chatbots can make mistakes and advise them to double-check responses.

Both companies are also working to reduce hallucinations.

Google says user feedback is one way it does this. The company asks users to click the thumbs-down button and explain why a response was wrong so that Bard can learn and improve.

OpenAI has employed a technique known as “process supervision.” Rather than rewarding the system only for producing an accurate final answer to a user’s query, this method rewards the model for each sound step of reasoning it uses to reach the output.
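
As a rough illustration of the difference, the Python sketch below scores a final answer on its own (outcome supervision) versus scoring each reasoning step (process supervision). The example steps, the step checker, and the reward values are invented placeholders, not OpenAI’s actual training setup.

```python
# Rough sketch contrasting two ways of rewarding a model's answer.
# The facts, steps, and checker below are invented placeholders,
# not OpenAI's actual training setup.

from typing import Callable, List

def outcome_reward(final_answer: str, correct_answer: str) -> float:
    """Outcome supervision: a single reward based only on the final answer."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps: List[str], step_is_valid: Callable[[str], bool]) -> float:
    """Process supervision: average reward over individually checked reasoning steps."""
    scores = [1.0 if step_is_valid(step) else 0.0 for step in steps]
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    reasoning_steps = [
        "The Statue of Liberty was a gift from France.",
        "It was dedicated in 1886.",
        "Therefore it was dedicated in the 19th century.",
    ]
    # Placeholder check: in real training a learned reward model or human
    # labeler would judge each step, not a lookup against known strings.
    known_good_steps = set(reasoning_steps)
    print("Outcome reward:", outcome_reward("19th century", "19th century"))
    print("Process reward:", process_reward(reasoning_steps, lambda s: s in known_good_steps))
```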

Karl Cobbe, a mathgen researcher at OpenAI, said in May that identifying and addressing a model’s logical errors, or hallucinations, is a crucial first step toward developing aligned AGI, or artificial general intelligence.

Also keep in mind that although ChatGPT and Google’s Bard are useful AI tools, they are not perfect. Even when their answers are presented as accurate, check them for factual errors before relying on them.
