
BlenderBot 3, Meta’s latest AI Chatbot, starts beta testing

BlenderBot 3 is now available to the general public in the United States. Meta believes BlenderBot 3 can hold ordinary conversations and handle the kinds of questions a digital assistant fields, such as finding child-friendly places.

BlenderBot 3 holds conversations and answers queries the way Google does

Meta’s previous work with large language models (LLMs) served as the foundation for the bot’s prototype. BlenderBot learns from massive text datasets, discovering statistical patterns that it uses to generate language. Such models have been used to generate code for programmers and to help writers get past writer’s block.

But these models have well-known flaws: they repeat the biases in their training data and frequently invent answers to users’ questions (a concern if they are to be effective as digital assistants).

Meta wants BlenderBot to probe this issue. The chatbot can search the internet to discuss specific topics, and BlenderBot 3 cites its sources: users can click on an answer to learn where its information came from, as sketched below.
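A minimal sketch of that search-then-cite pattern follows. It is purely illustrative: `search_web` and `answer_with_citations` are hypothetical names, not part of Meta's code, and a real system would condition a language model on the retrieved snippets rather than stitching them together.

```python
# Hypothetical sketch of search-augmented answering with citations.
# Only the data flow is shown: search first, then produce an answer
# that can point back to its sources.

def search_web(query: str) -> list[dict]:
    """Stand-in for a real search API call (hypothetical)."""
    return [
        {"url": "https://example.com/article", "snippet": "Relevant text..."},
    ]

def answer_with_citations(question: str) -> str:
    results = search_web(question)
    # A production system would feed these snippets to a language model;
    # here we simply join them to show where the citations come from.
    context = " ".join(r["snippet"] for r in results)
    sources = ", ".join(r["url"] for r in results)
    return f"{context} [sources: {sources}]"

print(answer_with_citations("What are child-friendly places nearby?"))
```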

By releasing the chatbot publicly, Meta hopes to gather feedback on the difficulties facing large language models. Users of BlenderBot can flag suspect responses, and Meta has worked to minimize the bot’s use of profanity, slurs, and culturally insensitive remarks. If users opt in, Meta will store their conversations and feedback for AI researchers.

“We’re committed to openly releasing all demo data to advance conversational AI,” said Kurt Shuster, a Meta research engineer who helped design BlenderBot 3.

How AI’s progress over time benefits BlenderBot 3

Tech companies have generally avoided releasing prototype AI chatbots to the public. In 2016, Microsoft’s Twitter chatbot Tay learned from its interactions with the public, and Twitter users quickly taught it to parrot racist, antisemitic, and sexist remarks. Microsoft pulled the bot within 24 hours.

Meta contends that AI has advanced since Tay’s mishap and that BlenderBot now includes guardrails to prevent a repeat.

According to Mary Williamson, a research engineering manager at Facebook AI Research (FAIR), BlenderBot is a static model. It can remember what users say within a conversation (and will retain this information via browser cookies if a user leaves and returns), but that data will only be used to improve the system later.

“That Tay incident was unfortunate, because it created this chatbot winter,” Williamson tells The Verge.

Most chatbots, Williamson says, are task-oriented: think of customer service bots, which walk customers through a preprogrammed conversation tree before handing them off to a human representative. Meta argues that the only way to build a system capable of genuine, open-ended, human-like conversation is to let bots have such conversations with real people.

Williamson thinks it’s unfortunate that such bots can’t say anything useful. “We are responsibly releasing this for further research,” she says.

Meta is also releasing BlenderBot 3’s source code, training dataset, and smaller model variants. Researchers can request access to the 175-billion-parameter model.
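For readers who want to experiment, here is a minimal sketch of loading a smaller conversational checkpoint. One assumption to flag: BlenderBot 3 itself is distributed through Meta’s ParlAI project, so this example instead uses an earlier, publicly hosted BlenderBot checkpoint (facebook/blenderbot-400M-distill) on Hugging Face, which follows the same encode-generate-decode pattern.

```python
from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

# Earlier public BlenderBot checkpoint; BlenderBot 3 itself ships via ParlAI.
name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

# Encode a single user turn and generate a reply.
inputs = tokenizer("Is there a child-friendly park near downtown?", return_tensors="pt")
reply_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```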
