The world will have to wait a little while longer for the release of Google’s most sophisticated artificial intelligence model to date.
Gemini, dubbed the next generation of artificial intelligence, can process various forms of data, understanding and producing text, images, and other types of content, and can even generate websites from a sketch or written description.
Unannounced launch events originally planned for next week in New York, Washington, and California have been quietly rescheduled for early 2024, according to The Information, which cited two anonymous sources with knowledge of the decision. The change was reportedly prompted by concerns that the AI might not respond to some non-English prompts and inquiries reliably enough.
Gemini, which has not yet been made available to the general public, is said to perform noticeably better than OpenAI’s GPT-4, in part because it draws on significantly more processing power.
Sissie Hsiao, Google’s vice president and manager of Bard and Google Assistant, said she has witnessed some pretty incredible things from Gemini. For example, when she was attempting to bake a cake, Gemini drew her three pictures showing the steps involved in icing a three-layer cake.
Hsiao continued: “These are entirely new images. These images aren’t from the internet. It can now communicate with humans through images rather than just text.”
Google already has its own generative AI model, Bard. And although ChatGPT has so far enjoyed greater consumer awareness, analysts argue that could change when Gemini finally launches.