Google I/O 2022: Google’s AI and ML developments

Google, which held its I/O 2022 developer conference late Wednesday, has increased its focus on artificial intelligence (AI) and machine learning (ML). That focus spans not only research but also product development.

One of Google’s priorities is to make its products, particularly those involving communication, more “nuanced and natural.” This includes the creation and implementation of new language processing models.

AI Test Kitchen

Following the release of LaMDA (Language Model for Dialogue Applications) last year, which enabled Google Assistant to have more natural conversations, Google has announced LaMDA 2 and the AI Test Kitchen, an app that will give users access to this model.

The AI Test Kitchen will allow users to experiment with these AI features and gain an understanding of what LaMDA 2 is capable of.

Google is launching the AI Test Kitchen with three demos. The first, dubbed ‘Imagine It,’ lets users propose a conversation idea, to which Google’s language model responds with “imaginative and relevant descriptions” of the idea.

The second, ‘Talk About It,’ tests whether the language model can stay on topic, which can be difficult. The third, ‘List It Out,’ suggests a list of to-dos, things to remember, or pro tips for a given task.

Pathways Language Model (PaLM)

PaLM (Pathways Language Model) takes a new approach to natural language processing and artificial intelligence. Google says it is the company’s largest model to date, with 540 billion parameters.

For the time being, the model can solve math word problems and explain jokes using what Google calls chain-of-thought prompting, which lets it break a multi-step problem into a series of intermediate steps.
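As a rough illustration of the idea (a minimal sketch, not PaLM’s actual interface: PaLM is not publicly callable, so the `generate` function below is a hypothetical stand-in for any large-language-model completion API), chain-of-thought prompting works by including worked examples whose answers spell out intermediate reasoning steps:

```python
# Minimal sketch of chain-of-thought prompting. The prompt text is
# illustrative, and `generate` is a stand-in for a real LLM call.

FEW_SHOT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. It used 20 and bought 6 more. How many apples does it have?
A:"""

def generate(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned continuation here."""
    return "The cafeteria had 23 apples. It used 20, leaving 3. 3 + 6 = 9. The answer is 9."

# Because the worked example walks through intermediate steps, the model is
# steered to produce its own step-by-step reasoning before the final answer.
print(generate(FEW_SHOT_PROMPT))
```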

One demonstration showed PaLM answering questions in both Bangla and English. When Google and Alphabet CEO Sundar Pichai asked the model about popular pizza toppings in New York City, the answer appeared in Bangla, despite the fact that PaLM had never seen parallel sentences in the language.

Google hopes to apply these capabilities and techniques to more languages and complex tasks in the future.

Multisearch on Lens

Google also announced updates to its Lens Multisearch tool, which will allow users to conduct searches using only an image and a few words.

“You can search with images and text at the same time in the Google app – similar to how you might point at something and ask a friend about it,” the company explained.
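Google has not published Multisearch’s internals, but one common way to combine an image with a text refinement is to embed both into a shared vector space and rank candidates by similarity to the combined query. A minimal sketch under that assumption (the embedding functions below are hypothetical stand-ins, not Google’s API):

```python
import numpy as np

# Hypothetical stand-ins for a joint image/text embedding model (e.g. a
# CLIP-style encoder); here they just return random vectors for illustration.
rng = np.random.default_rng(0)

def embed_image(image_path: str) -> np.ndarray:
    return rng.standard_normal(512)

def embed_text(text: str) -> np.ndarray:
    return rng.standard_normal(512)

def normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Combine the photo and the text refinement into one query vector.
query = normalize(embed_image("green_dress.jpg") + embed_text("in yellow"))

# Rank a toy catalogue by cosine similarity to the combined query.
catalogue = {name: normalize(embed_image(name)) for name in
             ["yellow_dress.jpg", "green_shirt.jpg", "yellow_skirt.jpg"]}
ranked = sorted(catalogue, key=lambda n: float(query @ catalogue[n]), reverse=True)
print(ranked)
```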

Users will also be able to add “near me” to a picture or screenshot to see options for local restaurants or retailers that sell apparel, home goods, and food, among other things.

Users will be able to use Multisearch to pan their camera and instantly glean insights about multiple objects in a larger scene, thanks to a feature called “scene exploration.”

Immersive Google Maps

Google Maps now offers a more immersive experience. Using computer vision and artificial intelligence, the company has combined billions of Street View and aerial images to create a rich, digital model of the world. Users can now experience what it’s like to live in a neighborhood, landmark, restaurant, or popular venue thanks to the new immersive view.

Support for new languages in Google Translate

Google Translate now supports 24 new languages, including Assamese, Bhojpuri, Konkani, Sanskrit, and Mizo. These languages were added using ‘Zero-Shot Machine Translation,’ in which a machine learning model sees only monolingual text – that is, it learns to translate into another language without ever seeing an example translation.

However, the company stated that the technology is not perfect and that it will continue to improve these models.
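One way to picture zero-shot translation, in the spirit of Google’s published multilingual NMT research, is a single model trained on many languages at once, where a special token selects the output language; the model can then be asked to decode into a language pair it never saw as parallel data. A minimal sketch (the tag format and the model stub below are hypothetical, not Google Translate’s actual API):

```python
# Illustrative sketch of tag-based multilingual translation: one shared model
# covers many languages, and a target-language token steers decoding.

def multilingual_model(tagged_input: str) -> str:
    """Stand-in for a trained multilingual encoder-decoder."""
    return f"[decoded output for {tagged_input!r}]"

def translate(text: str, target_lang: str) -> str:
    # Because the model shares one representation across languages, prepending
    # a target-language tag can elicit output in a language for which it saw
    # only monolingual text during training (zero-shot translation).
    return multilingual_model(f"<2{target_lang}> {text}")

print(translate("Hello, how are you?", "as"))  # 'as' = Assamese
```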
