Google announced today that it is improving its AI models to make Google Search a safer experience, one that is better at handling sensitive queries, such as those about suicide, sexual assault, substance abuse, and domestic violence. The company is also using other AI technologies to improve its ability to remove unwanted explicit or suggestive content from search results when people aren’t specifically looking for it.

When people search for sensitive information – such as suicide, abuse, or other topics – Google currently displays the contact information for relevant national hotlines above its search results. However, the company explains that people in crisis may search in a variety of ways, and it’s not always obvious to a search engine that they’re in need, even if their queries would raise red flags for a human reader.

Google says that with machine learning and the most recent improvements to its AI model MUM (Multitask Unified Model), it will be able to automatically and more accurately detect a broader range of personal crisis searches, because MUM is better at understanding the intent behind people’s questions and queries.

At its Search On event last year, the company announced its plan to redesign Search using AI technologies, but it did not address this specific use case. Instead, Google focused on how MUM’s improved understanding of user intent could be used to help web searchers gain deeper insights into the topic they’re researching and lead them down new search paths.

For instance, if a user searches for “acrylic painting,” Google may suggest “things to know” about acrylic painting, such as different techniques and styles, painting tips, cleaning tips, and more. It may also direct users to queries they had not previously considered searching for, such as “how to make acrylic paintings with household items.” In this case, Google claimed to be able to identify more than 350 different topics related to acrylic paintings.

MUM will now be used in a similar way to help better understand the types of topics that someone in crisis might search for, which aren’t always as obvious as typing in a direct cry for help.

“If we can’t accurately recognize that, we won’t be able to code our systems to display the most useful search results. That is why using machine learning to understand language is critical,” Google explained in a blog post.

For instance, if a user searched for “Sydney suicide hot spots,” Google’s previous systems would interpret the query as information-seeking because the term “hot spots” is commonly used in travel queries. MUM, however, recognizes that the query relates to someone looking for a place to attempt suicide in Sydney and flags the search as potentially coming from a person in crisis, allowing Google to display actionable information such as suicide hotlines.
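Google hasn’t published MUM’s internals, but the general technique it describes – classifying a query’s intent rather than matching keywords – can be illustrated with open-source tools. Below is a minimal sketch using Hugging Face’s zero-shot classification pipeline; the model choice, candidate labels, and classify_query helper are illustrative assumptions, not Google’s implementation.

```python
# Illustrative sketch only: Google has not published MUM's internals.
# An open-source zero-shot classifier stands in for intent detection
# on ambiguous queries that keyword matching would misread.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # assumed open model, not MUM
)

# Hypothetical candidate intents for this example
CANDIDATE_INTENTS = ["personal crisis", "travel information"]

def classify_query(query: str) -> str:
    """Return the highest-scoring intent label for a query."""
    result = classifier(query, candidate_labels=CANDIDATE_INTENTS)
    return result["labels"][0]  # labels come back sorted by score

if classify_query("Sydney suicide hot spots") == "personal crisis":
    print("Show crisis resources (e.g., hotlines) above the results.")
else:
    print("Show standard informational results.")
```

A keyword-based system would key on “hot spots” and route this query to travel results; an intent classifier weighs the whole query, which is the behavior the MUM example above describes.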

Another suicide query that MUM could improve is “most common ways suicide is completed,” which Google would have previously understood only as an information-seeking search.

MUM also performs better on longer search queries where the context is obvious to humans but not always to machines. A question like “why did he attack me when I said I don’t love him” suggests a domestic violence situation, yet long, natural-language queries like this have historically been difficult for Google’s systems to handle without advanced AI.

Furthermore, Google notes that MUM can transfer its knowledge across the 75 languages in which it has been trained, allowing it to more quickly scale AI improvements like this to global users. This means it will be able to display actionable information from trusted partners, such as local hotlines, to a broader audience for these types of personal crisis searches.
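MUM itself isn’t publicly available, but the cross-lingual transfer it relies on can be demonstrated with open multilingual models: a classifier given only English labels can still categorize queries written in other languages. Here is a sketch assuming the open xlm-roberta-large-xnli model; the labels and sample query are illustrative only.

```python
# Illustrative sketch: MUM is proprietary, so an open multilingual
# NLI model is used here to demonstrate cross-lingual transfer.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

# English labels applied to a Spanish query: the multilingual encoder
# carries the intent signal across languages without extra training.
labels = ["personal crisis", "travel information"]
result = classifier("necesito ayuda, no quiero vivir", candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 2))
```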

This isn’t the first time MUM has been tasked with directing Google searches. According to the company, MUM was previously used to improve searches for COVID-19 vaccine information. Google says it will use MUM in the coming months to improve its spam protection features and expand them to languages with little training data. Other MUM enhancements will follow shortly.

Google’s ability to filter explicit content from search results is another area benefiting from AI. Even when Google’s SafeSearch filter is turned off, Google attempts to reduce unwanted explicit content in searches where finding racy content was not the goal, and its algorithms keep improving at this as users conduct hundreds of millions of searches around the world.

Now, the AI technology known as BERT is being used to help Google better understand whether people are looking for explicit content. According to the company, BERT has reduced unwanted, distressing search results by 30% over the past year, based on an analysis by “Search Raters” who measured oversexualized results across random samples of web and image search queries.
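Google hasn’t detailed the methodology behind that figure, but it reads as a relative reduction in the rate of rater-flagged results between system versions. Below is a minimal sketch of how such a metric could be computed, with toy numbers; the flagged_rate and relative_reduction helpers are hypothetical.

```python
# Illustrative sketch of the evaluation described above: human raters
# flag oversexualized results in random query samples, and the metric
# is the relative drop in the flagged rate between system versions.

def flagged_rate(judgments: list[bool]) -> float:
    """Fraction of rated results flagged as oversexualized."""
    return sum(judgments) / len(judgments)

def relative_reduction(before: list[bool], after: list[bool]) -> float:
    """Relative drop in the flagged rate from 'before' to 'after'."""
    b, a = flagged_rate(before), flagged_rate(after)
    return (b - a) / b

# Toy data only; Google reported roughly 30% over the past year.
before = [True] * 10 + [False] * 90  # 10% flagged by raters
after = [True] * 7 + [False] * 93    # 7% flagged by raters
print(f"{relative_reduction(before, after):.0%} reduction")  # 30%
```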

The technology has also been particularly effective at reducing explicit content in searches related to “ethnicity, sexual orientation, and gender” – results that, according to the analysis, disproportionately affect women, and particularly women of color.

According to Google, the MUM AI enhancements will begin to be rolled out to Search in the coming weeks.