
Handling Anomalies In ​Machine Learning Models: Uber Technique

Since the start of COVID, relying solely on past events is no longer sufficient for anomaly detection, because every other day feels like an anomaly.

Artificial intelligence models are only as good as the data they are powered by. But what if the data they are fed is no longer relevant? What kind of results can you expect when the problem is not a single anomaly in the dataset, but a dataset that has itself become abnormal?

Uber ran into exactly this problem last year, when hundreds of terabytes of its historical data suddenly added no value to anomaly detection. The company therefore decided to build a recency bias into its ML models.

“If you look at last year, then yes, this looks anomalous. But if you look at the time window of a historical moving average, based on what you see in the recent past, it’s not really an anomaly, considering this is the direction in which the trend is moving. These models can catch such variations. They have a recency bias: they look at the most recent data and give more weight to the recent average than to something that happened months ago,” Pallavi Rao, a staff software engineer at Uber, told ETCIO.

Historical moving averages are an example of a statistical model: Uber computes the average over a sliding window of time, keeps moving that average forward, and treats it as what the value should be.
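The difference between a plain moving average and a recency-biased one is easy to illustrate. The sketch below is not Uber's implementation; the smoothing factor, window size, and sample values are assumptions chosen purely for illustration.

```python
# Minimal sketch: a plain historical moving average versus a recency-biased
# (exponentially weighted) average. The 0.3 smoothing factor and the 7-point
# window are illustrative assumptions, not Uber's settings.

def plain_moving_average(values, window=7):
    """Average of the last `window` observations: every point weighs the same."""
    recent = values[-window:]
    return sum(recent) / len(recent)

def recency_weighted_average(values, alpha=0.3):
    """Exponentially weighted average: recent points count more than old ones."""
    ewma = values[0]
    for v in values[1:]:
        ewma = alpha * v + (1 - alpha) * ewma
    return ewma

# A series where the level shifts sharply in the most recent observations.
series = [100, 102, 98, 101, 99, 140, 150, 155]

print(plain_moving_average(series))      # dragged down by the older values
print(recency_weighted_average(series))  # tracks the new level more closely
```

With the recency-weighted version, a shift that persists in the recent data quickly becomes the new "normal" instead of being flagged forever against a stale historical baseline.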

“So based on the average trend, let’s make a prediction: since it’s going like this, an hour later it should be there. If our prediction doesn’t match what actually happened, then that’s an anomaly. We predicted the average would be X, but suddenly we saw Y, which is very different from our prediction, meaning the data we’re seeing isn’t really following the average. These are called statistical models, in which we use forecasts and the difference between a forecast and the actual value to decide if there is an anomaly. But it depends on how sophisticated you want your anomaly detection to be. It can be as simple as a ‘Z score’ or as sophisticated as a full statistical model,” Rao said.
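A minimal sketch of that forecast-versus-actual check, assuming a rolling mean as the forecast and a Z-score cutoff as the decision rule; the window size, the 3-sigma threshold, and the sample values are illustrative assumptions rather than Uber's configuration.

```python
# Forecast-vs-actual anomaly check: predict the next value from a rolling mean,
# then flag the observation if its Z score against the recent window exceeds a
# threshold. Window size and the 3-sigma cutoff are common defaults, assumed here.
import statistics

def is_anomaly(history, actual, window=24, z_threshold=3.0):
    recent = history[-window:]
    forecast = statistics.mean(recent)           # "the average should be X"
    spread = statistics.stdev(recent) or 1e-9    # guard against a flat window
    z_score = abs(actual - forecast) / spread
    return z_score > z_threshold, forecast, z_score

history = [200, 205, 198, 202, 199, 201, 204, 200]
flagged, forecast, z = is_anomaly(history, actual=320, window=8)
print(flagged, round(forecast, 1), round(z, 1))  # True: 320 is far from ~201
```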

For a company that operates in many countries, applying the same ML model everywhere can be difficult, especially when local cultural nuances are taken into account. “Suddenly we see a spike in those tickets on the customer support site. If we’re just sitting on it, we might not even realize it. But what we do is constantly analyze it, and when we see a spike in such issues we realize that there must be something wrong with the product, that there must be a bug or a functionality issue causing this. Unless you are looking for this peak and unless the model detects this anomaly, we will not be able to detect it,” explained Rao.
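The same kind of check can be pointed at a count series such as daily support tickets for a single issue type. The sketch below reuses the Z-score idea on made-up ticket volumes; the numbers and threshold are assumptions, not Uber data.

```python
# Spike detection on a daily support-ticket count for one issue type.
# Ticket volumes and the 3-sigma threshold are illustrative assumptions.
import statistics

daily_tickets = [40, 38, 45, 41, 39, 42, 44, 120]  # the last day spikes

baseline = daily_tickets[:-1]
mean, sd = statistics.mean(baseline), statistics.stdev(baseline)
if abs(daily_tickets[-1] - mean) > 3 * sd:
    print("Spike detected: possible product bug behind this issue type")
```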

Another example was when the company started requiring its drivers to wear masks. Few drivers paid attention to the new rule, and it led to authentication failures every time they tried to start a trip.

“If the mask check didn’t work, we would suddenly see a spike in support issues. Users cannot report directly in the app, saying ‘I have a problem’; what they file looks like a support issue. So by looking at something else, by analyzing support issues, we are able to spot some product issues,” said Rao.
