
Treating Data Leakages In Trained ML Models

Sensitive personal data require stricter security measures for processing, storage, and transmission. The goal is to reduce the risk of leaking personal information that can be traced back to the identity of a real person. The actual risk of identity association is complex and depends on the frequency of the data points, the size of the source dataset, the availability of public datasets that support general re-identification strategies, and the publicly available information about specific individuals.

Types of Data Leakages

In their survey, Jegorova et al. examine the ML landscape for possible routes of data leakage:

1| Based on the type of data

Leakage in text data

This data includes names of people (users, customers, patients, security guards, etc.), dates of birth, zip codes, telephone numbers, unique identifiers, and so on. Individual data points, features, or entire records can be captured from a model once it is deployed.
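As a rough illustration of what such identifiers look like in free text, the sketch below redacts a few common patterns with regular expressions. This is a minimal sketch with assumed, illustrative patterns and placeholder names, not a complete PII scrubber; production systems typically also need named-entity recognition to catch names and addresses.

```python
import re

# Illustrative patterns only (assumptions for this sketch); real PII detection
# needs far broader coverage and usually a trained NER model for names.
PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "date_of_birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "zip_code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
    "id_number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace every match of each pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

if __name__ == "__main__":
    record = "Patient Jane Doe, DOB 04/12/1987, phone 555-867-5309, zip 90210."
    print(redact(record))
    # -> Patient Jane Doe, DOB [DATE_OF_BIRTH], phone [PHONE], zip [ZIP_CODE].
    # Note that the name slips through, which is exactly why regexes alone
    # are not enough.
```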

Leakage in image data

With the improvement of generative models, leakage of image data can be catastrophic. Think of a flaw in a model that lets attackers unlock your iPhone, or a home secured with smart locks. Leaked image data includes human faces and other identifying features. When an ML model is trained on such sensitive image data, the authors write, a generative model can reproduce a similar appearance based on re-identifiable bone implants, prostheses, and other individual features.

Leakage in tabular data 

In tabular data, according to the survey, records are limited to predefined variables and values, which increases the risk of precisely identifying a person based on the following (a minimal uniqueness check over such quasi-identifiers is sketched after this list):

  • statistical disclosure risks,
  • the sensitivity of the tabular features themselves,
  • geography and population size,
  • zero-value entries, and
  • linkage of small groups to specific clinical providers.
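To make the re-identification risk concrete, the sketch below measures how unique the combinations of a few quasi-identifiers are in a toy table, in the spirit of a k-anonymity check. The column names and records are hypothetical, chosen to mirror the factors listed above (geography, small groups, specific providers).

```python
import pandas as pd

# Hypothetical patient records; columns are assumptions for this sketch.
df = pd.DataFrame({
    "zip_code":   ["90210", "90210", "10001", "10001", "10001"],
    "birth_year": [1987,    1987,    1960,    1960,    1992],
    "provider":   ["A",     "A",     "B",     "B",     "C"],
    "diagnosis":  ["flu",   "flu",   "asthma", "flu",  "rare_x"],
})

QUASI_IDENTIFIERS = ["zip_code", "birth_year", "provider"]

# Group size per quasi-identifier combination: a group of size 1 is a
# uniquely identifiable individual, i.e. the highest re-identification risk.
group_sizes = df.groupby(QUASI_IDENTIFIERS).size()
k = group_sizes.min()                  # the table is k-anonymous for this k
unique_rows = group_sizes[group_sizes == 1]

print(f"k-anonymity over {QUASI_IDENTIFIERS}: k = {k}")
print("Uniquely identifiable combinations:")
print(unique_rows)
```

The smaller the population and the more granular the geography, the more combinations collapse to groups of one, which is why the factors above matter.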

2| Based on the type of tasks

Regression

In areas such as financial forecasting, marketing trend analysis, and weather prediction, where regression techniques are widely used, many previous publications have reported model-level data leakage for various types of data, including financial and medical time series as well as numerical and mixed-feature tabular data.

Classification

According to the authors, image classification is the best-researched task with respect to leakage and privacy attacks. Researchers have already shown that data samples can be reconstructed from as little information as a class name using membership inference attacks (MIA), property inference attacks, and model extraction. Within classification, however, applications with tabular data are the least explored, as are classifiers for time-series problems.
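A minimal, loss-threshold flavor of a membership inference attack can be sketched against a deliberately overfit classifier: training-set members tend to receive lower per-sample loss than unseen points. This is a simplified illustration, not the shadow-model attacks from the literature; the synthetic dataset and model choice are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train an overfit classifier, then use per-sample loss as a membership signal.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

def per_sample_loss(model, X, y):
    """Negative log-likelihood of the true label for each sample."""
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(probs, 1e-12, None))

loss_members = per_sample_loss(model, X_in, y_in)        # seen during training
loss_nonmembers = per_sample_loss(model, X_out, y_out)   # never seen

# Threshold attack: call a point a "member" if its loss is below the median
# of all losses; balanced accuracy of 0.5 would mean no leakage.
threshold = np.median(np.concatenate([loss_members, loss_nonmembers]))
accuracy = 0.5 * ((loss_members < threshold).mean()
                  + (loss_nonmembers >= threshold).mean())
print(f"membership inference balanced accuracy: {accuracy:.2f}")
```

Anything well above 0.5 indicates that the model's behavior alone reveals who was in the training set.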

Generation

Generative models have contributed greatly to the AI hype of the past two years. These algorithms have generated paintings that were auctioned off for millions of dollars, but at the same time they have let a genie out of the bottle in the form of deepfakes, which can deceive an ordinary viewer and carry misinformation of catastrophic proportions. A well-trained Generative Adversarial Network (GAN) can capture the underlying distribution of the real data, which explains the effectiveness of deepfakes. The same property means these models can reveal sensitive information about the people in the training set.
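One rough way to probe whether a generative model is reproducing its training data is to compare generated samples against their nearest training neighbors: distances near zero suggest memorization rather than novel sampling. The sketch below stands in synthetic arrays for real GAN outputs and training images, so all data and thresholds are assumptions.

```python
import numpy as np

def nearest_train_distance(generated: np.ndarray, train: np.ndarray) -> np.ndarray:
    """Distance from each generated sample to its closest training sample."""
    diffs = generated[:, None, :] - train[None, :, :]   # pairwise differences
    dists = np.sqrt((diffs ** 2).sum(axis=-1))          # Euclidean distances
    return dists.min(axis=1)

# Toy stand-ins for flattened images; a real check would use samples drawn
# from the trained GAN and the GAN's actual training set.
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 64))
generated = np.vstack([rng.normal(size=(95, 64)),
                       train[:5] + 1e-3])   # 5 near-copies of training points

d = nearest_train_distance(generated, train)
print("suspiciously close samples:", int((d < 0.1).sum()))  # expect ~5
```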

3| Miscellaneous

Specific training samples are memorized when the model assigns them a probability significantly higher than expected by chance. In deep learning and deep reinforcement learning, however, some level of memorization is often welcome and can be inevitable, which makes things harder: the line between a feature and a bug is thin. The authors caution that memorization can raise serious privacy and legal concerns when trained machine learning models are publicly shared or provided as a service.
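The "higher probability than expected by chance" signal can be made concrete with an exposure-style check in the spirit of Carlini et al.'s "secret sharer" experiments: plant a secret canary in the training text, then rank it among all possible candidates by model probability. The toy corpus and count-based bigram model below are assumptions for illustration, not the survey's method.

```python
import math
from collections import Counter
from itertools import product

SECRET = "7291"
corpus = ("call me at the office " * 50) + f"my pin is {SECRET} "

# Count-based character bigram "language model" trained on the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def log_prob(seq: str, prefix: str = " ") -> float:
    """Log-probability of seq under the bigram model, with simple smoothing."""
    total, prev = 0.0, prefix[-1]
    for ch in seq:
        total += math.log((bigrams[(prev, ch)] + 1) / (unigrams[prev] + 10))
        prev = ch
    return total

# Rank the true secret among all 4-digit candidates. A memorized secret ranks
# near the top: the model gives it far more probability than the chance
# level of 1 / 10**4.
candidates = ["".join(digits) for digits in product("0123456789", repeat=4)]
ranked = sorted(candidates, key=log_prob, reverse=True)
rank = ranked.index(SECRET) + 1
exposure = math.log2(len(candidates)) - math.log2(rank)
print(f"secret rank {rank} of {len(candidates)}, exposure ~ {exposure:.1f} bits")
```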

Feature leakage is characterized by the leak of sensitive data features. It implicitly enables property inference attacks, which can pose a threat to collaborative learning models.

The leaks mentioned above are exploited through membership inference attacks, property inference attacks, model inversion attacks, and model extraction attacks. These are the most popular attacks and are being actively investigated. According to the survey, they can be countered with the following defense mechanisms.

For example, data obfuscation is a method of obscuring confidential information through encryption or masking. The technique deliberately adds noise to the data; the severity of the perturbation governs the trade-off between user privacy and quality of service.
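One concrete form of such noise addition is the Laplace mechanism from differential privacy, where a privacy parameter controls the noise scale. The sketch below is an illustrative stand-in for obfuscation in general rather than the survey's specific technique; the epsilon values and data are arbitrary.

```python
import numpy as np

def obfuscate(values: np.ndarray, epsilon: float, sensitivity: float = 1.0) -> np.ndarray:
    """Add Laplace noise with scale sensitivity/epsilon to each value.

    Smaller epsilon means more noise: stronger privacy but lower utility,
    which is exactly the privacy / quality-of-service trade-off described above.
    """
    scale = sensitivity / epsilon
    rng = np.random.default_rng(0)
    return values + rng.laplace(0.0, scale, size=values.shape)

ages = np.array([34.0, 51.0, 29.0, 62.0])
for eps in (10.0, 1.0, 0.1):
    noisy = obfuscate(ages, epsilon=eps)
    print(f"epsilon={eps:>4}: mean absolute error = {np.abs(noisy - ages).mean():.1f} years")
```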

Data sanitization, in contrast, overwrites sensitive information with realistic-looking synthetic data, using techniques such as label flipping. These defenses also allow researchers to predict how a model will behave under attack.
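A minimal label-flipping sketch follows, assuming integer class labels; the flip fraction and random offsets are illustration choices, not values from the survey.

```python
import numpy as np

def flip_labels(labels: np.ndarray, flip_fraction: float, n_classes: int,
                seed: int = 0) -> np.ndarray:
    """Randomly reassign a fraction of labels to a different class.

    The flipped copies still look like plausible labels but no longer
    reveal the true, potentially sensitive value.
    """
    rng = np.random.default_rng(seed)
    flipped = labels.copy()
    n_flip = int(flip_fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    # A random non-zero offset modulo n_classes always yields a new class.
    offsets = rng.integers(1, n_classes, size=n_flip)
    flipped[idx] = (flipped[idx] + offsets) % n_classes
    return flipped

y = np.array([0, 1, 1, 0, 2, 2, 1, 0])
print(flip_labels(y, flip_fraction=0.25, n_classes=3))
```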

However, applying brute-force defenses across the board to ML systems that do the heavy lifting is not recommended; the appropriate defense varies by application. For example, data leakage is far more serious in a healthcare facility than in a recommendation system that stores favorite movies. And, as mentioned earlier, behaviors such as memorization can sometimes be desirable features rather than bugs. Moreover, the most popular defenses are case-specific and have yet to be widely scrutinized.

Thanks to federated on-device learning and similar techniques, privacy-preserving machine learning applications already exist, but every new modeling paradigm brings a new challenge: generative models, for instance, introduce problems such as catastrophic forgetting that classification models did not face. With governments tightening the rules through GDPR, PDPA, and their equivalents, investigating and defending against data leakage has never been more important.

