
Making ML Technology Trustworthy

Machine learning (ML) has advanced dramatically over the past decade and continues to achieve impressive, human-level performance on nontrivial tasks in image, speech, and text recognition. It increasingly powers many high-stakes application domains such as autonomous vehicles, autonomous drones, intrusion detection, medical imaging, and financial prediction. However, ML must make several advances before it can be deployed safely in areas where it directly affects people during training and operation, where security, privacy, safety, and fairness are all essential considerations.

The development of a reliable ML model must include protective measures against different types of adversarial attacks. The training datasets an ML model requires can be "poisoned" by inserting, modifying, or deleting training samples in order to shift the model's decision boundary to serve the adversary's intention. ML models can also be evaded with deliberately crafted inputs known as adversarial examples. In an autonomous vehicle, for example, the navigation control model may rely on the recognition of traffic signs. By attaching a small sticker to a stop sign, an adversary can cause the model to misclassify the stop sign as a no-entry sign or as a speed limit "45" sign, whereas a human driver would simply ignore the sticker and brake at the stop sign.
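
To make the evasion idea concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one common way adversarial examples are crafted. The model, input tensor, and label are placeholders assumed for illustration; they do not come from the article, and a real attack on a traffic-sign classifier would tune the perturbation budget carefully.

```python
# Minimal FGSM sketch (assumes a PyTorch image classifier; illustrative only).
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, image, label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that the model is more
    likely to misclassify. `image` is a batched tensor in [0, 1] and `label`
    holds the true class indices."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel a small step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```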

Attacks can also abuse the input-output interaction of a model's prediction interface to steal the ML model itself. By supplying a batch of inputs (for example, publicly available images of traffic signs) and obtaining predictions for each, the model serves as a labeling oracle that enables an adversary to train a substitute model that is functionally equivalent to it. Such attacks pose greater risks for ML models that learn from high-value data such as intellectual property, national security information, or military intelligence.
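
The mechanics of such model stealing can be sketched as follows, assuming a hypothetical black-box `query_victim` function that returns predicted labels for a batch of inputs; the substitute architecture and training loop are illustrative assumptions, not a description of any specific attack from the article.

```python
# Sketch of training a substitute model from a victim's prediction API.
# `query_victim` is a hypothetical black-box call returning class labels.
import torch
import torch.nn.functional as F

def train_substitute(query_victim, public_inputs, substitute, epochs=20, lr=1e-3):
    # The victim model acts as a labeling oracle for publicly available data.
    with torch.no_grad():
        stolen_labels = query_victim(public_inputs)

    optimizer = torch.optim.Adam(substitute.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = F.cross_entropy(substitute(public_inputs), stolen_labels)
        loss.backward()
        optimizer.step()
    return substitute  # approximates the victim's behavior on similar inputs
```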

When training models for predictive analytics on privacy-sensitive data, such as clinical patient records and customers' bank transactions, privacy is of the utmost importance. Privacy-motivated attacks can reveal confidential information in the training data through simple interaction with deployed models. The reason such attacks succeed is that ML models tend to "memorize" parts of their training data and, at prediction time, inadvertently reveal details about the individuals who contributed to it. A common strategy called membership inference enables an adversary to exploit differences in a model's responses to members and non-members of its training set.
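
A very simple form of this attack can be illustrated with a confidence-threshold test, assuming access to the deployed model's prediction probabilities; real attacks typically calibrate the decision rule with shadow models, and the threshold used here is only an assumption.

```python
# Sketch of a confidence-based membership inference test (illustrative only).
import numpy as np

def guess_membership(prediction_probabilities, threshold=0.95):
    """Guess that an input was part of the training set when the model is
    unusually confident about it, exploiting the tendency of models to
    "memorize" training examples. `prediction_probabilities` is a 1-D array
    of class probabilities for a single input."""
    return float(np.max(prediction_probabilities)) >= threshold
```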

In response to these threats to ML models, the search for countermeasures holds promise. Research has made advances in detecting poisoning and adversarial inputs, and in limiting what an adversary can learn by merely interacting with a model, which constrains the scope of model-stealing and membership inference attacks. A promising example is the formally rigorous formulation of privacy. The notion of differential privacy promises a person contributing to a dataset that, whether or not their data is part of a model's training set, what an adversary can learn about them by interacting with the model is essentially the same.
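
One way this guarantee is approached in practice is differentially private training in the style of DP-SGD, where per-example gradients are clipped and noised before the model update. The sketch below shows only that core step; the clipping norm and noise multiplier are illustrative assumptions, and production systems should rely on vetted libraries rather than hand-rolled code.

```python
# Sketch of the core DP-SGD step: clip each example's gradient, add noise.
import torch

def privatized_batch_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """`per_example_grads` is a list of same-shaped gradient tensors, one per
    training example. Returns a noisy average gradient suitable for an
    optimizer step under (epsilon, delta)-differential privacy accounting."""
    clipped = []
    for grad in per_example_grads:
        # Bound each individual example's influence on the update.
        scale = torch.clamp(clip_norm / (grad.norm() + 1e-6), max=1.0)
        clipped.append(grad * scale)
    summed = torch.stack(clipped).sum(dim=0)
    noise = torch.randn_like(summed) * noise_multiplier * clip_norm
    return (summed + noise) / len(per_example_grads)
```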

Technical remedies aside, the lessons learned from the ML attack-defense arms race offer opportunities to motivate broader efforts to make ML truly trustworthy with respect to society's needs. These issues include examining how a model "thinks" when it makes decisions (transparency) and the fairness of an ML model trained to solve high-stakes inference problems for which human decisions have historically been biased. Transparency, fairness, and ethics must therefore all be addressed when ML is used to meet human needs.

Several worrying cases of harmful bias in ML applications have been documented, such as race and gender misidentification, wrongly scoring darker-skinned faces as more likely to belong to a criminal, disproportionately favoring male applicants in résumé screening, and disfavoring Black patients in medical trials. These harmful consequences require ML model developers to look beyond technical solutions in order to earn the trust of the people affected by them.

On the research front, especially for ML security and privacy, the defensive countermeasures mentioned above have deepened understanding of the blind spots of ML models in adversarial settings. There is more than enough evidence of the pitfalls of ML, especially for subjects underrepresented in training datasets, yet much remains to be done toward inclusive, human-centered formulations of what it means for ML to be fair and ethical. The root cause of bias in ML is typically attributed to data, and data alone. Data collection, sampling, and annotation do play critical roles in creating historical bias, but there are several other points in the pipeline where bias can occur: from feature extraction and aggregation during training to scoring methods and metrics during testing, bias issues manifest themselves throughout the ML computational process.
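
As one small illustration of a bias check at the scoring stage, the sketch below computes a demographic parity gap, i.e., how much positive-prediction rates differ across groups. The array names and the choice of this particular metric are assumptions for illustration, not a method prescribed in the article.

```python
# Sketch of a demographic parity gap check at evaluation time (illustrative).
import numpy as np

def demographic_parity_gap(predictions, group_labels):
    """`predictions` is a 0/1 array of model decisions and `group_labels`
    marks each example's group (e.g., a demographic attribute). Returns the
    largest difference in positive-prediction rates across groups; 0.0 means
    every group receives positive decisions at the same rate."""
    rates = [predictions[group_labels == g].mean() for g in np.unique(group_labels)]
    return float(max(rates) - min(rates))
```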

There is currently a lack of generally accepted definitions and formulations of adversarial robustness and privacy-preserving ML (with the exception of differential privacy, which is formally appealing but not yet widely adopted). Transfer across domains is also a pressing problem blocking progress toward trustworthy ML. For example, most of the evasion and membership inference attacks outlined above have been studied mainly in applications such as image classification (recognition of traffic signs by an autonomous vehicle), object detection (identifying a flower in a photo of a living room containing multiple objects), speech recognition (voice assistants), and natural language processing (machine translation). Threats and countermeasures proposed for the vision, speech, and text domains rarely translate to other, naturally adversarial domains such as network intrusion detection and financial fraud detection.

Another important consideration is the inherent tension between some trustworthiness properties. For example, transparency and privacy are often at odds: if a model is trained on privacy-sensitive data, pursuing the greatest possible transparency in production would inevitably disclose privacy-relevant details of that data. A decision therefore has to be made about how much transparency to trade for privacy, and vice versa, and these trade-offs must be made clear to purchasers and users of the system, not least because there may be legal consequences if they are not honored (e.g., patient privacy under the Health Insurance Portability and Accountability Act in the United States). Additionally, privacy and fairness may not always be in synergy: differential privacy offers only a bounded guarantee of indistinguishability for individual training examples, and in terms of model utility, research shows that minority groups in the training data (e.g., based on race, gender, or sexuality) tend to be negatively affected by model outputs.

In general, the scientific community needs to take a step back and balance ML's robustness, privacy, transparency, fairness, and ethics norms with human norms. To do so, clearer standards of robustness, fairness, and transparency need to be developed and adopted, ideally through broadly applicable formulations of the kind that differential privacy offers for privacy. On the policy front, there need to be concrete steps toward regulatory frameworks that spell out actionable accountability measures on bias and ethical norms for datasets (including diversity guidelines), training methods (such as bias-aware training), and model decisions (such as supplementing model decisions with explanations). It is hoped that such regulatory frameworks will eventually evolve into legally backed ML governance modalities that guide the machine learning systems of the future.

Above all, there is a great need for insight from diverse scientific communities to take into account the social norms under which users feel safe relying on ML for high-stakes decisions, such as a passenger in an autonomous car, a bank customer who acts on bot-generated investment recommendations, or a patient who trusts an online diagnostic interface. Policies need to be developed that govern the safe and fair adoption of ML in such high-stakes applications. Equally important, the fundamental tensions between adversarial robustness and model accuracy, privacy and transparency, and fairness and privacy invite more rigorous and socially grounded reasoning about trustworthy ML. Fortunately, at this point in the adoption of machine learning, there is a sizable window of opportunity to fix its blind spots before ML becomes pervasive and unmanageable.

