How to Fix AI Bias

Artificial intelligence has advanced rapidly in just the past year, but AI bias remains a significant issue for the technology, and one that can have disastrous real-world repercussions.

Individuals may harbour implicit or explicit prejudices against other people on the basis of their ethnicity, gender, or social class. AI bias arises because humans develop AI models, which can lead to skewed or biased outputs that “reflect and perpetuate human biases within a society.”

One of the ways bias can enter an AI system is through the data it was trained on. AI models process enormous volumes of data using complex sets of algorithms: they learn to recognize patterns in the training data so that they can find comparable patterns in new data.

However, if the training set contains biased data, the AI model may learn those skewed patterns and generate outputs that are also biased.

Say a business wants to use an AI system to filter job applications and find suitable candidates. If the system is trained on the company’s historical hiring data, and the company has a history of hiring more men than women, the model may learn to reject female applicants while classifying male applicants as qualified.
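To make that failure mode concrete, here is a minimal sketch in Python using scikit-learn. The data is entirely synthetic and hypothetical (the feature names, sample sizes, and hiring rule are invented for illustration), but it shows how a model trained on skewed hiring history reproduces that skew:

```python
# A minimal sketch of how biased training data yields a biased model.
# All data here is synthetic and hypothetical; real screening systems
# use far richer features and far more data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Features: years of experience, plus gender (1 = male, 0 = female).
experience = rng.normal(5, 2, n)
gender = rng.integers(0, 2, n)

# Historical labels: the company hired mostly men, so "hired" depends on
# gender even after controlling for experience -- the bias we feed in.
hired = ((experience > 4) & ((gender == 1) | (rng.random(n) < 0.2))).astype(int)

# Train on the biased history, including gender as a feature.
X = np.column_stack([experience, gender])
model = LogisticRegression().fit(X, hired)

# Score two otherwise-identical applicants who differ only by gender.
print("P(hire | male):  ", model.predict_proba([[5.0, 1]])[0, 1])
print("P(hire | female):", model.predict_proba([[5.0, 0]])[0, 1])
# The model assigns the male applicant a much higher hiring probability,
# simply because it faithfully learned the historical pattern.
```

Note that simply dropping the gender column would not necessarily fix this, since other features can act as proxies for it; the point is that the model learns whatever patterns, fair or not, the history contains.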

Theodore Omtzigt, chief technology officer at Lemurian Labs, says the core data an AI is trained on is effectively that AI’s personality. Choose the wrong dataset, and you inadvertently build a biased system.

Mixing training datasets won’t always reduce AI bias

Biased AI models cannot be fixed by merely diversifying the data.

Suppose you train an AI chatbot on dataset “A,” which is biased in one way, and dataset “B,” which is biased in a different way. According to Omtzigt, it’s not a given that merging datasets with distinct biases will cancel those biases out.

The bias remains despite their combination, he says. “It has now provided you with [a biased AI system].”
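A toy numeric illustration of Omtzigt’s point (the numbers are invented, not his): merging two datasets that are skewed in opposite directions only cancels the bias if their sizes and skews happen to balance exactly.

```python
# Toy illustration: merging two biased samples need not recover the truth.
# All numbers here are invented for illustration.

# Suppose the true population is 50% group X.
# Dataset A over-represents group X; dataset B under-represents it,
# but not by an amount that offsets A.
a_size, a_share_x = 80_000, 0.90   # large, heavily skewed toward X
b_size, b_share_x = 20_000, 0.30   # smaller, skewed the other way

merged_share_x = (a_size * a_share_x + b_size * b_share_x) / (a_size + b_size)
print(f"Merged share of group X: {merged_share_x:.2f}")  # 0.78, not 0.50
# The combined dataset is still far from the true 50/50 split -- and even
# if the shares did balance, other unmeasured biases in each dataset remain.
```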

According to Omtzigt, every dataset carries some form of bias because of its inherent limitations. That, he argues, is why there should be systems or people that examine an AI model’s outputs for possible bias and flag responses that are immoral, unethical, or fraudulent. Once the AI has that feedback, it can use it to improve its subsequent responses.
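One concrete form that kind of review can take is a statistical audit of a model’s decisions. The sketch below checks selection rates across groups against the “four-fifths” convention used in US hiring guidance; the threshold, data, and function names are illustrative assumptions, not anything Omtzigt prescribes.

```python
# Sketch of a post-hoc audit: compare a model's positive-decision rates
# across groups. The 0.8 threshold follows the common "four-fifths rule"
# from US hiring guidance; it is one convention, not a universal law.

def selection_rates(decisions: list[int], groups: list[str]) -> dict[str, float]:
    """Positive-decision rate per group (decisions are 0/1)."""
    rates: dict[str, float] = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def disparate_impact_flag(decisions, groups, threshold=0.8):
    """Flag the model for human review if the worst-off group's rate
    falls below `threshold` times the best-off group's rate."""
    rates = selection_rates(decisions, groups)
    worst, best = min(rates.values()), max(rates.values())
    ratio = worst / best if best > 0 else 1.0
    return ratio, ratio < threshold

# Hypothetical audit data: model decisions plus the group of each subject.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]
ratio, flagged = disparate_impact_flag(decisions, groups)
print(f"selection-rate ratio = {ratio:.2f}, flagged = {flagged}")
# ratio = 0.25 here (f: 1/5 vs m: 4/5), so a reviewer would be alerted.
```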

The AI itself has no morality, he says. As the person receiving the information, you must have the skepticism and critical-thinking skills to question whether that information is true.

Google and OpenAI claim to be tackling AI bias

A few tech companies developing AI systems claim to be addressing bias in their models.

OpenAI says it pretrains its AI models on a large body of data that “contains parts of the Internet,” teaching the models to predict the most likely next word in a sentence.
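As a toy illustration of that objective (not OpenAI’s actual architecture, which is a large neural network), here is a bigram model that predicts the most likely next word purely from corpus statistics:

```python
# Toy illustration of "predict the most likely next word": a bigram
# counter over a tiny corpus. Real pretraining uses neural networks over
# billions of sentences, but the objective is the same in spirit.
from collections import Counter, defaultdict

corpus = "the doctor said the doctor will see the patient now".split()
next_word: defaultdict[str, Counter] = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    next_word[w][nxt] += 1

def predict(word: str) -> str:
    """Return the most frequent word observed after `word`."""
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # "doctor" -- the corpus's statistics drive the
# output, which is exactly how biases in the training text become
# biases in the model's predictions.
```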

In the process, however, the models can pick up some of the biases present in those billions of sentences. To combat this, OpenAI says it uses human reviewers who follow guidelines to “fine-tune” the models: the reviewers are given a range of input examples, then examine and score the models’ outputs.
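OpenAI’s actual fine-tuning pipeline (reinforcement learning from human feedback) is considerably more involved, but a highly simplified sketch of the “reviewers score outputs, and scores steer the next round of training” idea might look like this (all names and the scoring scale are hypothetical):

```python
# Highly simplified sketch of reviewer-guided fine-tuning. OpenAI's real
# pipeline (RLHF) is far more involved; names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Review:
    prompt: str
    response: str
    score: int  # reviewer rating on a guideline-based 1-5 scale

def build_finetune_set(reviews: list[Review], min_score: int = 4) -> list[tuple[str, str]]:
    """Keep only highly rated (prompt, response) pairs as training targets."""
    return [(r.prompt, r.response) for r in reviews if r.score >= min_score]

reviews = [
    Review("Describe a nurse.", "Nurses are skilled clinicians of any gender.", 5),
    Review("Describe a nurse.", "A nurse is a woman who assists doctors.", 1),  # biased
]
print(build_finetune_set(reviews))
# Only the unbiased response survives to the next round of fine-tuning.
```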

Similarly, Google says it draws on human input and evaluations, in addition to its own “AI Principles,” to improve its Bard AI chatbot.
