HomeMachine LearningMachine Learning NewsHow can machine learning models overcome biased data sets?

How can machine learning models overcome biased data sets?

AI systems can get a job done quickly, but that doesn't mean they always do it fairly. If the dataset used to train a machine learning model contains biased data, the system is likely to exhibit that same bias when it makes decisions. For example, if a dataset contains mostly images of white men, a facial-recognition model trained on that data may be less accurate for women or for people with different skin tones.

A team of researchers from the Massachusetts Institute of Technology, in collaboration with researchers from Harvard University and Fujitsu, set out to understand when and how machine learning models can overcome this kind of dataset bias. They used an approach from neuroscience to study how training data affects whether an artificial neural network can learn to recognize objects it has never seen before. A neural network is a machine learning model that mimics the human brain in that it contains layers of interconnected nodes, or "neurons," that process data.
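
For readers who want to see this concretely, here is a minimal PyTorch sketch of such a layered network; the layer sizes are arbitrary illustrations, not the architecture used in the study.

```python
import torch.nn as nn

# A neural network is stacked layers of simple units ("neurons");
# each layer transforms the output of the one before it.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),  # first layer of neurons
    nn.Linear(128, 64), nn.ReLU(),   # second layer
    nn.Linear(64, 10),               # output layer: one score per class
)
```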

The new results show that diversity in the training data has a major influence on whether a neural network is able to overcome bias, but at the same time that dataset diversity can degrade the network's performance. They also show that how a neural network is trained, and the specific types of neurons that emerge during training, can play a major role in whether it is able to overcome a biased dataset.

"A neural network can overcome dataset bias, which is promising. But the main takeaway here is that we need to take data diversity into account. We need to stop believing that simply collecting a large amount of raw data will get you somewhere. We need to pay close attention to how datasets are designed in the first place," says Xavier Boix, a research scientist in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds, and Machines, and senior author of the paper.

Co-authors include Spandan Madan, a corresponding author who is currently pursuing a PhD at Harvard; Timothy Henry, Jamell Dozier, Helen Ho, and Nishchal Bhandari; Tomotake Sasaki, a former visiting researcher who is now a researcher at Fujitsu; Frédo Durand, a professor of electrical engineering and computer science and a member of the Computer Science and Artificial Intelligence Laboratory; and Hanspeter Pfister, a professor of computer science at the Harvard School of Engineering and Applied Sciences.

Thinking like a neuroscientist:

Boix and his colleagues approached the problem of dataset bias by thinking like neuroscientists. In neuroscience, Boix explains, it is common to use experimentally controlled datasets, meaning datasets in which the researchers know as much as possible about the information they contain.

The team built datasets containing images of different objects in varied poses, carefully controlling the combinations so that some datasets were more diverse than others. In this case, a dataset was less diverse if many of its images showed objects from only one viewpoint; a more diverse dataset had more images showing objects from multiple viewpoints. Each dataset contained the same number of images.
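
The article does not describe the team's data pipeline in detail, but the core idea, varying pose diversity while holding the image count fixed, can be sketched as follows. The data structure, pose names, and sampling scheme here are illustrative assumptions.

```python
import random

# Illustrative structure: images_by_category[category][pose] is a list of
# image file paths (hypothetical; the paper's actual pipeline is not shown).
def build_split(images_by_category, poses_seen, n_per_category, seed=0):
    """Sample n_per_category images per category, drawn only from the given
    poses. Restricting poses_seen yields a less diverse dataset while the
    total number of images stays fixed."""
    rng = random.Random(seed)
    split = []
    for category, by_pose in images_by_category.items():
        pool = [img for pose in poses_seen for img in by_pose[pose]]
        split.extend((img, category) for img in rng.sample(pool, n_per_category))
    return split

# Same size, different diversity:
# narrow  = build_split(data, poses_seen=["front"], n_per_category=1000)
# diverse = build_split(data, poses_seen=["front", "side", "rear", "top"],
#                       n_per_category=1000)
```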

Using these carefully constructed datasets, the researchers trained a neural network for image classification and then tested how well it could identify objects from viewpoints the network had not seen during training. For example, researchers training a model to classify cars in images want the model to learn what different cars look like. But if every Ford Thunderbird in the training dataset is shown from the front, then when the trained model is given a photo of a Ford Thunderbird taken from the side, it may misclassify it, even if it was trained on millions of car photos.
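
A hedged PyTorch sketch of that evaluation protocol: train on images from the seen poses, then measure accuracy on a held-out set of unseen poses. The model, data loaders, and hyperparameters are placeholders, not the study's actual setup.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=10, lr=1e-3):
    """Standard supervised training on the (pose-restricted) split."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

@torch.no_grad()
def accuracy(model, loader):
    """Classification accuracy on any split, e.g. unseen viewpoints."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Comparing accuracy(model, seen_pose_loader) with
# accuracy(model, unseen_pose_loader) measures how well training
# diversity transfers to novel viewpoints.
```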

The researchers found that when the dataset is more diverse, that is, when more images show objects from different viewpoints, the network is better able to generalize to new images or viewpoints. Data diversity is the key to overcoming bias, Boix says. But that diversity comes at a cost. "As the neural networks get better at recognizing new things they haven't seen, it becomes harder for them to recognize things they have already seen," he says.

Testing training methods:

In machine learning, it is common to train a network to perform multiple tasks at the same time; in this study, those tasks were recognizing an object's category and recognizing its viewpoint. The idea is that if a relationship exists between the tasks, the network will learn to perform each one better if it learns them together. But the researchers found the opposite: models trained separately for each task were far better at overcoming bias than a model trained on both tasks together.
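
The architectures are not specified in the article, but the two regimes being compared can be sketched like this: a single network with a shared trunk and one head per task, trained jointly, versus two independent single-task networks. Layer sizes are illustrative assumptions.

```python
import torch.nn as nn

def make_trunk():
    """A tiny shared feature extractor (placeholder architecture)."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class JointModel(nn.Module):
    """One trunk, two heads: trained on both tasks at the same time."""
    def __init__(self, n_categories, n_poses):
        super().__init__()
        self.trunk = make_trunk()
        self.category_head = nn.Linear(16, n_categories)
        self.pose_head = nn.Linear(16, n_poses)

    def forward(self, x):
        features = self.trunk(x)
        return self.category_head(features), self.pose_head(features)

# The "separate" regime simply trains two independent copies of
# nn.Sequential(make_trunk(), nn.Linear(16, n_classes)), one per task;
# the surprising finding is that this overcame bias better.
```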

"The results were really impressive. In fact, the first time we ran this experiment, we thought it was a mistake. It was so unexpected that it took us a few weeks to realize it was a real result," he says.

They dug deeper inside the neural networks to understand why this happens.

They found that neuron specialization appears to play a major role. When a neural network is trained to recognize objects in images, two types of neurons seem to emerge: one that specializes in recognizing the object category, and one that specializes in recognizing the viewpoint.

Boix explains that these specialized neurons become more prominent when the network is trained to perform the tasks separately. But if a network is trained on both tasks at once, some neurons become diluted and no longer specialize in a single task. These unspecialized neurons are more likely to get confused, he says.
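
The article does not give the metric the team used to identify specialized neurons, but one simple, hypothetical way to probe for specialization is to compare how much each unit's mean activation varies across categories versus across viewpoints, as in this NumPy sketch.

```python
import numpy as np

def selectivity(activations, labels):
    """activations: (n_samples, n_units); labels: (n_samples,).
    Returns, per unit, the spread of class-mean activations: large values
    mean the unit responds very differently to different classes."""
    classes = np.unique(labels)
    means = np.stack([activations[labels == c].mean(axis=0) for c in classes])
    return means.max(axis=0) - means.min(axis=0)

# category_sel = selectivity(acts, category_labels)
# pose_sel     = selectivity(acts, pose_labels)
# Units scoring high on one measure but low on the other would be the
# "specialized" neurons; units middling on both would be the diluted ones.
```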

"But the next question is, how did these neurons get there? You train the neural network, and they emerge from the learning process. No one told the network to include these types of neurons in its architecture. That is the fascinating thing," he says.

This is one area the researchers hope to explore in future work. They want to see whether they can push a neural network to develop neurons with this kind of specialization. They also want to apply their approach to more complex tasks, such as objects with complicated textures or varied lighting.

Boix is encouraged that a neural network can learn to overcome bias, and he hopes this work inspires others to think more carefully about the datasets they use in AI applications.
