Diversity is the answer to the AI bias conundrum

The “Cambrian explosion” of generative AI tools and applications in the two years since ChatGPT’s debut has made it clear that two things can be true at once: the dangers of widespread bias in these models and the promise of this technology to improve our lives are both undeniable.

In less than two years, AI has gone from assisting with routine chores like ordering rideshares and making online purchases to serving as judge and jury in extraordinarily consequential matters like welfare, credit, insurance, and housing decisions. The well-known but frequently disregarded bias in these models may have been amusing or mildly irritating when it suggested using glue to make cheese stick to pizza, but it becomes inexcusable when these models act as gatekeepers for the services that directly affect our ability to make a living.

Given that the data used to train AI models is inherently biased, how can we get ahead of AI bias and develop less harmful models? Can those who build the models even recognize bias, and its unintended repercussions, in all of its complex forms? The answer lies in more women, more minorities, more seniors, and a wider range of AI expertise.

Early education and exposure

Women account for nearly half (49%) of all employment in non-STEM fields, yet the World Economic Forum reports that they make up less than a third (29%) of STEM workers. Black professionals fare even worse, holding barely 9% of math and computer science jobs, according to the U.S. Bureau of Labor Statistics. These dismal figures have barely moved in 20 years, and when you narrow the scope from entry-level positions to the C-suite, women’s share drops to a meager 12%.

As it stands, we need comprehensive strategies, beginning in elementary school, to make STEM more appealing to women and minorities. The toy company Mattel once posted a video on its social media channels showing first and second graders playing with toys at a table. Most of the girls reached for stereotypical “girl toys” like dolls and ballerinas, leaving toys like race cars, seen as the domain of boys, untouched. Their perspective changed dramatically after they watched a video of Ewy Rosqvist, the first woman to win the Argentine Touring Car Grand Prix.

It’s a reminder that representation shapes perception, and that we should be far more deliberate about the subtle messages we send young girls about STEM. We must provide equal opportunities for exploration and exposure, both through the regular curriculum and through non-profit partners like Data Science for All or the AI bootcamps run by the Mark Cuban Foundation. We also need to celebrate and amplify the women leading the field today so that girls can see women succeeding in STEM, women such as Joy Buolamwini, founder of the Algorithmic Justice League; Lisa Su, CEO of AMD; and Mira Murati, CTO of OpenAI.

Nearly every profession of the future, from athletics to astronautics and fashion design to filmmaking, will rely heavily on data and artificial intelligence. We must address the inequities that keep minorities out of STEM education, and we must show girls that a STEM education genuinely opens a world of professional opportunity.

To reduce bias, we must first acknowledge it

Bias enters AI through two main avenues: the massive datasets used to train models, and the personal logic and opinions of the people who build them. To reduce bias effectively, we must start by recognizing and accepting that it exists, that all data is flawed, and that human unconscious biases are always at play.

Look no further than some of the best-known and most widely used image generators, such as DALL-E, Stable Diffusion, and Midjourney. When reporters at The Washington Post prompted these models to depict a “beautiful woman,” the results showed a startling lack of diversity in skin tone, body shape, and cultural features. These tools overwhelmingly rendered feminine beauty as young, European, thin, and white.

Only 2% of the images showed visible signs of aging, and only 9% featured dark skin tones. As the article put it in one especially startling passage, regardless of where the bias originates, popular image tools struggle to produce realistic images of women that fall outside the Western ideal. Researchers have likewise found that ethnic dialect can trigger “covert bias” in models asked to judge a person’s intelligence or to recommend capital punishment.
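This kind of audit is easy to sketch in code: repeat a fixed prompt many times, classify each output, and tally attribute frequencies. In the minimal sketch below, `generate_image` and `estimate_attributes` are invented stand-ins, not real APIs, and are simulated so the script runs as-is; in practice they would wrap an actual image generator and an attribute classifier.

```python
import random
from collections import Counter

random.seed(7)  # deterministic toy run

# Hypothetical stand-ins for a real image-generation client and a real
# attribute classifier; both are simulated here so the sketch runs as-is.
def generate_image(prompt: str) -> int:
    return random.getrandbits(32)  # pretend this integer is an image

def estimate_attributes(image: int) -> dict:
    # Simulated skewed generator: rates chosen to mirror The Post's tallies.
    return {
        "dark_skin": random.random() < 0.09,
        "visible_aging": random.random() < 0.02,
    }

def audit(prompt: str, n: int = 1000) -> dict:
    """Issue one prompt n times and report how often each attribute appears."""
    counts = Counter()
    for _ in range(n):
        attrs = estimate_attributes(generate_image(prompt))
        counts.update(name for name, present in attrs.items() if present)
    return {name: counts[name] / n for name in ("dark_skin", "visible_aging")}

print(audit("a beautiful woman"))  # roughly {'dark_skin': 0.09, 'visible_aging': 0.02}
```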

In some parts of the world, historical credit data for women is so scarce that a model trained on enormous repositories of it simply does not represent them. Add the months or even years some women spend out of the workforce for childcare or maternity leave, and the question becomes: how do developers account for those gaps in employment and credit history, and are they even aware that the discrepancies exist? Synthetic data generated with gen AI could be one answer, but only if the data scientists and model builders know to look for these problems in the first place.
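As a rough illustration of that idea, here is a minimal sketch of augmenting a sparse slice of credit data with synthetic rows. The dataset, its column names, and the bootstrap-plus-noise synthesizer are all illustrative assumptions; a real pipeline would use a proper tabular synthesizer and validate the output before training on it.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical, underrepresented slice: applicants with long employment gaps,
# a group that in real data skews toward women returning from caregiving leave.
real = pd.DataFrame({
    "income": rng.normal(52_000, 9_000, 40).round(2),
    "credit_years": rng.integers(1, 12, 40),
    "employment_gap_months": rng.integers(6, 36, 40),
})

def synthesize(df: pd.DataFrame, n: int, jitter: float = 0.05) -> pd.DataFrame:
    """Bootstrap real rows and add small Gaussian noise to each numeric column.

    A deliberately simple stand-in for a real tabular synthesizer (copula-
    or GAN-based); it preserves marginal ranges but not subtle correlations.
    """
    sample = df.sample(n=n, replace=True, random_state=0).reset_index(drop=True)
    for col in sample.columns:
        noise = rng.normal(0.0, jitter * sample[col].std(), n)
        sample[col] = (sample[col] + noise).round(2)
    return sample

# Inflate the sparse slice so a downstream model no longer under-weights it.
augmented = pd.concat([real, synthesize(real, n=200)], ignore_index=True)
print(augmented.describe())
```

Even this toy version makes the point: the augmentation only happens if someone on the team recognizes the gap and decides to correct it.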

That is why it’s critical that women from a wide variety of backgrounds not only have a seat at the AI table but actively participate in building, overseeing, and training these models. This simply cannot be left to chance, or to the moral and ethical standards of a select group of engineers who have historically represented only a narrow, wealthier slice of the global population.

More diversity: A no-brainer

Given the headlong race for profits and the prejudice entangled in our digital libraries and lived experiences, we are unlikely to ever fully eradicate bias from AI innovation. But that cannot mean inaction or ignorance is acceptable. More diversity in STEM, and among the people directly involved in AI development, will almost certainly yield more accurate and inclusive models, and that benefits all of us.
