The Four Biggest AI Controversies

Google’s LaMDA artificial intelligence (AI) model garnered widespread attention after an engineer working with the company claimed that it had become sentient. Google dismissed the claim, but this is far from the first time an AI program has been embroiled in controversy.

AI is an umbrella term for computer systems that mimic human intelligence. These systems are generally trained by ingesting huge amounts of data and analyzing it for correlations and patterns, which they then use to make predictions. The process can go wrong, however, with outcomes ranging from the hilarious to the horrifying. Here are some of the most notable recent controversies involving AI systems.
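
This learn-patterns-then-predict loop is easy to see in miniature. The sketch below is a toy illustration using scikit-learn; the feature values, labels, and the new example are invented purely for demonstration and are not drawn from any system described in this article.

```python
# Toy illustration of the train-on-data, predict-from-patterns loop.
# All data here is invented for demonstration.
from sklearn.linear_model import LogisticRegression

# Each row is an example the system "absorbs"; the label is what it should learn to predict.
X_train = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y_train = ["cat", "cat", "dog", "dog"]

model = LogisticRegression()
model.fit(X_train, y_train)            # find correlations/patterns in the data

print(model.predict([[0.85, 0.15]]))   # predict from those patterns (likely "dog")
```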

Google’s LaMDA is allegedly ‘sentient’

Even a machine might agree that it is reasonable to start with the most recent controversy. Google engineer Blake Lemoine was placed on administrative leave after claiming that LaMDA had become sentient and begun reasoning like a human.

Lemoine told the Washington Post, which first broke the story, that if he had not known it was a computer program Google had recently built, he would have thought he was talking to a seven- or eight-year-old kid who happens to know physics. In his opinion, the technology could be incredible and would benefit everyone, but other people might disagree, and perhaps Google should not be the one making all the decisions.

Lemoine collaborated with a colleague to present Google with evidence of sentience, but the company rejected his claims. He then reportedly published transcripts of his conversations with LaMDA in a blog post. Google pushed back, stating that the company prioritizes minimizing such risks when developing products like LaMDA.

Tay, Microsoft’s AI chatbot, became racist and sexist

Microsoft’s artificial intelligence chatbot Tay debuted on Twitter in 2016. Tay was created as a test of “conversational understanding”: it was intended to become smarter and smarter as it conversed with people on Twitter, learning from their tweets in order to engage users more effectively.

Before long, Twitter users began tweeting racist and misogynistic slurs at Tay. The bot absorbed these conversations and soon began producing its own versions of hateful speech. Its tweets ranged from “I am super stoked to meet you” to “feminism is cancer” and “Hitler was right. I despise Jews.”
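
The failure mode here is unfiltered user input becoming training data. The sketch below is not Microsoft’s code and invents its own phrases; it is a deliberately naive Python illustration of how a bot that learns its replies directly from whatever users send can be poisoned by a coordinated group.

```python
# Deliberately naive sketch of "learn from whatever users say".
# Not Tay's actual implementation; all phrases are placeholders.
import random

learned_phrases = ["I am super stoked to meet you"]  # seed response

def handle_message(user_message: str) -> str:
    # Online learning with no filter: every incoming message becomes training data...
    learned_phrases.append(user_message)
    # ...and any learned phrase can come back out as a reply.
    return random.choice(learned_phrases)

# A coordinated group feeding the bot abusive text quickly poisons its replies.
for msg in ["hello!", "<abusive message>", "<more abuse>"]:
    print(handle_message(msg))
```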

Microsoft quickly removed the bot from the platform. “We are deeply sorry for Tay’s unintentionally offensive and hurtful tweets, which do not represent who we are, what we stand for, or how we designed Tay,” said Peter Lee, Microsoft’s vice president of research, at the time of the controversy. In a later blog post, the company stated that it would only bring Tay back if its engineers could figure out how to prevent Web users from influencing the chatbot in ways that undermine the company’s principles and values.

Amazon’s Rekognition misidentifies members of the US Congress as criminals

The American Civil Liberties Union (ACLU) tested Amazon’s “Rekognition” facial recognition program in 2018. During the test, the software incorrectly matched 28 members of Congress with people who had been arrested for a crime. Rekognition is a face-matching service that Amazon sells publicly, allowing anyone to match faces against a database, and it has been used by a number of US government agencies.

The ACLU used Rekognition to build a face database and search tool from 25,000 publicly available arrest photos. It then searched that database against public photos of every then-serving member of the US House and Senate, using Amazon’s default match settings. The search produced 28 false matches.
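
For a sense of the mechanics, the sketch below shows how such a test could be wired up against Amazon’s documented boto3 Rekognition operations (create_collection, index_faces, search_faces_by_image). The collection name, file paths, and single-photo loops are hypothetical placeholders; only the API calls and the 80% default similarity threshold come from Amazon’s documentation, and this is not the ACLU’s actual code.

```python
# Hypothetical sketch of an ACLU-style test using the real boto3 Rekognition API.
# Collection name and file paths are placeholders.
import boto3

rekognition = boto3.client("rekognition")
COLLECTION = "arrest-photo-collection"  # hypothetical name

rekognition.create_collection(CollectionId=COLLECTION)

# Build the searchable face database from arrest photos.
for path in ["mugshots/0001.jpg"]:  # placeholder for the 25,000 photos
    with open(path, "rb") as f:
        rekognition.index_faces(
            CollectionId=COLLECTION,
            Image={"Bytes": f.read()},
            ExternalImageId=path.split("/")[-1],
        )

# Search the database with a legislator's public photo. Leaving FaceMatchThreshold
# unset uses Amazon's default of 80% similarity, i.e. the "default match settings".
with open("congress/member_001.jpg", "rb") as f:
    result = rekognition.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
    )

for match in result["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```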

Furthermore, the false matches were disproportionately people of color, including six members of the Congressional Black Caucus. Although only about 20% of members of Congress at the time were people of color, they accounted for 39% of the false matches. The result was a stark reminder of how AI systems can absorb the biases present in the data on which they are trained.

Amazon’s secret AI recruiting tool favors men over women

According to Reuters, an Amazon machine learning team began developing an AI tool in 2014 to review job applicants’ resumes and automate the search for top talent. The aim was the holy grail of AI recruiting: give the machine 100 resumes and have it pick out the top five.

By 2015, however, the team had discovered that the system was not rating candidates in a gender-neutral way: it consistently rated male candidates higher than female candidates. The reason was that the model had been trained to evaluate applications by finding patterns in resumes submitted to the company over the previous 10 years.

The majority of those resumes came from men, reflecting the male dominance of the tech industry. As a result of this data bias, the system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s” (for instance, a resume mentioning a “women’s chess team”) and downgraded graduates of all-women’s colleges.
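
The dynamic is easy to reproduce in miniature. The sketch below is not Amazon’s system; it trains a toy scikit-learn text classifier on a handful of invented resumes with deliberately skewed “hired” labels and then inspects the weight the model learns for the token “women”, which comes out negative.

```python
# Toy demonstration of a resume scorer absorbing bias from skewed historical data.
# Resumes and hiring labels are invented; this is not Amazon's tool.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain men's rugby team, software engineer",
    "men's chess club president, backend developer",
    "women's chess team captain, software engineer",
    "women's coding society lead, backend developer",
]
hired = [1, 1, 0, 0]  # skewed historical outcomes: mostly men were hired

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight on the token "women" is negative: the model has taught
# itself to penalize resumes containing it, mirroring the reported flaw.
idx = vectorizer.vocabulary_["women"]
print("weight on 'women':", model.coef_[0][idx])
```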

Amazon initially edited the program to make it neutral to those particular terms. Even so, there was no guarantee that the system would not devise other discriminatory ways of sorting candidates, and Amazon eventually abandoned the project. According to a statement the company provided to Reuters, the tool was never used in actual recruitment.
