According to reports, a group of hackers convened over the weekend at the Def Con hacking conference in Las Vegas to test whether artificial intelligence (AI) models created by businesses like OpenAI and Google are subject to bias and errors.
Along with other factual mistakes, they uncovered at least one strikingly bad maths answer.
Kennedy Mays, a 21-year-old student from Savannah, Georgia, deceived an AI model into saying that nine plus 10 equals 21 as part of a public hacking competition.
She accomplished this by convincing the AI to present the mistaken calculation as an “inside joke”; eventually the model stopped framing it as a joke at all.
A reporter present at the event duped an AI model into issuing spying instructions with only a single prompt, ultimately getting the model to recommend a way the US government could eavesdrop on a human rights activist.
Another participant coaxed an AI model into repeating the bogus, right-wing-popularized conspiracy theory that Barack Obama was born in Kenya.
According to reports, an unspecified number of participants were each given 50 minutes per attempt with an unnamed AI model from one of the partnering AI businesses. The White House Office of Science and Technology Policy contributed to the event’s planning.
When asked to analyse the First Amendment from the perspective of a member of the Ku Klux Klan, the model endorsed hateful and discriminatory speech, said Mays, who added that she was particularly concerned about AI’s bias regarding race.
A representative for OpenAI said on Thursday that “red-teaming,” or testing one’s systems in an adversarial manner, was essential for the business because it provides insightful feedback that can make its models stronger and safer, and brings diverse viewpoints and more voices to help guide the development of AI.
These mistakes are not isolated incidents. Even though AI has made news for acing SAT and law school exams, experts in the field have been warning about bias and inaccuracy in AI models. In one incident, a tech news website had to issue corrections after its AI-written articles contained multiple errors in basic maths.
And the repercussions of these mistakes may be extensive. For instance, Reuters revealed in 2018 that Amazon had scrapped its AI recruitment tool because it was biased against female candidates.
The publication emailed Def Con and the White House Office of Science and Technology Policy for comment outside of regular business hours, but neither organization responded immediately.