
Does Artificial Intelligence Need to Slow Down?

The AI researcher, who left Google last year, says the incentives surrounding AI research are all wrong.

ARTIFICIAL INTELLIGENCE RESEARCHERS face an accountability problem: How do you ensure that decisions are accountable when the decision maker is not a responsible person but an algorithm? Currently, only a handful of people and organizations have the power – and the resources – to automate decision-making.

Organizations rely on artificial intelligence to approve a loan or shape a defendant’s sentence, but the foundations on which these intelligent systems are built are prone to bias. Bias from the data, from the programmer, and from a powerful company’s bottom line can snowball into unintended consequences. The artificial intelligence researcher Timnit Gebru warned of this reality in a RE:WIRED talk on Tuesday.

“There were companies trying to [assess] the likelihood that someone would commit a crime again,” Gebru said. “That was scary for me.”

Gebru was a star engineer at Google specializing in AI ethics. She put together a team to protect against algorithmic racism, sexism, and other biases. Gebru also cofounded the nonprofit Black in AI, which aims to improve the inclusion, visibility, and health of Black people in the field.

Google forced her out last year, but she hasn’t given up her fight to prevent harm from machine learning algorithms.

On Tuesday, Gebru spoke with WIRED senior editor Tom Simonite about incentives in AI research, the role of worker protections, and her vision for a planned independent institute for AI ethics and accountability. Her central point: artificial intelligence needs to slow down.

“We haven’t had the time to think about how it should even be built because we’re always just putting out fires,” she said.

As an Ethiopian refugee attending public school in the Boston suburbs, Gebru was quick to pick up on America’s racial dissonance. Lectures referred to racism in the past tense, but that didn’t jibe with what she saw, Gebru told Simonite earlier this year. She has found a similar misalignment repeatedly in her tech career.

Gebru’s professional career began in hardware. But she changed course when she saw the barriers to diversity and began to suspect that most AI research had the potential to harm already marginalized groups.

“The confluence of these took me in a different direction, namely trying to understand and limit the negative social impact of AI,” she said.

For two years, Gebru co-led Google’s Ethical AI team with computer scientist Margaret Mitchell. The team built tools to protect Google’s product teams against AI mishaps. Over time, though, Gebru and Mitchell realized they were being excluded from meetings and email threads.

The GPT-3 language model, released in June 2020, showed an ability to produce coherent prose on occasion, but Gebru’s team worried about the hype surrounding it.

“We haven’t had the time to think about how it should even be built because we’re always just putting out fires.”

TIMNIT GEBRU, AI RESEARCHER

“[It was,] ‘Let us build ever larger language models,’” said Gebru, recalling the prevailing sentiment. “We had to say, ‘Let’s please pause for a moment and calm down so we can think about the pros and cons, and maybe alternative ways of doing this.’”

Her team helped write an article on the ethical implications of language models, entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Others at Google weren’t happy. Gebru was asked either to withdraw the paper or to remove the names of the Google employees on it. She responded with a request for transparency: Who had demanded such drastic measures, and why? Neither side budged. Gebru learned from one of her direct reports that she had “resigned.”

The Google experience reinforced her belief that oversight of AI ethics should not be left to a corporation or a government.

“The incentive structure is not one that slows you down, that has you first think about how you should approach research, how you should approach AI, when it should be built, when it should not be built,” said Gebru. “I want us to be able to do AI research the way we see fit and to prioritize the voices that we believe will actually be harmed.”

Since leaving Google, Gebru has been building an independent research institute to model a new approach to responsible and ethical AI research. The institute aims to answer questions similar to those of her Ethical AI team, free of the skewed incentives of private, government, and academic research, and without ties to corporations or the Department of Defense.

“Our goal is not to make Google more money; it’s not to help the Department of Defense kill more people more efficiently,” she said.

At Tuesday’s session, Gebru said the institute would be unveiled on December 2, the anniversary of her departure from Google. “Maybe I’ll start celebrating this every year,” she joked.

Slowing the pace of AI could cost companies money, she said. “Either put more resources into prioritizing safety, or [don’t] deploy things,” she added. “And unless there is regulation that prioritizes that, it is going to be very difficult for all of these companies to voluntarily self-regulate.”

Nevertheless, Gebru finds room for optimism. “The conversation has really changed, and some of the people in the Biden administration who are working on this are the right people,” she said. “I have to be hopeful. I don’t think we have any other options.”
