
Google Turmoil Exposes Cracks in Top AI Watchdog

The ouster of lead researchers Gebru and Mitchell followed years of friction over how Google handled allegations of harassment and bias

For more than three years, Google held up its Ethical AI research team as a shining example of a concerted effort to address thorny issues raised by its innovations. Created in 2017, the group assembled researchers from underrepresented communities and varied areas of expertise to examine the moral implications of futuristic technology and illuminate Silicon Valley’s blind spots. It was led by a pair of star scientists, who burnished Google’s reputation as a hub for a burgeoning field of study.

In December 2020, the division’s leadership began to collapse after the contentious exit of prominent Black researcher Timnit Gebru over a paper the company saw as critical of its own artificial intelligence technology. To outsiders, the decision undermined the very ideals the group was trying to uphold. To insiders, this promising ethical AI effort had already been running aground for at least two years, mired in previously unreported disputes over the way Google handles allegations of harassment, racism and sexism, according to more than a dozen current and former employees and AI academic researchers.

One researcher in Google’s AI division was accused by colleagues of sexually harassing people at another organization, and Google’s top AI executive gave him a significant new role even after learning of the allegations, before eventually dismissing him on different grounds, several of the people said. Gebru and her co-lead, Margaret Mitchell, blamed a sexist and racist culture for their exclusion from meetings and emails related to AI ethics, and several others in the department were accused of bullying by their subordinates, with little consequence, several people said. And in the months before Gebru was let go, there was a protracted conflict with Google sister company Waymo over the Ethical AI group’s plan to study whether its autonomous-driving system effectively detects pedestrians of varying skin tones.

The collapse of the group’s leadership has provoked debate in the artificial-intelligence community over how serious the company is about supporting the work of the Ethical AI group—and ultimately whether the tech industry can reliably hold itself in check while developing technologies that touch virtually every area of people’s lives. The discord is also the latest example of a generational shift at Google, where more demographically diverse newcomers have stood up to a powerful old guard that helped build the company into a behemoth. Some members of the research group say they believe that Google AI chief Jeff Dean and other leaders have racial and gender blind spots, despite progressive bona fides—and that the technology they’ve developed sometimes mirrors those gaps in understanding the lived experiences of people unlike themselves.

“It’s so shocking that Google would sabotage its efforts to become a credible center of research,” said Ali Alkhatib, a research fellow at the University of San Francisco’s Center for Applied Data Ethics. “It was almost unthinkable, until it happened.”

The fallout continues several months after Gebru’s ouster. On April 6, Google Research manager Samy Bengio, who Ethical AI team members came to regard as a key ally, resigned. Other researchers say they are interviewing for jobs outside the search giant.

Through a spokesman, Dean declined a request to be interviewed for this story. Bengio, who at Google had managed hundreds of people in Ethical AI and other research groups, didn’t respond to multiple requests for comment.

“We have hundreds of people working on responsible AI, with 200+ publications in the last year alone,” a Google spokesman said. “This research is incredibly important and we’re continuing to expand our work in this area in keeping with our AI principles.”

Attendees of a Google AI event in San Francisco in January 2020 explore MediaPipe, an artificial-intelligence hand-recognition application.

Before Google caused an uproar over its handling of a research paper in the waning weeks of 2020, Mitchell and Gebru had been co-leads of a diverse crew that pressed the technology industry to innovate without harming the marginalized groups many of them personally represented.

Under Dean, the two women had developed reputations as valued experts, protective leaders and inclusion advocates, but also as internal critics and agitators who weren’t afraid to make waves when challenged.

Mitchell arrived at Google first, in 2016, from Microsoft Corp. She spent her first six months there on ethical AI research as her inaugural project, looking for ways to make Google’s development methods more inclusive and produce results that don’t disproportionately harm particular groups. She found a groundswell of support for this kind of work: individual Googlers had begun to care about the subject and had formed various working groups dedicated to the responsible use of AI.

Around this time, more people in the technology industry started realizing the importance of having employees focused on the ethical use of AI, as algorithms became deeply woven into their products and questions of bias and fairness abounded. The prevailing concern was that biases in both the data used to train AI models and the people doing the programming were encoding inequalities into the DNA of products already being used for mainstream decision-making around parole and sentencing, loans and mortgages, and facial recognition. Homogeneous teams were also ill-equipped to see the impact of these systems on marginalized populations.

Mitchell’s project to bring fairness to Google’s products and development methods drew support within the company, but also skepticism. She held many meetings to describe her work and explore collaborations, and some Google colleagues reported complaints about her personality to human resources, Mitchell said. A department representative told her she was unlikable, aggressive and self-promotional based on that feedback. Google said it found no evidence that an HR employee used those words.

“I chose to go to Google knowing that I would face discrimination,” Mitchell said in an interview. “It was just part of my calculus: if I really want to make a substantive and meaningful difference in AI that stretches towards the future, I need to be in it with people. And I used to say, I’m trying to pave a path forward using myself as the pavement.”

She had made enough of an impact that, in 2017, two colleagues who were interested in making Google’s AI products more ethical asked her to become their manager. That move marked the founding of the Ethical AI team.
