Harvard anthropologist Mary Gray explains why some of the biggest problems can arise in the early stages.
Relying on artificial intelligence in your company comes with a great deal of responsibility. Just ask Mary Gray, a Harvard anthropologist and Senior Principal Investigator at Microsoft Research, who this week highlighted the importance of consciously collecting the data used to build AI, and how neglecting that step can lead to social injustice. Gray spoke at the Conference on Neural Information Processing Systems about the relationship between AI and social justice.
“Data,” she warned the online audience, “is power.”
Gray pointed to a study published in 2019 by a group of UC Berkeley researchers that found racial bias in an artificial intelligence system widely used in the healthcare sector. The group studied software sold by Optum, a company owned by insurer UnitedHealth, that uses algorithms and AI to predict which patients will benefit from additional care. Healthcare professionals rely on the software to guide their decisions about who receives which treatments.
The researchers carefully examined nearly 50,000 medical records and found that the software recommended extra care for black patients only about half as often as it did for white patients. The algorithm uses data in those records to predict how much each patient would cost the healthcare system if left untreated. Because white patients generally have better access to health care, due to a variety of factors rooted in systemic racism, they tend to generate higher costs, and so the cost-based predictions gave them priority for certain treatments.
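To make that mechanism concrete, here is a minimal Python sketch of how a cost-based target can encode access disparities. It is not the Optum system or the study's code; the group labels, sample size, and effect sizes are illustrative assumptions.

```python
# Hypothetical illustration of proxy-label bias: not the Optum system or the study's code.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True health need is identical in distribution for both groups.
need = rng.normal(loc=5.0, scale=1.0, size=n)
group = rng.integers(0, 2, size=n)          # 0 = better access to care, 1 = worse access

# Spending reflects need *and* access: at the same level of need,
# the low-access group generates lower costs.
access = np.where(group == 0, 1.0, 0.6)
cost = need * access + rng.normal(scale=0.5, size=n)

# Ranking patients by (perfectly predicted) cost and flagging the top 10%
# for extra care selects the high-access group far more often.
flagged = cost >= np.quantile(cost, 0.9)
for g in (0, 1):
    print(f"group {g}: flagged for extra care at rate {flagged[group == g].mean():.1%}")
```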
In other words, Gray said, the way the data was collected and organized perpetuates racial disparities. Although the Berkeley researchers focused on only one particular tool, they found the same inequality across 10 different algorithms used in the healthcare industry. According to the study, published in Science, these algorithms are applied to a total of 200 million people each year.
After sharing their findings with UnitedHealth, the researchers worked with the company to develop a new algorithm that predicts future health, such as the likelihood of a disease getting worse, rather than expected future costs. That change reduced the disparity between black and white patients by 80 percent.
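The effect of that label change can be sketched in the same toy setting. This is again a hypothetical illustration, not UnitedHealth's revised algorithm; the health score below is an assumed stand-in for outcome measures such as the number of active chronic conditions.

```python
# Hypothetical illustration of swapping the target label: not UnitedHealth's revised algorithm.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
need = rng.normal(loc=5.0, scale=1.0, size=n)              # true health need, same for both groups
group = rng.integers(0, 2, size=n)                          # 1 = group with worse access to care
cost = need * np.where(group == 0, 1.0, 0.6) + rng.normal(scale=0.5, size=n)
health = need + rng.normal(scale=0.5, size=n)               # assumed outcome-based label

def flag_rates(score):
    """Share of each group flagged when the top 10% of the score is selected for extra care."""
    flagged = score >= np.quantile(score, 0.9)
    return {g: round(float(flagged[group == g].mean()), 3) for g in (0, 1)}

print("cost-based target:  ", flag_rates(cost))     # skewed toward the high-access group
print("health-based target:", flag_rates(health))   # roughly equal across groups
```

In this toy setting, flagging the top 10 percent by predicted cost skews heavily toward the high-access group, while flagging by the outcome-based score treats the two groups roughly equally.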
By adding more data to the model, said Gray, “they found a mathematically sound and scalable way to reduce racial inequalities and increase social justice.”
The example holds a lesson for any business owner who builds or relies on algorithms, and industries from real estate to finance use them to make decisions that affect customers’ lives. In some cases, the biases that persist are not visible on the surface because they are introduced during the data-collection phase.
“As data has become so powerful,” said Gray, “it is imperative that we make it our shared responsibility to bring the tools from engineers to communities and members of society who can take advantage of what we can build.”