Artificial intelligence and machine learning have the potential to help solve some of the toughest problems of our time, such as climate change and pandemics, but they can also wreak havoc and, if not used properly, can perpetuate historical injustices and structural inequalities.
To mitigate their potential harms, the world needs data governance frameworks that strengthen economies and safeguard rights.
Artificial intelligence and machine learning work on the basis of huge datasets, from which algorithms are programmed to discern patterns. These models can be used to infer new information and to predict behaviors and outcomes. Increasingly, artificial intelligence and machine learning are being used to replace human decisions with automated decision-making, often in areas that can have a significant impact on people’s lives, such as access to loans or even entry into a country.
However, all of this happens inside a black box that even the algorithm’s designer may not be able to inspect. It is therefore important to decide what goes into the box.
The largest data sets and algorithmic activity are generated by the global social networks that monitor all of our online actions. These records can be used to anticipate and shape our needs and desires.
Large tech companies, multilateral agencies and development banks have harnessed much of the potential of artificial intelligence to fuel economic growth and national development. It is increasingly being used in social and economic applications and in the allocation of resources, including informing court decisions, screening job applicants and assigning students to schools.
The COVID-19 pandemic has also highlighted the tremendous value of public data, and the potential value of combining public and private data, in managing public health crises and disasters.
However, there is growing concern about the uneven distribution of the opportunities and disadvantages associated with artificial intelligence.
The threats
The increasing use of artificial intelligence and machine learning in public decision-making raises critical questions about justice and human rights.
In particular, the way in which digital data is produced raises red flags. Some datasets have large gaps, rendering certain people invisible, underrepresented or discriminated against when data is collected. People who are not connected to the internet and to the global social networks fueling the new data-driven economy simply do not exist in the data.
Artificial intelligence also poses a risk to advances toward gender equality worldwide. There are many documented cases of artificial intelligence systems that are biased against women and gender minorities.
In addition, artificial intelligence systems may rely on assumptions and data that exclude or misrepresent groups already exposed to multiple and interrelated forms of discrimination, often producing outcomes that reflect and reinforce gendered, racial, and ableist inequalities and biases.
Yet these systems are not subject to controls strict enough to mitigate the risks they pose to society.
This threat is so significant that several international forums have emerged determined to develop “good”, “ethical” and “responsible” artificial intelligence.
But most of these initiatives present technical solutions to social and political problems, and are developed outside a human rights framework. They are also largely initiatives from the Global North, with limited multi-stakeholder participation from the Global South.
A rights-based approach
Some rights-based data frameworks, such as the European Union’s General Data Protection Regulation, already influence the development of artificial intelligence. However, they tend to focus mainly on first-generation, or fundamental, rights such as privacy. Privacy will not always be the primary value in more community-oriented societies.
The COVID-19 pandemic has made it clear that data must also be regulated in the interest of the collective or the common good. This does not mean that the right to privacy is forfeited.
The collective interest also relates to the governance of data about identifiable groups or communities, where the potential harm of identifying an individual extends to the disclosure of a collective identity.
The literature and practice on data governance have been framed primarily from this negative regulatory perspective: that is, with an emphasis on privacy and cybersecurity compliance and on penalties for violations.
This is a necessary condition for responsible artificial intelligence, but not a sufficient one. Many areas of data governance require positive intervention, for example enabling access to data, ensuring its usability and integrity, and addressing issues of inclusion, equality, redress and social justice.
These can be understood as second- and third-generation social and economic rights.
AI that respects human rights
To address these problems, a new global project is being launched on the sidelines of the Summit for Democracy.
The Summit is an international forum promoting commitments in support of democracy and human rights. The project’s aim is to assess countries’ progress in promoting artificial intelligence that respects human rights and democratic values.
The project, known as the Global Index on Responsible AI, is led by the African digital policy think tank Research ICT Africa and the independent Data 4 Development network.
Governments and the international community have begun to respond to the global call for responsible artificial intelligence. In 2019, 42 countries signed up to the OECD Principles on Trustworthy AI, committing to making AI systems safe, fair and trustworthy.
UNESCO recently issued its Recommendation on the Ethics of Artificial Intelligence, adopted by its 41st General Conference. The recommendation focuses on the protection of fundamental rights and freedoms, environmental sustainability and diversity.
The Global Index addresses the need for an inclusive, measurable standard that complements the rapidly evolving understanding of what responsible artificial intelligence means in practice, and that promotes and tracks the implementation of governance principles by relevant stakeholders.
The Global Index will track the implementation of responsible AI principles in more than 120 countries. An international network of independent researchers is being set up to assess the extent to which the principles are being applied, and primary and secondary data will be gathered to build indicators of responsible artificial intelligence.
This will provide governments, civil society, researchers and other stakeholders with the key evidence they need to:
- uphold the principles of good governance in the development and implementation of artificial intelligence systems,
- meet development and human rights obligations,
- build capacity for responsible AI around the world, and
- deepen international co-operation.
The public and other stakeholders have the opportunity to help shape the design and scope of the index, which is deliberately being developed through a Global South lens.
Its development represents an important opportunity for experts from the African continent and the wider Global South to be at the forefront of shaping the new global agenda for the responsible development and use of artificial intelligence.