PARIS, France, December 1, 2021 (ENS) – The first global agreement on the ethics of artificial intelligence, AI, was passed by 193 countries on Thursday. All member states of the United Nations Educational, Scientific and Cultural Organization, UNESCO, adopted the historic agreement, which defines the common values and principles necessary for the healthy development of AI.
“The world needs rules for artificial intelligence for the benefit of mankind,” said UNESCO Director General Audrey Azoulay. “The AI ethics recommendation is an important answer. It creates the first global regulatory framework and gives states the responsibility to apply it at their level. UNESCO will support its 193 member states in the implementation and ask them to report regularly on their progress and practices.”
The 141 actions set out in the agreement address new ethical issues raised by AI systems: their impact on decision-making, employment and work, social interaction, healthcare, education, media, freedom of expression, access to information, privacy, democracy, discrimination and the use of weapons.
The agreement states that new ethical challenges are created by the potential of artificial intelligence algorithms to reproduce prejudices about gender, ethnicity and age and to deepen existing forms of discrimination, identity prejudice and stereotypes.
Some of these problems are related to the ability of AI systems to perform tasks that previously only humans could do.
“These properties give AI systems a new and profound role in human practices and society, as well as in their relationship with the environment and ecosystems, creating a new context for children and adolescents to grow up in, develop an understanding of the world and themselves, critically understand media and information, and learn to make decisions,” the agreement says.
“In the long term, artificial intelligence systems could challenge people’s special sense of experience and agency, raising additional concerns about human self-understanding, social, cultural and environmental interaction, autonomy, agency, value and dignity,” the agreement warns.
AI is the ability of a machine, such as a computer, to think, learn, plan and be creative. It enables systems to understand their environment, process what they perceive, solve problems and adapt their behavior to achieve a goal by analyzing the effects of previous actions and working autonomously.
Artificial intelligence is already present in everyday life, from booking flights and applying for loans to driving autonomous vehicles, and it is used in detecting cancer and in creating accessible environments for people with disabilities.
AI also supports the decision-making of governments and the private sector and helps fight global problems such as climate change and world hunger, according to UNESCO.
However, the agency warns that AI poses unprecedented challenges.
“We are seeing an increase in ethnic and gender bias, significant threats to privacy, dignity and agency, the dangers of mass surveillance, and the increased use of unreliable artificial intelligence technologies in law enforcement, to name a few. Until now, there were no universal standards to provide an answer to these problems,” UNESCO declared in a statement.
With this in mind, the adopted Recommendation aims to guide the establishment of the legal infrastructure necessary to ensure the ethical development of this technology.
The text aims to highlight the benefits of AI while reducing the risks it entails. According to UNESCO, it offers a guide to ensure that the digital transformation promotes human rights and contributes to the achievement of the 17 UN Sustainable Development Goals.
It addresses issues related to transparency, accountability and privacy, with action-oriented policy chapters on data governance, education, culture, labor, health and the economy.
One of its key calls is to protect data, going beyond what technology companies and governments are already doing, in order to give people better protection by ensuring transparency, agency and control over their personal data. The recommendation also expressly prohibits the use of artificial intelligence systems for social scoring and mass surveillance.
The text also emphasizes that AI actors should favor data-, energy- and resource-efficient methods that will help make AI a more prominent tool in the fight against climate change and environmental problems.
Gabriela Ramos, UNESCO Assistant Director-General for Social and Human Sciences, said: “Decisions that affect millions of people must be fair, transparent and contestable. These new technologies must help us face the major challenges of today’s world, such as increasing inequalities and the environmental crisis.”
Pew Research Probes AI’s Advantages and Problems
In the summer of 2018, the Pew Research Center, a non-partisan American think tank based in Washington, DC, surveyed 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists on the topic of artificial intelligence.
Pew canvassers asked: As emerging algorithm-driven artificial intelligence continues to spread, will people be better off than they are today?
The respondents predicted that networked artificial intelligence will increase human effectiveness, but will also threaten human autonomy, agency and capabilities.
They said that computers could match or even surpass human intelligence and abilities in tasks such as complex decision-making, reasoning and learning, sophisticated analysis and pattern recognition, visual acuity, speech recognition, and language translation.
They said that “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives, and give people the opportunity to enjoy a more customized future.
Many have focused their optimistic observations on healthcare and the many possible applications of AI to diagnose and treat patients or help older people live fuller, healthier lives. They were also excited about AI’s role in contributing to large public health programs built around huge amounts of data that could be captured on everything from personal genomes to nutrition.
Ethical AI Required for Healthcare
AI holds great promise for improving the delivery of health and medical services around the world, but only if ethics and human rights are at the center of its design, implementation and use, according to guidelines the World Health Organization, WHO, published in June.
The report “Ethics and Governance of Artificial Intelligence for Health” is the result of a two-year consultation held by a panel of international experts appointed by the WHO.
“Like any new technology, artificial intelligence has tremendous potential to improve the health of millions of people around the world, but like any technology, it can also be misused and cause harm,” said WHO Director General Dr. Tedros Adhanom Ghebreyesus. “This important new report provides valuable guidance to countries on how to maximize the benefits of AI, minimize its risks and avoid its pitfalls.”
AI is already being used in some affluent countries to improve the speed and accuracy of disease diagnosis and detection; to help with clinical care; to strengthen health research and drug development; and to support various public health interventions such as disease surveillance, outbreak response and health systems management.
AI could empower patients to take greater control of their own health care and better understand their changing needs. It could also enable resource-poor countries and rural communities, where patients often have limited access to medical or healthcare professionals, to fill gaps in access to health services.
However, the WHO warns against overestimating the health benefits of AI, especially when this comes at the expense of the basic investments and strategies required for universal health care.
It warns against the unethical collection and use of health data, the biases encoded in algorithms, and the risks AI poses to patient safety, cybersecurity and the environment.
The WHO explains that while public and private sector investments in the development and deployment of AI are essential, the unregulated use of AI could subordinate the rights and interests of patients and communities to the powerful commercial interests of technology companies or to government interests in surveillance and social control.
The WHO report points out that systems trained primarily on data collected from individuals in high-income countries may not perform well for people in low- and middle-income settings.
Therefore, artificial intelligence systems need to be carefully designed to reflect the diversity of healthcare and socio-economic settings. They should be accompanied by training in digital skills, community engagement and awareness-raising, especially for the millions of health workers who will need digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.
Ultimately, governments, vendors, and designers, guided by existing laws and human rights obligations, as well as new laws and guidelines that embody ethical principles, must work together to address ethical and human rights concerns about artificial intelligence technology.
WHO and UNESCO Align on Six Ethical Principles
WHO has issued six principles to ensure that AI works in the public interest in all countries:
Protecting human autonomy: People must maintain control over health systems and medical decisions; privacy and confidentiality must be protected, and patients must give valid informed consent through an appropriate legal framework for data protection.
The first objective of the UNESCO recommendation is “… to promote respect for human dignity and gender equality, to safeguard the interests of present and future generations, and to protect human rights, fundamental freedoms and the environment and ecosystems in all phases of the AI system life cycle.”
Promoting human well-being and safety and the public interest: Developers of AI technologies must meet regulatory requirements for safety, accuracy and efficiency for precisely defined indications or use cases. Measures for quality control in practice and quality improvement in the use of AI should be available.
The UNESCO recommendation likewise calls for protecting human safety and the public interest. Article 24 states: “This value demands that peace should be promoted throughout the life cycle of AI systems, in so far as the processes of the life cycle of AI systems should not segregate, objectify, or undermine the safety of human beings, divide and turn individuals and groups against each other, or threaten the harmonious coexistence between humans, non-humans, and the natural environment, as this would negatively impact on humankind as a collective.”
Ensuring transparency, explainability and intelligibility: Transparency requires that sufficient and easily accessible information be published or documented before any artificial intelligence technology is developed or implemented; it also requires meaningful public consultation and debate about how technology is designed and how it should or should not be used.
The UNESCO recommendation seeks to ensure transparency through a data governance framework whose safeguards cover the collection, control and use of data; the exercise of their rights by data subjects, including the right to have personal data deleted; a legitimate aim and a valid legal basis for the processing of personal data, including its personalization, anonymization and re-personalization; transparency; appropriate safeguards for sensitive data; and effective independent oversight.
Fostering responsibility and accountability: While artificial intelligence technologies perform specific tasks, it is the responsibility of those involved to ensure that they are used under the right conditions and by appropriately trained people. Effective mechanisms should be available for questioning, and for redress, for those who are adversely affected by decisions based on algorithms.
The UNESCO recommendation states: “… this recommendation is intended to enable the actors to assume joint responsibility on the basis of a global and intercultural dialogue.”
Ensuring inclusiveness and equity: Inclusion requires that AI for health be designed to promote the greatest possible equitable use and access, regardless of age, sex, gender, income, race, ethnicity, sexual orientation, ability or any other characteristic protected by human rights codes.
The UNESCO recommendation also provides for inclusiveness, stating: “Respect, protection and promotion of diversity and inclusiveness should be ensured throughout the life cycle of AI systems, at a minimum consistent with international human rights law, standards and principles…”
Promoting AI that is responsive and sustainable: Designers, developers and users must continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems must be designed to minimize environmental impact and increase energy efficiency, and companies must address expected disruptions in the workplace, including training health workers to adapt to the use of artificial intelligence systems and potential job losses due to the use of automated systems.
The UNESCO agreement also emphasizes sustainability, stating: “This must comply with international law as well as with international human rights law, principles and standards, and should be in line with social, political, environmental, educational, scientific and economic sustainability objectives.”