Renaming AI for Better Regulation

In recent years, technology firms have made great strides in advancing artificial intelligence (AI). According to the Organisation for Economic Co-operation and Development, over 300 policy and strategic initiatives in 60 countries focus on the impacts of AI on society, business, governments, and the planet. To regulate these systems and aid better rulemaking, it is important to get the nomenclature right.

Broadly, AI falls into two categories: artificial general intelligence, which seeks to emulate human intelligence, and artificial narrow intelligence, which is applied to domain-specific, defined tasks such as medical diagnosis or automobile navigation. Most AI development today is concentrated in the narrow category. Many narrow applications may one day converge to form artificial general intelligence, but that is not technically feasible at present. Prominent figures such as Bill Gates, Elon Musk and the physicist Stephen Hawking have expressed fear over the development of artificial general intelligence, which could potentially outthink humanity and pose an existential threat. For now, then, AI development remains focused on artificial narrow intelligence.

Artificial narrow intelligence includes various subfields of study, such as machine learning, natural language processing, automation, neural networks and social intelligence, which together help virtual assistants ‘converse’ with users. Crucially, these systems are programmed to execute tasks and make decisions unsupervised. A more accurate term for such programmes is therefore automated or unsupervised decision systems. The potential threat of artificial general intelligence gives the false impression that it is the only type of AI we need to worry about, and pushes the policy imperative to an undefined date. By narrowly redefining these systems as unsupervised and automated, policy discussions and public discourse will be rooted in existing AI applications.
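To make the term concrete, here is a minimal, hypothetical sketch of an automated decision system: a scoring rule that accepts or rejects loan applications with no human in the loop. The feature names and thresholds are invented for illustration, not drawn from any real lender.

```python
# A minimal, hypothetical "automated decision system": a scoring rule
# that approves or rejects loan applications with no human in the loop.
# All feature names and thresholds are invented for illustration.

from dataclasses import dataclass


@dataclass
class Application:
    income: float        # annual income
    debt_ratio: float    # debt-to-income ratio, 0.0 to 1.0
    prior_defaults: int  # number of past defaults


def decide(app: Application) -> str:
    """Return an automated decision; no operator reviews or signs off."""
    score = 0.0
    score += 1.0 if app.income > 50_000 else 0.0
    score -= 2.0 * app.debt_ratio
    score -= 1.5 * app.prior_defaults
    # The decision is executed directly, which is what makes the
    # system "unsupervised" in the sense used in this article.
    return "approve" if score > 0 else "reject"


if __name__ == "__main__":
    print(decide(Application(income=60_000, debt_ratio=0.3, prior_defaults=0)))
```

Nothing in such a rule is unknowable: every weight and threshold can be inspected, which is precisely the accountability point the renaming is meant to support.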

Operating with this definition does away with the notion that decisions made by AI algorithms and systems are unknowable and originate from a separate, objective intelligence in a black box. It can help in demanding greater accountability for how these unsupervised decisions are made. But when working with this definition, there is also a need to understand the degree to which these systems are unsupervised, and to classify them accordingly. The decisions of a neural network used by an e-commerce website to display ads are not of the same complexity as those of an algorithm managing automation on a factory floor. The impacts of automated decisions in the two cases are distinct and carry different risks for users.
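One way to operationalise that classification is to tier systems by their degree of autonomy and potential impact. The scheme below is a hypothetical illustration only; the tiers, criteria and examples are assumptions, not an established regulatory standard.

```python
# A hypothetical risk-tiering scheme for unsupervised decision systems.
# The tiers, criteria and examples are illustrative assumptions.

from enum import Enum


class Tier(Enum):
    LOW = "low"        # e.g. ad ranking on an e-commerce site
    MEDIUM = "medium"  # e.g. decisions affecting rights or livelihoods
    HIGH = "high"      # e.g. factory-floor automation, vehicle control


def classify(human_in_loop: bool, physical_harm_possible: bool,
             affects_rights_or_livelihood: bool) -> Tier:
    """Map a system's supervision level and impact to a risk tier."""
    if physical_harm_possible and not human_in_loop:
        return Tier.HIGH
    if affects_rights_or_livelihood:
        return Tier.MEDIUM
    return Tier.LOW


# The two examples from the text land in different tiers:
ad_network = classify(human_in_loop=False, physical_harm_possible=False,
                      affects_rights_or_livelihood=False)        # Tier.LOW
factory_automation = classify(human_in_loop=False,
                              physical_harm_possible=True,
                              affects_rights_or_livelihood=True)  # Tier.HIGH
```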

As we progress, governments will encounter thornier ethical issues with unsupervised decision systems. For example, there have been several instances of accidents and fatalities involving self-driving cars in the US. Questions of accountability and liability arise because no human was driving those cars; algorithms were in control. Can an algorithm be held accountable for a car crash just as humans are for their actions? In Uber’s case, the operator who was testing the self-driving feature was held liable for the death of a pedestrian crossing the road, not Uber as a corporation. What would punitive action against an algorithm look like? Can existing legal frameworks regulate algorithms at all?

“Corporations are people, my friend”

The debate on how to regulate AI and hold it accountable is not an easy one. The field of machine ethics concerns itself with adding moral behaviours to AI to make it more human-centric. Machine ethics also asks what kinds of rights can be bestowed on an AI and how they can be enacted. Regulating AI could require the creation of a new legal framework, drawing on lessons from past industrial revolutions. If the goal is to encode human morals and values in AI and ultimately grant some human rights to machines, it might be worth delving into corporate law for inspiration for the new framework.

Between the 18th and 20th centuries, corporations attained personhood and are now considered legal persons, subject to the same rights and responsibilities as individual human beings. Like humans, corporations are allowed to own property, raise debt, sue and be sued, exercise human rights and be held responsible for violations of those rights, and be charged with crimes such as fraud and manslaughter.

The formation of a corporation sees a group of people or companies act as a single entity for a purpose; the personality of the corporation is separate from the individuals forming it. Similarly, the development of an AI involves various actors contributing resources, financial or otherwise. It can be argued that its personality is composite and separate from the individual contributors, and that there is a case for granting it legal personhood.

Borrowing from corporate law is also advantageous for the new framework because it provides a structure for registering the various forms of unsupervised and autonomous decision systems, much like the incorporation of a company. This would aid in monitoring and evaluating the risks of these systems for better regulation.
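As a sketch of what such a registry entry might record, the snippet below models a registration loosely on company incorporation. The fields are purely illustrative assumptions, not an existing statutory schema.

```python
# A purely illustrative registry record for an unsupervised decision
# system, modelled loosely on company incorporation. The fields are
# assumptions, not an existing statutory schema.

from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class SystemRegistration:
    name: str       # registered name of the system
    operator: str   # legal entity deploying it
    purpose: str    # declared, domain-specific task
    risk_tier: str  # e.g. "low", "medium", "high"
    registered_on: date = field(default_factory=date.today)
    audit_trail: List[str] = field(default_factory=list)  # decision logs

    def record_decision(self, summary: str) -> None:
        """Append an auditable summary of an automated decision."""
        self.audit_trail.append(f"{date.today().isoformat()}: {summary}")


# A regulator's registry would then hold entries such as:
registry = [
    SystemRegistration(name="AdRanker-1", operator="ExampleCo",
                       purpose="ad display ranking", risk_tier="low"),
]
```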

However, the concept of limited liability, a key feature of what constitutes a corporation, should not be applied to the new AI framework, given the potential dangers the technology poses to humanity. Limited liability encouraged entrepreneurship by helping pool large sums of money towards economically beneficial purposes and protecting shareholders from external risks. But critics say that it also promotes corporate irresponsibility and misadventure in the name of furthering shareholder profit. Law professor Joel Bakan, who has studied the behaviour of corporations over the years, likens the personality of a corporation to that of a psychopath: frequently furthering its self-interest (profit) while endangering employees, customers and the public at large, contributing to the destruction of the environment and ignoring social norms of lawful behaviour.

The AI framework should be held to a higher standard than the limited liability that corporations enjoy. Depending on the risks an AI application poses, the doctrines of strict or absolute liability (under which a person is legally responsible for the consequences of an activity even in the absence of fault or criminal intent) should apply to the new framework. Applying these liability doctrines can help ensure responsible behaviour from developers of unsupervised decision systems, obliging them to account for as many externalities as possible before deploying systems that interface with humans.

Many technology corporations developing AI have made bold, public calls for responsible use of AI that upholds human rights and values. Corporations say they are hiring ethicists, drafting codes of ethics, and instituting review boards, audit trails and training modules. However, the corporate goal of maximising profit compels companies like Palantir and Clearview AI to sell their AI products to state actors and law enforcement agencies, trampling users’ rights and ignoring deeper concerns about potential misuse. In such an environment, there is a pressing need for a legal framework that compels companies to think about the ramifications of the AI products they develop, and for a broader debate on corporate responsibility.
