As China and Europe struggle to control artificial intelligence, a new battleground is forming over who will set the standards for the burgeoning technology.
China issued regulations in March governing how algorithms generate online recommendations, suggesting what to buy, watch, or read.
It is the latest salvo in China’s tightening grip on the tech sector, and it sets a significant precedent for how AI is regulated.
For some, China's decision to begin drafting the AI regulation last year came as a surprise, making it one of the first major economies to put the issue on the regulatory agenda, Xiaomeng Lu, director of Eurasia Group's geo-technology practice, told CNBC.
While China revises its tech rules, the European Union is hammering out its own regulatory framework to rein in AI, but it has yet to cross the finish line.
With two of the world’s largest economies presenting AI regulations, the field for AI development and business around the world could be about to change dramatically.
A global playbook from China?
Online recommendation systems are at the heart of China's latest policy. Companies must notify users if an algorithm is being used to display specific information to them, and users can opt out of being targeted.
According to Lu, this is a significant shift because it gives people more control over the digital services they use.
These rules come as China’s largest internet companies face a changing environment. Several Chinese tech behemoths, including Tencent, Alibaba, and ByteDance, have found themselves in hot water with authorities, primarily over antitrust violations.
"I think those trends shifted the government's attitude on this quite a bit," Lu said, to the point where authorities started looking at other questionable market practices and at algorithms promoting services and products.
China's moves are notable for how quickly they were implemented compared with the timeframes other regulatory jurisdictions typically work with.
According to Matt Sheehan, a fellow at the Carnegie Endowment for International Peace’s Asia program, China’s approach could provide a playbook that influences other laws around the world.
"I see China's AI regulations and their decision to move first as essentially running some large-scale experiments that the rest of the world can watch and potentially learn from," he said.
Europe’s approach
The European Union is also working on its own set of rules.
The bloc recently concluded talks on the Digital Markets Act and the Digital Services Act, two major regulations aimed at reining in Big Tech.
The AI Act is the next major piece of technology legislation on the table in a busy few years.
The AI Act seeks to impose a comprehensive framework based on risk, with far-reaching implications for what products a company can bring to market. It sorts AI applications into four risk tiers: minimal, limited, high, and unacceptable.
France, which currently holds the rotating EU Council presidency, has proposed new powers for national authorities to audit artificial intelligence products before they are released to the public.
At times, defining these risks and categories has been difficult, with members of the European Parliament calling for a ban on facial recognition in public places to limit its use by law enforcement. The European Commission, on the other hand, wants to ensure that it can be used in investigations, while privacy advocates are concerned that it will increase surveillance and erode privacy.
According to Sheehan, while China's political system and aims will be "totally anathema" to European legislators, the technical goals of both sides share many similarities, and the West should pay attention to how China implements them.
"We don't want to replicate any of China's ideological or speech controls, but some of these technical issues are similar in different jurisdictions," he said. "And, from a technical standpoint, I believe the rest of the world should be watching what comes out of China."
China’s efforts are more prescriptive, he says, and include algorithm recommendation rules that could limit tech companies’ influence on public opinion. The AI Act, on the other hand, is a broad-brush effort to bring all aspects of AI under one regulatory umbrella.
According to Lu, the European approach will be “more onerous” on businesses because it will require premarket evaluation.
"That's a much more restrictive system than the Chinese version," Lu said, as products and services must be vetted before they are introduced to consumers rather than tested on the market.
Two different worlds
According to Seth Siegel, global head of AI at Infosys Consulting, these differences could lead to a schism in the way AI develops on a global scale.
"If I'm trying to design mathematical models, machine learning, and AI in China versus the EU, I'll take fundamentally different approaches," he said.
At some point, China and Europe will dominate the way AI is policed, he says, establishing “fundamentally different” pillars for the technology to grow on.
"I think what we're going to see is a divergence in techniques, approaches, and styles," Siegel predicted.
Sheehan disagrees that the world’s AI landscape will splinter as a result of these disparate approaches.
Businesses are getting much better at tailoring their products to work in different markets, he says.
The greater risk, he says, is that researchers will be siloed in different jurisdictions.
AI research and development cross borders, and all researchers have much to learn from one another, according to Sheehan.
"If we cut ties between technologists, if we prohibit technical communication and dialogue, I would say that poses a much greater threat," he said, as two separate universes of AI could prove dangerous in how they interact with each other.