Soon after China’s AI regulations went into effect last month, a number of new AI chatbots with government approval began to appear on the market. According to experts, China has not yet enforced the regulations as strictly as it could, and they have already been weakened from what was originally envisaged. China’s legislative strategy will likely have a significant impact on the technological rivalry between that nation and its AI powerhouse competitor, the United States.
On August 15, the Cyberspace Administration of China (CAC) implemented some of the strongest generative AI regulations to date. They specify that generative AI services must not produce content that “incites the subversion of national sovereignty or the overturn of the socialist system” or that “advocates terrorism or extremism, promotes ethnic hatred and ethnic discrimination, violence and obscenity, as well as fake and harmful information.” AI developers all over the world have struggled to prevent chatbots from pouring out undesirable or even harmful content. If the new laws are strictly enforced, Chinese AI developers will find it challenging to comply, according to some observers.
In an effort to strike a balance between restricting the flow of politically sensitive information and supporting Chinese AI development, Chinese officials are aware of this problem and have responded by defanging some restrictions and adopting a lax enforcement strategy, according to experts. The achievement of this balance will affect not just the political liberties of Chinese individuals and the development of the Chinese AI industry, but it will also probably have an impact on how U.S. politicians see AI policy in the context of the emerging race for AI supremacy.
Regulatory relaxation
The CAC authorized the introduction of eight AI chatbots at the end of August, including Baidu’s Ernie Bot and ByteDance’s Doubao.
The version of the regulations published in July was more lenient than the draft regulations made available for comment in April. According to Matt Sheehan, a fellow at the Carnegie Endowment for International Peace, the CAC made three significant adjustments.
The first change narrowed the scope, from all internal uses of generative AI to only uses that are visible to the public. Second, the wording was softened in numerous places. For instance, a requirement that providers “be able to ensure” the quality of training data was changed to a requirement that they “employ effective measures to increase the quality of training data, and increase the truth, accuracy, objectivity, and diversity of training data.” Third, whereas the earlier draft was purely punitive, the new legislation included language encouraging the development of generative AI.
According to Sheehan, whose research focuses on China’s AI ecosystem, the CAC eased the rules in response to the weak condition of the Chinese economy. In addition, a public debate involving think tank and university experts, government officials, and industry concluded that the restrictions were too stringent and could hinder innovation.
Flexible enforcement
According to Sihao Huang, a researcher at the University of Oxford who spent the past year studying AI governance in Beijing, once laws are finalized, their enforcement is left to the discretion of the authorities and is frequently more arbitrary and inconsistent than in the West.
“When we look at guidelines that have previously been published for recommendation algorithms, or deep synthesis, or the cybersecurity laws of the CAC, they are enforced when the CAC wishes to,” says Huang. Companies have a lot of leeway in developing these systems, but they must be aware that the government can still impose restrictions on them if necessary.
According to Huang, whether the CAC enforces the legislation frequently depends on how well-connected a company is, or how good its standing is with the relevant authorities. Tech businesses frequently attempt to find vulnerabilities in one another’s products and services in order to spur government action against their rivals. He adds that public pressure can also push the CAC to enforce the legislation.
China is a lot more willing, according to Sheehan, to put something out there and then figure it out as it goes. Businesses in China do not believe they would succeed in court if they fought the CAC over the legitimacy of this law, so they must devise a plan to deal with it or work around it. They lack the safety net provided elsewhere by an established legal system and impartial judges.
The U.S. debate over regulation and competition
China hawks warn that the United States risks falling behind China in the race to create increasingly powerful AI systems and that U.S. regulation could allow China to catch up.
Huang disagrees, arguing that Chinese AI systems already lag behind their American counterparts and that China’s tight regulations only widen the gap. When you use Chinese AI systems in practice, he says, “their capabilities are dramatically diminished because they’re just leaning on the safer side.” He attributes the poor performance to a combination of “very aggressive fine tuning” and content filters that prevent the systems from responding to any prompts even vaguely related to politics.
Sheehan concurs that Chinese corporations will face significantly greater compliance obligations than American companies.
The current generation of Chinese chatbots lags behind their American rivals in sophistication and capability, according to Jordan Schneider, an adjunct fellow at the Center for a New American Security, a national security think tank. These apps, Schneider says, may be at roughly the GPT-3 level. He notes, though, that OpenAI’s GPT-3 language model is only about two years old, and claims that the gap is not very wide. (GPT-4 is OpenAI’s most advanced publicly accessible AI system.)
Schneider also stresses that, contrary to what some developers and politicians, particularly those in China, initially feared, it is now much easier to control chatbot outputs. Aside from the alarming weaknesses exposed when Microsoft released its Bing chatbot, he says, there have been few issues with American companies’ AI chatbots going rogue. By and large, American models are not prejudiced; jailbreaking them is exceedingly difficult, and exploits are quickly patched. Most of these businesses have figured out how to adapt their models so that they conform to Western-style discourse norms. That seems to be a hurdle that these Chinese companies have generally managed as well. Language models do continue to have problems such as hallucinations (a term for fabricating false information).
As a result, Schneider contends, the trade-off between promoting political stability and growth is exaggerated. If Chinese tech companies can demonstrate that they are lagging behind, he says, they will be able to effectively request regulatory leniency in the future. Even so, according to Schneider, some regulation will be necessary to avoid a public backlash against AI if the technology quickly begins to have a negative impact on people’s day-to-day lives, such as through the automation of jobs.
And Sheehan concurs. These laws will not completely destroy the AI ecosystem in China, so we should not bet on that outcome. Instead, we should take a closer look at them and understand that even with their heavy regulatory burdens, Chinese businesses are still likely to be competitive, according to Sheehan. That suggests the U.S. could place some regulatory constraints on its own businesses and remain competitive.