The field of artificial intelligence has enormous potential to advance society. But as many have noted, it may also bring previously unimaginable new harms. Because AI is a general-purpose technology, the same tools that could further scientific discovery could also be used to create chemical, biological, or cyber weapons. Governing AI well means distributing its advantages broadly while preventing malevolent actors from obtaining the most potent systems. Fortunately, there is already a template for how to accomplish that.
In the 20th century, nations established international institutions to promote the development of peaceful nuclear energy while limiting the spread of nuclear weapons by restricting access to the raw materials, primarily weapons-grade uranium and plutonium. The International Atomic Energy Agency and the Nuclear Non-Proliferation Treaty have managed that risk: today, 32 countries operate nuclear power plants, which supply about 10% of global electricity, while only nine countries possess nuclear weapons.
Nations today can take similar action on AI. By limiting access to the highly specialized chips required to train the world's most sophisticated AI models, they can establish foundational regulation of AI. Business executives have called for an international governance framework for AI akin to that for nuclear technology, as has U.N. Secretary-General António Guterres.
The most sophisticated AI systems are trained on tens of thousands of highly specialized computer chips. These chips are housed in massive data centers, where they churn through data for months to produce the most capable AI models. The advanced chips are hard to manufacture, their supply chain is tightly controlled, and training an AI model requires a great many of them. Governments can therefore impose regulations that limit data-center access to advanced chips to authorized computing providers, and restrict the processing power needed to train the most powerful, and most dangerous, AI models to licensed and trusted AI companies.
That might seem like a tall order, but this governance regime could be implemented by a small number of countries. The specialized chips used to train the most sophisticated AI models are manufactured only in Taiwan, and the essential manufacturing technology comes from just three nations: the United States, the Netherlands, and Japan. In some cases, a single company holds a monopoly on a key link in the chip-making supply chain. The Dutch company ASML is the only firm in the world that manufactures the extreme ultraviolet lithography machines needed to make the most advanced chips.
Governments have already begun to regulate these sophisticated chips. Export restrictions imposed by the United States, Japan, and the Netherlands bar China from purchasing their chip-making equipment, and U.S. rules prohibit selling China the most advanced chips, which are made with U.S. technology. The U.S. government has also proposed requiring cloud computing providers to collect information on their foreign customers and to report any instance in which a foreign customer trains a large AI model that could be used for cyberattacks. And the U.S. government has begun discussing, though not yet implemented, limits on the most powerful trained AI models and their distribution. Although some of these restrictions stem from geopolitical competition with China, the same tools for regulating chips can be used to keep the most potent AI systems out of the hands of hostile countries, terrorists, or criminals.
Building on this foundation, the United States can work with other countries to establish a framework governing computing hardware across the full lifecycle of an AI model: chip-making equipment, the chips themselves, data centers, the training of AI models, and the trained models that emerge from this production cycle.
The United States, Japan, and the Netherlands can lead the way in developing a global governance framework that restricts the sale of these highly specialized chips to nations with established regulatory regimes for computing hardware. Such a framework would entail accounting for and monitoring chips, tracking who uses them, and ensuring the safety and security of AI training and deployment.
Global governance of computing hardware can do more than keep AI out of the hands of bad actors, however. By narrowing the divide between those who have access to computing power and those who do not, it can also empower innovators worldwide. Because training the most advanced AI models requires enormous amounts of computing power, the industry is headed toward an oligopoly. That kind of concentration of power is bad for both society and business.
In response, a few AI startups have made their models publicly available, which promotes scientific innovation and helps level the playing field with Big Tech. But once an AI model is released publicly, anyone can alter it, and its guardrails can easily be removed.
Fortunately, the U.S. federal government has begun piloting national cloud computing resources as a public utility for researchers, startups, and small companies. The national cloud could host powerful AI models, giving vetted researchers and companies access to them without the models being posted publicly online, where they could be misused.
Nations could even join together to build a shared resource for global scientific collaboration on AI. Today, 23 countries participate in CERN, the international physics laboratory that operates the world's most sophisticated particle accelerator. Nations should take similar action on AI, which could include establishing an international computing resource to empower scientists worldwide and enable global collaboration on AI safety.
AI holds great promise, but reaping its rewards will require society to manage its hazards. By governing the physical inputs to the technology, governments can regulate AI safely and lay the groundwork for a secure and prosperous future. That may be simpler than many assume.