
Global leaders race to control AI in the Future

AI development has unquestionably accelerated over the past year. Rapid technical advances have moved the notion that AI might soon surpass human intelligence from science fiction to a plausible possibility.

Turing Award winner Geoffrey Hinton concluded in May that AI could surpass human intelligence by 2028, rather than in the 50 to 60 years he had originally predicted. DeepMind co-founder Shane Legg recently said he sees a 50% chance of artificial general intelligence (AGI) arriving by 2028. (AGI refers to the point at which AI systems possess general cognitive abilities and can perform intellectual tasks at or beyond human level, in contrast to today's AI systems, which are narrowly focused on completing particular tasks.)

This near-term possibility has sparked strong, sometimes contentious debate about AI, particularly its ethical ramifications and potential regulatory future. These discussions have moved from academic circles to the forefront of international policy, forcing governments, business executives, and concerned citizens to confront questions that could have a significant impact on humankind's future.

Several noteworthy regulatory announcements have advanced these discussions significantly, but there is still a great deal of uncertainty.

The debate over AI's existential risks

Beyond the expectation that significant change is coming soon, there is little consensus among predictions about AI. Still, the discussions have raised questions about how, and to what degree, AI development might go wrong.

For instance, during a Congressional hearing in May, OpenAI CEO Sam Altman spoke bluntly about the risks AI could pose, saying the technology has the potential to go very wrong, that he wants to be vocal about that possibility, and that he would like to work with the government to prevent it from happening.

Altman was not alone in this view. In late May, the nonprofit Center for AI Safety published a single-sentence statement declaring that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. Hundreds of people signed it, including Altman and 38 employees of Google's DeepMind AI division. The statement came at the height of AI "doomerism," when existential risks were at the center of public discussion.

As 2028 draws near, it is reasonable to consider these matters and to wonder how prepared we are to handle any risks. Still, not everyone thinks the risks are that great—at least, not the more extreme existential risks that are driving a large portion of the regulatory discourse.

Concerned and skeptical industry voices

One prominent skeptic of the doomsday scenarios is Andrew Ng, the former head of Google Brain. He recently argued that the bad idea that AI could drive us extinct is merging with the bad idea that a good way to make AI safer is to impose heavy licensing requirements on the AI industry.

Ng believes big tech is using this narrative to engineer regulatory capture and ensure that alternatives such as open-source software cannot compete. "Regulatory capture" describes a situation in which a regulator adopts rules that benefit the industry at the expense of the general public interest; in this case, by making regulations too costly or onerous for smaller enterprises to comply with.

Yann LeCun, Meta's chief AI scientist and, like Hinton, a Turing Award recipient, went a step further last weekend. He asserted in a post on X (formerly Twitter) that Altman, Anthropic CEO Dario Amodei, and Google DeepMind CEO Demis Hassabis are all engaged in "massive corporate lobbying" by endorsing "preposterous" doomsday AI scenarios.

He argued that the net effect of this lobbying would be regulations whose compliance costs effectively shut down open-source AI projects, leaving AI in the hands of a small number of companies.

The push for regulations

Still, the rush toward regulation has accelerated. In July, the White House announced that OpenAI and other leading AI developers, including Anthropic, Alphabet, Meta, and Microsoft, had voluntarily committed to developing methods for testing their tools for safety and security before public release. In September, more companies joined the commitment, bringing the total to fifteen.

U.S. government stance

This week, the White House unveiled a comprehensive Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, seeking to strike a balance between strict oversight and unrestricted development.

The order, which contains numerous directives that federal agencies must fulfill within the next year, reportedly aims both to encourage wider application of AI and to place tighter restrictions on commercial AI. Its directives establish new requirements for AI companies to share safety test results with the federal government and cover a wide range of areas, from immigration and national security to housing and healthcare.

According to New York Times technology reporter Kevin Roose, the order appears to have something for everyone, reflecting the White House's attempt to strike a middle ground on AI governance. The consulting firm EY has provided a comprehensive analysis of the order.

Although an executive order is not legislation, and a future president could simply revoke it, this is a calculated move to position American policymakers at the forefront of the competitive global race to shape AI governance. President Biden has claimed that, in terms of AI safety, security, and trust, the Executive Order represents the biggest step any government has ever taken.

The strategy is more carrot than stick, according to Ryan Heath of Axios, but it might be enough to put the United States ahead of its international competitors in the race to regulate AI. Writing in his Platformer newsletter, Casey Newton praised the administration, saying the federal government had developed enough expertise to draft an executive order that is broad but nuanced, allowing room for exploration and entrepreneurship while potentially reducing negative effects.

The ‘World Cup’ of AI policy

The United States is not the only government attempting to shape AI's future. Last week was the "World Cup" of AI policy, in the words of the Center for AI and Digital Policy. Beyond the U.S. actions, the G7 announced 11 non-binding AI principles and urged "organizations developing advanced AI systems to commit to the application of the International Code of Conduct."

Like the U.S. order, the G7 code is intended to promote safe, secure, and reliable AI systems. However, different jurisdictions may take different approaches to putting these guidelines into practice.

The U.K. AI Safety Summit, which concluded last week, brought together governments, research experts, civil society organizations, and leading AI companies from around the world to discuss the risks of AI and how to mitigate them. The Summit focused specifically on "frontier AI" models, the most sophisticated large language models (LLMs), capable of performing a wide range of tasks at or above human-level performance, including those built by Alphabet, Anthropic, OpenAI, and several other companies.

The conclave produced the "Bletchley Declaration," signed by representatives of 28 nations, including the United States and China, which warned of the dangers posed by the most advanced frontier AI systems. The U.K. government calls the declaration a "world-first agreement" on how to manage the riskiest forms of artificial intelligence, and it continues, "We resolve to work together in an inclusive manner to ensure human-centric, trustworthy, and responsible AI."

The agreement did not, however, set out specific policy objectives. Even so, Fortune's David Meyer described it as a "promising start" to global cooperation on a problem that has only recently come to the fore.

Finding a balance between regulation and innovation

The stakes of AI development are clearly rising as we approach the future predicted by experts such as Shane Legg and Geoffrey Hinton. Regulatory frameworks are now a top priority for the White House, the G7, the EU, the UN, China, and the U.K. These early efforts aim to reduce risks while promoting innovation, although questions remain about their effectiveness and impartiality in actual implementation.

Artificial intelligence is clearly a major global concern. The coming years will be critical for navigating this duality: balancing the need for ethical and societal safeguards against the promise of transformative innovations, from more effective medical treatments to tools for fighting climate change. Governments, industry, and academia are not the only key players shaping AI's future; citizen participation and grassroots activism are becoming increasingly important.

It’s a shared challenge that could influence not only the technology sector but also the destiny of humanity.
