
Understanding Biden’s AI Executive Order

In an effort to address the risks posed by artificial intelligence, President Biden signed a comprehensive executive order this week. Some experts, however, say the order leaves too many unanswered questions about how the regulation would actually operate.

The directive requires agencies to reconsider how they approach artificial intelligence (AI) and seeks to address risks to consumer privacy, competition, and national security while encouraging innovation and the application of AI for public services.

One of the order’s most important components is its requirement that businesses developing the most powerful AI models disclose the results of safety testing. Speaking on Tuesday, Secretary of Commerce Gina Raimondo confirmed that the order directs the Commerce Department to require companies to report the safety precautions they are taking, so the government can determine whether those measures are sufficient, and that the administration intends to hold these businesses accountable.

However, the 63-page Executive Order doesn’t outline what happens if a business discloses that one of its models might be hazardous. Expert opinions differ; some believe the Executive Order only increases transparency, while others think that if a model is discovered to be dangerous, the government may intervene.

This has led some experts to conclude that, in addressing some AI-related issues, the White House may be pushing past the limits of its executive authority.

Forward guidance

Helen Toner, director of strategy and foundational research grants at the Center for Security and Emerging Technology, a think tank, attended a virtual briefing prior to the Order’s release. There, she says, a senior official stated that the President had given his team the assignment of identifying and pulling every lever.

The majority of the order consists of directives to other organizations and agencies to conduct research or create more thorough guidelines. For instance, the Office of Management and Budget has 150 days to guide federal agencies on how to promote innovation while controlling the risks associated with artificial intelligence.

Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, a think tank, says that while an executive order’s impact depends on effective implementation, this one has a better chance than most of making a difference thanks to strong political support from within the federal government.

In 2019, then-President Donald Trump issued an Executive Order on AI aimed at upholding American AI leadership. According to Kaushik, because that order lacked coordinated support from senior officials, the Department of Health and Human Services was the only agency to implement AI policy in compliance with it. By contrast, Kaushik says, the Biden Administration’s Executive Order “has buy-in from the very top”: the President’s office, the Chief of Staff’s office, and the Vice President’s office.

Compute limits

Among the aspects of the Biden Administration’s order likely to take effect right away are changes to the regulations governing high-skill immigration. These seek to stimulate innovation in the United States by expanding the pool of AI talent, and some of the changes are expected to be made within the next ninety days.

Another provision likely to have a more immediate effect on the AI sector is a set of requirements placed on businesses creating, or planning to create, dual-use foundation models. As the U.K. government recently stated in a paper released ahead of the AI Safety Summit, these models are capable of performing a wide variety of tasks and could pose a danger to national security. The companies will be required to provide the U.S. government with information about their AI development plans, the physical and cyber security precautions they have taken to secure their AI models, and the results of any safety testing they have undertaken.

It is the Secretary of Commerce’s responsibility to specify which AI models are sufficiently hazardous to meet these criteria. Paul Scharre, executive vice president and director of studies at the military-affairs think tank Center for a New American Security, says that experts currently lack the knowledge needed to do this.

In the interim, the requirements will apply to models trained with computing power above a predetermined threshold of 100 million billion billion (10^26) operations. No AI model has yet been trained with this much computing power. Epoch, a research organization, estimates that OpenAI’s GPT-4, the most powerful AI model currently available to the public, was trained with about five times less. However, according to Epoch, over the past ten years the amount of computing power used to train AI models has doubled every six months.
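A rough back-of-the-envelope sketch, using only the approximate figures above (a 10^26-operation threshold, GPT-4 at about one fifth of it, and a six-month doubling time; illustrative, not official values), suggests how soon a frontier model might cross the threshold:

```python
import math

# Approximate figures cited in the article (not official values):
THRESHOLD_OPS = 1e26            # reporting threshold: 100 million billion billion operations
GPT4_OPS = THRESHOLD_OPS / 5    # Epoch's estimate: GPT-4 used ~5x less compute (~2e25)
DOUBLING_MONTHS = 6             # Epoch's observed doubling time for training compute

# Doublings needed to grow from GPT-4's compute to the threshold,
# and how long that takes if the six-month doubling trend continues.
doublings = math.log2(THRESHOLD_OPS / GPT4_OPS)   # log2(5) ~= 2.32
months = doublings * DOUBLING_MONTHS              # ~14 months

print(f"~{doublings:.1f} doublings, or roughly {months:.0f} months, to reach the threshold")
```

If the trend holds, a threshold-crossing model would arrive a little over a year after GPT-4-scale training runs, consistent with the administration’s stated aim of capturing the next generation of models.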

Scharre, who also attended the briefing, says a Biden Administration official stated that the threshold was set so that current models would not be captured but the next generation of state-of-the-art models most likely would be.

Scharre claims that computational capacity is a “crude proxy” for the model’s capabilities, which are the real concern of policymakers. However, as Kaushik notes, if the reporting requirements jeopardize trade secrets or intellectual property, then setting a compute threshold could incentivize AI companies to build models that achieve comparable performance while maintaining computational power below the threshold.

Presidential power limitations

Even for models that exceed the computational threshold, the Executive Order only expressly mandates that businesses notify the government of the results of red-teaming safety tests, a process in which auditors actively seek out problems with AI models. As legal justification, the Biden Administration invoked the Defense Production Act, a law that permits the President to influence domestic industry in order to advance national security.

As for what would happen if a business announced that its AI model had failed the required safety tests, Toner of the Center for Security and Emerging Technology says it is “totally unclear.”

The underlying situation, as Toner describes it, is that a small group of businesses is developing extremely advanced artificial intelligence (AI) systems while telling the government they are unsure of the risks those systems pose, something she thinks is kind of crazy. The government’s response, in effect, is: tell us more, so that we can make better-informed decisions.

Samuel Hammond, a senior economist at the Foundation for American Innovation, believes the government would intervene, either forbidding the model’s use or even ordering its removal. In recent years, Hammond notes, the Defense Production Act has been used to compel companies to produce things against their will, and even to stop producing things they would otherwise choose to make. Beneath the national security umbrella, its powers are fairly broad.

Charles Blanchard, a partner at Arnold and Porter and a former general counsel of the Army and Air Force, says AI developers could mount a legal challenge against the use of the U.S. Defense Production Act to force disclosure, a use of the act he calls already “very aggressive.” He points out, however, that practically all of the businesses this regulation might cover are already voluntarily cooperating with the government on AI safety issues, making such a challenge unlikely.

Blanchard adds that the government would be on weaker legal ground if it used the Defense Production Act to take action against the creators of dangerous AI models. That seems like a stretch, he says, and is where a legal challenge could be possible.

The uncertainty surrounding post-disclosure enforcement is just one of many instances of the Biden Administration running up against the boundaries of its authority, Toner notes, pointing to the use of AI in criminal justice and law enforcement as another area where the executive branch largely lacks the power to act.

According to her, the order essentially places the onus on Congress, both to handle certain issues that the executive branch is simply unable to resolve on its own and to shore up provisions that the White House can initially implement only in a very tentative manner.
