Biden imposes new rules for AI

On Monday, President Joe Biden issued a comprehensive executive order designed to guard against the risks posed by artificial intelligence and to prevent malicious actors from using the technology to create lethal weapons or launch enhanced cyberattacks.

With this action, the federal government is staking out a position in an almost half-trillion dollar industry that is at the epicenter of a fierce competition between some of the biggest businesses in the country, such as Google and Amazon.

The Biden administration further urges Congress to enact data privacy laws, a goal that has eluded legislators for years in spite of numerous attempts.

The executive order, among other things, establishes industry standards, such as watermarks to identify AI-generated content, and asserts oversight of the safety tests businesses use to assess conversational bots like ChatGPT.

White House Deputy Chief of Staff Bruce Reed said in a statement that the reform plan represents the most aggressive set of steps any government in the world has ever taken on AI safety, security, and trust.

The executive order that aims to control AI contains the following:

AI businesses are required to carry out safety testing and notify the federal government of the findings

A key provision of the executive order requires AI companies to test some of their products and report the results to government officials before new features are made available to consumers.

Through a process called “red teaming,” developers test new products for safety to make sure they don’t pose a serious risk to consumers or the general public.

The federal government has the authority to compel a company to either discontinue a particular initiative or make product improvements if a safety assessment yields unsettling results.

According to the Biden administration, these new government powers are authorized by the Defense Production Act, a law passed more than 75 years ago that gives the White House broad authority to supervise industries tied to national security.

Before businesses release AI systems to the public, these steps will guarantee that they are reliable, safe, and secure, according to the White House.

A new set of standards establishes AI industry norms

The executive order establishes a broad range of industry standards aimed at producing transparent products and guarding against harmful outcomes, such as AI-created biological material or cyberattacks.

One prominent new standard would formalize the use of watermarks to notify users when they encounter an AI-enabled product, potentially reducing the threat posed by deepfakes and other impostor content.

Another standard would ensure that biotechnology companies take appropriate safety measures when using AI to create or modify biological material.

Because the industry guidelines serve as recommendations rather than orders, businesses will be free to disregard them.

The White House said the federal government will use its influence as a major sponsor of scientific research to push for adherence to the biological-material standard. Meanwhile, it will require watermarks on the AI products that federal agencies deploy, lending support to that standard as well.

However, Sarah Kreps, a professor of government and the director of Cornell University’s Tech Policy Institute, warned in a statement that the executive order runs the risk of offering an ambitious vision for AI’s future but lacking the authority to bring about the change in the industry.

Kreps said the executive order strikes the right tone by acknowledging both the benefits and risks of AI, but that it lacks an implementation and enforcement mechanism, calling for a great deal of work that is unlikely to be carried out.

Strict oversight is applied to government agencies’ use of AI

The executive order directs a broad range of government agencies to change how they use artificial intelligence, positioning federal institutions as models for practices the administration hopes the private sector will eventually adopt.

According to the White House, federal benefit programs and contractors, for example, will take precautions to ensure AI does not exacerbate racial bias in their operations. In a similar vein, the Department of Justice will set guidelines for how best to investigate civil rights violations involving AI.

Meanwhile, the Department of Homeland Security and the Department of Energy will take action to address the threat that artificial intelligence poses to critical infrastructure.

Robert Weissman, president of Public Citizen, a consumer advocacy group based in Washington, D.C., praised the executive order while noting its shortcomings.

According to Weissman, the executive order is a vital first step in the long process of regulating the rapidly developing field of artificial intelligence, but it is only a beginning.