A new report from the US Commerce Department supports “open” AI models

In a report released on Monday, the U.S. Commerce Department endorsed “open-weight” generative AI models such as Meta’s Llama 3.1, but it also recommended that the government develop “new capabilities” to monitor these models for potential risks.

The report was written by the Commerce Department’s National Telecommunications and Information Administration (NTIA). It argues that open-weight models broaden access to generative AI for small businesses, researchers, nonprofits, and individual developers, and that for these reasons the government should not restrict access to open models, at least not before determining whether such restrictions would harm the market.

The sentiment is consistent with recent remarks by Lina Khan, chair of the Federal Trade Commission (FTC), who believes that open models can foster healthy competition by allowing more small players to bring their ideas to market.

According to a statement from Alan Davidson, assistant secretary of Commerce for Communications and Information and NTIA administrator, the openness of the largest and most powerful AI systems will affect competition, innovation, and risks in these groundbreaking tools. The NTIA report, he said, emphasizes the value of open AI systems and urges more proactive risk management given the wide availability of model weights for the largest AI models, adding that the government must play a major role in fostering AI development while building the capacity to identify and manage emerging risks.

The report arrives as regulators in the U.S. and abroad weigh legislation that could restrict, or attach new conditions to, companies’ ability to release open-weight models.

California appears all but certain to adopt bill SB 1047, which would require any company training a model with more than 10^26 FLOPs of compute to strengthen its cybersecurity and create a mechanism for “shutting down” copies of the model under its control. Abroad, the European Union has finalized compliance deadlines for companies under its AI Act, which sets new rules around copyright, transparency, and AI applications.

Meta has said the EU’s AI rules will prevent it from making some of its models publicly available there in the future, and several startups and large tech companies have come out against California’s bill, arguing that it is excessively burdensome.

The NTIA’s model governance approach is laissez-faire to a degree.

In its report, the NTIA recommends that the government develop an ongoing program to collect evidence on the benefits and drawbacks of open models, evaluate that evidence, and act on the evaluation, which could include placing some restrictions on model availability. It also suggests that, in addition to funding research on risk mitigation, the government investigate the safety of various AI models and define thresholds for “risk-specific” indicators that would signal when policy changes might be needed.

According to U.S. Secretary of Commerce Gina Raimondo, these steps, among others, would align with President Joe Biden’s executive order on artificial intelligence, which directed companies and government agencies to establish new guidelines for the development and use of AI.

In a press release, Raimondo said the Biden-Harris Administration is working hard to maximize AI’s potential while minimizing its risks, and that by embracing transparency and recommending how the U.S. government should prepare for and adapt to future challenges, the report offers a road map for responsible AI innovation and American leadership.
