
Rise in Support for More Open Source in EU AI Law

GitHub, Hugging Face, Creative Commons, and other companies have published a paper urging EU policymakers to support more open-source development of AI models as they finalize the AI Act. EleutherAI, LAION, and Open Future also cosigned the paper.

Their list of recommendations to the European Parliament ahead of the final rules includes clearer definitions of AI components, clarification that hobbyists and researchers working on open-source models are not commercially benefiting from AI, allowing limited real-world testing of AI projects, and setting proportionate requirements for different foundation models.

The paper’s objective, according to Peter Cihon, senior policy manager at GitHub, is to give legislators guidance on how best to support the development of AI. He said companies want their voices heard as other governments draw up their own versions of AI laws, and they hope policymakers will follow the EU’s example as they put pen to paper.

AI regulation has been a heated topic for many nations, and the EU was among the first to begin seriously debating proposals. The EU’s AI Act, however, has come under fire for its overly broad definitions of AI technology and its excessive emphasis on the application layer.

The AI Act has the potential to set a global precedent for regulating AI in a way that addresses its risks while encouraging innovation, the companies write in the paper. The legislation has a significant opportunity to advance this goal by supporting the burgeoning open ecosystem approach to AI.

Although the Act is intended to cover rules for many forms of AI, the proposed provisions governing generative AI have received most of the attention. The European Parliament approved a draft of the policy in June.

Some creators of generative AI models have adopted the open-source philosophy of releasing their models publicly, letting the larger AI community experiment with them and fostering trust. Stability AI open-sourced Stable Diffusion, and Meta released its large language model Llama 2 as nominally open source. Llama 2 technically falls short of open-source principles, however, because Meta withholds the source of its training data and places limits on who may use the model for free.

Open-source proponents argue that making a model’s training process more transparent improves AI development. But openness has also presented challenges for the businesses building these systems: citing safety and competition concerns, OpenAI decided to stop disclosing a significant portion of its GPT research.

The companies behind the paper argue that some currently proposed rules affecting models deemed high-risk, no matter how big or small the developer, could be harmful to those without considerable financial resources. Hiring third-party auditors, for instance, is expensive and, they contend, unnecessary to mitigate the risks associated with foundation models.

Additionally, the group maintains that because sharing AI tools through open-source libraries does not constitute commercial activity, such tools should not be subject to regulatory measures.

The companies also argue that rules prohibiting the testing of AI models in real-world conditions would significantly impede research and development. Open testing, they say, offers lessons for improving how models function. At present, AI applications cannot be tested outside closed experiments because of the potential legal repercussions of unproven products.

As might be expected, AI firms have been highly vocal about which provisions ought to be included in the EU’s AI Act. Some of the recommendations made by OpenAI, which lobbied EU politicians against stricter rules on generative AI, were incorporated into the most recent draft of the act.
