ChatGPT and Bard Don’t Meet EU Law Standards: Study

Recent research from Stanford University found that none of the major large language models (LLMs), including OpenAI's GPT-4 and Google's Bard, currently complies with the EU's Artificial Intelligence (AI) Act.

The European Parliament recently approved the Act, the first of its kind to regulate AI at the regional level. The EU AI Act not only governs AI within the EU, home to some 450 million people, but also serves as a model for AI legislation worldwide.

But according to the latest Stanford study, AI companies still have a long way to go before they reach compliance.

The researchers evaluated ten major model providers, scoring each on a 0-to-4 scale against 12 requirements drawn from the AI Act.

The study found a wide spread in compliance: some providers scored less than 25% of the possible points, and only Hugging Face/BigScience scored above 75%.
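To make the scoring arithmetic concrete, the sketch below shows how 0-to-4 scores across 12 requirements translate into a total out of 48 points and a compliance percentage. It is an illustrative reconstruction only; the requirement scores shown are hypothetical, not figures from the study.

```python
# Illustrative sketch of the study's scoring arithmetic: 12 requirements,
# each graded on a 0-4 scale, for a maximum of 48 points per provider.
# The example scores below are hypothetical, not the study's actual data.

NUM_REQUIREMENTS = 12
MAX_PER_REQUIREMENT = 4
MAX_TOTAL = NUM_REQUIREMENTS * MAX_PER_REQUIREMENT  # 48 points

def compliance_percentage(scores):
    """Turn a provider's 12 per-requirement scores (each 0-4) into a percentage."""
    assert len(scores) == NUM_REQUIREMENTS
    assert all(0 <= s <= MAX_PER_REQUIREMENT for s in scores)
    return 100 * sum(scores) / MAX_TOTAL

# A hypothetical provider scoring mostly 1s and 2s lands well below the 75% mark.
example_scores = [2, 1, 3, 0, 2, 1, 4, 2, 1, 0, 2, 1]
print(f"{compliance_percentage(example_scores):.1f}% of possible points")  # -> 39.6%
```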

Even among the highest-scoring providers, there is clearly substantial room for improvement.

The analysis highlights several significant areas of non-compliance. Among the most concerning findings, according to the researchers, was the lack of transparency around the copyright status of training data, the energy consumed and emissions produced, and the methods used to mitigate potential risks.

The team also found a clear divide between open and closed model releases: open releases disclose resources more thoroughly, but make it harder to monitor or control deployment.

The Stanford researchers concluded that, regardless of release strategy, all providers could realistically improve their practices.

The transparency of major model releases has noticeably declined recently. OpenAI's GPT-4 report, for example, cites the competitive landscape and safety concerns as reasons for not disclosing details about training data and compute.

AI Regulations in Europe May Change the Industry

Although these findings matter in their own right, they are part of a larger, evolving story. OpenAI has recently been working to shape how various nations approach AI. The company even warned that it might leave Europe if the rules proved too onerous, a statement it later withdrew. Such moves highlight the complex and often fraught relationship between regulators and the companies that build AI technology.

The researchers offer several recommendations for strengthening AI regulation. They call on EU policymakers to ensure that the AI Act holds larger foundation model providers accountable for transparency and accountability, and they stress that enforcing the Act will require technical resources and expertise, given the complexity of the AI ecosystem.

According to the researchers, the central question is how quickly model providers can adapt their business practices to meet regulatory requirements. They found that, even without heavy regulatory pressure, many providers could make meaningful and plausible adjustments that would lift their total scores into the high 30s or 40s (out of a possible 48 points).

The researchers' work offers a perceptive glimpse of how AI regulation may develop. They argue that if the AI Act is enacted and enforced, it will have a substantial positive effect on the ecosystem, paving the way for greater accountability and transparency.

With its unprecedented capabilities and risks, AI is reshaping society. As the world prepares to regulate this transformative technology, it is increasingly clear that transparency is not just a nice-to-have but a crucial component of ethical AI deployment.