After Member State representatives voted today to approve the draft law’s final wording, the European Union’s AI Act—a risk-based strategy for regulating applications of artificial intelligence—looks to have cleared the last major obstacle to implementation.
This outcome follows lengthy "final" three-way negotiations between EU co-legislators, which ran over several days to secure the political agreement reached in December. Work then began to turn consensus positions captured on provisional negotiation sheets into a final compromise text for legislators to approve, an effort that culminated in today's Coreper vote in favor of the draft rules.
The regulation introduces governance rules for high-risk uses of AI (applications that could harm health, safety, fundamental rights, the environment, democracy, or the rule of law) and for the most powerful general-purpose/foundation models deemed to pose "systemic risk." It also sets out a list of prohibited (or "unacceptable risk") uses of AI, such as social scoring, and imposes transparency requirements on applications like AI chatbots. "Low-risk" AI applications, however, are exempt from legal restrictions.
The vote in favor of the final text will prompt a sigh of relief across much of Brussels. Even at this late stage, the risk-based regulation had been threatened by persistent opposition from France, driven by a desire to avoid legal restrictions that could impede the growth of domestic generative AI startups, such as Mistral AI, into national champions capable of competing with the US AI giants.
The text received unanimous support from all 27 ambassadors of EU member states.
With European elections upcoming and the current Commission's mandate expiring later this year, a failed vote could have sunk the entire regulation, leaving little time for any re-negotiation.
The European Parliament must still adopt the draft law, with members voting on the compromise text in both committee and plenary sessions. But given that most of the opposition came from a handful of member states (Germany and Italy were also linked to skepticism over the AI Act imposing requirements on foundation models), the remaining votes look like a formality, and the EU's flagship AI Act should become law in the coming months.
Once adopted, the Act will enter into force 20 days after its publication in the EU's Official Journal. The new rules will then apply in stages: after a six-month grace period, the regulation's list of prohibited uses of AI takes effect (likely in the fall), ahead of the new requirements applying to in-scope apps and AI models.
Rules for foundation models, also known as general-purpose AIs, get a further year's lead-in, meaning they won't apply until 2025. Most of the remaining provisions won't apply until two years after the law enters into force.
Compliance by the subset of more powerful foundation models deemed to pose systemic risk will be overseen by an AI Office, which the Commission has already begun setting up. The Commission has also recently unveiled a set of initiatives aimed at boosting the prospects of homegrown AI developers, including adapting the bloc's supercomputer network to support the training of generative AI models.