EU’s AI Act gets published in bloc’s Official Journal

The European Union’s landmark risk-based law for artificial intelligence applications, the EU AI Act, has been published in its final form in the bloc’s Official Journal.

The new law will come into force on August 1, 20 days from now. Most of its provisions will be fully applicable to AI developers within 24 months, by mid-2026. However, the law takes a phased approach to implementing the EU’s AI rulebook, meaning different legal provisions will start to apply at different dates between now and then, and some even later.

European lawmakers reached a political agreement on the bloc’s first comprehensive rulebook for AI in December last year.

The framework places different obligations on AI developers depending on the use case and the perceived level of risk. The bulk of AI uses will go unregulated because they are considered low risk, while a handful of potential uses of AI are banned outright.

So-called “high risk” use cases, such as biometric applications of AI, or AI used in law enforcement, employment, education, and critical infrastructure, are permitted under the law, but developers of such applications must meet obligations in areas such as data quality and anti-bias.

A third, lighter-touch risk tier applies to technologies such as AI chatbots, whose makers face less stringent transparency requirements.

Makers of general purpose AI (GPAI) models, such as OpenAI’s GPT, the technology that powers ChatGPT, also face transparency requirements. The most powerful GPAIs, generally identified by a compute threshold, may additionally be required to carry out systemic risk assessments.

Some in the AI industry, backed by a handful of Member State governments, lobbied intensely to water down the obligations on GPAIs over concerns the law would hamper Europe’s ability to produce homegrown AI giants capable of competing with rivals in the U.S. and China.

Phased implementation

First, the list of prohibited uses of AI will apply six months after the law enters into force, in early 2025.

The banned (or “unacceptable risk”) use cases of AI that will soon be illegal include China-style social credit scoring; compiling facial recognition databases by untargeted scraping of the internet or CCTV footage; and law enforcement’s use of real-time remote biometrics in public places, unless one of several exceptions applies, such as searching for missing or abducted people.

Next, nine months after entry into force, or around April 2025, codes of practice will apply to developers of in-scope AI apps.

The EU’s AI Office, an ecosystem-building and oversight body established by the law, is responsible for providing these codes. But who will actually write the guidelines remains an open question.

According to a Euractiv report earlier this month, civil society groups have voiced concerns that AI industry players will be able to shape the rules that are supposed to apply to them; the EU had reportedly been looking to consultancy firms to draft the codes. More recently, MLex reported that the AI Office will launch a call for expressions of interest to select stakeholders to draft the codes of practice for general purpose AI models, following pressure from MEPs to make the process inclusive.

Another key deadline falls 12 months after entry into force, on August 1, 2025, when the rules for GPAIs that must comply with transparency requirements will start to apply.

A subset of high-risk AI systems has been given the most generous compliance deadline, with 36 months after entry into force, until 2027, to meet their obligations. Other high-risk systems must comply sooner, after 24 months.