The European Union today reached agreement on the details of the AI Act, a comprehensive set of regulations for those who develop and use artificial intelligence. Legislators believe this landmark law will serve as a model for other countries.
Following several months of deliberation over how to regulate firms such as OpenAI, legislators from the European Union’s three institutions—the Parliament, the Council, and the Commission—took more than 36 hours, from Wednesday afternoon to Friday night, to finalise the new legislation. Lawmakers were under pressure to reach a deal before the start of the EU parliament election campaign in January.
China’s new generative AI regulations came into force in August, so the law itself is not a world first. The EU AI Act is, however, the most comprehensive regulation of its kind for the technology. It prohibits the indiscriminate scraping of faces from the internet and bans biometric systems that use sensitive characteristics such as race or sexual orientation to identify individuals. Legislators also agreed that, in certain cases, law enforcement should be able to use biometric identification systems in public areas.
Stronger rules for “very powerful” models and new transparency requirements for all general-purpose AI models—such as OpenAI’s GPT-4, which powers ChatGPT—were also incorporated. Dragos Tudorache, a member of the European Parliament and one of the two co-rapporteurs leading the negotiations, says the AI Act establishes rules for large, powerful AI models, ensuring they do not pose systemic risks to the Union.
Businesses that break the regulations risk fines of up to 7% of their worldwide sales. The complete set of regulations will take effect in about two years, with the bans on prohibited uses of AI applying after six months and the transparency requirements after a year.
There were also measures making it easier for copyright holders to defend their rights against generative AI, and mandating greater energy transparency for general-purpose AI systems.
European Commissioner Thierry Breton said at a press conference on Friday night that Europe has positioned itself as a pioneer and recognises the significance of its role as a global standard-setter.
AI technology and the main concerns surrounding it have changed significantly in the two years that lawmakers have spent negotiating the rules agreed upon today. When the AI Act was drafted in April 2021, lawmakers were concerned about opaque algorithms deciding who would be eligible for social benefits, be granted refugee status, or be awarded a job. By 2022, there were instances of AI actively harming people. Students studying remotely claimed that AI systems discriminated against them based on the colour of their skin, and in a Dutch scandal, decisions made by algorithms were linked to families being forcibly separated from their children.
Then, OpenAI released ChatGPT in November 2022, which significantly changed the conversation. Some AI experts expressed concern over the sudden surge in popularity and flexibility of AI, drawing ludicrous comparisons between AI and nuclear weapons.
During the AI Act negotiations in Brussels, a debate took shape over where responsibility should sit. Should companies that create “foundation models,” like the one behind ChatGPT—such as OpenAI and Google—be viewed as the source of possible issues and regulated accordingly? Or should new rules instead concentrate on businesses that use those foundation models to build new AI-powered applications, such as chatbots or image generators?
The generative AI sector in Europe warned against regulating foundation models, claiming that this would stifle innovation among the region’s AI startups. Arthur Mensch, CEO of the French AI startup Mistral, argued last month that regulators cannot control an engine that has no use on its own: the C programming language can be used to create malware, yet regulators do not ban the language—they outlaw malware. According to Carme Artigas, Spain’s Secretary of State for Digitalization and Artificial Intelligence, speaking at the press conference, Mistral’s 7B foundation model would be exempt under the rules agreed today because the company is still in the research and development stage.
The main point of contention during the concluding talks was whether law enforcement should be permitted to use facial recognition or other forms of biometrics to identify people, either in real time or retrospectively. According to Daniel Leufer, a senior policy analyst at the digital rights organisation Access Now, both undermine anonymity in public areas. He says real-time biometric identification uses live security-camera feeds to identify a person standing in a train station right now, while “post” or retrospective identification uses previously recorded images or video to determine that the same person also visited a bank, a supermarket, and a train station yesterday.
Leufer was disappointed by the “loopholes” for law enforcement that appeared to have made it into the finalised version of the act.
Conversations were clouded by European regulators’ sluggish response to the social media era. The Digital Services Act, an EU rulebook intended to safeguard online human rights, was passed this year, almost 20 years after Facebook’s debut. During that period, the bloc was compelled to deal with problems brought about by US platforms while being unable to nurture their smaller European competitors. One of the European Parliament’s two lead negotiators, Brando Benifei, told WIRED in July that “maybe we could have prevented [the problems] better by earlier regulation.” AI technology is developing quickly, though, and it will take a long time to determine whether the AI Act proves more effective at containing the negative effects of Silicon Valley’s most recent export.