Significant changes are ahead for the major American tech companies as the European Union’s groundbreaking artificial intelligence law goes into effect on Thursday.
The AI Act received final approval in May from EU member states, lawmakers, and the European Commission, the EU's executive body. The landmark regulation governs how businesses develop, use, and deploy AI.
Here's everything you need to know about the AI Act and how it will affect the world's largest tech companies.
What is the AI Act?
The AI Act is a piece of EU legislation governing artificial intelligence. First proposed by the European Commission in 2020, the law aims to mitigate the harmful effects of AI.
Its primary targets are the large American tech firms that currently build and develop the most advanced AI systems. But plenty of other businesses, including companies outside the tech sector, will also fall under the rules.
The law applies a risk-based approach to regulating AI and lays out a thorough and uniform regulatory framework for the technology throughout the EU.
The EU AI Act is "the first of its kind in the world," according to Tanguy Van Overstraeten, head of Linklaters' technology, media and telecommunications practice in Brussels.
"Numerous firms could be impacted," he said, "particularly those that develop AI systems, but also those that deploy or simply use them in certain circumstances."
The law uses a risk-based approach to govern AI, meaning that various uses of the technology are subject to different regulations based on the degree of risk they represent to society.
Strict obligations will apply under the AI Act to AI applications deemed "high-risk," for instance. These include adequate risk assessment and mitigation systems, high-quality training datasets to minimize the risk of bias, routine logging of activity, and mandatory sharing of detailed model documentation with authorities so they can assess compliance.
Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decisioning systems, educational scoring, and remote biometric identification systems.
Additionally, any AI applications judged "unacceptable" in terms of their risk level are banned outright.
Unacceptable-risk applications include predictive policing, the use of emotion recognition technology in the workplace or classroom, and "social scoring" systems that rank citizens based on the collection and analysis of their data.
What does this mean for US tech companies?
Amid a global frenzy of hype around artificial intelligence, American giants such as Microsoft, Google, Amazon, Apple, and Meta have been actively partnering with, and investing billions of dollars into, companies they believe can lead the field.
Since massive computational infrastructure is required to train and run AI models, cloud platforms like Microsoft Azure, Amazon Web Services, and Google Cloud are also essential to aiding AI development.
Big Tech companies will undoubtedly be among the names most heavily scrutinized under the new rules.
The AI Act's effects also reach far beyond the EU. Because it covers any organization with operations or an impact in the EU, the law will likely apply to you no matter where you are based, said Charlie Thompson, senior vice president at enterprise software firm Appian.
This will bring much greater scrutiny of tech giants' activities in the EU market and their use of EU citizens' data, Thompson added.
Meta has already restricted the availability of its AI model in Europe due to regulatory concerns, though the move was not specifically prompted by the EU AI Act.
Earlier this month, the company said it will not make its LLaMa models available in the EU, citing uncertainty over whether they comply with the EU's General Data Protection Regulation, or GDPR.
Meta was previously ordered to stop training its models on posts from Facebook and Instagram in the EU over concerns it may be violating GDPR.
How is generative AI treated?
The EU AI Act classifies generative AI as a form of "general-purpose" artificial intelligence.
The label refers to tools designed to accomplish a broad range of tasks at a level at least comparable to, if not better than, a human. General-purpose AI models include OpenAI's GPT, Google's Gemini, and Anthropic's Claude.
The AI Act imposes strict requirements on these systems, including compliance with EU copyright law, transparency disclosures about how the models are trained, routine testing, and adequate cybersecurity protections.
However, not every AI model is treated the same way. AI developers have argued the EU must ensure that open-source models, which are freely available and can be used to build tailored AI applications, are not subject to unduly onerous rules.
Examples of open-source models include Meta's LLaMa, Stability AI's Stable Diffusion, and Mistral's 7B model. The EU does set out certain exceptions for open-source generative AI models.
To qualify for the exemption, however, open-source providers must make their parameters, including the weights, model architecture, and model usage, publicly available and allow "access, usage, modification, and distribution of the model." Open-source models that pose "systemic" risks will not qualify for the exemption, according to the AI Act.
Van Overstraeten said it is "necessary to carefully assess when the rules trigger and the role of the stakeholders involved."
What happens when a company violates the rules?
Companies that breach the EU AI Act could face fines ranging from 35 million euros ($41 million) or 7% of their global annual revenue, whichever is greater, down to 7.5 million euros or 1.5% of global annual revenue, depending on the violation.
The size of the penalty will depend on both the infringement and the size of the company fined. Those figures exceed the fines possible under the GDPR, Europe's strict digital privacy law, under which violators face penalties of up to 20 million euros or 4% of annual global revenue.
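To make the "whichever is greater" structure concrete, here is a minimal illustrative sketch in Python. The company and its 600 million euro revenue figure are hypothetical, and the tiers are simply the ones quoted in this article; this is a reading aid, not legal guidance.

```python
# Illustrative sketch only: the ceiling of a fine is the greater of a fixed
# amount and a percentage of global annual revenue, per the tiers cited above.

def max_fine(revenue_eur: float, fixed_cap_eur: float, revenue_share: float) -> float:
    """Return the fine ceiling: the fixed cap or the revenue share, whichever is greater."""
    return max(fixed_cap_eur, revenue_share * revenue_eur)

# Hypothetical company with 600 million euros in global annual revenue.
revenue = 600_000_000

# Top tier cited in this article: 35 million euros or 7% of revenue.
print(max_fine(revenue, 35_000_000, 0.07))   # 42,000,000.0 -> the 7% figure applies

# Lower tier cited in this article: 7.5 million euros or 1.5% of revenue.
print(max_fine(revenue, 7_500_000, 0.015))   # 9,000,000.0 -> again the percentage is larger
```

For a large multinational, the percentage-of-revenue figure will almost always exceed the fixed cap, which is why the rule is framed as "whichever is greater."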
General-purpose AI systems, along with the other AI models that fall within the act's scope, will be overseen by the European AI Office, a regulatory body the Commission established in February 2024.
Jamil Jiva, global head of asset management at fintech company Linedata, said in an interview that the EU understands it needs to hit offending companies with significant fines if it wants the legislation to be effective.
Much as the GDPR showed how the EU could "flex their regulatory influence to mandate data privacy best practices" on a global level, Jiva added, the bloc is trying to replicate that with the AI Act, this time focusing on AI.
Although the AI Act has officially entered into force, it's important to note that most of its provisions won't actually apply until at least 2026. Restrictions on general-purpose AI systems won't begin until 12 months after the act's entry into force.
Commercially available generative AI systems, such as OpenAI's ChatGPT and Google's Gemini, are also being granted a 36-month "transition period" to bring their systems into compliance.