The world’s largest technology companies have launched a final push to persuade the European Union to take a light-handed approach to regulating artificial intelligence, hoping to avoid billions of dollars in fines.
After months of intense negotiations between various political groups, EU lawmakers adopted the AI Act in May, the first comprehensive set of rules governing the technology.
However, until the companion codes of practice for the law are finalized, it is unclear how strictly regulations pertaining to “general purpose” AI (GPAI) systems, like OpenAI’s ChatGPT, will be enforced, and how many copyright lawsuits and multibillion-dollar fines companies may face.
The EU received an unusually high number of applications—nearly 1,000—after inviting businesses, academics, and others to help draft the code of practice, according to a source who asked not to be named because they were not authorized to speak publicly.
When the AI code of practice takes effect late next year, it won’t be legally binding, but it will give businesses a checklist they can use to demonstrate compliance. A company that claims to follow the law while disregarding the code could face a legal challenge.
“The code of practice is essential. We will be able to carry on innovating if we get it right,” stated Boniface de Champris, a senior policy manager at the trade association CCIA Europe, which counts Amazon, Google, and Meta among its members.
“If it’s too narrow or too specific, that will become very difficult,” he stated.
SCRAPING DATA
Companies such as Stability AI and OpenAI have faced copyright-infringement complaints for using best-selling books or photo archives to train their models without the creators’ consent.
Under the AI Act, companies must provide “detailed summaries” of the data used to train their models. In theory—and this is being tested in the courts—a content creator who discovers their work was used to train an AI model might be able to pursue compensation.
Some business executives argue the required summaries should contain only minimal information to safeguard trade secrets, while others contend that copyright holders have a right to know whether their work has been used without authorization.
According to a person with knowledge of the situation who wished to remain anonymous, OpenAI has also applied to join the working groups. OpenAI has come under fire for refusing to respond to inquiries concerning the data used to train its models.
A representative for Google said that the company has also submitted an application. Amazon, meanwhile, stated that it wants to share its knowledge and guarantee the success of the code of practice.
Companies are “going out of their way to avoid transparency,” according to Maximilian Gahntz, AI policy lead at the Mozilla Foundation, the nonprofit that creates the Firefox web browser.
He said the AI Act offers the best opportunity to shed light on this important issue and unlock at least some of the mystery.
BIG BUSINESS AND PRIORITIES
Businesses have criticized the EU for putting tech regulation ahead of innovation, and the people writing the code of practice will try to find a middle ground.
Mario Draghi, the former head of the European Central Bank, told the bloc last week that, to keep up with China and the US, it needed to make decisions more quickly, better coordinate its industrial policy, and invest heavily.
After a disagreement with Ursula von der Leyen, the head of the bloc’s executive branch, Thierry Breton, a vocal supporter of EU regulations and a critic of noncompliant tech companies, resigned from his position as European Commissioner for the Internal Market this week.
Amid growing protectionism within the EU, European tech companies are hoping for carve-outs in the AI Act to benefit up-and-coming firms.
According to Maxime Ricard, policy manager at Allied for Startups, a network of trade associations that supports smaller tech companies, “we’ve insisted these obligations need to be manageable and, if possible, adapted to startups.”
After the code is published in the first part of next year, tech companies will have until August 2025 to bring their compliance efforts in line with it.
Nonprofit groups that have applied to assist with the code’s drafting include Mozilla, Access Now, and the Future of Life Institute.
Gahntz stated: “We must exercise caution to prevent the major AI players from weakening crucial transparency requirements as we approach the point where many of the AI Act’s requirements are laid out in more detail.”