The first comprehensive body of law governing artificial intelligence has just advanced toward completion. On Wednesday, the European Parliament decisively approved a draft of its AI Act. Once member states finish negotiating the bill’s final form, its broad restrictions could have a significant impact on biometric monitoring, data privacy, and AI research within the European Union. The rules will also influence how other countries approach this potent, contentious technology. The regulations could be finalized by the end of the year.
While Big Tech firms raise alarms about their own inventions, Europe has offered a tangible response to the problems AI is beginning to create. Brando Benifei, a member of the European Parliament (MEP) representing Italy, said in a statement that lawmakers want AI’s positive potential for creativity and productivity to be realized, adding that they will also fight to protect Europe’s position and guard against threats to democracy and freedom.
If enacted, the AI Act would outlaw several intrusive technologies, including real-time remote biometric identification in public spaces and biometric categorization systems based on “gender, race, ethnicity, citizenship status, religion, [and] political orientation.” Predictive policing technology, emotion recognition, and untargeted scraping of facial images from the internet or CCTV footage are further examples of AI that the European Parliament has moved to ban, deeming them violations of human rights and the right to privacy.
According to the European news outlet, EU legislators also established a tiered system for enforcement, with so-called “General Purpose AI” subject to fewer restrictions than large language models like OpenAI’s ChatGPT. If adopted, the new legislation would mandate the labelling of all work produced by artificial intelligence and compel companies to disclose any copyrighted material in their training data.
Despite numerous prominent statements cautioning against the risks of unregulated AI, Big Tech heavyweights like Sam Altman of OpenAI have advised against “overregulation.” Altman even threatened to pull out of the EU if the legislation proved too strict, and he predicted that Europe’s AI regulations would “get pulled back,” a claim that EU legislators swiftly denied.
At the time, Dutch MEP Kim van Sparrentak remarked, “If OpenAI can’t comply with basic data governance, transparency, safety and security criteria, then their solutions aren’t suited for the European market.”