
Driving AI Innovation With Regulation

The European Commission announced first-of-its-kind legislation to regulate the use of artificial intelligence in April, sparking criticism that the rules could slow AI innovation and cripple Europe in its competition with the United States and China for leadership in AI.

For example, Andrew McAfee wrote an article entitled “EU Proposals to Regulate AI Will Only Hinder Innovation”. Anticipating this criticism, and mindful of the example of the GDPR, where Europe’s regulatory leadership did not necessarily translate into data-related innovation, the European Commission has sought to tackle AI innovation head-on by publishing a new coordinated plan on AI.

The plan, published alongside the proposed regulations, is jam-packed with initiatives to help the EU become a leader in AI technology. Will the combination of regulatory and innovation policies be enough to propel the EU to AI leadership?

AI innovation can be accelerated with the right laws

While the combination is well considered and suggests improvements in both regulation and innovation, there is one problem: the pro-innovation initiatives focus on R&D rather than on increasing adoption of the very “high-risk” AI use cases the legislation regulates.

Promoting adoption is an important missing element. Many research studies have shown that well-designed “tough laws” can actually increase innovation, especially when paired with incentives that accelerate adoption, and widespread adoption is itself a breeding ground for AI innovation.

Regulating high-risk AI and investing in innovation

The main objective of the EC regulation is to introduce new requirements for high-risk AI systems, including AI systems used for remote biometric identification, management of critical infrastructure, recruitment and employment, creditworthiness assessment, and education and training, as well as for various public-sector use cases such as the dispatching of first responders.

The legislation requires developers of these systems to implement an AI quality management system that meets requirements for high-quality data sets, record-keeping, transparency, human oversight, accuracy, robustness and security, and it encourages developers of other AI systems to adopt voluntary codes of conduct aimed at similar goals.

It is clear that the drafters of the proposal were mindful of the balance between regulation and innovation. First, the legislation limits the number of AI systems classified as high-risk, leaving out systems that could plausibly have been included, such as insurance, and mostly covering AI use cases, such as lending, that are already subject to some form of regulation.

Second, the legislation defines high-level requirements without prescribing how to achieve them, and it creates a compliance regime based on self-assessment rather than something more burdensome.

Finally, the coordinated plan is jam-packed with initiatives in support of R&D, including data-sharing spaces, testing and experimentation facilities, investment in AI research and excellence centres, digital innovation hubs, funding for education, and investments in areas such as climate, health, robotics, the public sector, law enforcement and sustainable agriculture.

However, the proposal lacks measures to encourage adoption, which, when combined with regulation, have driven faster innovation in other sectors.

A motivating precedent: Incentives for electric vehicles

So how could the EC encourage much faster innovation in artificial intelligence while also putting regulatory guardrails in place? The example of electric vehicles in the US provides a guide.

The United States has emerged as a leading manufacturer of electric cars through a combination of entrepreneurship, regulation and smart market-creating incentives. Tesla revitalized the electric car industry with the insight that a new class of electric cars could be powerful, desirable sports cars.

Corporate Average Fuel Economy (CAFE) regulations were the stick that required the development of more efficient vehicles, while generous tax credits on the purchase of electric vehicles helped directly accelerate sales without distorting the natural dynamics of a competitive marketplace. The combination of CAFE regulations, tax credits and entrepreneurial companies like Tesla has created such a boost to innovation that electric vehicles are poised to become less expensive than internal-combustion ones.

Setting AI Incentives Correctly: Three Additional Initiatives

The EC has the chance to achieve something similar with AI. In particular, the EC should consider combining these current rules with three additional initiatives.

Create tax incentives for companies to build or buy high-risk AI systems that comply with the regulations. The aim should be to put AI to work proactively in support of economic and social goals.

For example, some banks use artificial intelligence to better assess the creditworthiness of people with limited credit histories while ensuring that their lending remains free from bias. This increases financial inclusion, a goal shared by governments, and spurs AI innovation in the process.

Further reduce uncertainty about how the EC legislation will be implemented. Part of this can be done directly by the EC by developing more specific standards for AI quality management and fairness. It can be even more valuable, however, to bring together a coalition of AI technology providers and user organizations to translate these standards into practical compliance steps.

For example, the Monetary Authority of Singapore created Veritas, an industry consortium of banks, insurers and AI technology providers, to achieve similar goals for its Fairness, Ethics, Accountability and Transparency (FEAT) principles.

Consider accelerating the adoption of the AI quality management systems that the legislation requires by funding companies to build or purchase these systems. There is already significant academic and commercial activity in this area, including the explainability of black-box models, the assessment of potential discrimination arising from data or algorithmic bias, and the testing and monitoring of AI systems to ensure they remain valid when the underlying data changes significantly.
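To make this third initiative more concrete, the minimal sketch below illustrates the kind of automated bias assessment such quality management systems typically include. It is illustrative Python with hypothetical data and function names, not drawn from the EC proposal or any particular vendor toolkit; it computes two common group-fairness indicators, the demographic parity difference and the disparate impact ratio, over a model’s approval decisions for two groups.

```python
# Minimal sketch of a group-fairness check for a binary decision system
# (e.g. loan approvals). The data, group labels and the ~0.8 rule-of-thumb
# threshold are illustrative assumptions, not part of the EC proposal.

def approval_rate(decisions, groups, group):
    """Share of positive decisions received by members of `group`."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def fairness_report(decisions, groups, group_a, group_b):
    """Demographic parity difference and disparate impact ratio for two groups."""
    rate_a = approval_rate(decisions, groups, group_a)
    rate_b = approval_rate(decisions, groups, group_b)
    parity_difference = abs(rate_a - rate_b)
    impact_ratio = (min(rate_a, rate_b) / max(rate_a, rate_b)
                    if max(rate_a, rate_b) > 0 else 1.0)
    return {"parity_difference": parity_difference,
            "disparate_impact_ratio": impact_ratio}

# Illustrative decisions: 1 = approved, 0 = denied.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(fairness_report(decisions, groups, "A", "B"))
# Group A is approved 60% of the time, group B 40%: a parity difference of
# 0.2 and a disparate impact ratio of about 0.67. A ratio below roughly 0.8
# is a common rule-of-thumb trigger for closer review of the model and data.
```

Production tools layer far more on top of a check like this, such as statistical significance tests, intersectional group analysis and continuous monitoring for data drift, but the underlying metrics are of this simple, auditable form, which is what makes standardized compliance reporting feasible.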

By creating the conditions to encourage the widespread adoption of such technologies, the EC should be able to achieve the twin goals of promoting innovation and ensuring compliance with new legislation in a sustainable manner.

If the European Commission strongly reduces uncertainty, promotes the adoption of regulated high-risk AI and encourages the use of AI quality management techniques, it has the chance to become a world leader in AI innovation while providing critical protections for its own citizens. We should all hope it succeeds, as it would set an example for the world to follow.
