The United Kingdom has proposed new AI rules

New plans to regulate the use of artificial intelligence (AI) will be released today, setting out coherent rules to promote innovation in this cutting-edge technology while also safeguarding the public.

The proposals coincide with the Data Protection and Digital Information Bill, which will reform the UK’s data laws to encourage innovation in technologies such as AI. The Bill will take advantage of Brexit to maintain a high level of protection for people’s privacy and personal data while saving businesses around £1 billion.

Artificial intelligence refers to machines that learn from data to perform tasks normally carried out by humans. For instance, AI helps detect patterns in financial transactions that may indicate fraud, and clinicians use it to analyse chest images and diagnose illnesses.

The new AI paper, released today, describes the government’s approach to regulating the technology in the UK, with suggested rules addressing future risks and opportunities, so businesses know how to develop and use AI systems and consumers know they are safe and robust.

The approach is based on six core principles that regulators must apply, with flexibility to implement them in ways that best suit the use of AI in their respective industries.

The proposals are aimed at promoting growth while avoiding unnecessary barriers to business. Businesses may share information about how they test the reliability of their AI, as well as follow guidance set by UK regulators to ensure AI is safe and avoids unfair bias.

Digital Minister Damian Collins stated: “We want to ensure that the UK has the right rules in place to empower businesses while also protecting people as AI and data use continue to alter the way we live and work. It is critical that our rules provide clarity to businesses, give investors confidence, and increase public trust. Our adaptable approach will assist us in shaping the future of AI and solidifying our global position as a science and technology superpower.”

The United Kingdom already has a thriving AI sector, leading Europe and ranking third in the world in terms of private investment, with domestic firms attracting $4.65 billion last year. AI technologies have benefited the economy and the country as a whole, from tracking tumors in Glasgow to improving animal welfare on dairy farms in Belfast to speeding up property purchases in England. According to recent research, more than 1.3 million UK businesses will use artificial intelligence by 2040, with over £200 billion invested in the technology.

It can be difficult for organizations and smaller businesses to navigate the extent to which existing laws apply to AI. Overlaps, inconsistencies, and gaps in regulators’ current approaches can also muddy the rules, making it harder for organizations and the public to have confidence where AI is used.

If AI rules in the UK fail to keep up with fast-paced technology, innovation may be curbed, making it more difficult for regulators to protect the public.

Instead of entrusting AI governance to a single regulatory body, as the EU has done with its AI Act, the government’s proposals will enable different regulators to take a tailored approach to the use of AI in a range of settings. This better reflects the growing use of AI across many industries.

This approach will result in proportionate and adaptable regulation, ensuring that AI is quickly adopted in the UK to boost productivity and growth. The core principles require developers and users to:

  1. Ensure that AI is used safely
  2. Ensure that AI is technically secure and functions as designed
  3. Ensure that AI is appropriately transparent and explainable
  4. Consider fairness
  5. Identify a legal person to be responsible for AI
  6. Clarify routes to redress or contestability

The principles will be interpreted and implemented by the following regulators:

  1. Ofcom
  2. The Competition and Markets Authority
  3. The Information Commissioner’s Office
  4. The Financial Conduct Authority, and
  5. The Medicines and Healthcare products Regulatory Agency.

They will be encouraged to consider lighter-touch options, such as guidance and voluntary measures, or the creation of sandboxes: trial environments where businesses can test the safety and reliability of AI technology before releasing it to the public.

Through a call for evidence that was launched today, industry experts, academics, and civil society organizations focused on this technology can share their perspectives on putting this approach into practice.

Responses will be considered alongside further development of the framework in the upcoming AI White Paper, which will examine how to put the principles into practice.

The government will also explore ways to encourage cooperation among regulators, and will assess their capabilities, to ensure that they are equipped to deliver a world-class AI regulatory framework.

Professor Dame Wendy Hall, the AI Council’s Acting Chair, stated: “We applaud these important first steps toward establishing a clear and consistent approach to AI regulation. This is critical for driving responsible innovation and ensuring the success of our AI ecosystem. The AI Council is excited to collaborate with the government on the next steps in developing the White Paper.”

The government is also releasing the first AI Action Plan today to demonstrate how it is implementing the National AI Strategy and identifying new priorities for the coming year.

Since 2014, the government has invested more than £2.3 billion in artificial intelligence. Since releasing the National AI Strategy last year, the government has announced new investments in the sector’s long-term needs, including funding for up to 2,000 new AI and data science scholarships, as well as new visa routes, to ensure that the industry has the skills and talent it needs to thrive.

The AI Standards Hub was unveiled at the start of the year as part of the strategy. The Hub will provide practical tools and educational materials to users in industry, academia, and regulation so that they can effectively use and shape AI technical standards. The Alan Turing Institute will lead the development of the interactive hub platform, supported by the British Standards Institution and the National Physical Laboratory, and it will launch in the autumn of 2022.