
Why does Google think we need to control AI?


Growing up in India, I was fascinated by technology. Each new invention changed my family’s life in meaningful ways. The telephone saved us long trips to the hospital for test results. The refrigerator meant we could spend less time preparing meals, and television allowed us to see the world news and cricket matches we had only imagined while listening to the short-wave radio.

Now, it is my privilege to help to shape new technologies that we hope will be life-changing for people everywhere. One of the most promising is artificial intelligence: just this month there have been three concrete examples of how Alphabet and Google are tapping AI’s potential. Nature published our research showing that an AI model can help doctors spot breast cancer in mammograms with greater accuracy; we are using AI to make immediate, hyperlocal forecasts of rainfall more quickly and accurately than existing models as part of a larger set of tools to fight climate change; and Lufthansa Group is working with our cloud division to test the use of AI to help reduce flight delays.

Yet history is full of examples of how technology’s virtues aren’t guaranteed. Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents. The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.

These lessons teach us that we need to be clear-eyed about what could go wrong. There are real concerns about the potential negative consequences of AI, from deepfakes to nefarious uses of facial recognition. While there is already some work being done to address these concerns, there will inevitably be more challenges ahead that no one company or industry can solve alone.

The EU and the US are already starting to develop regulatory proposals. International alignment will be critical to making global standards work. To get there, we need agreement on core values. Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.

Now, there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.

That’s why, in 2018, Google published our own AI principles to help guide the ethical development and use of the technology. These guidelines help us avoid bias, test rigorously for safety, design with privacy top of mind, and make the technology accountable to people. They also specify areas where we will not design or deploy AI, such as to support mass surveillance or violate human rights.

But principles that remain on paper are meaningless. So we’ve also developed tools to put them into action, such as testing AI decisions for fairness and conducting independent human-rights assessments of new products. We have gone even further and made these tools and related open-source code widely available, which will empower others to use AI for good. We believe that any company developing new AI tools should also adopt guiding principles and rigorous review processes.
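To make that idea concrete, here is a minimal sketch of what one such fairness test might look like: a demographic-parity check that compares a model's positive-decision rates across groups. This is an illustrative example only, with hypothetical names and a hypothetical threshold; it is not a description of Google's actual tooling.

```python
# Illustrative sketch of one simple fairness test (demographic parity).
# All names and the review threshold below are hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest gap in positive-decision rates across groups.

    predictions: iterable of 0/1 model decisions
    groups: iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: flag the model for human review if decision rates diverge widely.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
if gap > 0.2:  # hypothetical review threshold
    print(f"Fairness review needed: parity gap = {gap:.2f}")
```

In practice, a check like this would be only one signal among many within a broader, human-led review process.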

Government regulation will also play an important role. We don’t have to start from scratch. Existing rules such as Europe’s General Data Protection Regulation can serve as a strong foundation. Good regulatory frameworks will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways. Sensible regulation must also take a proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities.

Regulation can provide broad guidance while allowing for tailored implementation in different sectors. For some AI uses, such as regulated medical devices including AI-assisted heart monitors, existing frameworks are good starting points. For newer areas such as self-driving vehicles, governments will need to establish appropriate new rules that consider all relevant costs and benefits.

Google’s role starts with recognising the need for a principled and regulated approach to applying AI, but it doesn’t end there. We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.

AI has the potential to improve billions of lives, and the biggest risk may be failing to do so. By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology as much as I do.

