
Gartner IT Symposium/Xpo 2019: Gartner Fellow discusses ethics in artificial intelligence

Gartner Fellow Frank Buytendijk said it’s important to get ahead of AI projects with ethics training.

TechRepublic’s Associate Managing Editor Teena Maddox talked with Frank Buytendijk, distinguished VP and Gartner Fellow in Gartner’s Data Analytics Group, at the Gartner IT Symposium/Xpo 2019 about ethics in tech and artificial intelligence. The following is an edited transcript of the conversation. 


Frank Buytendijk: Whenever there is a new technology that develops so fast that as people, as organizations, and even as a society we cannot put our arms around it and really understand what it does, the question of ethics comes up. Is this good? What is happening? Does this need some kind of parental oversight, or can this develop in an autonomous way? And AI is certainly the most impactful technology that raises these questions. So how do we make sure that AI behaves in the right ways, and how do we prevent it from behaving in the wrong ways?

There’s a big debate in AI, and the big future vision is that we would want to have ethical rules built into the AI. In fact, this was pioneered by the science fiction writer Isaac Asimov when he introduced his robotic rules (The Three Laws of Robotics). The irony is that he used those as a literary instrument and wrote really cool stories about how those rules never worked, and that still is the case today. The idea that you can build ethical rules into AI is a future vision.

The state of play in the market today is that developers of AI need to take responsibility for the behaviors of AI, even if the algorithms, once running in production, learn in unanticipated and unintended ways. It is always a developer responsibility, and it needs to be maintained even when systems are in production.

There are tons of ways you can use AI ethically and also unethically. One commonly cited example is using attributes of people that shouldn’t be used—for instance, when granting somebody a mortgage or access to something, or when making other decisions. Racial profiling is typically mentioned as an example. So, you need to be mindful of which attributes are being used for making decisions, and how the algorithms learn. Another way AI can be abused is, for instance, with autonomous killer drones. Would we allow algorithms to decide who gets bombed by a drone and who does not? Most people seem to agree that autonomous killer drones are not a very good idea.
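The point about being mindful of which attributes feed a decision can be made concrete. A minimal sketch of screening a feature list before training, assuming hypothetical column names for a lending model (nothing here comes from the interview; name-based screening alone is not sufficient, since proxies like zip code can encode protected attributes indirectly):

```python
# Hypothetical sketch: screen a decision model's feature set for
# protected attributes before training. Column names are illustrative.
PROTECTED_ATTRIBUTES = {"race", "ethnicity", "religion", "gender"}

def screen_features(feature_names):
    """Split features into (allowed, flagged), flagging any feature
    whose name matches a known protected attribute."""
    flagged = [f for f in feature_names if f.lower() in PROTECTED_ATTRIBUTES]
    allowed = [f for f in feature_names if f.lower() not in PROTECTED_ATTRIBUTES]
    return allowed, flagged

features = ["income", "credit_history", "race", "employment_years"]
allowed, flagged = screen_features(features)
print(allowed)  # ['income', 'credit_history', 'employment_years']
print(flagged)  # ['race']
```

A real review would go further than name matching—checking correlations between remaining features and protected attributes—but a simple gate like this makes the check an explicit step rather than an afterthought.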

The most important thing a developer can do to create ethical AI is to not think of this as technology, but as an exercise in self-reflection. Developers have certain biases. They have certain characteristics themselves. For instance, developers are keen to search for the optimal solution to a problem; it is built into their brains. But ethics is a very pluralistic thing. Different people have different ideas. There is not one optimal answer to what is good and bad. First and foremost, developers should be aware of their own ethical biases about what they think is good and bad, and create an environment of diversity where they test those assumptions and where they test their results. The developer brain isn’t the only brain, or type of brain, out there, to say the least.
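"Testing their results" can be done mechanically as well as through diverse review. A hypothetical sketch of one such check, comparing a model's approval rates across groups using the "four-fifths" rule of thumb from US employment law (the data, group labels, and threshold are illustrative assumptions, not from the interview):

```python
# Hypothetical sketch: compare decision rates across groups to surface
# possible bias in a model's outputs. Data and groups are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved_bool). Returns the
    approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)

# Flag any group whose rate falls below 80% of the highest group's rate
# (the "four-fifths" rule of thumb).
best = max(rates.values())
flagged = {g: r for g, r in rates.items() if r < 0.8 * best}
```

A check like this doesn't prove fairness—it only surfaces disparities for a human, ideally a diverse team, to investigate.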

So, AI and ethics is really a story of hope. For the very first time, a discussion of ethics is taking place before widespread implementation, unlike in previous rounds where the ethical considerations came after the fact. For instance, with big data and privacy, all the discussion took place when it was too little, too late. With the impact of smartphones on human communications and relationships, it kind of happened to us. There never was that discussion up front. So with AI, the discussion is taking place up front. And what we see—and that is a really good thing—is that organizations that take AI seriously typically also invest in ethical training. So, ethics becomes part of AI training in universities, part of data science training in universities, and the companies that bet on AI are also taking care of ethical training. This is really something that is becoming a best practice.

