AI’s impact is spooking experts

A scientist who wrote a leading textbook on artificial intelligence has said that experts are “afraid” of their own success in the field, comparing the advance of AI to the development of the atomic bomb.

Professor Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, said that most experts believed machines smarter than humans would appear this century, and called for international treaties to regulate the development of the technology.

“The AI community has yet to adjust to the fact that we are now having a really big impact in the real world,” he told The Guardian. “That simply hasn’t been the case for most of the history of the field – we were just in the lab, developing things, trying to get things to work, mostly not getting them to work. So the question of real-world impact was not relevant at all. And we have to grow up very quickly to catch up.”

Artificial intelligence underpins many aspects of modern life, from search engines to banking, and advances in image recognition and machine translation are among the most important developments in recent years.

Russell, who co-authored the groundbreaking 1995 textbook Artificial Intelligence: A Modern Approach and will deliver this year’s BBC Reith Lectures, titled “Living with Artificial Intelligence”, starting on Monday, said urgent work is needed to make sure humans remain in control as super-intelligent AI is developed.

“AI has been developed with one particular methodology and a general approach. And we are not careful enough about using that kind of system in complicated real-world environments,” he said.

Asking an AI to cure cancer as quickly as possible, for example, could be dangerous. It would likely find ways to induce tumours in the entire human population so that it could run millions of experiments in parallel, using all of us as guinea pigs, Russell said. And that is because that is the solution to the objective we gave it; we simply forgot to specify that it cannot use humans as guinea pigs, cannot spend the whole of the world’s gross domestic product on its experiments, and cannot do this or that.

Russell said there is still a huge gap between today’s AI and that depicted in films such as Ex Machina, but a future with machines more intelligent than humans is most likely on the way.

Estimates range from 10 years for the most optimistic to a few hundred years, Russell said, but most AI researchers would say it will happen this century.

One concern is that a machine does not have to be smarter than humans in all things to pose a serious risk. It is something that is unfolding now, he said: when you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.

The result, he said, is that the algorithms manipulate and brainwash users so that their behaviour becomes more predictable in terms of what they engage with, which boosts click-based revenue.

Are AI researchers spooked by their own success? “Yeah, I think we’re getting more and more scared,” Russell said.

“It kind of reminds me of what happened in physics, where physicists knew that atomic energy existed, they could measure the masses of different atoms, and they could figure out how much energy could be released if you could convert between different types of atoms,” he said, noting that the experts always stressed that the idea was theoretical. And then it happened, and they were not ready for it.

The use of AI in military applications, such as small anti-personnel weapons, is particularly worrying. Those are the ones that are very easy to scale up, meaning you could put a million of them on a single truck, open the tailgate and wipe out an entire city, Russell said.

Russell believes the future of AI lies in developing machines that know the true objective is uncertain, as are our preferences, meaning they must check in with humans, rather like a butler, on any decision. But the idea is complex, not least because different people have different, and sometimes conflicting, preferences, and those preferences are not fixed.

Russell called for measures including a code of conduct for researchers, legislation and treaties to ensure the safety of AI systems in use, and training for researchers to ensure that artificial intelligence is not prone to problems such as racial bias. He said EU legislation that would prohibit machines from impersonating humans should be adopted worldwide.

Russell said he hoped the Reith Lectures would emphasise that there is a choice about what the future holds. It is really important that the public is involved in those choices, he said, because it is the public that will, or will not, benefit.

But there was another message, too: progress in AI will take a while, but it is not science fiction, he said.
