
The “Godfather of AI” calls on governments to prevent machine takeover

One of the so-called “godfathers of AI,” Geoffrey Hinton, on Wednesday urged governments to intervene to prevent machines from taking over society.

Following the release of ChatGPT, which drew worldwide attention, Hinton made headlines in May when he revealed that he had left Google after a decade there in order to speak more freely about the dangers of AI.

At the Collision tech conference in Toronto, the highly regarded AI researcher, who is based at the University of Toronto, addressed a packed audience.

More than 30,000 startup founders, investors, and tech workers attended the conference, most of them looking to learn how to ride the AI wave rather than to hear warnings about its risks.

Hinton believes that before AI surpasses human intelligence, its creators should be pushed to devote serious effort to understanding how it might try to take authority away from humans.

“At the moment, there are 99 very smart people working to improve AI, and one very smart person working to figure out how to stop it from taking over,” he said. “Perhaps you want to be more balanced.”

Despite criticism that he is exaggerating the risks of AI, Hinton cautioned that they should be taken seriously.

He stressed that it is crucial for people to understand that this is not science fiction and not mere scaremongering. The risk, he said, is real, and we must think in advance about how to handle it.

Hinton also voiced concern that the large productivity gains from AI adoption will benefit the wealthy rather than workers, widening inequality.

The workers won’t be the ones who benefit financially, he continued, and that is bad for society because the gains will make the rich richer rather than helping the poor.

Additionally, he emphasized the risk posed by fake news produced by ChatGPT-style bots and expressed the hope that AI-generated content could be identified in a manner akin to how central banks watermark physical money.

It is important to try, he said, for instance by labelling everything that is fake as such, though he admitted he was unsure whether that is technically achievable.

The European Union is considering such an approach in its AI Act, legislation that would set rules for AI in Europe and is currently being negotiated by lawmakers.

‘Overpopulation on Mars’

Hinton’s list of AI hazards contrasted with conference talks that focused less on safety and threats and more on capitalizing on the potential offered by ChatGPT.

Venture capitalist Sarah Guo, citing fellow AI expert Andrew Ng, said that doom-and-gloom rhetoric about AI as an existential threat is premature, comparing it to talking about overpopulation on Mars.

She also cautioned against “regulatory capture,” in which government action would shield powerful incumbents before it could help fields like health, education, or research.

Opinions differed on whether the current leaders in generative AI, primarily Google and Microsoft-backed OpenAI, would continue to dominate the market or whether new players would enter with their own models and breakthroughs.

Leigh Marie Braswell of venture capital firm Kleiner Perkins said she still believes that in five years, anyone wanting the best, most accurate, and most sophisticated general model will likely have to turn to one of the few companies with the necessary funding.

Zachary Bratun-Glennon of Gradient Ventures, by contrast, predicted there will be millions of models spread across a network, much like today’s web of sites.
