The CEO of Microsoft AI Cautions against Granting AI Rights

According to Microsoft’s AI CEO, AI systems don’t deserve rights, even if they appear genuinely conscious.

In an interview, Mustafa Suleyman stated that the industry must make it clear that AI is designed to assist people, not to develop free will or desires.

He said that if an AI has a sense of self, with its own interests, motivations, and objectives, it begins to resemble an independent being rather than a tool built to assist people. The industry, he argued, must immediately declare its opposition to that idea, which he called harmful and misguided.

However, the co-founder of DeepMind and Inflection rejected the notion that AI’s increasingly persuasive responses amount to true consciousness. It is “mimicry,” he said.

He also stated that rights should be tied to the capacity to suffer, which biological beings have but AI does not.

It is possible to build a model that asserts self-awareness and subjective experience, he added, but there is no proof that it actually suffers.

Such systems, he argued, are not entitled to rights or moral protection from humans. Turning them off does no harm, he added, since they don’t genuinely suffer.

AI as sentient entities

Suleyman’s remarks come as several AI firms investigate the opposite question: whether AI merits being treated more like a sentient entity.

Anthropic has gone further than most companies in treating AI systems as though their welfare counts. Kyle Fish, a researcher at the company, is tasked with examining whether sophisticated AI is ever “worthy of moral consideration.”

His work involves determining what capabilities an AI system would need in order to qualify for such protection, as well as what practical steps companies might take to defend AI’s “interests,” Anthropic said last year.

Anthropic has also recently experimented with ways to let its models end extreme conversations, such as demands for child abuse material, an approach that extends “welfare” concerns to the AI itself.

In April, a lead scientist at Google DeepMind suggested that the industry may need to completely reconsider the idea of AI consciousness.

Speaking on a DeepMind podcast that month, Murray Shanahan said the industry may need to modify or expand the vocabulary of consciousness to accommodate these new technologies. Although you can’t spend as much time with them as you can with a puppy or an octopus, he said, that doesn’t mean there isn’t something there.

According to Suleyman, there is no proof that AI is conscious.

In a personal essay published last month, he wrote that he was “growing more and more concerned” about what is becoming known as “AI psychosis,” a phrase increasingly used to describe the delusions some people develop after interacting with chatbots.
