Google engineer claims that its AI is sentient

According to the Washington Post, Google has placed one of its engineers on paid administrative leave for allegedly violating its confidentiality policies after he raised concerns that an AI chatbot system had achieved sentience.

Blake Lemoine, an engineer in Google’s Responsible AI organization, had been testing whether the company’s LaMDA model generates discriminatory or hate speech.

The engineer’s concerns were kindled by the AI system’s plausible responses to queries about its rights and the ethics of robotics. In April, he shared with executives a document titled “Is LaMDA Sentient?”, which included a transcript of his conversations with the AI (after he was placed on leave, Lemoine published a copy via his Medium account). In it, he claims, the AI argues that it is sentient because it has feelings, emotions, and subjective experience.

According to The Washington Post and The Guardian, Google believes Lemoine’s actions related to his work on LaMDA violated its confidentiality policies. He reportedly invited a lawyer to represent the AI system and spoke with a representative of the House Judiciary Committee about alleged unethical practices at Google.

In a Medium post published on June 6th, the day he was placed on administrative leave, the engineer stated that he had sought a minimal amount of outside consultation to help guide him in his investigations, and that the list of people he had discussions with included US government employees.

LaMDA was publicly announced at Google I/O last year, with the company hoping it would improve its conversational AI assistants and allow for more natural conversations. Google already uses similar language model technology for Gmail’s Smart Compose feature and for search engine queries.

In a statement to The Washington Post, a Google spokesperson said there is no evidence that LaMDA is sentient. “Our team, which includes ethicists and technologists, reviewed Blake’s concerns in accordance with our AI Principles and informed him that the evidence does not support his claims. He was informed there was no evidence that LaMDA was sentient (and plenty of evidence that it was not),” said spokesperson Brian Gabriel.

Gabriel said that while some in the broader AI community are considering the long-term possibility of sentient or general AI, it does not make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems can riff on any fantastical topic by mimicking the types of exchanges found in millions of sentences.

Hundreds of researchers and engineers have conversed with LaMDA, and none of them has made such sweeping claims or anthropomorphized LaMDA the way Blake has, Gabriel said.

The Washington Post also interviewed a linguistics professor, who agreed that equating convincing written responses with sentience is a mistake. “We now have machines that can generate words mindlessly, but we have not learned how to stop imagining a mind behind them,” said Emily M. Bender, a professor at the University of Washington.

Timnit Gebru, an eminent AI ethicist who was fired by Google in 2020 (though Google maintains she resigned), said the debate over AI sentience risks “derailing” more important ethical discussions about the use of artificial intelligence. Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, and white man’s burden (building the good “AGI” [artificial general intelligence] to save us, while exploitation is what they actually do), the field spent the entire weekend discussing sentience, she tweeted. Derailing mission accomplished.

Despite his reservations, Lemoine said he intends to continue working on AI. “I intend to stay in AI whether Google retains me or not,” he tweeted.
