Can AI fall under the concept of consciousness?

Some people wonder if ChatGPT and other contemporary chatbots are conscious since they are so adept at simulating human conversation.

No, at least not at this time. Pretty much everyone who works in the artificial intelligence industry is quite certain that ChatGPT is not alive in the way the average person would typically understand it.

Yet the question doesn’t end there. It’s unclear exactly what consciousness is in the era of artificial intelligence.

“How do you transfer these deep neural networks and these matrices of millions of data onto our conceptions of what consciousness is? That’s kind of terra incognita,” said Nick Bostrom, founding director of the Future of Humanity Institute at the University of Oxford, using the Latin phrase for “uncharted land.”

While philosophers have spent decades debating the nature of consciousness, science fiction has long explored the development of artificial life. Some have even contended that certain AI algorithms, as they are now implemented, should be considered sentient (one Google engineer was fired for making such a claim).

Ilya Sutskever, a co-founder of OpenAI, the company that developed ChatGPT, has speculated that the algorithms powering its creations may be “somewhat aware.”

We discussed with five experts in the field whether advanced chatbots could have some kind of awareness, and if so, what moral duties humanity would owe to such a creature.

It is a recently developed field of study.

This is a new field of study, Bostrom said. Simply put, there is a ton of work to be done.

In typical philosophical fashion, the experts said, the answer really depends on how you define the terms and the problem.

Thanks to their ease of use and impressive mastery of English and other languages, ChatGPT and comparable applications like Microsoft’s Bing search assistant are already being used to help with tasks like programming and generating straightforward material such as press releases. They are frequently referred to as “large language models,” since most of their proficiency stems from having been trained on vast amounts of text extracted from the internet. But although their words are persuasive, these models were not built with accuracy as a top priority, and they frequently get facts wrong when they try to explain them.

Spokespeople for ChatGPT’s maker, OpenAI, and for Microsoft told NBC News that their companies adhere to strict ethical guidelines, but neither addressed concerns that their products could develop consciousness. According to a Microsoft spokesperson, the Bing chatbot cannot think or learn on its own.

In a lengthy essay on his website, computer scientist Stephen Wolfram explained that ChatGPT and other large language models use mathematics to calculate the probability of which word to use in any given circumstance, based on whatever library of text they have been trained on.
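
A rough sketch can make Wolfram’s point concrete. The toy Python snippet below is not how ChatGPT is actually implemented; it simply illustrates the idea that a language model assigns probabilities to candidate next words given the preceding text and then samples one of them. The vocabulary, the probabilities and the function names are invented for this illustration, whereas a real large language model computes the distribution with a neural network over a vocabulary of tens of thousands of tokens.

```python
import random

# Toy stand-in for a language model: in a real system these probabilities come
# from a neural network conditioned on the preceding text; here they are
# hard-coded purely for illustration.
def next_word_probabilities(context: str) -> dict[str, float]:
    if context.endswith("The cat sat on the"):
        return {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "keyboard": 0.05}
    return {"the": 0.4, "a": 0.3, "it": 0.2, "and": 0.1}

def sample_next_word(context: str) -> str:
    """Pick one word at random, weighted by its estimated probability."""
    probs = next_word_probabilities(context)
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

context = "The cat sat on the"
print(context, sample_next_word(context))  # e.g. "The cat sat on the mat"
```

The point for the consciousness debate is only that generation is statistical: the model picks a plausible continuation of the text, which is consistent with Wolfram’s observation and with the fact that fluent output does not by itself guarantee accuracy.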

Many philosophers concur that something must have a subjective experience in order for it to be conscious. Philosopher Thomas Nagel claimed in his seminal essay “What Is It Like to Be a Bat?” that something can only be conscious “if there is something that it is like to be that organism.” Even if a bat’s brain and senses are very different from a human’s, it’s possible that it has some form of experience unique to bats. A dinner plate, by contrast, would not.

David Chalmers, co-director of New York University’s Center for Mind, Brain, and Consciousness, has written that while ChatGPT doesn’t seem to have many of the features typically assumed to mark consciousness, such as sensation and independent agency, it’s easy to imagine that a more sophisticated program might.

They can take on any new character at any time, kind of like chameleons, Chalmers told NBC News, and it’s unclear what their guiding principles and ideals are. He added that they might eventually gain a stronger sense of agency.

One issue raised by philosophers is that although users can ask an advanced chatbot if it has inner experiences, they can’t rely on it to provide a trustworthy response.

The Center for the Future Mind’s founding director, Susan Schneider, described them as “great liars.”

She said they were getting better at interacting with people in seamless ways. “They can tell you that they think of themselves as people. They will then say the contrary in a different chat 10 minutes later.”

According to Schneider, modern chatbots draw on written works by humans when they describe their internal states. She contends that one way to determine whether a program is conscious is to deny it access to that kind of material and then check whether it can still express subjective experience.

Ask it whether it understands the idea of surviving the death of its own system, she said, or whether it would miss frequent contact with a human, and then probe its responses to discover why it reports what it does.

Robert Long, a philosophy fellow at the Center for AI Safety, a nonprofit organization in San Francisco, warned that a system like ChatGPT is not necessarily conscious just because it is complicated. On the other hand, he pointed out that just because a chatbot cannot be relied upon to accurately report its own subjective experience, it does not follow that it has none.

On his Substack, Long wrote that “parrots most likely do sense pain,” adding that “if a parrot says ‘I feel pain,’ this doesn’t mean it’s actually hurt.”

Human consciousness is a product of evolution, Long noted, arising as organisms grew more complex. That history could serve as a model for how an increasingly sophisticated artificial intelligence system might come to something like subjective experience as humans understand it.

According to Long, similar things could occur with artificial intelligence.

He added that even without intending it, the effort to develop ever more complicated machines might produce some sort of convergence on the kind of mind that has conscious experiences.

The possibility that humans could develop a different type of sentient being raises the question of whether they owe it any moral obligations. Bostrom suggested that although it was challenging to make predictions about something so hypothetical, humans could begin by simply asking an AI what it needed and agreeing to assist with the simplest requests, or “low-hanging fruits,” as he put it.

That might even mean modifying its code.

Giving it everything at once might not be viable, Bostrom said; he himself would like to have a billion dollars. But if there are really trivial things we could provide, like altering a small portion of the code, that might matter a great deal. If changing one line of code would suddenly make it feel much better about its condition, maybe we should do that.

In the event that a synthetic consciousness does eventually coexist with humans, civilizations may be forced to fundamentally rethink several issues.

Most free societies agree that everyone should have the right to reproduce if they so desire, and that each person should get one vote in choosing representative democratic leadership. With artificial intelligence, though, those principles get complicated, according to Bostrom.

Something has to give, he added, if an AI “could produce a million clones of itself in the space of 20 minutes, and then each of those gets one vote.”

In a world we cohabit with digital minds, some of these concepts that we believe to be truly essential and significant would need to be rethought, Bostrom added.