Can AI Become Conscious?

The idea of artificial intelligence evolving into consciousness has long been explored in science fiction: consider HAL 9000, the supercomputer turned villain in the 1968 film 2001: A Space Odyssey. As artificial intelligence (AI) develops rapidly, that idea is becoming less and less outlandish, and it has even been taken seriously by leading AI researchers. Ilya Sutskever, chief scientist at OpenAI, the firm behind the chatbot ChatGPT, suggested on Twitter last year, for example, that some of the most advanced AI networks might be “slightly conscious.”

Although many academics say that AI systems have not yet reached the stage of consciousness, some wonder how we would know if one had.

To address this, 19 neuroscientists, philosophers, and computer scientists have developed a checklist of criteria that, if satisfied, would indicate that a system has a strong likelihood of being conscious. They released their provisional guide in the arXiv preprint repository earlier this week, ahead of peer review. According to co-author Robert Long, a philosopher at the Center for AI Safety, a research non-profit in San Francisco, California, the authors undertook the effort because “there seemed to be a real dearth of detailed, empirically grounded, thoughtful discussion of AI consciousness.”

The team says there would be significant moral ramifications if we failed to recognize that an AI system had become conscious. According to co-author Megan Peters, a neuroscientist at the University of California, Irvine, classifying something as “conscious” changes a great deal about how we as humans feel that entity should be treated.

Long adds that, as far as he can tell, the companies building sophisticated AI systems are making too little effort to assess their models for consciousness or to plan for what to do in that scenario. And that is despite the fact that, judging by their public comments, the leaders of top labs do express curiosity about AI awareness or consciousness.

According to a Microsoft spokeswoman, the company’s AI development is focused on responsibly enhancing human productivity rather than imitating human intelligence. She added that, since the release of GPT-4, the most advanced version of ChatGPT publicly available, it has become evident that new approaches are needed to evaluate the capabilities of these AI models as researchers explore how to harness AI fully for society’s benefit.

WHAT IS CONSCIOUSNESS?

One difficulty in studying consciousness in AI is defining consciousness itself. For the purposes of the report, Peters says, the researchers focused on “phenomenal consciousness,” otherwise known as subjective experience: what it is like to be a particular entity, whether a human, an animal, or an AI system (if one turns out to be conscious).

Numerous neuroscience-based theories describe the biological basis of consciousness, but there is no consensus on which one is ‘correct.’ The authors therefore drew on a range of these theories to build their framework. Their reasoning is that the more of these theories’ key indicators an AI system satisfies, the more likely it is that the system is conscious.
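
To make that scoring idea concrete, here is a minimal, purely illustrative sketch in Python. It is not the authors’ actual rubric: the theory names reflect the report’s general approach, but the indicator strings, the scoring function, and the example system profile are all invented for illustration.

```python
# Toy indicator checklist, NOT the authors' actual rubric.
# Indicator strings and the example profile below are invented.

INDICATORS = {
    "global workspace theory": [
        "parallel specialized modules",
        "limited-capacity shared workspace",
        "global broadcast of workspace contents",
    ],
    "recurrent processing theory": [
        "recurrent (feedback) connections",
        "integrated perceptual representations",
    ],
    "higher-order theories": [
        "metacognitive monitoring of first-order states",
    ],
}

def score(system_properties: set[str]) -> dict[str, float]:
    """Return, per theory, the fraction of its indicators the system satisfies."""
    return {
        theory: sum(ind in system_properties for ind in inds) / len(inds)
        for theory, inds in INDICATORS.items()
    }

# A hypothetical profile, e.g. distilled from an architecture review:
profile = {"parallel specialized modules", "recurrent (feedback) connections"}
print(score(profile))
```

The design point is simply aggregation: evidence is pooled across several independent theories, rather than resting on a single test or on any one theory being correct.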

They contend that this is a more reliable way to assess consciousness than simply subjecting a system to a behavioral test, such as asking ChatGPT whether it is conscious, or challenging it and seeing how it responds. That is because AI systems have become remarkably good at mimicking humans.

Neuroscientist Anil Seth, director of the Centre for Consciousness Science at the University of Sussex near Brighton, UK, thinks the group’s strategy, which the authors describe as theory-heavy, is the right one to take. He notes, however, that it underlines the need for more precise, well-tested theories of consciousness.

A THEORY-HEAVY APPROACH

To establish their criteria, the authors assumed that consciousness relates to how systems process information, irrespective of what they are made of, whether neurons, computer chips, or something else. This approach is known as computational functionalism. They also assumed that neuroscience-based theories of consciousness, which have been tested in humans and animals using techniques such as brain scans, can be applied to AI.

On the basis of these assumptions, the researchers selected six of these theories and extracted from them a list of indicators of consciousness. One of them, global workspace theory, holds that humans and other animals perform cognitive tasks such as seeing and hearing using many specialized systems, also called modules. These modules operate independently but in parallel, sharing information by integrating into a single system. According to Long, a person would evaluate whether a particular AI system displays an indicator derived from this theory by examining the system’s architecture and the way information flows through it.
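
As a rough intuition pump, the sketch below mimics that information flow with entirely invented Module and Signal classes: independent specialized modules compete to post content into a limited shared workspace, and the winning content is broadcast back to every module. It illustrates the pattern an evaluator might look for in an architecture, not any real AI system’s design.

```python
# Toy global-workspace pattern; all names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Signal:
    source: str    # which module produced this content
    content: str   # the content itself
    salience: int  # how strongly it competes for the workspace

class Module:
    def __init__(self, name: str):
        self.name = name
        self.received: list[str] = []

    def propose(self, stimulus: str) -> Signal:
        # Each module processes the stimulus independently.
        return Signal(self.name, f"{self.name}:{stimulus}",
                      salience=len(self.name + stimulus) % 7)

    def receive(self, broadcast: str) -> None:
        # Every module sees whatever wins the workspace.
        self.received.append(broadcast)

def workspace_cycle(modules: list[Module], stimulus: str) -> str:
    proposals = [m.propose(stimulus) for m in modules]  # parallel, in spirit
    winner = max(proposals, key=lambda s: s.salience)   # limited capacity: one item wins
    for m in modules:
        m.receive(winner.content)                       # global broadcast
    return winner.content

mods = [Module("vision"), Module("hearing"), Module("language")]
print(workspace_cycle(mods, "red ball"))  # "language:red ball"
```

Checking an indicator, on this reading, means asking whether a real system’s architecture contains an analogue of the capacity-limited shared workspace and the broadcast step, not whether the system answers questions about consciousness convincingly.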

The transparency of the team’s proposal impresses Seth. He says it is very thoughtful, makes no grandiose claims, and states its assumptions clearly. He disagrees with some of those assumptions, but he considers that fine, because he might well be wrong.

The authors state that their article is far from a definitive account of how to assess AI systems for consciousness, and they invite other researchers to help refine their methodology. But the criteria can already be applied to existing AI systems: the report evaluates large language models such as ChatGPT, for instance, and finds that this type of system arguably exhibits some of the indicators of consciousness associated with global workspace theory. However, the results do not suggest that any existing AI system is a strong candidate for consciousness – at least not yet.
