The Case for Conscious Machines

The ghost in the machine. The soul. Consciousness. Humanity has bestowed all of these names, in various forms, upon the uniquely human experience of “being” – the thinking, feeling, expressing, and creating that we do around the clock without missing a beat. Many would extend much of this phenomenon to their pets and more complex animals, such as dolphins. Few, if any, would (yet) admit that this is anywhere close to being present in our most advanced forms of AI. But some, including Dr. Peter Boltuc, believe that this could one day be possible.

Boltuc, whose background is in the moral and political philosophy of machine consciousness, believes that conscious machines could potentially do anything a human can do. As with so many broad topics, finer distinctions can be drawn within the concept of consciousness, and Boltuc draws one between functional consciousness and phenomenological consciousness.

Functional consciousness encompasses whatever a conscious entity can do, i.e., its behavior. This theory of mind is not tied to a specific physical medium, and any action that can be mapped out mathematically could hypothetically be performed by a future AI. Phenomenological consciousness, by contrast, is constant, first-person experience: a type of awareness we have before we do any reflecting on our experience.

Peter gives the example of a nurse who walks into a hospital room and asks whether a patient is conscious. At that moment, the nurse is not concerned with the patient’s capabilities or personality, but is asking exclusively about the “light in the brain” – the phenomenon of being aware of one’s own presence. This type of awareness is one that perhaps surpasses the ability to think or to show sensory perception, both of which Dr. Boltuc would categorize as outputs of functional consciousness.

Is it possible for machines to achieve this level of first-person awareness? No one really knows, and how to measure such an achievement is equally mysterious – any such test would go far beyond today’s Turing Test, which measures only functional consciousness anyway. “We don’t…understand how this first-person consciousness is generated,” says Peter. However, “there is good reason to believe we are going to know the exact blueprint, how this first-person consciousness is generated in the brain…once we understand it, there is a very clear situation that we could use the same blueprint to build a machine,” though he posits that we are very far from this becoming a reality. But supposing this is not outside the realm of possibility, could we one day build an AI capable of attaining more consciousness than humans? Dr. Boltuc thinks this, too, is conceivable.

Levels of Moral Consciousness

The next line of thought brings us dangerously close to the edge of an ethical cliff. If we assume that humans, by nature, place greater moral weight on creatures with higher levels of sentience or consciousness, wouldn’t it be logical to assume that any created entity with a higher level of awareness would sit farther up the scale of moral relevance? Peter warns that while this is a logical conclusion, it is also potentially counter-intuitive and a risk to present-day human-based ethics.

Expanding the argument, Boltuc suggests that we consider consciousness through the lens of non-homogeneous moral space, which he describes as including all of the universal and particular characteristics that could be considered morally relevant. He further argues that such characteristics are best viewed collectively, and describes the value of these properties as partly a function of positional characteristics (i.e., the identity of the moral entity towards which a person acts).

This concept leads to the basic notion that morals carry different weights depending on individual experience and relative position. “Some people think moral space needs to be homogeneous…we just measure rationality…I would say there is a difference as to whether I say my child, or my neighbor’s child, or somebody else; I have different levels of moral duties. There is curvature of moral space around us…there is nothing wrong about it,” says Boltuc.

To accept the original supposition – that “a machine can do a better moral judgment at some point, therefore it would be a better moral entity” – is to take a homogeneous moral viewpoint. To make this claim, we classify the aspects of morality we deem valuable from a first-person perspective, and then weave those criteria into the argument. This is exactly the stance Dr. Boltuc warns us against: “We don’t want to take into account just (moral) operations of a first-person being.” As we move forward as a conscious society and begin to create other entities, Peter suggests that it may be in our best interest to cultivate respect for the many forms of consciousness that exist, and could potentially exist, in the world.

Despite the apparent complexity of these arguments, Boltuc maintains that interacting with other conscious organisms and solving moral problems does not have to be overly complicated. He suggests that ethics are likely universal, and that there is logic to weighing the value of different levels of consciousness. Even so, we still maintain a human-based ethical system that appears to be inherently flexible. These morals can be curved too far, as in extreme tribalism or religious ideology, but a slight curve does seem to exist around the people and things dear to us. “Ethics are (likely) parochial for a reason,” says Peter. For the time being, that may be reason enough to stick with our own, human-centered ethical basis as we move into the future.
