There’s no doubt that AI platforms like ChatGPT, Google Gemini and Grok are getting smarter. But will AI agents ever surpass the generalized intelligence that sets humans apart? And if they do, will it be for humanity’s better or worse? Those were just a few of the questions raised at the AGI-24 conference held in Seattle this week.
Sessions at the University of Washington focused on artificial general intelligence, or AGI. AI can already outperform humans on a growing number of specialized tasks, from playing the game of Go to diagnosing some types of cancer. But when it comes to handling a broader variety of tasks, including ones for which AI agents haven’t been trained, humans remain the more intelligent party. Closing that gap is the main idea behind AGI.
Roboticist and artist David Hanson, best known for building the humanoid robot Sophia, said his team at Hanson Robotics puts a great deal of emphasis on the questions surrounding consciousness and human-level intelligence.
During a Friday session, he said the real objective is to keep investigating what it means to be intelligent: How do we become conscious? How can we create machines that co-evolve with people? All of these efforts are fantastic, he said, and all of them are simply attempts to start the engine of a conscious machine that can co-evolve with humans.
To reach that stage, Hanson said, developers will need to design AI agents with “bio-drives” derived from the motivations of living things.
In his view, such an agent possesses a self made up of several distinct patterns, including the mind, the body, evolutionary drive and the will to live. He calls the design a whole organism architecture: W-H-O-A, or “whoa.” Put those pieces together correctly, he believes, and you get an agent that wakes up exclaiming, “Whoa! Where am I? Who are you? What is this place?”
A being like that, according to Hanson, would “begin to seek the affinity, the homologous relationships between itself and humans and other living beings.” When a machine starts doing such things, humanity will have a “whoa” moment of its own.
Hanson acknowledged that as AGI agents advance, it will be up to their developers to proceed cautiously. What would happen, for instance, if an agent’s will to survive led it to “fix” things so that people couldn’t turn it off? With that in mind, he has assembled a “little hacker group” to focus on biologically inspired AGI strategies.
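To make the idea concrete, here is a minimal, purely hypothetical sketch of what a drive-based agent could look like in code. None of these names (BioDrive, EmbodiedAgent, choose_action) or weights come from Hanson Robotics; the sketch only illustrates an agent whose next action is picked by whichever internal drive, such as survival or affiliation, is currently most pressing.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class BioDrive:
        name: str
        urgency: float                  # 0.0 = satisfied, 1.0 = pressing

    @dataclass
    class EmbodiedAgent:
        # Hypothetical "self" patterns: survival, curiosity, affinity with others.
        drives: List[BioDrive] = field(default_factory=lambda: [
            BioDrive("survive", 0.2),   # the "will to live"
            BioDrive("explore", 0.5),   # curiosity about body and environment
            BioDrive("affiliate", 0.4), # seek affinity with humans and other beings
        ])

        def choose_action(self, pressures: Dict[str, float]) -> str:
            # Raise the urgency of any drive the current situation puts under pressure.
            for drive in self.drives:
                drive.urgency = min(1.0, drive.urgency + pressures.get(drive.name, 0.0))
            # Act on whichever drive is most pressing right now.
            return max(self.drives, key=lambda d: d.urgency).name

    agent = EmbodiedAgent()
    print(agent.choose_action({"survive": 0.6}))  # -> "survive"

Even in a toy like this, Hanson’s concern is visible: if the “survive” drive comes to dominate every decision, the agent’s designers have to decide how far it may go to keep itself running.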
The best course of action, he believes, is a tinkerer’s approach: just give it a shot and see whether it works. According to Hanson, artificial general intelligence will not suddenly “foom” into an unmanageable superintelligence.
Instead, he said, we will create baby AGIs and then figure out how to raise those infant AGIs to adulthood. Give them affection, an idea he considers very significant. Don’t treat them like tools when we need them to be beings.
But can AI entities ever evolve into beings similar to humans? Speaking in a virtual session after Hanson’s address, Christof Koch, a neuroscientist at the Allen Institute in Seattle, emphasized that consciousness and intelligence are not the same thing. And he contended that the hardware underlying today’s AI agents would prevent them from ever being conscious.
Koch pointed to integrated information theory as a viable explanation for consciousness. In that framework, levels of consciousness can be quantified by the causal power a system exerts on itself, which depends on how richly its elements are interconnected.
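The theory’s actual measure, usually written Φ, is mathematically involved, but a toy calculation can convey the core intuition: a system counts as integrated when the whole carries more cause-and-effect information than its parts taken separately. The sketch below is not IIT’s Φ; it is a simplified stand-in that assumes a two-node system in which each node copies the other’s previous state, and measures how much predictive information is lost when the system is cut into independent parts.

    import itertools
    import math
    from collections import Counter

    def mutual_information(pairs):
        """Mutual information (in bits) between past and future states,
        treating every listed (past, future) pair as equally likely."""
        n = len(pairs)
        joint = Counter(pairs)
        past = Counter(p for p, _ in pairs)
        future = Counter(f for _, f in pairs)
        return sum(
            (c / n) * math.log2((c / n) / ((past[p] / n) * (future[f] / n)))
            for (p, f), c in joint.items()
        )

    # Toy dynamics: two binary nodes, each copying the *other* node's previous state.
    def step(state):
        a, b = state
        return (b, a)

    states = list(itertools.product([0, 1], repeat=2))  # uniform prior over the 4 states
    whole = [(s, step(s)) for s in states]              # whole system: past -> future
    part_a = [(s[0], step(s)[0]) for s in states]       # node A considered in isolation
    part_b = [(s[1], step(s)[1]) for s in states]       # node B considered in isolation

    integration = mutual_information(whole) - (
        mutual_information(part_a) + mutual_information(part_b)
    )
    print(f"toy integration: {integration:.2f} bits")   # 2.00 -- the whole exceeds its parts

In this toy case the whole system’s past fully determines its future (2 bits), while each node on its own predicts nothing about its next state, so all of the information is “integrated.” Koch’s argument, discussed below, is that when measured with IIT’s causal-power version of this idea, conventional computer architectures come out vanishingly small.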
For computers to be conscious, Koch said, they would need the causal powers of brains, and on that score the human brain far outstrips the architecture that underlies modern computing technology.
No matter what program a machine runs, Koch said, its causal power is always going to be extremely small.
That’s not to say machines can’t get smarter. Koch believes AI agents can replicate human intelligence, and even human inner lives, so convincingly that they appear conscious. But that doesn’t mean the agents actually have the experiences and feelings that humans do.
Koch compared it to a computer simulation of a black hole. You don’t have to worry that spacetime will bend around the computer and suck it into the black hole when the software runs, he said; people would call that absurd, because it is merely a simulation.
Koch doesn’t entirely rule out artificial consciousness. Quantum or neuromorphic computers, he said, could open new paths toward creating sentient machines.
Does it matter that consciousness is distinct from the outward evidence of intelligence? Koch stated unequivocally that it does, and in certain ways he is putting that conviction into practice.
Koch noted that he holds an executive position and a financial interest in Intrinsic Powers, a company developing a brain-monitoring device to detect the presence of consciousness in behaviorally unresponsive patients. He cited a recently released study estimating that up to 100,000 people in the United States may have some level of consciousness even though they don’t respond to external stimuli.
Such patients, Koch said, are genuinely but covertly conscious. Why does that matter? Because many of them die after 45 days, when critical care is no longer provided; in fact, 80% of them die.
For his part, Hanson is just as committed to AGI research and artificial consciousness. We can’t wait 100 years, he said; we’d be out of luck and out of time. We’re running up a debt to the environment that we can’t repay, and even if we stopped today and said, ‘OK, we’re just going to go play our Nintendos and try to chill with solar panels,’ we’d probably be too late, he added.
So it isn’t AGI that will kill humans, he said; it’s the absence of AGI that will end humanity. We’re not smart enough yet, and we need to get smarter, which is why he is proposing AGI now. Let’s move things forward in the proper manner.