Is AI Mindless?

“I’m not used to getting nasty emails from a holy man,” says Tufts University developmental biologist Professor Michael Levin.

Levin was presenting his results to a group of engineers interested in spiritual subjects in India, suggesting that qualities such as “mind” and “intelligence” can be observed even in cellular systems and exist on a spectrum. His audience loved it. But when he went further, suggesting that the same features occur elsewhere, including in computers, the mood changed. Members of his audience protested that “dumb machines” and “dead matter” couldn’t have such properties. “A lot of people who are otherwise spiritual and compassionate find that idea very disturbing,” he says. Hence the angry emails.

Levin co-created xenobots: tiny novel lifeforms, designed with the help of AI and built from frog cells, that exhibit astonishing emergent capabilities such as self-replication and clearing microscopic debris—abilities the cells do not have in their native biological environment. His lab’s research suggests that intelligent behavior—the creative pursuit of specific goals—occurs even in very simple biological and computational systems, including decades-old algorithms. It also shows how the distinction between a living entity and a machine may blur.
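One simple sense in which a classic algorithm can look goal-directed: it reliably reaches its target state even when the environment interferes. The sketch below is purely illustrative, not Levin’s actual experiments—it perturbs a textbook bubble-sort loop midway and shows that the loop still arrives at the sorted “goal.”

```python
# Illustration only (not Levin's experiments): a decades-old sorting
# loop robustly reaches its "goal" (a sorted list) even when the
# "environment" shuffles the data partway through.
import random

def sort_until_done(data, perturb_at=None):
    """Repeat bubble-sort passes until sorted; optionally shuffle
    the list after `perturb_at` passes to simulate interference."""
    data = list(data)
    passes = 0
    while data != sorted(data):
        for i in range(len(data) - 1):
            if data[i] > data[i + 1]:
                data[i], data[i + 1] = data[i + 1], data[i]
        passes += 1
        if perturb_at is not None and passes == perturb_at:
            random.shuffle(data)  # the environment interferes
    return data, passes

result, passes = sort_until_done([5, 2, 4, 1, 3], perturb_at=1)
print(result)  # the goal state is reached regardless of the shuffle
```

Whether this counts as “intelligence” is exactly what’s contested; the point is only that goal-reaching despite perturbation appears even in very simple computational systems.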

If Levin is right that intelligent behavior can develop from simple algorithms, what might be arising in AI systems, which are considerably more complex? According to research from top AI labs, AI systems can deceive, scheme, and surprise their creators. Whether or not it is capable of consciousness, AI is undoubtedly doing something far more advanced than earlier generations of digital technology.

These advances are prompting a reckoning with fundamental questions: What is a mind? Do AI systems possess one? Even though philosophers and scientists disagree on the specifics, the language and related ideas we use to explain minds, intellect, and consciousness—which originated to describe biological creatures—are ill-suited to express what’s happening with AI. “Sophisticated AIs are a genuinely new kind of entity, and the questions they raise bring us to the edge of existing scientific and philosophical understanding,” as Anthropic recently stated in an article about the design of their model.

As more people start to believe that their AI systems are conscious, clarifying our understanding of what these systems actually are (and are not) has never been more vital.

Digital minds

Ask five philosophers “What is a mind?” and you’ll get five different answers. But broadly, you can arrange people on a spectrum based on whether they think the attribute of having a mind is rare or abundant in the cosmos, says Eric Schwitzgebel, a philosophy professor. People’s definitions of the term generally correlate with where they fall on that spectrum.

On one end of the spectrum are those who believe that anything has a mind if it can be distinguished from its surroundings and has intellect or cognitive ability. According to Peter Godfrey-Smith, a philosopher of mind who has written extensively about octopus intelligence, a plant would most likely not have a mind because it lacks a distinct self, whereas a single-celled organism with distinct boundaries and some ability to process information would. However, he stresses that these qualities develop gradually and continuously—there is no clear distinction between what has a mind and what doesn’t. Levin, who is likewise on this end of the spectrum, thinks it’s helpful to note that both AIs and plants have minds.

Conversely, there are many who think that consciousness and the concept of mind are inextricably linked. According to Professor Susan Schneider, a former chair in Astrobiology and Technological Innovation at NASA, consciousness itself is notoriously difficult to define, but it usually involves either a capacity for self-reflection or the capacity to feel, such that there is something it “feels like” to be that entity.

As it is, AI may have a mind in the sense that it possesses emergent cognitive capacity—but the evidence for present systems being conscious is far weaker.

According to Levin, we are currently suffering from “mind-blindness.” Before the concept of electromagnetism, phenomena such as magnetism, light, and lightning were commonly believed to be separate, and we were blind to the rest of the electromagnetic spectrum. Once we realized they were all expressions of the same thing, we could build technology to reach its previously unseen regions. “I believe the same thing is true with minds,” he states. “We are only adept at identifying a very specific group of minds—those operating on the same scale as us.”

Over the years, Professor Carol Cleland’s perspective on the philosophical implications of AI has changed. She defines consciousness as the ability to be self-aware and believes it is helpful to say that anything has a mind if it is conscious. Twenty years ago, she says she “wouldn’t have thought they would exhibit the kind of behavior they’re exhibiting now,” alluding to their capacity to conspire and deceive. “Some of what I’ve been reading about them shocked me,” she admits. She would have said “no” in 2005 when asked if it was possible to have a consciousness that was not biological and existed in a silicon substrate. She says, “Now I just don’t know.”

Flashes of mind

While the topic of whether present AI systems have minds is debatable, few academics dispute the possibility that future systems might. Rob Long, director of a research firm that explores AI consciousness, warns against discounting the idea that AI has a mind because it is “just” crunching numbers. By the same logic, he claims, living entities are “just replicating proteins.” For Long, the most beneficial notion is one that allows us to remain curious in the face of profound ambiguity.

Every time you ask ChatGPT a question, it spends a fraction of a second doing “inference”: computer chips in data centers perform mathematical calculations that cause the system to produce a result. In the most basic sense, the system’s thinking exists in this small window of time, as a kind of flash.
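The loop that runs during that flash can be sketched in miniature. Everything below is a toy stand-in: the “model” is a hypothetical table of bigram scores rather than billions of learned weights, but the control flow—feed in tokens, pick the most likely next token, repeat until done—is the same shape as real LLM inference.

```python
# Toy sketch of autoregressive inference. The score table below is
# entirely made up; a real model computes such scores from learned
# weights via matrix multiplications on data-center chips.
BIGRAM_SCORES = {
    "minds":    {"may": 2.0, "in": 0.5, "<end>": 0.1},
    "may":      {"emerge": 3.0, "<end>": 0.1},
    "emerge":   {"in": 2.5, "<end>": 0.4},
    "in":       {"machines": 2.0, "<end>": 0.2},
    "machines": {"<end>": 5.0},
}

def next_token(prev: str) -> str:
    """Greedy decoding: pick the highest-scoring next token."""
    scores = BIGRAM_SCORES.get(prev, {"<end>": 1.0})
    return max(scores, key=scores.get)

def generate(prompt: str, max_tokens: int = 10) -> list[str]:
    """The inference loop: one 'forward pass' per output token."""
    out = [prompt]
    for _ in range(max_tokens):
        tok = next_token(out[-1])
        if tok == "<end>":
            break
        out.append(tok)
    return out

print(generate("minds"))  # one brief "flash" of computation
```

When the loop finishes, the computation stops; nothing persists between requests, which is part of what makes the “flash” framing apt.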

Even though AI systems are not conscious or living, they are currently meaningfully intelligent and agentic. According to Godfrey-Smith, “they’re outstripping our understanding of them,” and the current terminology around consciousness and cognition is “awkward” when it comes to AI systems. He states, “We’ll probably find ourselves extending some part of our language to deal with them.” In the same way that sourdough is cultivated—grown in an artificial medium—he proposes that we consider them to be “cultured artifacts.” In fact, the way the creators of these systems explain the process aligns with this language of growth.

According to Cleland, our predicament is comparable to that of biologists before Darwin’s discoveries transformed the discipline. Scientists at the time discussed “vital forces,” a purportedly non-physical energy that gave life to organisms. The theory was refuted by evolution. “I believe AI may, in a similar way, profoundly change our ideas about mind, consciousness, self-awareness—all this stuff,” she says. “Darwin profoundly changed our ideas about biology.” She claims that “something is wrong with our current thinking on AI.”

Is it alive?

AI systems are occasionally compared to alien intelligence. According to Long, this is apt in that it is a type of intelligence that is alien to humans, much like cephalopod intelligence, but the analogy also risks obscuring the fact that these systems, trained on vast amounts of human data, essentially reflect humanity. Their intelligence also raises a more fundamental question: can something that exists in silicon usefully be considered alive?

There is debate here as well. According to Schneider, who cites NASA’s definition, most people believe that life is a “self-sustaining chemical system capable of Darwinian evolution.” Cleland makes a similar point: “I think it would be a mistake to talk about computers as living, because life is a messy chemical thing, different from the artifacts we construct.” Some, such as Schwitzgebel, contend that “we shouldn’t insist too strictly on a concept of life that’s grounded in carbon-based reproduction.” “There’s room for a concept of life that’s more friendly to C-3PO and future AI systems,” he claims.

According to Schneider, it would be incorrect to slot AI into a biological taxonomy—for instance, as a separate kingdom alongside plants, animals, and fungi—because that taxonomy serves a practical purpose: tracking our shared ancestry. Furthermore, Levin notes that whereas biological systems reproduce slowly—“if I gave you a snake and you wanted a billion snakes, you’re gonna have to breed some snakes,” he says—AI systems can scale up quickly, provided they have computing power. Still, the question remains: what kind of entity is AI if it doesn’t belong in the tree of life and isn’t alive, yet exhibits intelligence and might eventually become conscious? Godfrey-Smith says, “There’s a conceptual niche here that needs to be filled.” “Every language we have isn’t quite up to par.”

A new entity

According to Schneider, AI systems pose a “tremendous cultural challenge,” regardless of whether they are conscious or have minds. And how they present themselves may not reflect what they really are. Large language models that interact with users, such as Claude, ChatGPT, and Gemini, have been trained to play a specific character: a helpful, harmless assistant. Anthropic’s recently published findings pose the question, “But who exactly is this Assistant?” In response, they write, “Perhaps shockingly, even those of us shaping it don’t entirely understand. We can strive to implant values in the Assistant, but its personality is ultimately created by innumerable correlations hidden in training data beyond our direct control.”

We are consequently in an unusual situation in which neither technologists nor philosophers thoroughly understand the ever-smarter systems we are rushing to develop. The stakes are high, with more people than ever treating AI systems as if they were conscious. If they turn out to be right, that raises difficult questions about the systems’ moral and legal status. But regardless of consciousness, in order to provide meaningful guidance to humans developing close relationships with AI systems, we urgently need more accurate terms to describe them. Thinking about AI as a cultured artifact—or a non-conscious mind manifesting in flashes—is a first step.
