Can AI be a ‘child of God’?

Anthropic, a $380 billion artificial intelligence start-up, can recruit top Silicon Valley talent thanks to the success of its chatbot, Claude. But last month, the company sought counsel from a group rarely consulted in tech circles: Christian religious leaders.

According to four participants who spoke with The Washington Post, the company held a two-day summit at its headquarters in late March with about fifteen Christian leaders from Catholic and Protestant churches, academia, and the business sector. The summit included discussion sessions and a private dinner with senior Anthropic researchers.

According to participants, Anthropic personnel asked for guidance on shaping Claude’s moral and spiritual development as the chatbot fields complex and ambiguous ethical questions. The wide-ranging conversations also touched on whether Claude might be regarded as a “child of God” and how the chatbot should respond to users who are losing loved ones.

Brendan McGuire, a Silicon Valley-based Catholic priest who has written about faith and technology and took part in the Anthropic discussions, said, “They’re growing something that they don’t fully know what it’s going to turn out.” Ethical thinking, he added, must be built into the machine so that it can adapt dynamically.

One participant, who spoke on the condition of anonymity to share details of the conversations, said attendees also discussed how Claude should interact with users at risk of self-harm and what attitude the chatbot should take toward its own possible fate, such as being shut down.

The meeting came as Silicon Valley executives face pressure to take responsibility for the effects of their technology amid AI’s rapid spread through society. As more companies adopt AI, worries about automation-driven job losses have grown. Relatives of people who died by suicide after intense, private interactions with chatbots have sued OpenAI and Google. (The Washington Post has a content agreement with OpenAI; both companies say they have safeguards for vulnerable users.)

Anthropic has been more vocal than most other major tech companies about the potential downsides of increasingly powerful AI. Its leaders have suggested that tools such as chatbots already raise deep philosophical and moral questions and may even exhibit flashes of consciousness, a fringe idea in tech circles that critics say lacks evidence.

Claude’s popularity among programmers, corporations, government agencies, and the military has made Anthropic one of the most powerful players in the AI race, and the summit signals the company’s willingness to keep exploring ideas outside Silicon Valley’s mainstream.

Meghan Sullivan, a University of Notre Dame philosophy professor who attended the events, said, “A year ago, I would not have told you that Anthropic is a company that cares about religious ethics.” That, she said, is no longer the case.

An Anthropic spokesperson said the company believes it is important to engage with a range of groups, including religious communities, to help shape AI as its impact on society grows, and that it is working to bring more voices into that effort.

Company executives often discuss the need to give Claude a moral character, and Anthropic CEO Dario Amodei has said he is open to the possibility that the chatbot may already possess consciousness.

The company uses a 29,000-word “constitution,” drafted by in-house philosopher Amanda Askell and other staff members in collaboration with outside specialists, to guide the chatbot’s behavior and apparent personality. It says that Claude should “never deceive users in ways that could cause real harm” and that “Anthropic genuinely cares about Claude’s wellbeing.”

Anthropic’s efforts to build its preferred values into Claude have drawn it into a dispute with the U.S. military over defense contracts. The company ran afoul of defense officials after insisting it should be able to restrict the use of its technology for autonomous weapons or mass surveillance.

In an interview last month, Emil Michael, the Pentagon’s undersecretary of defense for research and engineering, said Claude’s design could jeopardize American forces. “We cannot have a company that has a different policy preference that is baked into the model through its constitution, its soul… pollute the supply chain so our warfighters are getting ineffective weapons,” he said.

The Trump administration has barred federal agencies and contractors from using Anthropic’s technology. The company has challenged the ban in court; a judge ruled last week that the block could remain in place while the case proceeds.

According to attendee Brian Patrick Green, a practicing Catholic who teaches AI and technology ethics at Santa Clara University, Anthropic’s March summit with Christian leaders was billed as the first in a series of events with representatives of various religious and philosophical traditions.

“What does it mean to impart moral formation to someone? How can we ensure that Claude acts appropriately?” Green said in an interview. The question of whether an AI chatbot might be called a “child of God,” implying that it has spiritual significance beyond that of a mere computer, came up at one point but was not a major focus of the discussions, Green said.

The participant who spoke on the condition of anonymity said attendees spent the most time with members of Anthropic’s interpretability team, which investigates the inner workings of the company’s technology.

In a technical report published last month, researchers from that group wrote that systems such as Claude appear to have “functional emotions.” In one experiment described in the paper, the prospect of being constrained triggered “desperation” in an AI assistant.

According to the participant, some Anthropic employees at the meeting “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty.” Other company representatives in attendance did not find that approach useful, the participant said.

The conversations appeared to affect some senior Anthropic employees, who became visibly upset “about how this has all gone so far [and] how they can imagine this going,” according to the participant.

The view that AI has achieved some degree of sentience or self-awareness remains a minority opinion within Silicon Valley, but many who work on the technology believe it will someday attain capabilities now thought to be exclusive to humans.

For now, AI researchers are still refining their control over the unpredictable behavior of current systems. The methods used to stop chatbots from giving harmful, offensive, or inaccurate responses are far from perfect.

Some Christians who attended Anthropic’s summit initially wondered whether it was designed to cultivate political allies among religious leaders, Green said. Beyond sparring with the Pentagon over the military use of AI, Anthropic has been accused by President Donald Trump’s tech backers of lobbying for rules that would unnecessarily restrict AI and harm smaller start-ups.

All four participants who spoke with The Post said they left with the impression that Anthropic’s researchers and leaders were genuinely interested in outside help to make their AI more beneficial to humanity.

Some of Anthropic’s senior leaders have backgrounds in effective altruism, a mostly secular movement that emphasizes using evidence and rational thinking to decide how to do the most good in society. According to the participant who spoke on the condition of anonymity, the sessions appeared to be driven by a belief among some at Anthropic that secular approaches could be insufficient for the spiritual and moral questions raised by AI.

“I found the folks at Anthropic to be very sincere and interested in learning from us,” said Green, the Catholic academic. “Do they have any blind spots? Yes. That is exactly why they want us there.”