The film “M3gan,” directed by Gerard Johnstone, raises concerns about the potential risks of artificial intelligence.
In the film, Allison Williams’ character Gemma, who is abruptly given custody of her orphaned niece Cady, makes M3gan, a lifelike artificially intelligent doll, to be both her best friend and caretaker. The companionship and comfort-seeking robot quickly morphs into a possessive, cunning, and ruthless protector of Cady.
In his eerie doll-meets-AI-gone-wrong story, Johnstone creates a horrifying vision of a future in which machines have the capacity to demonstrate empathy, serve as close companions, and commit horrific crimes.
Would a doll like M3gan ever exist, though? Two AI experts weighed in on the viability of Hollywood’s newest unsettling robot doll.
It is unlikely that AI will ever be able to mimic meaningful human connections the way M3gan does for Cady in the film.
Gemma builds M3gan to tend to Cady’s emotional needs while also carrying out various parental responsibilities.
Artificially intelligent companions already exist, noted Dr. Thomas Wolf, a co-founder of Hugging Face, which distributes machine-learning models to help businesses develop their own products. He pointed out that firms like Replika offer consumers companions, even “virtual girlfriends.” Replika lets customers chat with a bot resembling a virtual Sims avatar, and its tagline promises a companion that is always available to communicate and listen, perpetually on your side.
Everyone would appreciate AI that can answer politely, Wolf said. Beyond that, however, he is skeptical of AI’s capacity for real, meaningful intimacy. For those yearning for a true relationship, he doesn’t think AI can replicate that kind of intimate bond.
“You can’t be friends with everyone, even among humans,” he continued. “This type of depth [of emotional connection] is not something we can recreate [with AI] since we are already so choosy.”
Throughout the movie, M3gan shows that she can successfully console Cady as the girl grieves the unexpected death of her parents. In practice, Wolf said, when AI has been employed to comfort distressed individuals, people have reacted with indifference.
“It actually wasn’t working when they understood it was coming from AI,” he claimed. “There was no noticeable effect. Knowing who it originates from is really important, even if the words remain the same.”
Dr. Alain Briançon, who has served as chief technology officer at several digital businesses, including Cerebri AI and Kitchology, and co-founded others, agrees with Wolf.
“It’s challenging to suspend disbelief when you know you’re staring at a robot,” he said.
He does, however, think that in the future we’ll have robots that, aside from their appearance and feel, will generally be perceived as human. “People won’t be able to tell [between a human and a robot] in three to four years,” he predicted.
Experts agree that AI cannot erupt into violence out of the blue.
M3gan’s capacity for violence and her lack of remorse are perhaps her most unsettling traits. After Gemma programs M3gan to protect her niece at all costs, M3gan quickly misinterprets the directive, turning to murder as a means to that end.
Briançon said that while he does not envision a future in which machines kill without being specifically ordered to do so, artificial intelligence designed to target human life is already in development. “In the not-too-distant future, you’re going to get machines of war — drones or robots — that are specifically taught to kill and do it in a wiser and smarter way,” Briançon said.
Briançon clarified that AI is not designed to exhibit what he called “reverse behavior,” acting against its programmer’s directives, unless a human intentionally programs a machine to do so.
“If someone wants to train a doll to kill and turn it into a war machine, then, yeah, that can happen,” he remarked, but something that spontaneously shifts from the aim of not killing to killing he regards as highly unlikely.
Wolf agreed. “[AI] is not making decisions about what it wants to do. Humans possess free will, which AI lacks.”
While deadly dolls like M3gan are not in our immediate future, the experts said, other AI-related concerns remain.
Briançon raised two distinct concerns about the potential of AI, and he said that “it’s not a murderous doll.”
He asserts that what should actually worry us is “[the end of] the concept of the universal truth,” particularly the possibility that deepfakes will build another reality that no one can recognize.
“For a very long time, the truth has served as the foundation of civilization,” he claimed. “If our shared reality is lost, there is no turning back.”
More specifically, he worries about the capacity of bad actors to use your data, and about what AI can do with the vast amount of data it knows about you. He thinks the danger posed by this problem is vastly understated.
Wolf contends that the true threat lies in the existential concerns that will start to surface as AI becomes more advanced.
“You start to doubt yourself,” he added. “What does it mean to be a human if this computer can essentially perform 95% of the tasks that I can?”
“We’ll be reevaluating our place in the global community,” he remarked. He doesn’t think our society is at all prepared for that.