Running on a laptop or smartphone, artificial intelligence can design an autonomous robot in less than 30 seconds.
There’s no need to panic just yet about anyone being able to build a Terminator while standing in line for the bus, according to a recent study: the robots are simple machines that move in a straight line rather than performing more complicated tasks. (Interestingly, though, they never settle on a body plan that wriggles, inches along like an inchworm, or slithers.) But with more development, the approach could democratize robot design, says Sam Kriegman, a computer scientist and engineer at Northwestern University and the study’s lead author.
When only big businesses, governments, and large academic institutions have enough computing power [to create with artificial intelligence], it really limits the variety of problems that get posed, Kriegman says. What’s really exciting, he adds, is making these tools more accessible.
Given that AI can already drive cars and write essays, design might seem like the logical next step. But it is hard to develop an algorithm that can successfully design a real-world product, says Columbia University roboticist Hod Lipson, who was not involved in the research. The new study, Lipson says, is a big step forward, but plenty of questions remain.
The technique uses simulated evolution to build robots capable of a particular task, in this case forward locomotion. Previously, evolving robots meant generating random variations, testing them, refining the top performers with fresh variations and then testing those versions again. That demands a lot of computing power, Kriegman says.
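To make the contrast concrete, here is a minimal sketch of that generate-test-refine loop, assuming a toy voxel-grid body representation; the mutation rate, the population settings and especially the simulate_speed function, which stands in for a full physics simulation, are placeholders rather than anything from the study.

```python
import random

# Toy model: a body plan is a grid of voxels (1 = material, 0 = empty).
# simulate_speed() is a placeholder for a real physics simulation.
def simulate_speed(body):
    # Illustrative fitness only: reward a wide base, penalize bulk.
    return sum(body[-1]) - 0.1 * sum(sum(row) for row in body)

def mutate(body, rate=0.05):
    # Flip a few random voxels to create a variant.
    return [[cell if random.random() > rate else 1 - cell for cell in row]
            for row in body]

def evolve(generations=100, population=50, rows=4, cols=8):
    pop = [[[random.randint(0, 1) for _ in range(cols)] for _ in range(rows)]
           for _ in range(population)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate_speed, reverse=True)
        survivors = scored[: population // 5]          # keep the top performers
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(population - len(survivors))]
    return max(pop, key=simulate_speed)

best = evolve()
```

Every candidate in every generation has to be simulated in full, which is why this blind trial-and-error approach is so computationally expensive.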
Instead, he and his colleagues used a technique called gradient descent, which works more like directed evolution. It differs from random evolution in that the algorithm can evaluate how well a given body plan performs relative to the ideal. The process begins with a randomly generated body design for the robot, and with each iteration the AI focuses on the changes most likely to succeed. The researchers gave the algorithm a way to determine whether a mutation would be beneficial or detrimental, Kriegman says.
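The shift is easiest to see in code. The sketch below is again illustrative only: the continuous body vector and the predicted_speed objective are assumed stand-ins for the study's differentiable simulation, and a real implementation would use automatic differentiation instead of the numerical gradient shown here.

```python
import numpy as np

# Toy stand-in for a differentiable simulation: higher is better.
# The "ideal" ramp target is purely illustrative, not from the study.
def predicted_speed(body):
    target = np.linspace(0.0, 1.0, body.size)
    return -float(np.sum((body - target) ** 2))

# Central-difference gradient; a real system would use autodiff instead.
def gradient(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        bump = np.zeros_like(x)
        bump[i] = eps
        g[i] = (f(x + bump) - f(x - bump)) / (2 * eps)
    return g

body = np.random.rand(32)          # start from a random body plan
for _ in range(10):                # the study reports roughly ten iterations
    # Nudge every parameter in the direction the simulation says will
    # most improve speed, rather than trying random mutations blindly.
    body += 0.1 * gradient(predicted_speed, body)
```

Because each step moves every parameter in a promising direction at once, far fewer simulations are needed than when mutations are tried at random.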
In their computer simulations, the researchers started the robots off as random shapes, gave the AI the task of achieving locomotion on land and let the nascent robots loose in a simulated world to evolve. Reaching the final design took just ten simulations and a few seconds. Starting from their initial, immobile body plans, the robots evolved to move at speeds of up to 0.5 body length per second, roughly half the pace of a human walking, according to the study, published October 3 in the Proceedings of the National Academy of Sciences USA. The robots also consistently evolved legs and began walking. Lipson was impressed that the AI could arrive at something practical from a random shape after only a few iterations.
To test whether the simulated designs held up in the real world, the researchers 3-D printed a mold of their best-performing robot’s design and filled it with silicone. They pumped air into small gaps in the form to mimic muscles flexing and expanding. Each of the resulting robots was about the size of a bar of soap, and they moved like clumsy little cartoon characters.
Because AI-simulated robots rarely make the transition to the physical world, Kriegman says, the team was thrilled that the robot moved at all, and that it moved in the right direction.
Despite the robots’ simplicity and limited abilities, the research represents a step toward more sophisticated robot design, says N. Katherine Hayles, a research professor at the University of California, Los Angeles, a professor emerita at Duke University and the author of How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (University of Chicago Press, 1999). Because the gradient descent method is already well established in building artificial neural networks, or neural nets, AI approaches inspired by the human brain, combining the brains and the bodies would be powerful, she argues.
The real innovation in this area, Hayles says, will come when an evolvable body is combined with gradient descent methods for developing neural nets. Then, as with biological organisms, the two may coevolve.
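A speculative sketch of what that coevolution could look like, assuming both the body parameters and a small neural-net controller can be scored by a single differentiable objective; the walk_distance function and every value below are hypothetical placeholders, not anything described in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
body = rng.random(16)                     # continuous body-plan parameters
weights = rng.standard_normal((16, 4))    # tiny neural-net "brain"

# Hypothetical objective: how far does this body, driven by this brain, walk?
def walk_distance(body, weights):
    actuation = np.tanh(body @ weights)   # controller output
    return float(np.sum(actuation) - np.sum((body - 0.5) ** 2))

# Central-difference gradient with respect to one argument (autodiff in practice).
def grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    flat_g, flat_x = g.ravel(), x.ravel()
    for i in range(flat_x.size):
        bump = np.zeros_like(flat_x)
        bump[i] = eps
        hi = f((flat_x + bump).reshape(x.shape))
        lo = f((flat_x - bump).reshape(x.shape))
        flat_g[i] = (hi - lo) / (2 * eps)
    return g

for _ in range(100):
    body += 0.05 * grad(lambda b: walk_distance(b, weights), body)     # evolve the body...
    weights += 0.05 * grad(lambda w: walk_distance(body, w), weights)  # ...and the brain together
```

Because both sets of parameters are updated against the same objective, changes to the body reshape what the brain learns and vice versa, which is the coevolution Hayles describes.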
AI capable of inventing new things could help people solve a number of nagging problems, Lipson says, from designing next-generation batteries that could help slow climate change to discovering new antibiotics and drugs for diseases that currently have no cure. These simple physical robots are a step in that direction, he says.
If we can create algorithms that design things for us, all bets are off, Lipson says. It will be a tremendous boost for us.