DeepMind AI figures out how to play soccer

The digital humanoids were taught to run naturally in the first phase of the curriculum by mimicking motion-capture video clips of humans playing soccer. In a second phase, the AI practiced dribbling and shooting the ball using reinforcement learning, a trial-and-error technique in which it was rewarded for staying close to the ball.
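The article doesn't give the exact shaping term, but a dense "stay close to the ball" reward is commonly written as a decaying function of the agent-ball distance. The sketch below is a minimal illustration only; the function name, the exponential form and the scale value are assumptions rather than details from DeepMind's system.

```python
import numpy as np

def proximity_reward(agent_pos, ball_pos, scale=5.0):
    """Dense shaping reward that grows as the agent nears the ball.

    The exponential fall-off and the scale of 5 metres are illustrative
    assumptions, not parameters reported by DeepMind.
    """
    distance = np.linalg.norm(np.asarray(agent_pos) - np.asarray(ball_pos))
    return float(np.exp(-distance / scale))  # 1.0 at the ball, decaying toward 0
```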

The first two phases represented almost 1.5 years of simulation training time, which the AI sped through in nearly 24 hours. However, after five simulated years of soccer matches, more complex behaviors beyond movement and ball control began to emerge. "They learned coordination, but they also learned movement skills that we hadn't explicitly set as training drills before," says Nicolas Heess at DeepMind.

In the third phase of training, the digital humanoids were challenged to score goals in two-on-two matches. Teamwork abilities, such as anticipating where a pass will be received, emerged over the course of about 20 to 30 simulated years of matches, the equivalent of two to three weeks in the real world. This resulted in demonstrable improvements in the digital humanoids' off-ball scoring opportunity ratings, a real-world soccer-analytics measure of how often a player is in a favorable position on the field.
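Published off-ball scoring opportunity models combine pass, possession and shot probabilities; the heavily simplified proxy below is offered purely for intuition, rating a position by its closeness to goal and its separation from the nearest defender. The function name, pitch dimensions and weightings are all invented for illustration.

```python
import math

def obso_proxy(player_pos, defender_positions, goal_pos=(52.5, 0.0)):
    """Toy stand-in for an off-ball scoring opportunity rating.

    Scores higher when the player is close to the goal and far from the
    nearest defender. Not the published OBSO formulation.
    """
    dist_to_goal = math.dist(player_pos, goal_pos)
    nearest_marker = min(math.dist(player_pos, d) for d in defender_positions)
    goal_term = max(0.0, 1.0 - dist_to_goal / 60.0)  # closeness to goal
    space_term = min(nearest_marker / 10.0, 1.0)     # free space around the player
    return goal_term * space_term
```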

Such simulations will not result in flashy soccer-playing robots right away. The digital humanoids played under simplified rules that allowed fouls, enclosed the pitch with a wall-like boundary and omitted set pieces such as throw-ins and goal kicks.
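As a rough sketch of how such a simplified rule set might appear in a simulator configuration, here is a hypothetical example; the field names and defaults are invented for clarity and do not come from DeepMind's code.

```python
from dataclasses import dataclass

@dataclass
class SimplifiedRules:
    """Hypothetical match settings mirroring the simplifications described above."""
    enforce_fouls: bool = False  # fouls are allowed rather than penalized
    walled_pitch: bool = True    # a boundary wall keeps the ball in play
    throw_ins: bool = False      # no set pieces: play simply continues
    goal_kicks: bool = False     # off the surrounding wall
```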

According to Sven Behnke at the University of Bonn in Germany, the long learning times make it difficult to transfer the work directly to real soccer robots. However, he says it will be interesting to see whether DeepMind's approach is competitive in the annual RoboCup 3D Simulation League.

The DeepMind team has started teaching real robots how to push a ball toward a target and plans to see if the same AI training strategy works outside of soccer.