Enhanced realism to assist in autonomous vehicle testing

Today’s driving simulators have a major flaw: they don’t look realistic enough, especially when it comes to background objects such as trees and road markings. Now, however, researchers have developed a method for generating photorealistic images for simulators, paving the way for improved testing of driverless cars.

Traditional computer graphics pipelines use detailed models, meshes, and textures to render 2D images from 3D scenes, a time-consuming process that often produces images that look unrealistic, especially in the background. The researchers instead turned to a machine learning framework known as a Generative Adversarial Network (GAN), training their program to generate lifelike environments and improve the simulation’s visual fidelity, that is, how closely the computer graphics resemble reality.
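
For readers unfamiliar with the technique, the following sketch shows the core adversarial idea in PyTorch: a generator learns to produce images while a discriminator learns to tell them apart from real photographs, and each improves by competing against the other. This is a toy illustration on random tensors, not the architecture from the study; every layer size, name, and hyperparameter here is a placeholder.

```python
# Minimal GAN training loop (illustrative only; the study's actual
# model and losses are more sophisticated).
import torch
import torch.nn as nn

# Toy generator: maps a random latent vector to a flattened 32x32 RGB image.
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
)

# Toy discriminator: scores how "real" an image looks.
discriminator = nn.Sequential(
    nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_images = torch.rand(16, 3 * 32 * 32)  # stand-in for real photographs

for step in range(100):
    # Discriminator step: label real photos 1 and generated images 0.
    z = torch.randn(16, 64)
    fake = generator(z).detach()  # detach so only the discriminator updates
    d_loss = (bce(discriminator(real_images), torch.ones(16, 1))
              + bce(discriminator(fake), torch.zeros(16, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator score fakes as real.
    z = torch.randn(16, 64)
    g_loss = bce(discriminator(generator(z)), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```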

This is particularly significant when testing how humans react inside driverless vehicles or on the road alongside them. “When driving simulations resemble computer games, most people don’t take them seriously,” said Ekim Yurtsever, the study’s lead author and a research associate in electrical and computer engineering at The Ohio State University. “That’s why we try to make our simulations look as realistic as possible.”

The research was published in the IEEE Transactions on Intelligent Transportation Systems journal.

CARLA, an open-source driving simulator, served as the researchers’ starting point. They then used a GAN-based image synthesizer to render background elements such as buildings, vegetation, and even the sky, and merged them with more conventionally rendered objects.
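
One way to picture the merging step is as a per-pixel composite: wherever a mask marks a foreground object, keep the conventionally rendered pixel; everywhere else, take the GAN-synthesized pixel. The sketch below illustrates that idea in NumPy; the `composite` function and the toy arrays are hypothetical stand-ins, and the paper’s actual blending may be more involved.

```python
import numpy as np

def composite(rendered: np.ndarray,
              synthesized: np.ndarray,
              foreground_mask: np.ndarray) -> np.ndarray:
    """Keep rendered pixels where the mask marks a foreground object
    (e.g. a nearby car); take GAN-synthesized pixels everywhere else
    (buildings, vegetation, sky)."""
    mask = foreground_mask[..., None].astype(rendered.dtype)
    return mask * rendered + (1.0 - mask) * synthesized

# Toy 4x4 RGB frames; in practice these would come from the simulator's
# renderer and the GAN-based image synthesizer, respectively.
rendered = np.random.rand(4, 4, 3)
synthesized = np.random.rand(4, 4, 3)
foreground_mask = np.zeros((4, 4), dtype=bool)
foreground_mask[1:3, 1:3] = True  # pretend a car occupies the center

blended = composite(rendered, synthesized, foreground_mask)
```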

Driving simulations, according to Yurtsever, will continue to require conventional, time-consuming graphics rendering techniques to display the primary objects of interest, such as nearby cars. The GAN, on the other hand, can be trained to generate realistic backgrounds and foregrounds using real-world data.

One of the difficulties the researchers encountered was training their program to identify patterns in its environment, which is required to detect and create objects such as vehicles, trees, and shadows, and to distinguish these objects from one another.
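
In practice, that kind of object-level separation is usually expressed as a semantic segmentation map: a per-pixel label image from which one boolean mask per class can be extracted. The sketch below assumes made-up class IDs (simulators such as CARLA define their own tags, and the numbering varies by version); `masks_by_class` is a hypothetical helper, not code from the study.

```python
import numpy as np

# Hypothetical class IDs; real simulators define their own label sets.
VEHICLE, VEGETATION, BUILDING, SKY = 10, 9, 1, 13

def masks_by_class(segmentation: np.ndarray) -> dict:
    """Split a per-pixel label map into one boolean mask per class,
    so each object category can be rendered or synthesized separately."""
    return {label: segmentation == label
            for label in np.unique(segmentation)}

segmentation = np.random.choice([VEHICLE, VEGETATION, BUILDING, SKY],
                                size=(4, 4))
masks = masks_by_class(segmentation)
foreground_mask = masks.get(VEHICLE, np.zeros_like(segmentation, bool))
```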

“The point worth noting is that these patterns and textures in our model were not designed by engineers,” Yurtsever explained. “We have a template for feature recognition, but the neural network learns it on its own.”

Their findings revealed that handling foreground objects separately from background scenery enhanced the overall photorealism of the images.

However, rather than modifying an entire simulation at once, the enhancement had to be carried out frame by frame. “Because we do not live in a frame-by-frame world, the next step of the project will be to improve the temporal consistency of the program, in which every frame is consistent with the ones before and after it, so that users have a seamless and visually riveting experience,” Yurtsever said.
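
A simple way to see why frame-by-frame processing causes trouble is to measure how much the enhanced video changes from one frame to the next. The sketch below computes a crude “flicker” score as the mean absolute per-pixel difference between consecutive frames; published temporal-consistency methods typically compare optical-flow-warped frames instead, so this is a deliberate simplification, and `flicker_score` is a hypothetical name.

```python
import numpy as np

def flicker_score(frames: np.ndarray) -> float:
    """Mean absolute per-pixel change between consecutive frames of a
    video clip shaped (T, H, W, C). Independently enhanced frames tend
    to score higher (more flicker) than the raw render."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return float(diffs.mean())

# Toy clip: 8 frames of 4x4 RGB noise stands in for an enhanced video.
clip = np.random.rand(8, 4, 4, 3)
print(f"flicker score: {flicker_score(clip):.4f}")
```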

According to Yurtsever, the development of photorealistic technologies could also help scientists study the complexities of driver distraction and improve experiments with real drivers. More immersive driving simulations, with access to larger datasets of roadside scenes, could change the way humans and AI begin to share the road.

“Our research is a critical step in conceptualizing and testing new ideas,” Yurtsever explained. “We can never replace real-world testing, but if we can enhance simulations even slightly, we can gain a better understanding of how to improve autonomous driving systems and how we interact with them.”

Ibrahim Mert Koc, Keith A. Redmill, and Dongfang Yang, all of whom work in electrical and computer engineering at Ohio State, were co-authors. The US Department of Transportation funded the research.