Autopilot

Uber's self-driving car concept. / Photo by: Dllu via Wikimedia Commons


Reliability and safety issues constantly challenge the developers of self-driving cars. But solutions may be easier to create when engineers can see what the system actually perceives around it. In an article published on Medium, Drive.ai showed how its automated cars process data.

The software company noted that its self-driving software generates an enormous amount of data, which its engineers use to improve the system’s capabilities, performance, and safety. To explain how that data is processed, the company used visualization tools to show what it looks like behind the scenes.

1. The visual tool for onboard displays presents sensor data in an intuitive, easy-to-comprehend manner. It shows a full-surround, three-dimensional view of the system’s surroundings, produced from “point clouds” derived from LIDAR sensors combined with video data from full-surround cameras. The rendering resembles outlines of the obstacles around the car. (A minimal sketch of this kind of sensor fusion appears after this list.)

2. The off-board analysis tool differs from the onboard displays. It works with robotics data such as localization, mapping, and sensor calibration, and its visuals take on an HD, game-like style to show what the system sees around it. (A small localization example also follows the list.)

3. Annotation is an essential part of autonomous technology: every data set collected on the road must be labeled. Without annotation, the system cannot learn what it is seeing, whether people, animals, or other objects. The visual tool draws labeled outlines over the scene, marking structures and the extent of the road. (A sketch of what a single label record might look like closes out the examples below.)
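
To make the first item concrete, here is a minimal sketch of the kind of LIDAR-camera fusion described there: projecting a LIDAR point cloud into a camera image so each 3D point picks up a color before rendering. The function name, calibration matrices, and frame conventions are assumptions for illustration; Drive.ai has not published its actual pipeline.

```python
# A minimal sketch of coloring a LIDAR point cloud with camera pixels.
# T_cam_lidar and K are hypothetical calibration inputs, assumed known.
import numpy as np

def colorize_point_cloud(points_lidar, image, T_cam_lidar, K):
    """points_lidar: (N, 3) xyz in the LIDAR frame
    image:        (H, W, 3) camera frame
    T_cam_lidar:  (4, 4) extrinsic transform, LIDAR -> camera
    K:            (3, 3) camera intrinsics
    """
    # Move points into the camera frame via homogeneous coordinates.
    ones = np.ones((points_lidar.shape[0], 1))
    points_cam = (T_cam_lidar @ np.hstack([points_lidar, ones]).T).T[:, :3]

    # Keep only points in front of the camera.
    points_cam = points_cam[points_cam[:, 2] > 0.1]

    # Pinhole projection into pixel coordinates.
    uv = (K @ points_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Discard projections that fall outside the image.
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[valid].astype(int)

    colors = image[uv[:, 1], uv[:, 0]]   # RGB per surviving point
    return points_cam[valid], colors     # colored 3D points, ready to render
```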
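
For the second item, one common off-board analysis step is reconstructing where the vehicle actually drove from logged motion data, so the trajectory can be replayed in the visualizer. The sketch below uses dead reckoning over a hypothetical log format (speed and yaw-rate pairs); it is an illustration of the idea, not Drive.ai's method.

```python
# A toy localization replay: integrate logged (speed, yaw rate) samples
# into a 2D trajectory. The log format and time step are assumptions.
import math

def integrate_trajectory(log, dt=0.1):
    """log: iterable of (speed_m_s, yaw_rate_rad_s) samples.
    Returns a list of (x, y, heading) poses, starting at the origin."""
    x, y, theta = 0.0, 0.0, 0.0
    poses = [(x, y, theta)]
    for speed, yaw_rate in log:
        theta += yaw_rate * dt
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        poses.append((x, y, theta))
    return poses

# Example: drive straight, then curve gently to the left.
print(integrate_trajectory([(10.0, 0.0)] * 5 + [(10.0, 0.2)] * 5))
```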
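
And for the third item, a label produced during annotation is ultimately just structured data attached to a logged frame. The record below is a plausible shape for a 3D bounding-box label; the field names and schema are illustrative, not Drive.ai's format.

```python
# A minimal sketch of one annotation record: a labeled 3D box in a frame.
from dataclasses import dataclass

@dataclass
class BoxAnnotation:
    frame_id: int    # which logged frame this label belongs to
    label: str       # e.g. "pedestrian", "vehicle", "animal"
    x: float         # center of the 3D box in the vehicle frame (m)
    y: float
    z: float
    length: float    # box extents (m)
    width: float
    height: float
    yaw: float       # box orientation (rad)

ann = BoxAnnotation(frame_id=1042, label="pedestrian",
                    x=12.3, y=-1.8, z=0.9,
                    length=0.6, width=0.6, height=1.7, yaw=0.0)
```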

The company also uses a 3D simulator to evaluate and test the self-driving system. The simulator draws on a comprehensive database of driving scenarios enriched with real-world elements, including oncoming vehicles and changing traffic lights.
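
A scenario like that boils down to a simulation loop that advances scripted actors and checks the system's response. The toy sketch below cycles a traffic light and moves an oncoming vehicle while a simple ego-braking rule is exercised; the timings, speeds, and decision rule are all invented for illustration and say nothing about Drive.ai's simulator internals.

```python
# A toy scenario loop: a cycling traffic light plus an oncoming vehicle.
LIGHT_CYCLE = [("green", 8.0), ("yellow", 3.0), ("red", 6.0)]

def light_state(t):
    """Return the traffic light color at simulation time t (seconds)."""
    t = t % sum(duration for _, duration in LIGHT_CYCLE)
    for color, duration in LIGHT_CYCLE:
        if t < duration:
            return color
        t -= duration

def run_scenario(steps=100, dt=0.1):
    ego_x, ego_speed = 0.0, 10.0       # ego car position (m) and speed (m/s)
    other_x, other_speed = 80.0, -8.0  # oncoming vehicle
    stop_line = 50.0
    for step in range(steps):
        color = light_state(step * dt)
        # Rule under test: brake for a non-green light before the stop line.
        if color != "green" and ego_x < stop_line:
            ego_speed = max(0.0, ego_speed - 3.0 * dt)
        ego_x += ego_speed * dt
        other_x += other_speed * dt
    return ego_x, other_x

print(run_scenario())
```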