
Autonomous Visualization

Autonomous driving visualization tools make massive testing scenarios possible.

Baraja Spectrum-Scan is a lidar system for autonomous driving that replaces several spinning lasers with one stationary device. Fiber optics send and receive signals to small prisms on the exterior. Image courtesy of Baraja.


It is not hyperbole to call autonomous driving development a race. Various cutting-edge technologies, including deep learning, general-purpose graphics processing units (GP-GPUs), artificial intelligence and computer vision, are being adapted to make driverless cars and trucks an everyday reality. Several vendors are racing to create the engineering tools required.

As with most great engineering challenges, testing is where the industry will succeed or flop. Humans average a fatal automotive crash every 100 million vehicle-miles traveled. RAND Corporation estimates it will take 11 billion miles of test driving before the industry achieves broad regulatory approval. Waymo, an Alphabet Inc. (Google) subsidiary, had logged more than 8 million miles as of July 2018. Other leading developers, including Uber and General Motors, are close behind.
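A quick back-of-the-envelope comparison of those two figures shows why real-world driving alone cannot close the gap:

```python
# Compare miles logged on real roads with the RAND estimate (figures as of July 2018).
rand_target_miles = 11e9   # RAND estimate of test miles needed for regulatory confidence
waymo_miles = 8e6          # miles Waymo had logged at the time

print(f"Fraction of the target covered: {waymo_miles / rand_target_miles:.3%}")  # ~0.073%
print(f"Miles remaining: {rand_target_miles - waymo_miles:,.0f}")
```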

Cognata combines 3D geographic data with autonomous driving software to create virtual test drive environments. Image courtesy of Cognata.

“We’ve now test driven in 25 U.S. cities, gaining experience in different weather conditions and terrains: from the snowy streets of Michigan to the steep hills of San Francisco, to the desert conditions of greater Phoenix,” Waymo announced in its blog recently. “And because the lessons we learn from one vehicle can be shared with the entire fleet, every new mile counts even more.”

Simulation is the only reasonable way to log the required miles. Industry leaders and startups alike are adapting visualization and simulation technology to work with lidar, radar, sonar and computer vision. Others are creating virtual driving environments to simulate every imaginable driving scenario.

Two Development Stacks

Two kinds of visualization for simulation are required, says Danny Atsmon, CEO of Cognata, an Israeli startup with experts in deep learning, advanced driver assistance systems (ADAS), 3D graphics and geolocation. The first is fully rendered visualizations. “The images you get from all the sensors should look like the real thing,” says Atsmon. Fully rendered visualizations of driving terrain are required to train autonomous driving and to validate results.

The second form of visualization is non-rendered. To the untrained eye it looks like polygons in motion: cars and pedestrians are rectangles; lanes are lines. It is a semantic view of the world, Atsmon says, the foundational level required to guide systems in their interpretation of the driving environment.

Two software stacks guide autonomous vehicles. The Perception stack takes the raw sensor data and creates an environmental model. “It must be able to present a car 20 meters away at 75° in the center lane,” Atsmon explains. The result is data that vehicles interpret and localize. The Quality stack then makes decisions on how the vehicle interacts with the world. “Where are the objects? Where are you? What do I do?” is how Atsmon describes the work of the Quality stack. “Think of it like two parts of the brain: one part processes vision, one makes sense of the information.”
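To make the division of labor concrete, here is a minimal Python sketch of the two stacks Atsmon describes. The class and function names are illustrative, not Cognata’s API; the perception step is stubbed out where a real system would run neural networks.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ObjectClass(Enum):
    CAR = auto()
    PEDESTRIAN = auto()
    LANE_LINE = auto()

@dataclass
class TrackedObject:
    """One entry in the non-rendered, semantic world model."""
    kind: ObjectClass
    range_m: float       # distance from the ego vehicle, in meters
    bearing_deg: float   # angle relative to the vehicle heading
    lane: str            # e.g. "center"

def perception_stack(raw_sensor_frames) -> list[TrackedObject]:
    """Turn raw camera/lidar/radar frames into an environmental model.
    Detection logic is stubbed out; a real stack runs neural networks here."""
    return [TrackedObject(ObjectClass.CAR, range_m=20.0, bearing_deg=75.0, lane="center")]

def quality_stack(world: list[TrackedObject]) -> str:
    """Decide how the vehicle interacts with the world it was handed."""
    if any(o.kind is ObjectClass.CAR and o.range_m < 30.0 for o in world):
        return "slow_down"
    return "maintain_speed"

world_model = perception_stack(raw_sensor_frames=[])
print(quality_stack(world_model))  # -> slow_down
```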

Cognata’s software creates digital twins for testing both the vehicles and the city infrastructure, the latter built from existing 3D geographic data. The autonomous driving software then guides a virtual car through the virtual landscape. Sensors are recreated in the simulation, along with road details down to traffic signs and lane lines. The virtual car does not drive in isolation; other cars and pedestrians, historical traffic conditions and time-of-day lighting are added to the mix. By using real-world geographic data, Cognata’s dynamic traffic model can simulate driving conditions in Mumbai or on the German Autobahn.
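For illustration only, the ingredients described above (a city digital twin, simulated sensors, traffic density and time of day) might be captured in a scenario definition along these lines; the field names are invented for this sketch and are not Cognata’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualTestDrive:
    """Illustrative parameters for one simulated drive through a digital twin."""
    city_model: str                # digital twin built from 3D geographic data
    time_of_day: str               # drives lighting conditions
    traffic_density: float         # derived from historical traffic data (0.0-1.0)
    pedestrians: bool = True
    sensors: list[str] = field(default_factory=lambda: ["camera", "lidar", "radar"])

scenarios = [
    VirtualTestDrive("mumbai_downtown", "dusk", traffic_density=0.9),
    VirtualTestDrive("german_autobahn", "noon", traffic_density=0.4, pedestrians=False),
]
```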

Better Lidar

One of the key sensing technologies in autonomous vehicles is lidar (light detection and ranging), which civil engineers have been using for several years to capture 3D geographic data. Woodside Capital Partners recently predicted the lidar industry will grow into a $10 billion market by 2032 (up from $1.4 billion in 2017), driven by the rapid acceleration of its use in automotive.

Lidar for automotive is a relatively new adaptation; the market has not coalesced around a single method of use or vendor. The challenge is in how the laser moves back and forth to scan its surroundings. Velodyne LiDAR, a large lidar vendor, uses 128 lasers spinning 64 times per second. The moving parts and complexity are fine for stationary platforms or slow-flying drones, but automotive use is proving difficult.

Cesium supplies software for gathering and selectively streaming 3D data in real time for autonomous vehicle visualization. Image courtesy of Cesium.

An Australian startup, Baraja, is working on a new, mechanically simpler approach to lidar as an autonomous driving vision tool. Instead of multiple spinning lasers, a single laser is shot through a lens that refracts infrared light the way a prism refracts visible light. Aiming is done by adjusting the wavelength, not by moving the laser.

Baraja’s co-founders came from the telecom industry, where a similar approach underlies wavelength division multiplexing, in which light is split into many wavelengths but travels inside a single fiber optic cable. Because there is no single optimal point on a car for the sensor, most competing methods install lidar at several points. Baraja uses only one laser, buried deep inside the car. The light pulses travel via fiber optic cables to tiny prisms scattered around the exterior. Baraja believes its Spectrum-Scan technology will significantly lower the cost of producing autonomous vehicles in two ways: a lower initial cost of parts, and lower ongoing maintenance costs from eliminating most moving parts.
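Baraja has not published its exact optics, so the sketch below uses a generic diffraction grating as a stand-in dispersive element simply to illustrate the principle: sweeping the laser’s wavelength sweeps the output angle with no moving parts. The grating pitch and the telecom C-band wavelengths are assumptions made for the example.

```python
import numpy as np

# Illustrative stand-in for a wavelength-steered lidar: a dispersive element
# (here, a diffraction grating) maps each wavelength to a different angle.
GRATING_PITCH_M = 2.0e-6   # assumed groove spacing, 2 micrometers
ORDER = 1                  # first diffraction order

def steering_angle_deg(wavelength_m: float) -> float:
    """Grating equation: sin(theta) = m * lambda / d."""
    return np.degrees(np.arcsin(ORDER * wavelength_m / GRATING_PITCH_M))

# Sweep a telecom-band laser (~1530-1565 nm) across its tuning range:
for wl_nm in (1530, 1545, 1565):
    print(f"{wl_nm} nm -> {steering_angle_deg(wl_nm * 1e-9):.1f} deg")
```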

Repurposing Simulation Tools

Mechanical Simulation Corporation has been developing vehicle dynamics simulation software since 1996. It is now using that expertise to build a new line of simulation products for autonomous driving development. The transition moves Mechanical Simulation beyond the math model of vehicle behavior to the vehicle’s interaction with an ever-changing environment. “Simulated conditions have expanded to include running with the many built-in controllers in hundreds of thousands or even millions of simulations,” says Mechanical Simulation CEO/CTO Dr. Michael Sayers.
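In practice, “hundreds of thousands or even millions of simulations” amounts to a parameter sweep over scenarios and controllers. The sketch below is generic and hypothetical; run_vehicle_model stands in for whatever executes the vehicle dynamics model and is not Mechanical Simulation’s API.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_vehicle_model(speed_kph, friction, controller):
    """Hypothetical stand-in for one vehicle-dynamics run; a real tool would
    execute the math model here and return logged results."""
    return {"speed_kph": speed_kph, "friction": friction,
            "controller": controller, "collision": False}

def run_case(case):
    return run_vehicle_model(*case)

speeds = range(20, 140, 10)           # approach speeds to sweep, km/h
frictions = (0.3, 0.5, 0.8, 1.0)      # road surface friction coefficients
controllers = ("AEB", "ACC", "LKA")   # built-in controllers to exercise

# A real sweep would be far denser; this grid is only 144 cases.
cases = list(product(speeds, frictions, controllers))
with ProcessPoolExecutor() as pool:
    results = list(pool.map(run_case, cases))
print(f"{len(results)} simulations completed")
```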

The new line for autonomous vehicle simulation includes animation resources (pedestrians, bicycles and animals), road surfaces, signage, GPS data and more. The new line supports vehicle sensors and interactive traffic, and can exchange data with MATLAB/Simulink, LabVIEW, ETAS ASCET and other engineering software.

Geospatial Data Fusion

A single autonomous vehicle can collect 4 TB of data per day. The multiple incoming data streams include camera imagery (40 MB/s), lidar points (70 MB/s) and varying amounts of time-based telemetry from GPS, gyroscopes and accelerometers. All this data must be compiled and available in real time as heterogeneous 3D geospatial datasets. Cesium has been in the 3D geospatial streaming industry for years, and is now adding autonomous vehicle development to its services. The company’s past work includes software for tracking every satellite in space and creating sub-centimeter point clouds with more than 6.4 billion data points.
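Those per-sensor rates are roughly consistent with the 4 TB daily figure; as a quick check (assuming about ten hours of operation per day, a figure not given in the article, and ignoring the smaller telemetry streams):

```python
# Sanity-check the cited sensor data rates against the 4 TB/day figure.
camera_mb_s = 40      # camera imagery, MB per second
lidar_mb_s = 70       # lidar points, MB per second
hours_driven = 10     # assumed hours of operation per day

total_mb = (camera_mb_s + lidar_mb_s) * 3600 * hours_driven
print(f"{total_mb / 1e6:.2f} TB per day")   # ~3.96 TB, consistent with ~4 TB/day
```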

Cesium handles only the autonomous vehicle visualization, leaving other parts of the development stack to others. It offers an open, time-dynamic streaming format, CZML, capable of batching multiple frames of input into a single visual. The technology was originally developed for aerospace visual data fusion. The open-source Cesium 3D Tiles format manages the streaming process, delivering only the parts of the 3D model needed for the current virtual view. Cesium is an open standards proponent and has developer relationships with more than 50 software vendors.
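CZML itself is a JSON-based format: an array of packets, the first of which declares the document, with later packets carrying time-tagged properties such as position. The tiny example below, built in Python, shows the general shape of such a document; the coordinates and timestamps are arbitrary example values, not data from any real drive.

```python
import json

# A minimal CZML document: a JSON array of packets. The "position" property
# lists samples as (seconds offset, longitude, latitude, height).
czml = [
    {"id": "document", "name": "test-drive", "version": "1.0"},
    {
        "id": "ego_vehicle",
        "availability": "2018-07-01T00:00:00Z/2018-07-01T00:01:00Z",
        "position": {
            "epoch": "2018-07-01T00:00:00Z",
            "cartographicDegrees": [
                0,  -122.4000, 37.8000, 10.0,   # t = 0 s
                30, -122.4010, 37.8005, 10.0,   # t = 30 s
                60, -122.4020, 37.8010, 10.0,   # t = 60 s
            ],
        },
        "point": {"pixelSize": 10},
    },
]
print(json.dumps(czml, indent=2))
```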



About the Author

Randall Newton

Randall S. Newton is principal analyst at Consilia Vektor, covering engineering technology. He has been part of the computer graphics industry in a variety of roles since 1985.
