What you’ll learn:
- What sensor fusion is and why it’s the future for autonomous vehicles.
- What sensors are needed for sensor fusion.
- Different sensor architectures being utilized by vehicle manufacturers.
Sensor fusion is the future for autonomous vehicles (AVs), allowing them to replicate how human senses work together to provide spatial awareness. So, what exactly is sensor fusion?
It’s the harnessing of data from multiple sensors to build awareness of the events surrounding the vehicle, allowing it to process what’s happening and then take the appropriate action. The most discussed sensors in AVs are LiDARs, radars, and cameras, and when combined via sensor fusion, they complement each other very well.
LiDAR scans its surroundings, enabling the vehicle to detect objects at both high resolution and long distance, which can help prevent accidents. LiDAR replicates our depth perception by providing 3D information about nearby objects, but it doesn’t offer the same level of resolution as a camera. LiDAR is also sensitive to weather conditions such as dense fog or heavy rain.
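As a rough illustration of the 3D information LiDAR provides, the sketch below (plain Python, with hypothetical beam values) converts a single beam’s measured range and scan angles into a Cartesian point. A point cloud is built up one such beam at a time.

```python
import math

def lidar_beam_to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one LiDAR return (spherical coordinates) to a 3D point.

    range_m: measured distance to the reflecting surface, in meters
    azimuth_rad: horizontal scan angle; elevation_rad: vertical beam angle
    """
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

# Example: a return at 25 m, 10 degrees left of center, 2 degrees up
point = lidar_beam_to_point(25.0, math.radians(10), math.radians(2))
print(point)  # ~(24.6, 4.3, 0.9) meters in the sensor frame
```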
Radar is used to detect the speed and distance of objects in the vicinity of the car. Consisting of both a transmitter and a receiver, it sends out radio waves that bounce off surrounding objects and reflect back to the receiver. Through this echolocation-like process, the radar can determine the distance, speed, and direction of nearby objects. While radar doesn’t offer great resolution at range, it has the added benefit of detecting reliably through bad weather.
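To make the echolocation analogy concrete, here’s a minimal sketch (values are hypothetical) of the two basic radar calculations: range from the round-trip time of the radio wave, and radial speed from the Doppler shift of the reflected wave.

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_s: float) -> float:
    """Distance to target: the wave travels out and back, so divide by 2."""
    return C * round_trip_s / 2

def radial_speed(doppler_hz: float, carrier_hz: float) -> float:
    """Closing/receding speed from the Doppler shift of the reflected wave."""
    return doppler_hz * C / (2 * carrier_hz)

# A reflection arriving 400 ns after transmission is ~60 m away
print(radar_range(400e-9))         # ~59.96 m
# A +3.17-kHz shift on a 77-GHz automotive carrier is ~6.2 m/s closing speed
print(radial_speed(3.17e3, 77e9))  # ~6.17 m/s
```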
Lastly, cameras, as one might assume, imitate our sense of sight by creating an image from reflected and refracted light rays. While cameras offer exceptionally good resolution, a camera on its own can’t provide details about the distance and depth of what it’s imaging. And like LiDAR, cameras don’t perform well in certain weather conditions and may struggle with detection at night.
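The reason a single camera can’t recover depth shows up directly in the pinhole projection model: points at different distances along the same ray land on the same pixel. A minimal sketch (hypothetical focal length):

```python
def project(x: float, y: float, z: float, focal_px: float = 800.0):
    """Pinhole projection of a 3D point (camera frame, z forward) to pixel offsets."""
    return (focal_px * x / z, focal_px * y / z)

# Two points on the same ray, 10 m and 20 m ahead, map to the same pixel:
print(project(1.0, 0.5, 10.0))  # (80.0, 40.0)
print(project(2.0, 1.0, 20.0))  # (80.0, 40.0) -- depth information is lost
```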
Combining Sensor Data
With sensor fusion, the data from several, or all, of these sensors—LiDAR, radar, cameras, and more—is brought together. The data from each sensor is combined into a single picture of the events surrounding the vehicle, allowing it to take the proper action. As with human senses, combining various sensory inputs creates a very detailed visuo-spatial awareness.
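One common way to combine such readings is to weight each sensor’s estimate by how much it can be trusted. The sketch below (hypothetical noise figures) fuses a radar and a LiDAR range reading with inverse-variance weighting, the same rule a scalar Kalman-filter update applies; the fused estimate is more certain than either sensor alone.

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # always smaller than either input variance
    return fused, fused_var

# Radar: 42.0 m with variance 1.0 m^2; LiDAR: 41.2 m with variance 0.04 m^2.
# The fused range sits close to the more precise LiDAR reading.
distance, variance = fuse(42.0, 1.0, 41.2, 0.04)
print(f"{distance:.2f} m, variance {variance:.3f}")  # 41.23 m, variance 0.038
```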
All three types of sensors can be combined and implemented in the various design architectures utilized by car manufacturers. One such example is the “zonal” architecture, which may be referred to as an evolved domain architecture—an intermediate step-up from the original, flat electronic-control-unit (ECU) architecture in older vehicles.
Zonal Architecture
With zonal, emphasis is placed on select physical zones in the vehicle like the front, back, central core, or the sides. All ECUs located in a given physical zone connect to the same zonal controller or gateway, regardless of the ECU’s exact function.
Using a zonal design, the gateways can be much closer to the sensors themselves. This greatly simplifies the cabling between hosts and gateways, allowing for better connectivity. Such an approach offers huge advantages in scalability and in the functionality that comes with high-speed Ethernet and other computational resources, making it dependable for both the vehicle’s decision-making and its data processing. Zonal does, however, come at the cost of greater complexity in the gateways, which must manage and route traffic to and from ECUs with very different functions.
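A rough sketch of that routing logic (all names hypothetical): each ECU registers with the gateway of its physical zone regardless of function, and the gateway forwards its traffic over the high-speed backbone.

```python
# Hypothetical zonal map: ECUs attach by location, not by function.
ZONES = {
    "front-left": ["headlamp_ecu", "front_radar", "front_camera"],
    "rear":       ["taillamp_ecu", "rear_radar", "parking_sensors"],
    "central":    ["compute_unit", "gateway_core"],
}

def gateway_for(ecu: str) -> str:
    """Find which zonal gateway an ECU connects to."""
    for zone, ecus in ZONES.items():
        if ecu in ecus:
            return f"gateway_{zone}"
    raise KeyError(f"unknown ECU: {ecu}")

# A functionally unrelated lamp and radar share the same gateway because
# they sit in the same physical zone: short cabling, one Ethernet uplink.
print(gateway_for("headlamp_ecu"))  # gateway_front-left
print(gateway_for("front_radar"))   # gateway_front-left
```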
Zonal architecture also offers a notable advantage for sensor fusion specifically. The vehicle can have multiple zonal controllers that gather data from the sensors, and even compensate for sensors that lack processing performance. The zonal controller can therefore apply any local processing the sensors themselves didn’t perform, such as signal cleanup or local machine learning (ML).
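As an illustration of the kind of local processing a zonal controller might take over from a raw sensor, here’s a minimal moving-average cleanup of a noisy range stream (data values hypothetical), applied before the samples are forwarded upstream:

```python
from collections import deque

class ZonalSmoother:
    """Simple moving-average filter a zonal controller could apply
    to a raw sensor stream before forwarding it upstream."""
    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)

    def update(self, raw: float) -> float:
        self.samples.append(raw)
        return sum(self.samples) / len(self.samples)

smoother = ZonalSmoother(window=3)
for raw in [40.1, 41.9, 40.3, 44.0, 40.2]:  # one spiky reading at 44.0
    print(f"raw {raw:5.1f} -> cleaned {smoother.update(raw):5.2f}")
```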
Central Processing
Another type of architecture considered for modern vehicles implements centralized-processing sensor fusion. Instead of having several ECUs spread throughout the car, all of the domains are merged into one centralized domain-control system, hence the “central processing” name. Central processing is the preferred design of high-tech vehicle manufacturers, such as Tesla and Waymo, as they strive for Level 5 (full) autonomy. Today, this type of architecture typically results in about three centralized controllers rather than the ideal single controller.
Although having all of the domains in one centralized domain controller sounds simpler, it actually complicates things from a processing standpoint. In this type of architecture, the bulk of the processing is performed by the central unit, and the immense amount of data generated by the sensors can turn that unit into a performance bottleneck.
One solution is to rely on state-of-the-art processing units with high-end processing capabilities, specifically built to process automotive sensor data. Another approach is to rely on distributed processing to optimize the system, lower the workload on the centralized controller, and allow for high compute speeds.
Distributed Approach
Different manufacturers will select what’s most appropriate to their company type and history. New entrants may favor the high-end intensive centralized compute approach, whereas traditional manufacturers may favor the latter, distributed solution to better leverage contributions from their suppliers.
With the distributed approach, it’s essential to have multiple modules pushed out to the edge to handle the abundance of data generated by the many different sensors; these include sensor modules, braking-control modules, lighting modules, and so on. Highly sophisticated sensors are often implemented so that they can perform some of the data processing themselves, reducing the bandwidth of information sent back to the central unit or the noise in the received signal. In effect, part of the sensor fusion happens at the edge.
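A sketch of that edge-side reduction (message format hypothetical): rather than streaming every raw sample to the central unit, the sensor module forwards only detections that clear a confidence threshold.

```python
def filter_detections(raw_detections, min_confidence=0.6):
    """Edge-side preprocessing: forward only confident detections,
    cutting the bandwidth needed on the link to the central unit."""
    return [d for d in raw_detections if d["confidence"] >= min_confidence]

raw = [
    {"object": "pedestrian", "range_m": 18.0, "confidence": 0.93},
    {"object": "clutter",    "range_m": 55.0, "confidence": 0.12},
    {"object": "vehicle",    "range_m": 32.5, "confidence": 0.81},
]
kept = filter_detections(raw)
print(f"sent {len(kept)} of {len(raw)} detections upstream")  # sent 2 of 3
```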
For vehicles to become completely autonomous, they need perfect spatial awareness to properly identify and react to the constantly changing environment on the road. It’s hard to say whether zonal or central processing will be the more prevalent type of automotive architecture in the future, but what we do know is that sensor fusion will be critical to replicating human senses in AVs and ultimately achieving Level 5.
Regardless of which architecture is used, making sure it’s properly networked is key for AVs. We’ll explore that topic more in the next article.