
The Critical Role of LiDAR Sensors and Adaptive Computing in Automotive

Nov. 8, 2024
Discover how adaptive-computing technology is extending the capabilities of LiDAR sensors to deliver higher depth resolution and reliability, overcoming challenges in autonomous driving.

What you’ll learn:

  • The evolving role of LiDAR sensors in autonomous driving.
  • Why sensor detection demands higher depth resolution.
  • How adaptive computing can extend LiDAR system capabilities.

 

The autonomous-driving landscape is evolving at a rapid pace. The number of highly automated vehicles shipping each year is set to grow at a CAGR of 41% between 2024 and 2030. This rapid growth has led to unprecedented demand from automotive brands for precise and reliable sensor technology as they seek to deliver accurate, trusted, and, ultimately, fully autonomous driving.

In pursuit of this goal, LiDAR (light detection and ranging) sensors have become indispensable to auto manufacturers and automotive equipment suppliers. They can “read the road” by enabling depth perception and range detection with sufficient resolution for object classification.
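As a rough illustration of that ranging principle (a minimal sketch, not any vendor's implementation; the function name is hypothetical), range follows from the round-trip time of each laser pulse:

```python
# Minimal time-of-flight sketch: range from a pulse's round-trip time.
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """The pulse travels to the target and back, so halve the path."""
    return C * round_trip_s / 2.0

# An echo arriving 1 microsecond after emission is ~150 m away.
print(tof_range_m(1e-6))  # ~149.9 m
```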

Yet, as we move into the next generation of autonomous-driving solutions—from the latest innovations in active safety systems to driverless vehicles—the capabilities of edge systems like LiDAR must be expanded to offer the higher depth resolution and reliability needed for increasingly complex scenarios.

Incorporating adaptive-computing technologies like FPGAs and adaptive SoCs enables companies to achieve the end goal of a comprehensive perception platform. Such a platform would navigate complicated driving environments and identify potential hazards with exceptional precision.

>>Check out this TechXchange for similar articles and videos


TechXchange: LiDAR Technology

LiDAR provides 3D imaging support to applications like automotive and robotics.

Types of LiDAR System Architectures

When examining LiDAR systems, the three primary architecture categories are mechanical (non-solid-state), MEMS (semi-solid-state), and flash-based (solid-state). Each has advantages and disadvantages depending on the application use case.

Mechanical systems are the most widely deployed (Table 1). They use a rotating emitter to send a light pulse, which bounces off an object and returns to a receiver. The emitter rotates extremely fast to achieve a 360-degree field of view; the resulting collection of distance measurements is known as a point cloud. These systems offer long range and a wide field of view, but they're larger and more costly.

Table 1: Non-Solid State: Mechanical
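To make the point-cloud idea concrete, here's a hedged sketch (the function name and sampling step are assumptions for illustration) that converts one rotating-beam return, given as range, azimuth, and elevation, into a Cartesian point; a full revolution of such points forms the 360-degree cloud:

```python
import math

def polar_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one rotating-LiDAR return into a Cartesian point."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# One beam sampled every 0.2 degrees over a full revolution.
cloud = [polar_to_xyz(20.0, i * 0.2, -2.0) for i in range(1800)]
```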

MEMS (microelectromechanical systems) LiDAR replaces the large rotating mechanism with an emitter and a tiny mirror system that deflects the light (Table 2). It's commonly used in self-driving applications today. These systems are smaller, lighter, and cost-effective, but they have a more limited field of view and are susceptible to impact and vibration.

Table 2: Semi-Solid State: MEMS

Flash systems are solid-state and include optical-phased-array (OPA) designs that use an array of optical antennas radiating light at different angles (Table 3). This newer solution has a limited field of view, so several units must be deployed to cover a full 360 degrees.

Table 3: Solid State: Flash-Based Systems
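The coverage constraint mentioned above is simple arithmetic. A small sketch, with hypothetical per-unit field-of-view and overlap figures, shows how many fixed units a full 360 degrees would take:

```python
import math

def units_for_360(fov_deg: float, overlap_deg: float = 0.0) -> int:
    """Units needed to ring a vehicle, given each unit's horizontal
    field of view and a desired overlap between neighbors."""
    effective = fov_deg - overlap_deg
    if effective <= 0:
        raise ValueError("overlap must be smaller than the field of view")
    return math.ceil(360.0 / effective)

# Hypothetical 120-degree flash units with 10 degrees of overlap:
print(units_for_360(120.0, 10.0))  # 4
```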

Companies like AMD produce the FPGAs and adaptive-computing devices that enable each of these LiDAR systems and applications. Regardless of the technique used, FPGAs and adaptive-computing devices can meet the varied size, cost, and resolution requirements of implementations across the LiDAR space.

Overcoming Timing Jitter

The value of LiDAR lies in its ability to deliver image classification, segmentation, and object-detection data, which is essential for 3D vision perception enhanced by artificial intelligence (AI). Such a level of precision can’t be provided by cameras alone, especially when you factor in poor weather or low light, which is why LiDAR has become an essential technology for autonomous driving.

However, LiDAR still must overcome multiple challenges, including timing jitter. When the timing or position of the laser pulses fluctuates, the quality of the reconstructed image degrades, hampering object recognition and depth resolution. Because LiDAR's role in autonomous driving is set to expand, ongoing improvements to the technology are essential.

Adaptive-computing technology can reduce timing jitter and improve resolution thanks to FPGAs that enable faster data processing. FPGAs provide the flexibility to optimize the data path and memory hierarchy for reduced latency, and to offload work to AI engines that adjust the timing of pulses to minimize fluctuation. Ultimately, the smaller the jitter, the more accurately the sensor can recognize an object.
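The jitter-to-depth relationship can be made concrete. In this hedged sketch (illustrative numbers, not measured figures for any device), timing uncertainty maps directly onto range uncertainty through the speed of light, and averaging repeated pulses, one common mitigation, shrinks it by roughly the square root of the pulse count:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_sigma_m(jitter_s: float) -> float:
    """Timing jitter on the round-trip measurement becomes range error."""
    return C * jitter_s / 2.0

def averaged_sigma_m(jitter_s: float, n_pulses: int) -> float:
    """Averaging n independent pulses reduces the error by ~sqrt(n)."""
    return range_sigma_m(jitter_s) / math.sqrt(n_pulses)

print(range_sigma_m(100e-12))         # 100 ps of jitter -> ~1.5 cm
print(averaged_sigma_m(100e-12, 16))  # averaging 16 pulses -> ~0.4 cm
```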

Evolving and Expanding LiDAR Architecture

Currently, vehicles coming off the production line may have just one forward-looking LiDAR. But that’s changing, as next-generation vehicles will have multiple systems, including forward-facing, rear-facing, and side-view LiDARs, for more comprehensive coverage of the road and its surroundings. This expanded LiDAR sensor ecosystem requires powerful and efficient AI-compute platforms. These platforms would process and transmit the increased amount of data generated and permit the high-speed connectivity and low latency needed for the ecosystem to perform effectively.

Using an FPGA-based multiprocessor system-on-chip (MPSoC) can reduce the size of these LiDAR implementations. Because FPGAs are optimized for the edge, they can seamlessly and efficiently interface with several systems to accommodate the explosion of sensors seen in autonomous-driving solutions today. By reducing system size and space, MPSoCs allow multiple LiDARs to work in tandem to generate a comprehensive view of a vehicle's path.
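How multiple LiDARs combine into one view comes down to each sensor's extrinsic calibration. A minimal sketch, assuming hypothetical mounting positions (the names and numbers are illustrative, not a production pipeline), maps each sensor's points into a shared vehicle frame:

```python
import numpy as np

def to_vehicle_frame(points_xyz, rotation, translation):
    """Map one sensor's N x 3 point cloud into the shared vehicle frame
    using that sensor's extrinsic rotation and translation."""
    return points_xyz @ np.asarray(rotation).T + np.asarray(translation)

# Hypothetical rear LiDAR: facing backward (180-degree yaw), mounted
# 3.5 m behind the vehicle origin.
yaw = np.pi
R_rear = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0,          0.0,         1.0]])
t_rear = np.array([-3.5, 0.0, 0.0])

rear_hit = np.array([[10.0, 0.0, 0.0]])  # 10 m ahead of the rear sensor
print(to_vehicle_frame(rear_hit, R_rear, t_rear))  # ~[-13.5, 0, 0]
```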

Plus, because FPGA-based MPSoCs provide the flexibility to be reprogrammed after manufacturing, they can be used for multiple LiDAR systems—including future generations. This adaptability makes it possible for automotive OEMs to drive down system costs and future-proof designs, so that they don’t have to overhaul the original system as next-generation solutions emerge.

Point-Cloud Preprocessing and Machine-Learning Acceleration

Point-cloud images are at the heart of autonomous driving, and being able to create an image by combining individual measurements of an object’s form is critical. Companies are using upwards of 128-channel, digital multi-beam flash LiDAR in some instances to produce these rich point-cloud images. This requires highly capable hardware that can be optimized for the task, with the power to deliver both image and digital signal processing.

For instance, transferring image data via high-speed serial transceivers within the programmable logic (PL) enables high-speed connectivity and data transmission. Parallel processing in the PL also makes it possible to reduce clock speeds and power dissipation. To realize those gains, companies must partition functions between software and the associated hardware accelerators, exploiting the high-bandwidth connectivity between the processing system and the PL.

Ultimately, this produces point-cloud images with depth, signal, and ambient data as part of a simplified sensor architecture. It can unlock more effective signal processing and the high resolution needed for LiDAR to deliver reliable object detection, high-precision 3D mapping, and zero-centimeter minimum range when a vehicle is working in tight surroundings.
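As a hedged sketch of that preprocessing step (the frame dimensions, fields of view, and function name are assumptions for illustration), one flash-LiDAR frame of per-pixel depth, signal, and ambient data can be expanded into a structured point cloud:

```python
import numpy as np

def flash_frame_to_cloud(depth_m, signal, ambient,
                         h_fov_deg=120.0, v_fov_deg=32.0):
    """Expand one flash frame into an N x 6 cloud:
    x, y, z, depth, signal strength, ambient level."""
    rows, cols = depth_m.shape
    az = np.radians(np.linspace(-h_fov_deg / 2, h_fov_deg / 2, cols))
    el = np.radians(np.linspace(-v_fov_deg / 2, v_fov_deg / 2, rows))
    el_g, az_g = np.meshgrid(el, az, indexing="ij")
    x = depth_m * np.cos(el_g) * np.cos(az_g)
    y = depth_m * np.cos(el_g) * np.sin(az_g)
    z = depth_m * np.sin(el_g)
    return np.stack([x, y, z, depth_m, signal, ambient], -1).reshape(-1, 6)

# Hypothetical 128-channel frame: 128 vertical channels x 512 pixels.
depth = np.full((128, 512), 25.0)
cloud = flash_frame_to_cloud(depth, np.ones_like(depth), np.zeros_like(depth))
print(cloud.shape)  # (65536, 6)
```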

Preparing Sensor Technology for Today and Tomorrow

As sensor-detection technology like LiDAR becomes even more integral to the autonomous-driving experience, a robust yet streamlined processing platform that delivers high performance at low latency is essential to achieve the depth resolution required for safety-critical functionality. Adaptive computing brings together AI engines and FPGAs to optimize the object detection and data conditioning this demands, accelerating its growth as the solution automotive brands trust for accurate and reliable performance.

The LiDAR ecosystem will only become more comprehensive as next-generation solutions are created and develop into an established part of the autonomous driving experience. As additional workloads are deployed through a vehicle’s lifecycle, the flexibility brought by adaptive computing can power the evolution and innovation required.

For instance, it could enable in-field software and hardware upgrades that deliver the processing power and low latency LiDAR needs to maintain detection quality. It can also ensure that new and innovative features and algorithms are deployed remotely and securely for future-ready designs.

The compute needed to achieve the sensor detection and depth resolution expected for the automotive use cases of today and tomorrow requires flexibility, powerful processing, and integration. It also needs the modularity to minimize design complexity and costs while maximizing accuracy and reliability. Factoring adaptive computing into LiDAR systems, and into how they're integrated, can unlock the scale of deployment needed to support fully autonomous driving.


About the Author

Wayne Lyons | Automotive Senior Director, AMD

Wayne works with several companies at the forefront of advanced driver-assistance systems (ADAS). As a result, he's closely involved in identifying future platforms and their requirements. These platforms include time-of-flight solutions such as LiDAR and radar designs, along with vision platforms such as front and surround-view camera systems.

Prior to AMD/Xilinx, Wayne spent 20 years at Arm, working in several roles, including IP licensing and heading Asia Pacific sales for Arm. His prior experience involves global marketing for Arm’s embedded market and the introduction of the highly successful Cortex-M family of microcontroller cores. Wayne began his career working in the semiconductor market for Hitachi (now Renesas). He holds a master’s degree in Electronic and Electrical Engineering from Loughborough University in the UK and is a Member of the IET.

