How AI Will Help Pave the Way to Autonomous Driving

May 8, 2018
Artificial intelligence will enable vehicles to manage, make sense of, and respond quickly to real-world data inputs from hundreds of different sensors, but it’s going to take some time.

An entire industry is charging ahead to reach higher-level autonomous driving capabilities. Still, the challenges are many, ranging from the purely technical to regulation- and insurance-related topics, all the way to the moral implications of the actions and decisions such systems derive.

However, the benefits of Level 4 and/or Level 5 autonomous-driving capabilities, as defined by the Society of Automotive Engineers, are also many, particularly with regard to fewer accidents and life-long mobility. This means every aspect of the driving experience will change, with designers at the forefront as they now look to incorporate artificial-intelligence (AI) capabilities to help achieve the highest levels of automation as safely as possible.

The technical challenges to autonomous vehicles, like those facing high-performance wireless networks and low-latency cloud infrastructure, are solvable over time by advancing the state of the art in well-understood design practices and techniques. However, given the foreseen complexity of an autonomous vehicle, AI systems are a particularly promising way to address the huge set of data, scenarios, and real-world decisions that a human brain, consciously or subconsciously, processes today within a short period, and to make all of those decisions with high precision while operating a vehicle.

The focus now is to properly identify, manage, and control the actual input parameters coming from the various sensors that are required to develop a usable representation of the real-world operating environment and the status of the vehicle. These sensors include cameras, radar, LiDAR, ultrasound, and other sources such as accelerometers and gyroscopes; many are already widely used in advanced driver assistance systems (ADAS). A key challenge here is to define and develop models that correlate the available physical signals, existing or to-be-developed AI scenarios, and deep-learning models with the decisions they ultimately produce in real traffic situations.
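
To make this concrete, the short sketch below shows one hypothetical way to collect timestamped readings from several sensor types into a single snapshot of the vehicle's surroundings. The field names and example values are illustrative assumptions, not taken from any particular ADAS stack; the point is simply that every reading carries its source and capture time so later stages can correlate them.

# Minimal sketch of a fused sensor snapshot; names and fields are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorReading:
    sensor: str         # e.g. "camera", "radar", "lidar", "ultrasound", "imu"
    timestamp_s: float  # time the sample was captured, in seconds
    data: dict          # raw or pre-processed payload (detections, point cloud, rates, ...)

@dataclass
class WorldSnapshot:
    """A single, time-stamped view of the environment built from many sensors."""
    timestamp_s: float
    readings: List[SensorReading] = field(default_factory=list)

    def by_sensor(self, sensor: str) -> List[SensorReading]:
        return [r for r in self.readings if r.sensor == sensor]

# Example: combining one radar and one camera sample into a snapshot.
snapshot = WorldSnapshot(
    timestamp_s=12.40,
    readings=[
        SensorReading("radar",  12.39, {"range_m": 35.2, "relative_speed_mps": -4.1}),
        SensorReading("camera", 12.40, {"object": "vehicle", "bounding_box": (410, 220, 60, 40)}),
    ],
)
print(snapshot.by_sensor("radar")[0].data["range_m"])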

Technology Building Blocks as Input Parameters

To date, concepts for autonomous vehicles have been built on multiple technology building blocks, including the aforementioned sensors along with GPS and wireless technologies. The first and foremost challenge is to clearly understand the actual capabilities and boundaries of each technology, as well as its contribution to the overall autonomous-driving system (Fig. 1).

1. Sensors are already used to map the terrain for ADAS-equipped vehicles, but AI will build on these blocks for better safety, more convenience, and energy efficiency. (Source: Keysight Technologies Inc.)

While radar and other optical and RF technologies are well understood, and their testing has been practiced in various industries for decades, new testing challenges need careful consideration once these technologies become part of a complex autonomous system.

Like any other data model, AI depends heavily on its input parameters, which are determined by signal quality, resolution, update cadence, and latency. For example, understanding and optimizing resolution is critically important to correctly detect and identify objects in a real-world driving situation.
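
As a rough illustration of the resolution point, the short calculation below converts a sensor's angular resolution into the lateral separation it can resolve at a given range. The 1-degree figure is an arbitrary example value, not the specification of any real sensor.

# Rough worked example: lateral distance covered by one angular-resolution cell.
# A sensor that can't separate two returns closer than this may merge a pedestrian
# and an adjacent parked car into a single object.
import math

def lateral_resolution_m(range_m: float, angular_resolution_deg: float) -> float:
    """Approximate cross-range resolution at a given distance (small-angle approximation)."""
    return range_m * math.radians(angular_resolution_deg)

for rng in (25, 50, 100, 150):
    print(f"at {rng:3d} m: ~{lateral_resolution_m(rng, 1.0):.2f} m per 1-degree cell")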

On top of that, latency and signal-update information is critically important for two reasons. The first is to get to an almost real-time decision model. The second is to ensure that the correct sets of data are combined and correlated in a way that derives the optimal set of actions and, ultimately, the correct decision.
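
A minimal sketch of that second point, assuming each sensor delivers timestamped samples: before data sets are fused, the newest sample from every source must fall within a common time window, and the set is rejected if any source is too stale. The thresholds here are illustrative assumptions only.

# Sketch: only fuse sensor samples that are close enough in time to describe
# the same real-world moment. Thresholds are illustrative, not normative.
from typing import Dict, Optional

MAX_SKEW_S = 0.05   # samples must lie within a 50-ms window of each other
MAX_AGE_S = 0.10    # and be no older than 100 ms at decision time

def coherent_sample_set(latest: Dict[str, float], now_s: float) -> Optional[Dict[str, float]]:
    """latest maps sensor name -> timestamp of its newest sample (seconds)."""
    if not latest:
        return None
    newest, oldest = max(latest.values()), min(latest.values())
    if now_s - oldest > MAX_AGE_S:      # at least one source is too stale
        return None
    if newest - oldest > MAX_SKEW_S:    # sources disagree about "when"
        return None
    return latest

print(coherent_sample_set({"camera": 12.40, "radar": 12.39, "lidar": 12.38}, now_s=12.42))
print(coherent_sample_set({"camera": 12.40, "radar": 12.10}, now_s=12.42))  # radar too old -> None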

AI Data Processing and Scenario Considerations

Gathering and processing the information from the various sensor inputs is already a challenge on its own, but the real challenge starts when near-real-time drive scenarios are added, with an almost limitless number of variables, nonlinear parameter sets, and complexities. Thus, capturing and analyzing massive volumes of data with agility and speed at scale is a requirement for making informed decisions in situations where time correlation between data sets coming from two or more independent sources is critically important.

Translating it all into a real-world challenge for AI-backed autonomous-driving systems, the expected outcome of such massive data processing is nothing short of getting the right answer in the shortest possible time to determine a proper action to avoid a traffic incident (Fig. 2).

Absolute metrics of performance and quality for AI-based systems haven't been clearly identified yet. Abstraction and approximation techniques based on timing correlations, such as the time-synchronized environments used in network topologies, applied to raw and processed data sets, can provide critical insights and conclusions for the use of a self-learning stimulus/response system. This, together with some functional testing and logical correlation, appears to be a manageable way forward in lieu of an absolute metric of quality.
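
In practice, such a stimulus/response check could be as simple as the hypothetical sketch below: a recorded stimulus is fed to the decision function under test, and a logical, time-correlated property of the response is asserted instead of an absolute quality score. The decide() function is a made-up stand-in for the real system under test.

# Sketch of a stimulus/response functional check. `decide` stands in for the
# (hypothetical) system under test; the assertions encode logical properties
# rather than an absolute quality metric.
def decide(obstacle_distance_m: float, speed_mps: float) -> str:
    """Placeholder decision logic used only to make the sketch runnable."""
    return "brake" if obstacle_distance_m / max(speed_mps, 0.1) < 2.0 else "keep_lane"

def test_brakes_when_time_to_collision_is_short():
    stimulus = {"obstacle_distance_m": 20.0, "speed_mps": 15.0}  # roughly 1.3 s to collision
    response = decide(**stimulus)
    assert response == "brake", f"expected 'brake', got '{response}'"

def test_no_brake_for_distant_obstacle():
    stimulus = {"obstacle_distance_m": 200.0, "speed_mps": 15.0}
    assert decide(**stimulus) == "keep_lane"

test_brakes_when_time_to_collision_is_short()
test_no_brake_for_distant_obstacle()
print("functional stimulus/response checks passed")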

2. Technology has eased driver burden, improved connectivity, and increased safety, but for AI, the challenge is to correlate all of the data and sensor inputs and come to a safe action decision, quickly. (Source: Keysight Technologies Inc.) 

Clearly, autonomous-driving scenarios generate massive sets of data. When this data is combined with an environment that encourages data discovery through iteration, the self-learning mechanisms of AI systems can move faster, experiment more, and learn more quickly.

To put it differently, large sets of data in combination with realistic scenarios and nonlinear parameter sets enable systems and applications to fail safely and learn faster.

At the same time, this also creates questions around the validation of AI models considering real-world driving scenarios and ever-changing environmental components. Despite using self-learning systems, the correlation between input parameters and the respective output is hard to obtain and sometimes even harder to explain and prove. This will lead us back to more traditional simulations as well as emulation techniques, which are currently the only feasible and traceable way to deterministically validate certain aspects of AI testing scenarios.

However, this will reduce the validated set of scenarios to a restricted number of specific target parameters with limited validity, because the benefit of dealing with a huge set of data concurrently isn't available in such setups. For the foreseeable future, this leaves us with the problem that the creation and simulation of scenarios can technically offer a much broader set of test scenarios, while their validation remains a challenge, especially in an environment where a wrong conclusion could end in a fatal incident.
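
As a hedged illustration of what such a restricted, deterministic validation might look like, the sketch below uses a fixed random seed so that every run is reproducible and traceable, varies only a few target parameters (speed, obstacle distance, road friction), and applies a deliberately simplified constant-deceleration stopping-distance rule rather than a full vehicle model.

# Sketch: seeded, reproducible scenario sweep over a few target parameters.
# The stopping-distance rule is deliberately simplified (constant deceleration).
import random

def stopping_distance_m(speed_mps: float, friction: float, g: float = 9.81) -> float:
    return speed_mps ** 2 / (2 * friction * g)

def run_sweep(seed: int = 42, runs: int = 1000) -> int:
    rng = random.Random(seed)                   # fixed seed -> identical scenarios every run
    failures = 0
    for _ in range(runs):
        speed = rng.uniform(5.0, 35.0)          # m/s
        obstacle = rng.uniform(10.0, 150.0)     # m
        friction = rng.uniform(0.3, 0.9)        # wet to dry asphalt
        if stopping_distance_m(speed, friction) > obstacle:
            failures += 1                       # scenario the planner must handle by steering or earlier braking
    return failures

print("scenarios where plain braking is not enough:", run_sweep())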

The Role of AI and Deep Learning

Looking ahead, deep learning is expected to be the most widely adopted approach to developing AI, as it learns and develops algorithms that mimic real-world scenarios. This means that AI will enable technical advances for the required technology building blocks, but it will only create a breakthrough if testing and qualification of end-to-end systems is part of the equation as well. This will become evident when all of the harvested data from a range of sensors is used to enact the correct procedures in a constantly changing driving environment and traffic situations.
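
As a deliberately tiny sketch of the deep-learning idea, the example below trains a two-layer network on synthetic, made-up (distance, closing-speed) data so that it learns a brake/no-brake mapping from examples rather than hand-coded rules. It assumes only NumPy and is nowhere near a production perception or planning stack.

# Tiny two-layer network trained on synthetic data: learn "brake" vs. "no brake"
# from (distance, closing speed) pairs instead of hand-coding the rule.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labels: brake when time-to-collision (distance / closing speed) is under 2 s.
X = rng.uniform([5.0, 1.0], [150.0, 40.0], size=(2000, 2))   # distance_m, closing_speed_mps
y = ((X[:, 0] / X[:, 1]) < 2.0).astype(float).reshape(-1, 1)
X = (X - X.mean(axis=0)) / X.std(axis=0)                     # normalize for stable training

W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)          # hidden-layer weights
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)           # output-layer weights
lr = 0.5

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output = brake probability
    grad_out = (p - y) / len(X)                 # cross-entropy gradient at the output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    W2 -= lr * (h.T @ grad_out); b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);   b1 -= lr * grad_h.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
print("training accuracy on synthetic scenarios:", ((p > 0.5) == (y > 0.5)).mean())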

For true enablement of Level 4 and Level 5 automated driving, the system must be functional in all weather and driving conditions, which is obviously a given requirement. Still, it's a much bigger challenge than is sometimes acknowledged.

It's clear that AI and its associated deep-learning functionality offer the key ingredients to lift technologies like autonomous driving to the next level, especially as alternative approaches aren't considered any more reliable or practical. Creating models and scenarios by simply generating corresponding lines of code is obviously a nonstarter, as the bug rate of any human-coded software would invalidate the approach based on the sheer volume, complexity, and variety involved.

On the other hand, it's obvious that data quality, input parameters, and model definition, as well as scenario-validation tasks, are critical elements to ensure that any data-driven AI system with its self-learning capabilities is properly used, questioned, and validated. This serves the overall goal of enabling safe autonomous driving in an increasingly complex and dense traffic environment. Putting these different pieces of the puzzle together effectively isn't easily resolved, though.

About the Author

Michael Reser | Director of Technical Marketing, Automotive and Energy Solutions

Michael Reser manages the business development and portfolio management function for Keysight’s automotive and energy solution business unit. He is responsible for strategic/operational sales, business development, and solution portfolio marketing activities with a key focus on current and future test solutions for automotive and energy markets across the target customer’s entire lifecycle.

Michael joined Agilent Technologies Inc. (Keysight Technologies' predecessor company) in 2000 as a business analyst. Before taking up his present position, he held various product marketing, business development, and management positions within Agilent's and Keysight's digital test business.

Born in Marbach, Germany in 1973, Michael holds a Bachelor's in economics and a Master's in Business Administration from the University of Stuttgart-Hohenheim.
