If you’re involved in the design and test of systems or subsystems that may be used in autonomous vehicles, you must study the newly crafted Federal Automated Vehicles Policy. Then you should weigh in on a number of its proposals, not least of which is federal pre-market approval of new products and technologies (p. 73).
Whether or not you’re a “believer” in autonomous vehicles isn’t very relevant. The fact remains that it’s happening, slowly but surely, to some degree or another. Of course, it all depends on your definition of “autonomous.” More importantly, it depends on the government’s definition of autonomous and its general policy on the matter, which the industry needs in order to move forward.
That’s why the Policy released on September 20th is so important, and that importance applies equally to the 60 days following the 20th, during which the Policy is open to public comment. If you’d like to add your thoughts, do it here. If you’re not sure you need to comment, read on.
Of course, some kind of governmental involvement is required to pave the way for whatever form autonomous vehicles take at their various stages of deployment. Someone needs to direct traffic at the federal and state level to ensure common understanding of everything from definitions of autonomous through to paying for the infrastructure to make it all happen.
The National Highway Traffic Safety Administration (NHTSA), working under the auspices of the Department of Transportation (DOT), is the natural owner of this undertaking, as it’s “dedicated to saving lives and improving safety and efficiency in every way Americans move, by planes, trains, automobiles, bicycles, foot, and more.” It reports 35,092 deaths on U.S. roadways in 2015, with 94% of those crashes related to human choice or error.
“An important promise of HAVs is to address and mitigate that overwhelming majority of crashes. Whether through technology that corrects for human mistakes, or through technology that takes over the full driving responsibility, automated driving innovations could dramatically decrease the number of crashes tied to human choices and behavior. HAVs [DOT-specific definition of autonomous, see below] also hold a learning advantage over humans. While a human driver may repeat the same mistakes as millions before them, an HAV can benefit from the data and experience drawn from thousands of other vehicles on the road.” (p. 7)
SAE-Guided Policy
There’s the logic for getting involved. So, starting at the definitions stage, the Policy states clearly that it will abide by SAE International definitions for levels of automation. The SAE definitions divide vehicles into levels based on “who does what, when.” It has six levels, 0 through 5:
• At SAE Level 0, the human driver does everything.
• At SAE Level 1, an automated system on the vehicle can sometimes assist the human driver to conduct some parts of the driving task.
• At SAE Level 2, an automated system on the vehicle can actually conduct some parts of the driving task, while the human continues to monitor the driving environment and performs the rest of the driving task.
• At SAE Level 3, an automated system can both actually conduct some parts of the driving task and monitor the driving environment in some instances, but the human driver must be ready to take back control when requested by the automated system.
• At SAE Level 4, an automated system can conduct the driving task and monitor the driving environment, and the human need not take back control. But, the automated system can operate only in certain environments and under certain conditions.
• At SAE Level 5, the automated system can perform all driving tasks, under all conditions that a human driver could perform them.
The DOT distinguishes between Levels 0-2 and Levels 3-5, based on whether the human operator or the automated system is primarily responsible for monitoring the driving environment. In the new Policy, the term “highly automated vehicle” (HAV) covers SAE Levels 3-5.
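As a quick mental model, the levels and the DOT’s HAV cutoff fit in a few lines of code. This is purely an illustrative sketch — the enum names below are this article’s shorthand, not SAE’s official terminology:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, as summarized in the Policy."""
    NO_AUTOMATION = 0       # human driver does everything
    DRIVER_ASSISTANCE = 1   # system sometimes assists with parts of the task
    PARTIAL = 2             # system conducts parts; human monitors environment
    CONDITIONAL = 3         # system monitors in some instances; human on standby
    HIGH = 4                # no takeover needed, but only in certain conditions
    FULL = 5                # all tasks, under all conditions a human could handle

def is_hav(level: SAELevel) -> bool:
    """The DOT's 'highly automated vehicle' designation covers Levels 3-5,
    i.e. where the automated system is primarily responsible for monitoring
    the driving environment."""
    return level >= SAELevel.CONDITIONAL

print(is_hav(SAELevel.PARTIAL))      # Level 2: False, driver still monitors
print(is_hav(SAELevel.CONDITIONAL))  # Level 3: True, system monitors
```

The dividing line at Level 3 is what triggers the Policy’s HAV-specific guidance discussed below.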
Breaking it Down
The next 104 pages address everything from performance to regulatory tools. Designers of vehicles would already be familiar with having to meet Federal Motor Vehicles Safety Standards (FMVSS) for standard vehicles. These still apply. But as you add HAV capabilities, things like safety, cybersecurity, data sharing, consumer education, and ethical considerations pop up. To help manage the complexity around HAVs, the Policy uses a framework for vehicle performance guidance (see figure).
The Policy asks for development of tests that evaluate the HAV’s ability to operate under certain pre-defined conditions (operational design domain, or ODD), and fall back to a minimal risk condition.
This makes sense—nobody wants accidents. But then the Policy gets a bit “hairy.” It starts by defining 15 areas for which it’s considering asking manufacturers to submit Safety Assessments to the NHTSA’s Office of the Chief Counsel for each HAV system, outlining how they’re meeting the Guidance. The Safety Assessment would cover:
• Data recording and sharing
• Privacy
• System safety
• Vehicle cybersecurity
• Human-machine interface
• Crashworthiness
• Consumer education and training
• Registration and certification
• Post-crash behavior
• Federal, state, and local laws
• Ethical considerations
• Operational design domain
• Object and event detection and response
• Fall back (minimal risk condition)
• Validation methods
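For teams preparing a submission, those 15 areas amount to a coverage checklist. A minimal sketch of tracking it — illustrative only; the set and helper function are this article’s own shorthand, not part of the Policy — might look like:

```python
# The 15 Safety Assessment areas enumerated in the Policy.
SAFETY_ASSESSMENT_AREAS = frozenset({
    "Data recording and sharing",
    "Privacy",
    "System safety",
    "Vehicle cybersecurity",
    "Human-machine interface",
    "Crashworthiness",
    "Consumer education and training",
    "Registration and certification",
    "Post-crash behavior",
    "Federal, state, and local laws",
    "Ethical considerations",
    "Operational design domain",
    "Object and event detection and response",
    "Fall back (minimal risk condition)",
    "Validation methods",
})

def missing_areas(submitted):
    """Return, sorted, the areas a draft Safety Assessment does not yet cover."""
    return sorted(SAFETY_ASSESSMENT_AREAS - set(submitted))

draft = ["Privacy", "System safety", "Vehicle cybersecurity"]
print(len(missing_areas(draft)))  # 12 areas still to document
```

A set difference is enough here because the Policy treats the areas as a flat list; what goes *inside* each area is where the real engineering work lies.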
A new Safety Assessment letter would be required whenever significant updates are implemented.
Then there are over-the-air software updates and hardware updates, which also will require a Safety Assessment submission to the agency.
For system safety, the Policy expects manufacturers to follow established automotive functional-safety practices, including relevant ISO and SAE standards, which includes having well-documented processes.
However, it also emphasizes that thorough, measurable software testing should complement a structured and documented software-development process, and that:
“The automotive industry should monitor the evolution, implementation, and safety assessment of Artificial Intelligence (AI), machine learning, and other relevant software technologies and algorithms to improve the effectiveness and safety of HAVs.”
Wading into the Pre-Approval Waters
Despite acknowledging the speed at which these technologies are changing and being incorporated into current vehicles to make them safer, the NHTSA then throws a wrench in the works by proposing that it be granted pre-market approval authority.
We’re very familiar with pre-market approval; it’s been a hallmark of the Federal Aviation Administration (FAA) and the Food and Drug Administration (FDA). In both cases, it’s been a force for good and for not-so-good. Too often it has slowed innovation and, over time, created enormous barriers to entry for startups.
With autonomous vehicles, this new authority would apply to new products and technologies, which currently can be self-certified by the manufacturer.
So, the question is: Do we want this level of new technology test and development oversight, or not?
Comments and thoughts are more than welcome here, but definitely needed at the Policy comments site. Remember, the 60-day clock started on September 20th.