
George Romanski of Verocel Explains DO-178C Certification for Airborne Equipment

July 25, 2014
A proven process developed and refined over years of practical certification/verification experience provides test coverage and analysis compliance for DO-178B/C.

DO-178C is the latest revision of the standard used to certify software in airborne systems and equipment. The process for showing compliance with DO-178B/C has been refined over many years of practical certification/verification experience, and Verocel has developed new tools to meet DO-178C test coverage and analysis requirements. George Romanski, president and CEO of Verocel, recently discussed the issues involved in certifying and verifying safety-critical systems under standards such as DO-178C.


Wong: DO-178C verification and testing are requirements-based. How are they defined, and what are the levels of requirements necessary to achieve later test/coverage conformance to the standard?

Romanski: High-level requirements (HLRs) should be developed to express the intended behavior of the software at its boundaries with the system. HLRs do not necessarily reflect the structure of the software implementation, but they may form the basis of that implementation. The HLRs may be refined through architectural decisions until the details of the implementation are captured. The intended behavior is then captured as low-level requirements (LLRs) with enough detail that software can be written from them directly. LLRs are often structured hierarchically to reflect the structure of the code; even though LLRs may appear at several levels in this hierarchy, they are all still considered low-level requirements.
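For example (a purely hypothetical illustration): an HLR such as "The unit shall report airspeed on the display bus within 50 ms of a sensor update" might be refined into LLRs for sampling the sensor, converting raw counts to knots, and scheduling the bus write, each detailed enough to code against directly.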

Wong: DO-178C requirements-based traceability verification appears to be hierarchical. Is this true or is it more complex and perhaps bi-directional?

Romanski: At the start, the relationship between HLRs and LLRs must be documented and reviewed. Sometimes these can be simple links, with relationships recorded as adjacent elements in requirement tables, but sometimes this is more complex.

We find that one HLR may be implemented by several LLRs, and sometimes one LLR may trace to several HLRs, so the relationships are not necessarily purely hierarchical. Our experience during audits is that many different designated engineering representatives (DERs) would ask us to explain the relationships between HLRs and LLRs.

To make this easier, we decided to document this traceability using “link” artifacts and “mapping commentary” artifacts. These reviewable artifacts are managed just like the requirements. They have unique identifiers, identified versions, and an internal status attribute (initial, developed, ready-for-review, failed, and passed).

If the traceability relationships are obvious, then simple HLR -> Link -> LLR traces are sufficient. However, if the relationships are not so obvious, then explanations are added as HLR -> (Link & referenced mapping commentary) -> LLR to justify the traces. At Verocel, this is managed using our DO-178C qualified lifecycle management tool VeroTrace, where such relationships are managed automatically.
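As a rough sketch of the idea (this is not VeroTrace's actual data model; the artifact names and fields below are invented for illustration), links and mapping commentary can be modeled as first-class managed objects alongside the requirements themselves:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    """The internal status attribute carried by every managed artifact."""
    INITIAL = "initial"
    DEVELOPED = "developed"
    READY_FOR_REVIEW = "ready-for-review"
    FAILED = "failed"
    PASSED = "passed"

@dataclass
class Artifact:
    """Bookkeeping common to requirements, links, and mapping commentary."""
    uid: str                        # unique identifier
    version: int                    # identified version
    status: Status = Status.INITIAL

@dataclass
class Link(Artifact):
    """A reviewable trace from one HLR to one LLR."""
    hlr_uid: str = ""
    llr_uid: str = ""
    commentary_uid: Optional[str] = None  # mapping commentary for non-obvious traces

# Simple trace: HLR -> Link -> LLR
simple = Link(uid="LNK-042", version=1, hlr_uid="HLR-007", llr_uid="LLR-019")

# Justified trace: HLR -> (Link & referenced mapping commentary) -> LLR
justified = Link(uid="LNK-043", version=1, hlr_uid="HLR-007",
                 llr_uid="LLR-020", commentary_uid="MC-011")

def impacted_hlrs(changed_llr: str, links: list[Link]) -> set[str]:
    """Impact analysis: isolate the HLRs whose traces pass through a changed LLR."""
    return {link.hlr_uid for link in links if link.llr_uid == changed_llr}
```

Because each link carries its own identity, version, and review status, a change to either end of a trace can be isolated to exactly the links that pass through it.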

This additional information makes audits much easier, as DERs can see the explanations and understand how behavior at the software boundary is translated into behavior at the software implementation level. By treating this mapping information as a managed artifact, impact analysis based on HLR or LLR changes is much easier to isolate and manage. It also reduces the effort for re-verification should the requirements or implementation of the system change in the future.

Wong: Are there any special concerns when verifying software against requirements?

Romanski: Tests must be written against test case specifications that use requirements as a basis. The requirements describe the intended behavior, and the tests verify that this is the same as the actual behavior.

It is tempting to automate software testing, and there are companies that will sell their "magic" tools to analyze the code and generate tests from the code structure. After all, if the LLRs are close to the code, then why not? Because this violates DO-178C and all other software standards: the approach risks testing what the code does rather than checking what it is supposed to do. Tests must be requirements-based only, and a DER should fail any other testing approach.

It is important to perform testing at the HLR level, which shows that the software has been integrated correctly. At the LLR level, we are verifying that the software implementation is sound. By sound, I mean that the implementation satisfies the SUNECO principle: SUfficient, NEcessary, and COrrect.

Sufficient means the intended behavior is implemented: any requirements-based test that expects some specific behavior that has not been implemented should fail. Necessary means the software contains no code beyond what the requirements demand: code that is never exercised by requirements-based tests is a sign of unneeded, and therefore suspect, functionality. Correct means that code invoked by a test of some intended behavior produces the expected result.
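As a minimal sketch of the distinction (the saturating-add function, the requirement identifier, and the test are all invented for illustration):

```python
def saturating_add(a: int, b: int, limit: int = 255) -> int:
    """Hypothetical implementation of LLR-100: 'the sum shall saturate at limit'."""
    total = a + b
    return limit if total > limit else total

def test_llr_100_saturation():
    """Requirements-based test traced to LLR-100 (not derived from the code)."""
    assert saturating_add(200, 100) == 255  # sufficient: the intended behavior exists
    assert saturating_add(1, 2) == 3        # correct: the result matches the requirement
    # Necessary is judged the other way around: once every LLR-based test has run,
    # any code still uncovered has no requirement demanding it and must be justified.
```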

Let’s go back to the necessary principle. Showing that software is necessary is important for two reasons:

• In accordance with DO-178C, it shows that there is no unintended functionality. If we perform requirements-based testing and there is code that is not executed, then this would be an indication that the requirements are incomplete or there is additional behavior in the software beyond that specified by requirements. This could be viewed as unexpected behavior.

• When an aircraft is accepted as "safe enough" to carry passengers, we need a measure of what "safe enough" means. Clearly, we need a stopping criterion for testing; otherwise, testing could continue indefinitely. The agreed-upon limit is based on coverage analysis.

Coverage analysis is defined in DO-178C, and its stringency varies with the design assurance level (DAL): statement coverage at level C, decision coverage at level B, and modified condition/decision coverage (MC/DC) at level A. At Verocel, for level A, the highest DAL, we have taken the following approach:

First, we only use low-level requirements-based tests to measure code coverage. The same requirements-based tests used to perform functional testing are rerun while measuring coverage. There are some companies that routinely measure coverage using HLR tests. Their sales pitch says, “You can switch on coverage measurement even before the system is initialized and reaches a steady state. This may cover 50% of the code, so all you have to worry about is the other 50%.”

This is wrong and should be rejected by the DERs. LLRs should be used for code coverage measurement. The LLRs describe the intended behavior of the software, and test cases specify the boundaries and values to be explored while measuring coverage of the software.
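As a sketch of this workflow (using Python's coverage.py as a stand-in for a qualified structural-coverage tool, with a hypothetical llr_tests directory holding the LLR-based suite), the same tests are simply rerun with measurement switched on:

```python
import unittest
import coverage

# Rerun the same LLR-based test suite used for functional testing,
# unmodified, with coverage measurement enabled.
cov = coverage.Coverage()
cov.start()

suite = unittest.defaultTestLoader.discover("llr_tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report()  # uncovered lines point at missing requirements, missing tests, or dead code
```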

Second, some coverage tools work at the source-code level by extracting each function under test so that it can be tested in isolation. The tools generate an environment providing all of the function's interfaces, including global data and the other functions it calls. This type of testing is fine for the individual functions, but it leaves a large hole in the verification of code and data coupling.

Wong: We have spoken about the requirements traceability and testing. DO-178C stresses that data/control coupling must be verified using requirements-based tests. How does Verocel satisfy this objective?

Romanski: At Verocel, we always test the integrated image. Even when we perform unit tests and invoke individual functions, they are always tested as part of the actual program with which they are linked. The software that flies is the software that is tested. This means that manipulations of global data and function call references are the same during test as they are during flight. There are a number of advantages to this approach.

First, by using LLR-based tests, the intended function is covered and its results are checked. Any code that is not covered by tests is discovered, even code inserted by the compiler and never executed.

Second, by testing the integrated image and showing that all code referencing global data and all of the linked functions have been executed, data/control coupling is verified; a toy sketch of this idea follows the third point below.

Third, testing is always focused. When a specific function is being tested, only the tests traced to test cases that are traced to the requirements for that function are used. Coverage data is captured for the compiled file containing the function only, and all other coverage data is discarded.
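To make the second point concrete, here is a loose analogy in Python (DO-178C targets are compiled and linked images, and every name here is invented): because the test runs against the integrated program rather than a stubbed harness, the real global data and the real call references are what get exercised.

```python
# A toy "integrated image": two components sharing global data.
shared_mode = "INIT"              # global data: the data-coupling channel

def set_mode(mode: str) -> None:  # component A writes the global
    global shared_mode
    shared_mode = mode

def annunciator_text() -> str:    # component B reads it (control coupling via the call)
    return f"MODE: {shared_mode}"

def test_mode_annunciation():
    """Requirements-based test exercising both components together, unstubbed."""
    set_mode("CRUISE")
    assert annunciator_text() == "MODE: CRUISE"
```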

For level A software, Verocel measures coverage at the machine code level using our qualified coverage tool VerOCode. Any code inserted by the compiler that is not covered by a test is flagged. This could be due to incomplete requirements, incomplete tests, or code added during implementation, e.g., robustness checks, or additional code inserted by the compiler to make the code faster.

Once coverage is captured and shown on the listings, the VerOCode annotation editor locks the coverage listings and the coverage results so that engineering review annotations may be added, but the rest of the data is locked against change.

Wong: Can you summarize the Verocel approach?

Romanski: The Verocel approach to traceability, testing, and coverage analysis is compliant with the objectives of DO-178C. HLRs are linked to LLRs where the link and the mapping commentary associated with the link are “first-class” reviewable artifacts. LLR-based testing is used to measure coverage of the software. The same tests are used for functional testing and for coverage measurement. The integrated image is used for testing without change. What flies is what is tested.

The rigor required to verify DAL A software must be taken seriously. This is serious business; let's hope all practitioners treat it that way.

George Romanski is the president and CEO of Verocel Inc. He has specialized in the production of software development environments for the past 35 years. His work has focused on compilers, cross compilers, run-time systems, and tools for embedded real-time applications. Since 1992, he has concentrated on software for safety-critical applications and safety-critical verification. He has also been an active member of the following committees: SC-205 (RTCA) – Software Considerations (DO-178C/DO-278A/DO-248C/DO-178C Supplements); SC-190 (RTCA) – Application Guidelines for RTCA DO-178B/ED-12B (Software); UCSWG (Office of the Secretary of Defense) – Unmanned Air Systems Control Segment, Safety and Security Certification Sub Group; and FACE – Future Airborne Capability Environment.

