The development of systems, whether high-assurance or mission-critical, faces a perennial problem: We can envision and therefore want to build systems that are far more capable and complex than we can assure with traditional approaches. The result is predictable: system errors that lead to system failures, which in turn cause losses—sometimes including loss of life.
Many system errors can be traced to erroneous behavior of critical system software. Software, even more than systems, is prone to escape from the envelope of manageable complexity. Software has no physical extent, no physical weight, and no physical power demand per se. The pressures that constrain the features we want to add to systems don’t constrain software. If we can imagine a feature, we can add the feature to software. The result is again predictable: Software that’s far too complex to assure; that fails; and that causes system failure.
Going Formal?
Assurance evidence for software has traditionally been gathered through extensive testing. Unfortunately, testing isn’t exhaustive and thus can reveal only the presence—not the absence—of errors. But we can do better than testing alone. At the system level, we apply analysis as well as testing. For example, when designing an aircraft, aerospace engineers conduct extensive aerodynamic analysis using computational fluid dynamics before going into the wind tunnel. The wind-tunnel tests validate the analytical model, upon which the fundamental assurance is based.
For software, we can apply formal methods. These are techniques based on mathematical representations of software behavior that permit a truly exhaustive examination of the software’s state space to prove the absence of errors. Software testing then validates this comprehensive analysis.
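To make the idea of exhaustive examination concrete, here is a minimal sketch of one formal technique, explicit-state model checking: enumerate every reachable state of a small system and check an invariant in each. The toy two-process lock protocol, its state encoding, and all function names below are illustrative assumptions, not the API of any real verification tool.

```python
from collections import deque

# State: (pc of process 0, pc of process 1, which process holds the lock).
# Each process cycles through "idle" -> "waiting" -> "critical" -> "idle".
INITIAL = ("idle", "idle", None)

def step(state, i, new_pc, **changes):
    """Return a new state with process i's program counter updated."""
    pcs = list(state[:2])
    pcs[i] = new_pc
    holder = changes["holder"] if "holder" in changes else state[2]
    return (pcs[0], pcs[1], holder)

def successors(state):
    """All states reachable in one transition of either process."""
    pc0, pc1, holder = state
    nxt = []
    for i, pc in enumerate((pc0, pc1)):
        if pc == "idle":
            nxt.append(step(state, i, "waiting"))
        elif pc == "waiting" and holder is None:
            nxt.append(step(state, i, "critical", holder=i))  # acquire lock
        elif pc == "critical":
            nxt.append(step(state, i, "idle", holder=None))   # release lock
    return nxt

def mutual_exclusion(state):
    """Invariant: both processes are never in the critical section at once."""
    return not (state[0] == "critical" and state[1] == "critical")

def check(initial, successors, invariant):
    """Breadth-first exhaustive search; returns (holds, counterexample or state count)."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s          # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, len(seen)           # invariant holds in every reachable state

ok, info = check(INITIAL, successors, mutual_exclusion)
print(ok, info)
```

Because the search visits every reachable state, a `True` result is a proof that the invariant holds for this model, in contrast to testing, which samples only some executions. Real systems have state spaces far too large for such naive enumeration, which is why industrial formal methods rely on abstraction, symbolic representations, and deductive proof.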