Build Your Computer System Security From The Ground Up

Nov. 1, 2006
Quick—which of these events really happened?

  • A. A computer worm crashes the safety system in an Ohio nuclear plant.

  • B. A virus halts train service in 23 states.

  • C. A young recluse cracks computers that control California dams.

  • D. A hacker uses a laptop to release 260,000 gallons of raw sewage.

The answer, sad to say, is all of the above. These attacks and thousands like them demonstrate that building a secure perimeter around our computer systems is no longer enough. Firewalls, intrusion detection software, and anti-virus programs are all important. But no matter how robust a perimeter they may create, malicious hackers can and will break through.

What we really need is a new approach to designing the systems we want to protect, an approach that can make those systems inherently tamper-resistant and capable of surviving assaults. Otherwise, we’re simply erecting concrete barriers around a house of cards.

A major shift in cyber crime has made the need for such an approach all the more urgent. Yesterday, hackers cracked systems for thrills and notoriety. Today, they do it for profit. It has become a full-time job staffed by dedicated professionals. If a hacker stands to make money by accessing your data, or by threatening to launch a denial-of-service (DoS) attack on your system if you don’t pay an extortion fee, then you’re a target.

Worse, these professionals are targeting not only corporate IT servers, but also control and supervisory systems—systems that keep factories running, power flowing, and trains from derailing. An attack on a corporate server might be costly, but an attack on a life-critical embedded control system can be catastrophic. Consequently, cyber extortionists consider such systems prime targets.

Truth be told, the principles of creating a design that’s inherently survivable and tamper-resistant aren’t all that new. In fact, many of them were established as far back as the 1970s, when researchers such as Saltzer & Schroeder published seminal papers on the topic. The surprise is how much—and how long—the software industry has ignored them. This omission goes a long way toward explaining why our servers and desktops are so vulnerable to malicious exploits. It also explains why many embedded systems are equally at risk.

Consider the key principle of least privilege, which states that a software component should only have the privileges it needs to perform a given task, and nothing more. If a component needs to, say, read data but has no need to modify that data, then it shouldn't be granted write privileges, either explicitly or implicitly. Otherwise, that component could serve as a leverage point for a malicious exploit or a software bug.
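
To make the idea concrete, here is a minimal sketch, assuming a POSIX environment; the configuration-file path and the fallback user and group IDs are illustrative placeholders. The program requests read-only access and, if it happens to have been started as root, permanently sheds those privileges before processing anything, so a later bug in this component cannot be leveraged into a write or an escalation.

    /* Minimal least-privilege sketch, assuming a POSIX system.
     * The config path and the "nobody" IDs below are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        /* Request only the access actually needed: read, not write. */
        int fd = open("/etc/myapp.conf", O_RDONLY);   /* hypothetical path */
        if (fd == -1) {
            perror("open");
            return EXIT_FAILURE;
        }

        /* If started as root (e.g., to claim a device), give up those
         * privileges permanently before touching any untrusted input. */
        if (getuid() == 0) {
            if (setgid(65534) != 0 || setuid(65534) != 0) {  /* IDs vary by system */
                perror("drop privileges");
                return EXIT_FAILURE;
            }
        }

        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("read %zd bytes of config\n", n);
        }
        close(fd);
        return EXIT_SUCCESS;
    }

The specific calls matter less than the habit: request only the access a given task needs, and shed anything broader at the earliest opportunity.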

As it turns out, most operating systems today are in serious violation of this principle. Device drivers, file systems, and protocol stacks in a monolithic kernel such as Windows or Linux all run in the kernel’s memory address space at the highest privilege level. Each of these services can, in effect, do anything it wants. Consequently, a single programming error or piece of malicious code in any of these components can compromise the reliability and security of the entire system. Imagine a building where a crack in a single brick can bring down the entire structure, and you’ve got the idea.

In response, many embedded system designers are adopting a more modular operating-system architecture, where drivers, protocol stacks, and other system services run outside the kernel as user-space processes. This "microkernel" approach not only enables developers to enforce the principle of least privilege on system services, it can also result in a tamper-resistant kernel that hackers cannot bend or modify. The approach can satisfy other requirements of a secure, survivable system as well, such as fault tolerance (the system will operate correctly even if a driver faults) and rollback (the system will undo the effects of an unwanted operation while preserving its integrity).
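
The fault-tolerance benefit is easy to picture with a generic POSIX sketch (not any particular microkernel's API): a small supervisor launches a driver as an ordinary user-space process and restarts it if it dies abnormally. The driver executable name below is a placeholder.

    /* Generic POSIX sketch: supervise a driver that runs as a plain
     * user-space process, so a driver fault costs a restart, not a crash.
     * "./serial_driver" is a hypothetical driver executable. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {
                /* Child: run the driver with ordinary (non-root) privileges. */
                execl("./serial_driver", "serial_driver", (char *)NULL);
                _exit(127);                 /* exec failed */
            } else if (pid < 0) {
                perror("fork");
                return EXIT_FAILURE;
            }

            int status = 0;
            waitpid(pid, &status, 0);       /* block until the driver exits */

            if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                break;                      /* clean shutdown: stop supervising */

            fprintf(stderr, "driver faulted (status %d), restarting\n", status);
            sleep(1);                       /* brief back-off before restart */
        }
        return EXIT_SUCCESS;
    }

In a genuine microkernel environment this restart logic typically lives in a system-level process manager, but the principle is the same: the driver is just a process, so its failure can be contained and recovered rather than bringing down the kernel.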

Methods such as secure partitioning extend microkernel technology to give applications guaranteed access to computing resources in virtually any scenario. The need for such guarantees is especially urgent in the embedded market. Keeping pace with evolving technologies requires the ability to download and run new software throughout an embedded product’s lifecycle—in-car telematics and infotainment systems, for example. In some cases, this new software may be untrusted, adding further risk.
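
As a rough, Linux-specific analogue of that partitioning idea (a sketch only, assuming a cgroup v2 hierarchy mounted at /sys/fs/cgroup and sufficient privileges; the group name, the 20 percent cap, and the application path are placeholders), the fragment below fences a downloaded, untrusted program into a partition with a hard CPU quota before launching it.

    /* Hedged illustration only: the resource partitioning discussed here is an
     * OS-level guarantee; one rough Linux analogue is a cgroup v2 CPU quota
     * that caps untrusted work so trusted tasks keep their cycles.
     * Assumes cgroup v2 at /sys/fs/cgroup and adequate privileges. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int write_file(const char *path, const char *text)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        int ok = (fputs(text, f) >= 0) ? 0 : -1;
        fclose(f);
        return ok;
    }

    int main(void)
    {
        /* Create a partition for untrusted work and cap it at 20% of one CPU:
         * "20000 100000" means 20 ms of CPU time per 100 ms period. */
        mkdir("/sys/fs/cgroup/untrusted", 0755);   /* may already exist */
        if (write_file("/sys/fs/cgroup/untrusted/cpu.max", "20000 100000") != 0)
            return EXIT_FAILURE;

        /* Place this process into the partition; anything it execs or forks
         * inherits the cap, so it cannot starve trusted tasks. */
        char buf[32];
        snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
        if (write_file("/sys/fs/cgroup/untrusted/cgroup.procs", buf) != 0)
            return EXIT_FAILURE;

        execl("./untrusted_app", "untrusted_app", (char *)NULL);  /* hypothetical */
        perror("execl");
        return EXIT_FAILURE;
    }

A purpose-built embedded OS would usually express this the other way around, as a guaranteed budget for trusted tasks rather than a cap on untrusted ones, but the effect is similar: a runaway or malicious process cannot monopolize the processor.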

To address such concerns, a system must guarantee that existing software tasks always have the resources (e.g., CPU cycles) they need, even if an untrusted application or DoS attack attempts to monopolize the CPU. Properly implemented, resource partitioning can enforce those guarantees without any need for software recoding or extra hardware.

None of the scenarios mentioned earlier caused serious harm—with the possible (and pungent) exception of the sewage incident. They do demonstrate, though, the phenomenal trust we place in complex, software-controlled systems and how vulnerable we become if those systems are compromised. As software designers, developers, and managers, our task is to create systems that are inherently trustworthy.

But trustworthiness isn’t simply an add-on layer. It has to be built from the ground up. Start with a software architecture that embraces fundamental principles of security—such as separation of privilege, fail-safe defaults, complete mediation, and economy of mechanism—and you’ve got a major head start. Fail to do so, and you fight a costly, uphill battle. For proof, consider the endless parade of patches needed to secure our desktops.

When it comes to building secure, survivable systems, what you start with determines what you end up with. Fortunately, the underlying principles we need to embrace aren’t unproven or obscure, but simply good, well-accepted programming practices. The groundwork has already been laid. Let the next generation of innovative and secure systems begin.
