Within the EDA and semiconductor industries, we regularly talk about the importance of technical standards and how they help make our designs successful in silicon. It’s interesting to look outside our neighborhood on occasion to see what others are doing as well. One of the more dramatic examples of technical standards at work is the Internet, which is truly amazing.

Do you remember the promise of the Information Superhighway? We now live on it and sometimes wonder how we survived without it. In many realms, Internet access is considered a basic utility, like water or electricity. The world is completely different from what it was prior to the Internet being used by more than a third of our planet’s population. And it works very well, thanks to standards.


The Standards At Work

The Internet Engineering Task Force (IETF) is one of several organizations that develop, deploy, and maintain the standards that make the Internet work. It is responsible for foundational standards such as HTTP, TCP, and IPv4, which are called inter-domain standards, as well as intra-domain standards like DHCP, ARP, and OSPF.

HTTP (hypertext transfer protocol) is the basis for data communication on the World Wide Web. It has evolved from a standard for fetching Web pages into one that, through its secure variant HTTPS, also helps verify the authenticity of a Web site, among other things. The IETF uses the term “wildly successful” to describe a standard like HTTP that serves purposes not envisioned at its inception.
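To see how simple the protocol’s core remains, here is a minimal sketch in Python that builds a raw HTTP/1.1 request by hand and parses a canned response the way a client would. The host name and response contents are made up for illustration:

```python
# Build a minimal HTTP/1.1 GET request by hand to show the plain-text
# message format that underlies every Web page fetch.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"     # hypothetical host, for illustration
    "Connection: close\r\n"
    "\r\n"                          # blank line ends the header section
)

# A canned server response, parsed the way an HTTP client parses one:
# status line, then headers, then a blank line, then the body.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 5\r\n"
    "\r\n"
    "hello"
)
head, _, body = response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
version, status, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

print(status, reason)           # 200 OK
print(headers["Content-Type"])  # text/html
print(body)                     # hello
```

Everything a browser and server exchange, cookies and caching included, rides on top of this same request/response text format.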

TCP (transmission control protocol) is one of the two original protocols for transferring data between programs running on different computers. (The other is IP, or Internet Protocol, hence the term TCP/IP.) TCP was originally described in a 1974 paper published in the IEEE Transactions on Communications as part of a larger “transmission control program,” which was later split into TCP and IP. Vinton Cerf and Robert Kahn, two famous Internet pioneers, wrote the paper.
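The reliable byte-stream service TCP provides is what the familiar sockets API exposes. A minimal sketch in Python, echoing a message over a TCP connection on the loopback interface (the message and port choice are arbitrary):

```python
import socket
import threading

# A tiny TCP echo exchange over loopback: the server accepts one
# connection and sends back whatever bytes it receives.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def echo_once():
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

t = threading.Thread(target=echo_once)
t.start()

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, tcp")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())  # hello, tcp
```

TCP handles the sequencing, acknowledgment, and retransmission behind the scenes; the application just sees an ordered stream of bytes.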

IPv4 is the fourth version of the Internet Protocol. It defines IP addresses and is in widespread use today. With its 32-bit format, IPv4 allows a maximum of 4,294,967,296 IP addresses. It should come as no surprise, although it usually does, that we are running out of IPv4 addresses because so many devices are connected to the Internet. Indeed, some regional registries have already exhausted their pools.
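The arithmetic is easy to check, and Python’s standard ipaddress module makes the point that a dotted-quad address is just a 32-bit integer underneath (the address used is from the documentation range reserved for examples):

```python
import ipaddress

# The 32-bit IPv4 address space: 2**32 possible addresses.
total = 2 ** 32
print(total)  # 4294967296

# A dotted-quad address is a 32-bit integer underneath.
addr = ipaddress.IPv4Address("192.0.2.1")   # documentation-range example
print(int(addr))                            # 3221225985
print(ipaddress.IPv4Address(3221225985))    # 192.0.2.1
```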

What’s the answer? The IETF’s IPv6 boasts a 128-bit format, offering an astonishing 340,282,366,920,938,463,463,374,607,431,768,211,456, or about 340 undecillion, IP addresses! It’s unlikely we’ll ever use these up, but then again, it used to be said that semiconductor features could never get smaller than… well, you remember.
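That figure is just 2 raised to the 128th power, roughly 3.4 × 10^38, and the same standard-library module handles the eight-group hexadecimal notation IPv6 uses (again with a documentation-range example address):

```python
import ipaddress

# The 128-bit IPv6 address space.
total = 2 ** 128
print(total)  # 340282366920938463463374607431768211456

# IPv6 addresses are written as eight 16-bit hexadecimal groups;
# runs of zero groups can be compressed with "::".
addr = ipaddress.IPv6Address("2001:db8::1")  # documentation-range example
print(addr.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001
```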

DHCP (dynamic host configuration protocol) is used by a server to assign an IP address to a computer. The assigned IP address falls within a specific range of addresses allotted to the server’s network, and addresses are reclaimed and reused as computers join and leave the network. Dynamic IP addresses are more cost-effective than static ones because a limited pool can serve many intermittently connected devices. Static IP addresses, which do not get reassigned, are better suited to services that need a stable endpoint, such as Voice over IP (VoIP), gaming, and virtual private networks (VPNs).

ARP (address resolution protocol) is used to map IP addresses to physical addresses on a local network. A device’s IP address is 32 bits long (in IPv4), while a device on an Ethernet also has a 48-bit hardware address, known as its media access control (MAC) address. An ARP program resolves one to the other so the devices on the network can communicate with each other.
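The kernel keeps the results of those lookups in an ARP cache, which behaves much like the small sketch below. The function name and the addresses are invented for illustration; real ARP broadcasts a “who-has” query on the wire instead of calling a function:

```python
# A toy ARP cache: a table mapping IP addresses to MAC addresses so
# outgoing IP packets can be wrapped in correctly addressed frames.
arp_cache = {}

def resolve(ip, cache, broadcast_lookup):
    """Return the MAC for an IP, asking the network only on a cache miss."""
    if ip not in cache:
        cache[ip] = broadcast_lookup(ip)  # real ARP broadcasts a who-has query
    return cache[ip]

# Stand-in for the local network's answers (made-up addresses).
fake_network = {"192.0.2.10": "02:00:00:00:00:0a"}

mac = resolve("192.0.2.10", arp_cache, fake_network.__getitem__)
print(mac)  # 02:00:00:00:00:0a
```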

OSPF (open shortest path first) is used in an autonomous system to communicate IP routing information to its constituent routers. Each router sends a message about its usable interfaces, its available neighbors, and the cost of using each interface. The routers then build identical topology maps for themselves to decide the most efficient way to communicate with each other at any given moment.
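The “shortest path first” in the name refers to Dijkstra’s shortest-path algorithm, which each router runs over the shared topology map. A minimal sketch on a made-up four-router topology, with link costs as edge weights:

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's shortest-path-first algorithm: cheapest cost to each node."""
    dist = {source: 0}
    heap = [(0, source)]                  # (cost so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                      # stale entry, already improved
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# A hypothetical four-router topology; numbers are OSPF-style link costs.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(shortest_paths(topology, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Because every router computes over the same topology map, they all arrive at consistent routes without any central coordinator, which is the whole point of a link-state protocol.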

Who’s In Charge?

Given these and many other critical standards for a functioning Internet, it would seem that someone or some entity should be in control of it all. Not so. No one is “in charge” of the Internet! This is a very good thing. The evolution of Internet standards is not in the hands of any dictator, benevolent or otherwise. Instead, it’s in the hands of the technologists who know what works and what doesn’t, and they can prove it. Then the users of the standards are free to choose the ones that work best and abandon the ones that don’t.

The cooperation among the people who create, distribute, and maintain the Internet standards is vast and admirable. Does this mean there isn’t contention or disagreement in this community? Of course not. The work to produce Internet standards is no easier than what we face in EDA. All of the participants have their own perspectives, strengths, weaknesses, and missions to accomplish.

As a result, achieving consensus in open standards processes is difficult. We know, however, and so do the Internet standards people, that consensus-driven standards are more likely to be adopted broadly. Adoption is overwhelmingly the best measure of a standard’s success.

The IETF follows an open standardization process that has some interesting differences from the processes we usually use in the EDA industry. Most noteworthy is that there is no voting. Instead, the organization uses a model called “rough consensus, running code.” The working group participants bring their engineering expertise to the table, get a general consensus, and write code to prove that the standard will work.


Rough consensus is a concept that embodies the overall sense of the group. It is not a simple majority, and it can’t even be called a supermajority because it is not achieved by numbers (votes). Instead, the chair of the working group gets a sense of its collective opinion. A show of hands can work, as can a fun technique the IETF uses called “the hum.”

The chair makes a proposal—for example, that a certain feature be added to the standard. The group members who agree with the proposal hum their approval. Based on the sound level in the room, it can be obvious whether or not most people are in agreement. Apparently a person can only hum to a limited volume, so it’s difficult to stuff the ballot-hum box. Let’s try this at our next EDA standards meeting, shall we? However, if only a couple of people are humming but they show code that proves the viability of the construct, the group can change its collective mind and hum right along.

It’s this running code that adds the second dimension to the effectiveness of the IETF’s standards. The group doesn’t wait until the standards are published to determine their market worthiness. It’s as if prototyping is happening in parallel with design. In the EDA standards world, the concept of running code is sometimes practiced, but more behind the scenes and individually rather than as a group. Let’s not try running code at our next EDA standards meeting.

But there’s a key to making rough consensus and running code work. There is a common goal that all participants have, and they are probably reminded of it regularly. They want one and only one global Internet. Specialized protocols used in different places undermine this goal, so there is a conscious effort to prevent it from happening. The benefit is not only a single Internet, but also one that has the most interoperability, scalability, and quality possible.  

What’s Next?

As the world changes and humankind creates products and solutions unimagined a decade ago, the Internet is coming under pressure to advance as well. The current IETF chairperson* sees at least six significant areas that the Internet must address: power, bandwidth, mobility, new applications, smart objects, and infrastructure. Each of these requires enhanced or new standards.

Power consumption is something very close to our industry. All the servers, routers, computers, and electronic devices that make up and use the Internet are hungry for power. As more of them are produced and consumed, the cries for less power usage and heat generation are becoming louder and louder. (Unlike a hum, a cry seems to have no upper bound on its volume.) Internet standards that optimize router performance, for instance, will help ease the cries, as will our own EDA standards for low-power semiconductor design.

Bandwidth goes without saying. With the increasing number of people in the world using the Internet, the demand for access and performance increases too. Today’s massive data centers that power such juggernauts as Google, Facebook, and Amazon require transmission capacity that will not diminish. The relatively tiny handheld devices that have enough computing power to match the last generation’s massive data centers require their own connectivity. Widespread adoption of IPv6 is becoming an imperative.

Regarding those handheld devices, mobility is a major shift in how people are beginning to behave. No more do we want to simply talk to anyone, anywhere and anytime. Now we want to text, video chat, browse the Internet, post content, work, watch movies, play games, monitor our health, get directions, read reviews, shop, bank, and get instant news updates whenever and wherever we want, with the Internet making it all possible. On top of that, our transportation systems are getting more sophisticated, using Internet protocols in their systems to get us to the places we want to be safely and quickly.

With all of this happening, our expectations for the Internet are growing. All the new applications satisfy our craving to be more connected with each other and to have the whole world’s knowledge at our fingertips. We expect the Internet to be omnipresent, yet we want to be safe from harm and have our privacy protected. Internet protocols for information proliferation and Internet standards for protection will play a leading role in satisfying these orthogonal desires.

A transformation is underway that will astound the historians of the future. It’s being called “the Internet of Things,” and it will cause a cultural and behavioral shift greater than the Industrial Revolution or the Renaissance. Things—machines, computers, containers, appliances, shoes, everyday objects—will communicate with each other in ways that we can imagine now and ways that we can’t. Closely related are the Smart Grid, cloud computing, intelligent vehicles and highways, and other technologies that make us smarter. The Internet will provide the means. Each object will need an identification code, in effect an IP address.

These views of the advancing Internet clearly are putting stress on its infrastructure, which is built on standards. Adoption of IPv6 is a prime example of how the standards are evolving and will continue to evolve. The Domain Name System Security Extensions (DNSSEC) are relatively new and are designed to authenticate domain-name data and guard against forged DNS responses. Resource Public Key Infrastructure (RPKI) is a framework being standardized by the IETF to secure routing infrastructure. These are but a few of the standards efforts that are crucial as Internet usage grows.

This article barely scratches the surface of the standards that make the Internet work. It’s a phenomenal undertaking by countless phenomenal professionals. We in the EDA industry can be proud to help design the chips that go into the devices that use the Internet to make mankind great.

*Russ Housley, chair of the Internet Engineering Task Force (IETF), is an accomplished Internet security engineer and a wealth of knowledge. As a fellow standards practitioner, he provided much of the information for this article.

Karen Bartleson is the senior director of community marketing at Synopsys Inc. She has 30 years of experience in semiconductors, joining Synopsys in 1995 as standards manager. Her responsibilities include initiatives that increase customer satisfaction through interoperability, standards support, university relationships, and social media engagements. She also held the position of director of quality at Synopsys for three years. She was elected president of the IEEE Standards Association for the 2013-2014 term. She holds a BSEE from California Polytechnic State University, San Luis Obispo, Calif. She received the Marie R. Pistilli Women in Design Automation Achievement Award in 2003. Her first book, The Ten Commandments for Effective Standards, was published in May 2010.
