Configurable Network Chip Set Handles Packets And Traffic At Wire Speeds

Sept. 16, 2002
Delivering scalable performance from 10 to 40 Gbits/s with full QoS, the TeraPacket chip set cuts line card complexity by two-thirds.

As the systems that support the Internet and Metropolitan-Area Networks (MANs) in both the edge and core are called on to handle more packet traffic, existing line card designs are running out of steam. The level of integration available today typically allows designers to implement an OC-192 (10-Gbit/s) channel per line card. But as bandwidth needs grow from gigabits per second to terabits per second, system racks won't be able to power or support the large number of line cards necessary.

To solve the board space and power challenges and provide a scalable solution, designers at Teradiant Networks concentrated on integrating many line card functions into a chip set that can reduce line card cost and power consumption by two-thirds. The TeraPacket chip set provides scalable performance, allowing designers to implement line cards with 10-, 20-, or 40-Gbit/s aggregate throughputs. The chip set performs both packet processing and traffic management functions along with full quality-of-service (QoS) management at line rates.

The QoS features support packet classification to determine the kind of flow and decide which priority queue each packet belongs in. Policers and markers enforce service-provider throughput agreements. A hierarchical queuing structure provides an extensive set of queues to meet any service provider's priority needs. Schedulers use industry-standard algorithms to determine which queue to service based on priority. Lastly, traffic shapers throttle the flow through an individual queue. Together, these functions let the chip set maintain the right amount of traffic on the network.
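Taken together, those stages form a per-packet pipeline: classify, police and mark, then queue for scheduling and shaping. The sketch below models that flow in software; every name in it is an illustrative assumption, not Teradiant's design.

```c
#include <stddef.h>

/* A minimal software model of the QoS ingress pipeline described
 * above. All names are illustrative assumptions. */
typedef struct {
    int queue_id;      /* priority queue chosen by the classifier */
    int policer_id;    /* which policer/marker covers this flow   */
} flow_class_t;

enum color { GREEN, YELLOW, RED };

/* These externs stand in for the hardware blocks the article names;
 * a real line card runs each stage at wire speed. */
extern flow_class_t classify(const void *pkt, size_t len);
extern enum color   police(int policer_id, size_t len);
extern void         enqueue(int queue_id, const void *pkt, size_t len);

void qos_ingress(const void *pkt, size_t len)
{
    flow_class_t fc = classify(pkt, len);       /* 1. classification   */
    enum color c = police(fc.policer_id, len);  /* 2. policing/marking */
    if (c == RED)
        return;                                 /* out of contract: drop */
    enqueue(fc.queue_id, pkt, len);             /* 3. hierarchical queues;
                                                   the scheduler and shaper
                                                   act on dequeue */
}
```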

The two chips, used in various combinations along with the appropriate memory buffers and network interfaces, let designers implement line cards that pack up to four OC-192 ports, four 10-Gbit Ethernet ports, or 16 OC-48 ports. Thus, systems found in both the network edge and core, such as Internet routers, multiservice switches, hybrid TDM/data switches, MAN switches, and storage-area-network servers, can move more data at lower cost and with less power per channel.

Additionally, the high level of integration provided by the chip set keeps board space to a minimum. For example, at the high end of the performance curve, four 10-Gbit/s channels can squeeze onto one line card using the TeraPacket chip set. In contrast, existing chip solutions typically permit designers to implement just a single 10-Gbit/s port on a line card before exceeding space or power allotments.

To achieve this level of integration, designers at Teradiant developed a flexible, fully configurable, super-pipelined architecture. The company is currently preparing over 20 patent applications to cover its architectural innovations. The chip set divides the work between two devices: the multiservice packet engine and the multiservice traffic manager. Each chip is available in three slightly different versions for designers implementing 10-, 20-, or 40-Gbit/s subsystems.

In a 10-Gbit/s full-duplex system, the line card would include one TN100 Packet Engine and one TN101 Traffic Manager. A 20-Gbit/s solution also requires only one packet engine and one traffic manager. However, such a system would employ the TN200 and TN201, which have expanded I/O and memory interfaces to handle the higher data traffic. Aside from a small packet memory, no other external memory is needed, which reduces system memory cost compared to previous approaches. A 40-Gbit/s full-duplex implementation uses two copies of the TN400 Packet Engine and two copies of the TN401 Traffic Manager (Fig. 1). Here again, the bus interfaces and some internal operating modes have been modified to accommodate the higher data traffic.

The chips are designed to handle all well-established communication protocols, including IPv4, IPv6, ATM, frame relay, the point-to-point protocol (PPP), Ethernet, multiprotocol label switching (MPLS), and the MPLS Martini draft. In a departure from the more traditional software-based network processors that use internal microcode to control operation, the TeraPacket chip set employs a hardwired yet configurable architecture. The hardwired (state-machine) approach enables the chips to execute their operations much faster than a microcode-based solution. At the same time, designers included plenty of user-configurable internal registers and tables to set the various operating parameters for the multiple communications protocols and standards.

A Look At The Chips: Each of the two basic chips integrates many functions that previously required multiple chips to implement. To handle wire-speed packet processing, the TN100/200/400 packet engines provide all of the packet-processing functions across the established protocols (Fig. 2). They can also handle operations that support MPLS tunnels, multiple MPLS label pops and pushes, IP-in-IP tunnels, and generic route encapsulation (GRE) tunnels. For classification, the chip has configurable fields, leverages both internal and external CAM support, and provides extensive policy configurability.
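The label pops and pushes mentioned above amount to prepending or stripping 32-bit MPLS shim entries. As a rough software analogy (the shim layout comes from RFC 3032; the helper names are illustrative, not the packet engine's hardwired datapath):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl()/ntohl() */

/* The 32-bit MPLS shim header of RFC 3032: a 20-bit label, 3 EXP
 * bits, a bottom-of-stack (S) bit, and an 8-bit TTL. */
static inline uint32_t mpls_shim(uint32_t label, uint8_t exp,
                                 uint8_t s, uint8_t ttl)
{
    return ((label & 0xFFFFFu) << 12) | ((uint32_t)(exp & 0x7) << 9) |
           ((uint32_t)(s & 0x1) << 8) | ttl;
}

/* Push: prepend one shim entry, assuming 4 bytes of headroom. */
static uint8_t *mpls_push(uint8_t *pkt, uint32_t label,
                          uint8_t exp, uint8_t s, uint8_t ttl)
{
    uint32_t shim = htonl(mpls_shim(label, exp, s, ttl));
    pkt -= 4;
    memcpy(pkt, &shim, sizeof shim);
    return pkt;
}

/* Pop: strip the top entry and report whether it was the last. */
static uint8_t *mpls_pop(uint8_t *pkt, int *was_bottom)
{
    uint32_t shim;
    memcpy(&shim, pkt, sizeof shim);
    shim = ntohl(shim);
    *was_bottom = (shim >> 8) & 0x1;
    return pkt + 4;
}
```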

Incoming or outgoing packets can be filtered, and virtual LAN (VLAN) channels can be mapped to logical interfaces. To interface with the rest of the system, designers opted for SPI-4.2 ports on both the framer side and the traffic-manager side of the packet engine. In the 10-Gbit/s version, channelization extends down to STS-1 channels; in the 20- and 40-Gbit/s versions, it goes down to STS-3 channels. Each packet engine also keeps track of many statistics that can be used for performance analysis or service-billing arrangements.

The on-chip classifier provides configurable classification fields and uses an on-chip CAM to quickly perform matching operations. The classifier can also expand the CAM with external CAM chips to handle larger searches.
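Conceptually, a CAM match compares a key built from packet fields against every stored value/mask pair in parallel, with the first hit winning. The sequential model below captures those ternary-match semantics; the entry count and field width are illustrative assumptions.

```c
#include <stdint.h>

#define CAM_ENTRIES 1024   /* illustrative size */

typedef struct {
    uint64_t value;    /* bits that must match                 */
    uint64_t mask;     /* 1 = compare this bit, 0 = don't care */
    int      result;   /* e.g., a policy or queue index        */
} cam_entry_t;

static cam_entry_t cam[CAM_ENTRIES];

int cam_lookup(uint64_t key)
{
    /* Hardware checks all entries in parallel in one cycle; this
     * loop models the same priority-ordered behavior. */
    for (int i = 0; i < CAM_ENTRIES; i++)
        if ((key & cam[i].mask) == (cam[i].value & cam[i].mask))
            return cam[i].result;
    return -1;    /* no match */
}
```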

For routing lookup, the engine can handle up to 1 million IP prefixes, up to 1 million MPLS labels, or up to 1 million ATM virtual channels. To support policing, the chip includes dual-bucket policers that perform triple-color packet marking. The chip's packet-classification block classifies packets based on the layer 3 (IP) and layer 4 (TCP) headers to determine the QoS attributes. From those headers, the block derives each packet's priority, queue index, policing index, and control flags.
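The dual-bucket, triple-color behavior matches the shape of the standard two-rate three-color marker (RFC 2698): one token bucket polices the committed rate, the other the peak rate. A color-blind software model follows; all parameter values are illustrative, and nothing here reflects the chip's internal implementation.

```c
#include <stdint.h>

enum color { GREEN, YELLOW, RED };

typedef struct {
    int64_t  tc, tp;      /* current tokens, in bytes           */
    int64_t  cbs, pbs;    /* bucket depths (burst sizes), bytes */
    int64_t  cir, pir;    /* fill rates, bytes per second       */
    uint64_t last_ns;     /* timestamp of the previous update   */
} trtcm_t;

static void trtcm_refill(trtcm_t *m, uint64_t now_ns)
{
    uint64_t dt = now_ns - m->last_ns;
    m->last_ns = now_ns;
    m->tc += (int64_t)((uint64_t)m->cir * dt / 1000000000ull);
    m->tp += (int64_t)((uint64_t)m->pir * dt / 1000000000ull);
    if (m->tc > m->cbs) m->tc = m->cbs;   /* cap at bucket depth */
    if (m->tp > m->pbs) m->tp = m->pbs;
}

static enum color trtcm_mark(trtcm_t *m, int64_t bytes, uint64_t now_ns)
{
    trtcm_refill(m, now_ns);
    if (m->tp < bytes)            /* exceeds peak rate      */
        return RED;
    if (m->tc < bytes) {          /* exceeds committed rate */
        m->tp -= bytes;
        return YELLOW;
    }
    m->tp -= bytes;               /* within contract        */
    m->tc -= bytes;
    return GREEN;
}
```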

The multiservice traffic-manager chip complements the packet engine (Fig. 3). It provides up to four 10-Gbit/s channel interfaces to the packet engine, plus four additional 10-Gbit/s interfaces to the switch fabric. With that aggregate capacity, one traffic-manager chip can connect into multiterabit switch fabrics.

Beyond the high-speed SPI-4.2 interfaces, the chip offers four main functions: queue selection, buffer management, memory control, and packet scheduling. On the switch-fabric side, besides SPI-4.2, the chip can support the new NPF-SI interface. The four functions analyze incoming packets, determine where they go, and buffer and schedule them to ensure maximum throughput.

A 64-bit, 133-MHz PCI-X interface on the traffic-manager chip lets a local control-plane CPU tie into the system and help coordinate its operation. The CPU would typically handle functions like initialization and configuration, interrupt handling, and the processing of special packets (unknown packets or packets that violate predefined parameters, for example).

The ingress queue selection block receives incoming data from the packet engine and places the data on the appropriate virtual output queue for transfer to the switch fabric. For egress operations, the block receives packets from the switch fabric and places the data into the appropriate queue based on the queue ID. The manager chip supports per-label packet queuing, class-based queuing, and priority-based queuing. Unicast and multicast transfers are supported too.
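In software terms, virtual output queuing keeps one queue per pair of fabric egress port and traffic class, so one congested output cannot head-of-line block traffic bound for the others. The sketch below shows the idea; the queue counts and helper names are assumptions, since the article does not give the TN101's internal organization.

```c
#include <stdbool.h>

#define NUM_FABRIC_PORTS 64   /* illustrative sizes */
#define NUM_CLASSES      8
#define NUM_VOQS (NUM_FABRIC_PORTS * NUM_CLASSES)

struct pkt;                               /* opaque packet handle */
extern bool fifo_push(int voq, struct pkt *p);

/* Ingress path: resolve a packet to one of the virtual output
 * queues and enqueue it for transfer to the switch fabric. */
static bool voq_enqueue(int egress_port, int tclass, struct pkt *p)
{
    int voq = egress_port * NUM_CLASSES + tclass;
    return fifo_push(voq, p);             /* false => queue full  */
}
```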

A sophisticated on-chip buffer-management scheme ensures data availability. The scheme also supports weighted random early discard (WRED), tail drop, per-class thresholds, and buffer reservation. Each queue in the traffic-manager chip can be configured with up to three drop options, its length, the minimum number of reserved buffers, and a flag to indicate whether WRED or tail-drop should be applied.
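A generic WRED decision of the kind a buffer manager applies per queue ramps the drop probability linearly between two thresholds on an averaged queue depth. The thresholds and averaging weight below are illustrative; the article states only that each queue carries configurable thresholds and a WRED/tail-drop flag.

```c
#include <stdlib.h>

typedef struct {
    double avg;      /* EWMA of queue depth, in packets   */
    double weight;   /* averaging weight, e.g. 0.002      */
    int    min_th;   /* no drops below this average depth */
    int    max_th;   /* all drops above this depth        */
    double max_p;    /* drop probability at max_th        */
} wred_t;

/* Returns nonzero if the arriving packet should be dropped. */
static int wred_should_drop(wred_t *w, int cur_depth)
{
    w->avg += w->weight * (cur_depth - w->avg);  /* update average */
    if (w->avg < w->min_th)
        return 0;                                /* keep the packet */
    if (w->avg >= w->max_th)
        return 1;                                /* acts like tail drop */
    double p = w->max_p * (w->avg - w->min_th) / (w->max_th - w->min_th);
    return ((double)rand() / RAND_MAX) < p;      /* probabilistic drop */
}
```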

The chip's memory controller interfaces to external standard DRAMs and SRAMs. The controller efficiently balances the traffic load across these memories, maintaining line-rate performance with a peak data bandwidth of 320 Gbits/s. It supports 256 independent streams of incoming packets and performs queue-table management using external high-speed SRAM via a quad-data-rate interface.

The packet scheduler on the chip coordinates all packet transfers to and from the switch fabric and can implement any of several scheduling schemes: strict priority, weighted round robin, deficit round robin, and shaping (rate limiting). Multicast transfers are also coordinated by the scheduler. It can select a single packet from one of two multicast queues, then transfer that packet to the switch fabric, where it's replicated to the appropriate ports based on the multicast group ID.
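Of those disciplines, deficit round robin (after Shreedhar and Varghese) is straightforward to sketch in software: each queue earns a byte quantum per round, which acts as its weight, and may send packets until that credit runs out. The queue and packet helpers below are assumed for illustration.

```c
#define NUM_QUEUES 8   /* illustrative */

struct pkt;
extern struct pkt *queue_head(int q);        /* NULL if queue empty */
extern struct pkt *queue_pop(int q);
extern int         pkt_len(const struct pkt *p);
extern void        send_to_fabric(struct pkt *p);

static int deficit[NUM_QUEUES];
static int quantum[NUM_QUEUES];              /* weight, in bytes    */

void drr_round(void)
{
    for (int q = 0; q < NUM_QUEUES; q++) {
        struct pkt *p = queue_head(q);
        if (!p) {
            deficit[q] = 0;          /* idle queues keep no credit  */
            continue;
        }
        deficit[q] += quantum[q];    /* earn this round's quantum   */
        while (p && pkt_len(p) <= deficit[q]) {
            deficit[q] -= pkt_len(p);
            send_to_fabric(queue_pop(q));
            p = queue_head(q);
        }                            /* leftover credit carries over */
    }
}
```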

Because most functionality on the packet engine and traffic manager is hardwired, software support requirements are minimal. Teradiant provides all the application programming interfaces (APIs) and device drivers (both binaries and source code) on a VxWorks-based platform.

The APIs supplied by Teradiant employ a higher level of abstraction than the low-level device drivers and let the designer create linkages to other software modules. Also available are off-the-shelf third-party-developed IP/MPLS routing stacks that can be ported on top of the Teradiant API. The company offers a reference implementation based on a third-party IP/MPLS stack as well.
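As a purely hypothetical illustration of that layering, a ported routing stack would call intent-level API functions, which in turn drive the low-level device driver. None of the names below comes from Teradiant's actual API; they exist only to show where each layer sits.

```c
#include <stdint.h>

typedef struct tp_dev tp_dev;   /* hypothetical opaque device handle */

/* Driver layer (hypothetical): raw table access, no policy. */
extern int tp_drv_table_write(tp_dev *d, int table_id, int index,
                              const void *entry, int len);

/* API layer (hypothetical): expresses intent, hides table layout.
 * A third-party IP/MPLS stack would call functions at this level
 * rather than touch the hardware directly. */
int tp_api_route_add(tp_dev *d, uint32_t prefix, int prefix_len,
                     int next_hop_port)
{
    struct { uint32_t prefix; int len; int port; } entry =
        { prefix, prefix_len, next_hop_port };
    return tp_drv_table_write(d, 0 /* hypothetical route table */,
                              0, &entry, sizeof entry);
}
```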

Price & Availability
The TeraPacket chip set will be ready for sampling in December. All devices will be housed in flip-chip ball-grid-array packages. The TN100 and 101 come in 1400- and 1100-contact packages, respectively, while the TN200/201 and TN400/401 all come in 2116-contact packages. (However, almost 40% of the contacts are devoted to power and ground connections.) In lots of 10,000 units, the TeraPacket chips for the 10-Gbit/s systems (TN100 and 101) cost $1250 apiece. For 20- and 40-Gbit/s systems, the TN200/201 and TN400/401 all sell for $2600 each.

Teradiant Networks Inc., 2835 Zanker Rd., San Jose, CA 95134; Sales Dept., (408) 519-1729; www.teradiant.com.
