Add 10G Ethernet And InfiniBand, Then Mix Thoroughly

Dec. 15, 2006
Mellanox’s ConnectX host architecture blends 10G Ethernet and 20-Gbit/s InfiniBand.

Cluster building with InfiniBand is becoming ever more common, but these clusters never operate in isolation. They need a connection to the outside world, and that connection runs Ethernet. With the ConnectX hardware architecture from Mellanox, the two networking fabrics come together (Fig. 1).

The ConnectX hardware interface will find a home in Mellanox's next iteration of host adapter chips. The same interface will be used for both InfiniBand and the new Ethernet chips. The first chip, which combines Ethernet and InfiniBand ports, will target cluster nodes that sit between an Ethernet front end and an InfiniBand back end (Fig. 2).

This approach works well because 10-Gbit (10G) Ethernet uses the same serializer-deserializer (SERDES) as InfiniBand. Mellanox implements stateless Ethernet hardware acceleration that delivers significant performance gains with low host overhead, though it offloads less work than a full TCP/IP offload engine (TOE). Most TOE implementations running at 1 Gbit/s already consume more than twice the power of InfiniBand, which runs significantly faster (40 Gbits/s per port).

The InfiniHost III Ex Dual-Port InfiniBand adapter consumes only 6 W. The stateless approach will use more host resources, but the host will already have extra cycles available because the InfiniBand interface imposes significantly less overhead.

COMPATIBILITY IS KEY
ConnectX is compatible with standard IP-based protocols used with Ethernet, including IP, TCP, UDP, ICMP, FTP, ARP, and SNMP, making it interoperable with third-party 1-Gbit/s and 10-Gbit/s Ethernet products. These protocols work over InfiniBand as well, though it's more efficient to use the OpenFabrics interface.
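As a rough illustration of that compatibility claim, the sketch below uses nothing but standard C sockets; the host name "cluster-head" and port 5001 are hypothetical. The same application code runs unmodified whether the kernel routes the connection out a 10G Ethernet port or over IP-over-InfiniBand, because the IP stack hides the fabric underneath.

```c
/* Minimal sketch: a standard TCP client that is fabric-agnostic.
 * The host name "cluster-head" and port 5001 are hypothetical.
 * Whether this traffic rides 10G Ethernet or IP-over-InfiniBand
 * is invisible to the application. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_INET;      /* plain IPv4 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    int rc = getaddrinfo("cluster-head", "5001", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "hello over whichever fabric is underneath\n";
    write(fd, msg, sizeof(msg) - 1);

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```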

The InfiniBand interface will include all of the InfiniHost III features, including OpenFabrics RDMA (remote direct memory access) support. The Ethernet interface doesn't provide RDMA support.
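For readers unfamiliar with the OpenFabrics RDMA model, the fragment below is a minimal sketch of the setup side using the OpenFabrics verbs library (libibverbs): open the first adapter found, allocate a protection domain, and register a buffer so a remote peer can target it directly. Error handling is compressed, and the queue-pair creation and connection establishment that follow are omitted.

```c
/* Minimal sketch of OpenFabrics (libibverbs) RDMA setup:
 * open an adapter, allocate a protection domain, and register
 * a buffer for remote access. Queue-pair creation and connection
 * establishment are omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register 1 MB so the adapter can DMA into it and a remote
     * peer can write it directly, bypassing the host CPU. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }

    /* The remote side needs buf's address and mr->rkey to issue
     * RDMA writes; exchanging them is left out of this sketch. */
    printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```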

Some vendors of TOE Ethernet adapters have promised or are delivering RDMA support (see "iSCSI Does 10G Ethernet" at www.electronicdesign.com, ED Online ID 13285). InfiniBand offers other features, such as quality-of-service support and end-node application congestion management.

PRICE AND AVAILABILITY
Single- and dual-port InfiniBand-only adapters are available from Mellanox right now. The mixed Ethernet/InfiniBand adapters will arrive in the first quarter of 2007. Both 1-Gbit/s and 10-Gbit/s Ethernet interfaces will be available. Pricing is expected to be comparable to the InfiniBand adapters.

Mellanox
www.mellanox.com
