For those who only know DVDs and streaming video: VHS and Betamax were rival consumer-level, analog videotape standards. They both did the same thing, and VHS eventually won out. That's only one example of major competing standards, though.
The Compute Express Link (CXL) and the Cache Coherent Interconnect for Accelerators (CCIX) standards look to repeat a similar epic battle, with both targeting the same problem and using common technology. Each is built on PCI Express (PCIe) and brings features like cache coherence to the party. These features are needed to bring hardware accelerators into the fold in a way that PCIe alone cannot.
PCIe Roots
Designed for peripheral control, PCIe grew out of the parallel PCI bus. At this point, PCIe is much more than PCI: the ubiquitous serial standard is approaching its fifth generation, and the PCI-SIG has done a great job of providing a growing, backward-compatible standard.
The problem is that the requirements and support for PCIe devices are more limited than what's necessary for connecting cache-coherent, virtual-memory devices like processors. That job has typically been handled by processor buses, but those tend to be vendor-specific. This usually wasn't an issue, since multichip/multiprocessor solutions were homogeneous, such as a multichip Xeon server.
The rise of GPGPUs and artificial-intelligence (AI) hardware acceleration, along with the increasingly common integration of other hardware-acceleration platforms like FPGAs, has intensified the need for the functionality provided by these vendor-specific links.
Enter CCIX and CXL.
The CCIX Base Specification 1.0, under the auspices of the CCIX Consortium, has been available since the beginning of the year. The CXL Consortium's CXL standard, initially developed privately, is based on technology from Intel. As noted, both are built on PCIe and coexist with it. One advantage of this approach is that PCIe switch chips can provide interconnect support (see figure). In fact, a CXL or CCIX device is also a PCIe device; these standards essentially act as supersets of PCIe.
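Because a CXL or CCIX device enumerates as an ordinary PCIe device, system software can already spot it with standard PCIe discovery. As a minimal sketch of that idea, the C code below walks the extended-capability list in a config-space snapshot looking for Designated Vendor-Specific Extended Capability (DVSEC) entries, the kind of generic PCIe hook that protocol extensions can use. The fake device and vendor ID built in main() are hypothetical; each standard defines its own discovery details on top of this mechanism.

/* Minimal sketch, not production code: walk a snapshot of a PCIe
   device's extended-capability list and report any DVSEC entries.
   The fabricated device in main() and its vendor ID are made up. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ECAP_START   0x100u   /* extended capabilities begin at 0x100  */
#define CAP_ID_DVSEC 0x0023u  /* DVSEC capability ID per the PCIe spec */

/* Read a 32-bit config-space register without alignment/aliasing UB. */
static uint32_t rd32(const uint8_t *cfg, uint32_t off)
{
    uint32_t v;
    memcpy(&v, cfg + off, sizeof v);
    return v;
}

/* cfg points at a 4-KB snapshot of one device's config space. */
static void scan_dvsec(const uint8_t *cfg)
{
    uint32_t off = ECAP_START;

    while (off) {
        uint32_t hdr = rd32(cfg, off);
        uint32_t id  = hdr & 0xFFFFu;         /* capability ID   */
        uint32_t nxt = (hdr >> 20) & 0xFFFu;  /* next-cap offset */

        if (id == CAP_ID_DVSEC) {
            /* First DVSEC header: bits [15:0] name the owning vendor. */
            uint32_t dv1 = rd32(cfg, off + 4);
            printf("DVSEC at 0x%03x owned by vendor 0x%04x\n",
                   (unsigned)off, (unsigned)(dv1 & 0xFFFFu));
        }
        off = nxt;   /* a next offset of 0 terminates the list */
    }
}

int main(void)
{
    uint8_t cfg[4096] = {0};

    /* Fabricate one DVSEC at 0x100 with a made-up vendor ID, 0x1234. */
    uint32_t hdr = CAP_ID_DVSEC | (1u << 16);  /* version 1, next = 0 */
    uint32_t dv1 = 0x1234u;
    memcpy(cfg + 0x100, &hdr, sizeof hdr);
    memcpy(cfg + 0x104, &dv1, sizeof dv1);

    scan_dvsec(cfg);
    return 0;
}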
CCIX (shown here), like CXL, can support different topologies in addition to taking advantage of PCIe switches.
The challenge for developers is that the host and the accelerator must have matching support: a CCIX accelerator needs a CCIX-capable host, for example, since host CPUs still tend to be the coordinating hardware within a system. In fact, CCIX or CXL peers really do need to work together, because they're all designed to have equal, cache-coherent access to memory.
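What does equal, cache-coherent access to memory buy in practice? As a conceptual sketch only, the C program below uses a second CPU thread as a stand-in for an accelerator: because the memory system keeps both sides coherent, the "accelerator" computes in place and the host merely polls a flag, with no explicit copy-out step. On a real CCIX/CXL link, the interconnect hardware would maintain that coherence between the CPU's caches and the device's.

/* Conceptual sketch only: two CPU threads stand in for a host and a
   cache-coherent accelerator. The acquire/release pair guarantees the
   host sees the finished results once the flag flips; no DMA copy of
   the buffer is ever made. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int results[4];          /* shared buffer, computed in place */
static atomic_int done = 0;     /* completion flag                  */

static void *accelerator(void *arg)
{
    (void)arg;
    for (int i = 0; i < 4; i++)
        results[i] = i * i;     /* the "offloaded" computation */
    atomic_store_explicit(&done, 1, memory_order_release);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, accelerator, NULL);

    /* Host side: poll the coherent flag, then read results directly. */
    while (!atomic_load_explicit(&done, memory_order_acquire))
        ;  /* spin; a real driver would block or take an interrupt */

    for (int i = 0; i < 4; i++)
        printf("results[%d] = %d\n", i, results[i]);

    pthread_join(t, NULL);
    return 0;
}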
Another application these interfaces open up is access to lots of memory. That could be DRAM, flash memory, or other technologies, without the limits of the typical memory bus, which likes to have its DIMMs close at hand and few in number.
Who’s Supporting What?
CCIX and CXL each have a plethora of notable supporters. CCIX's backers include Arm, AMD, IBM, Marvell, Qualcomm, and Xilinx, just to mention a few; Xilinx has already delivered CCIX support for some of its FPGA platforms. CXL's include Intel, Hewlett Packard Enterprise, and Dell EMC. Some companies, like Huawei, are in both camps. The mixes also include connector, software, and test-and-measurement companies such as Amphenol, Microsoft, and Keysight. The lists tend to be rather extensive.
It would really be nice if CCIX and CXL became one. However, without a detailed look at the two standards, it's difficult to pin down their similarities and differences. The challenge remains that compatible hosts need to be available to take advantage of the accelerators or memory being added to a system.
PCIe has been a great success because there was one standard and everyone benefited from the ability to mix and match hosts and peripherals. CCIX and CXL are more ambitious in terms of functionality. The next two years will be critical to their success or failure.