Compute Express Link (CXL) from the CXL Consortium is a cache-coherent interconnect built on PCIe that gives compute engines access to large pools of resources such as memory and solid-state storage. This TechXchange delves into how CXL works, where it's headed, and where it's used now. It also includes articles about selected products.
CXL Overview
Compute Express Link (CXL) is an industry-standard, cache-coherent interconnect based on PCI Express (PCIe). CXL 1.0 was built on PCIe Gen 5, and the standard has improved over multiple releases managed by the CXL Consortium. It typically targets large data-center and cloud computing environments, and it's especially useful for large artificial-intelligence and machine-learning (AI/ML) training environments that put memory capacity and bandwidth in high demand.
The CXL Specification 3.0 was released in 2022. It uses PCIe 6.0 as the physical interface; PCIe 6.0's move to PAM-4 encoding doubled the bandwidth over PCIe 5.0. The CXL 3.0 standard added new features such as fabric support with multi-level switching. It also supports multiple device types per port and improves cache coherency with peer-to-peer DMA and memory sharing.
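To put that doubling in perspective, here's a back-of-the-envelope calculation of raw, per-direction link bandwidth using the published transfer rates. This is a rough sketch that ignores FLIT and protocol overhead, so real throughput will be somewhat lower.

```c
#include <stdio.h>

/* Approximate per-direction link bandwidth: transfer rate (GT/s)
 * times lane count, divided by 8 bits per byte. Encoding and
 * protocol overhead are deliberately ignored in this sketch. */
static double raw_gbps(double gtps, int lanes) {
    return gtps * lanes / 8.0;  /* GB/s per direction */
}

int main(void) {
    /* PCIe 5.0 (CXL 1.x/2.0): 32 GT/s; PCIe 6.0 (CXL 3.x): 64 GT/s */
    printf("PCIe 5.0 x16: ~%.0f GB/s per direction\n", raw_gbps(32.0, 16));
    printf("PCIe 6.0 x16: ~%.0f GB/s per direction\n", raw_gbps(64.0, 16));
    return 0;
}
```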
CXL 3.1 added features like the Trusted-Execution-Environment Security Protocol (TSP). The CXL 3.1 standard also introduced the fabric manager API definition for a port-based routing (PBR) switch, plus inter-host communication support using global integrated memory (GIM) (Fig. 1).
CXL-attached memory is one of the primary uses for CXL. CXL-attached memory cards and modules typically combine a controller with a large amount of DRAM, which host processors, as well as accelerators like FPGAs and GPUs, can access as part of main memory. It makes significantly more memory available to a compute environment than would be possible using local memory alone or proprietary memory-sharing interconnects. This disaggregation of compute and memory allows the overall compute environment to scale.
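In practice, a Linux host typically exposes CXL-attached memory as a CPU-less NUMA node, so standard NUMA APIs can place data on it without any CXL-specific code. Below is a minimal sketch using libnuma; the assumption that the CXL memory appears as node 1 is illustrative and will vary by platform.

```c
#include <numa.h>    /* link with -lnuma */
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* Assumption: the CXL memory expander is exposed as NUMA node 1.
     * Check `numactl --hardware` to find the real node number. */
    const int cxl_node = 1;
    const size_t len = 1UL << 30;  /* 1 GiB */

    /* Allocate pages bound to the CXL node; subsequent loads and
     * stores then travel across the CXL link transparently. */
    void *buf = numa_alloc_onnode(len, cxl_node);
    if (!buf) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, len);  /* plain stores, no special API needed */
    numa_free(buf, len);
    return 0;
}
```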
CXL Sub-Protocols and Device Types
CXL defines three sub-protocols:
- CXL.io
- CXL.cache
- CXL.mem
CXL.io provides DMA and I/O enhancements over and above the standard PCIe definitions. The CXL.cache sub-protocol specifies the interaction between a host and a peripheral device, but with a cache-coherent mode of operation. CXL.mem defines cache-coherent load/store operations that are compatible with a typical processor.
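Because CXL.mem is load/store-compatible, software often needs no special API at all. One way Linux makes this concrete is by exposing a CXL memory region as a device-DAX character device that can be mapped and used like ordinary memory. The sketch below assumes such a device exists at /dev/dax0.0; the path and size are placeholders that depend on how the region was configured (e.g., with daxctl).

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Assumption: a CXL memory region has been set up as a device-DAX
     * instance at /dev/dax0.0. The path is a placeholder. */
    int fd = open("/dev/dax0.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    const size_t len = 2UL << 20;  /* 2 MiB; device-DAX mappings are
                                      typically 2-MiB aligned */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Ordinary loads and stores; the CXL.mem protocol handles the
     * coherent transfers underneath. */
    strcpy((char *)p, "hello from CXL-attached memory");
    printf("%s\n", (char *)p);

    munmap(p, len);
    close(fd);
    return 0;
}
```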
The three device types are listed below; a short sketch after the list summarizes the protocol mapping:
- Type 1 - CXL.io and CXL.cache support
- Type 2 - CXL.io, CXL.cache and CXL.mem support
- Type 3 - CXL.io and CXL.mem support
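For reference, here's a tiny C table that encodes the protocol mix for each device type, along with the canonical example of each (an accelerator with a coherent cache, an accelerator with its own memory, and a memory expander). The struct and names are illustrative, not drawn from any official header.

```c
#include <stdbool.h>
#include <stdio.h>

/* Protocol support matrix for the three CXL device types. */
struct cxl_device_type {
    const char *name;
    bool io;     /* CXL.io: mandatory for every device type */
    bool cache;  /* CXL.cache */
    bool mem;    /* CXL.mem */
};

static const struct cxl_device_type types[] = {
    { "Type 1 (e.g., accelerator/NIC with a cache)",  true, true,  false },
    { "Type 2 (e.g., accelerator with local memory)", true, true,  true  },
    { "Type 3 (e.g., memory expander)",               true, false, true  },
};

int main(void) {
    for (size_t i = 0; i < sizeof types / sizeof types[0]; i++)
        printf("%-47s io=%d cache=%d mem=%d\n",
               types[i].name, types[i].io, types[i].cache, types[i].mem);
    return 0;
}
```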