CXL is a cache-coherent interconnect built on PCIe that gives compute engines access to large pools of resources such as memory and solid-state storage. This TechXchange delves into how CXL works, where it is headed, and where it is used now. It includes articles about selected products as well.
CXL Overview
Compute Express Link (CXL) is an industry-standard, cache-coherent interconnect based on PCI Express (PCIe). CXL 1.0 was built on PCIe Gen 5. The standard, which is managed by the CXL Consortium, has improved over multiple releases. It typically targets large data-center and cloud compute environments, and it is especially useful for large artificial-intelligence and machine-learning (AI/ML) training environments that place memory and storage in high demand.
The CXL 3.0 specification was released in 2022 and uses PCIe 6.0 as the physical interface. PCIe 6.0 switched to PAM-4 encoding, which doubled the bandwidth. The CXL 3.0 standard added new features such as fabric support with multi-level switching. It supports multiple device types per port and improves cache coherency with peer-to-peer DMA and memory sharing. CXL 3.1 added features like the Trusted Execution Environment Security Protocol (TSP). The CXL 3.1 standard also introduced a fabric-manager API definition for port-based routing (PBR) switches and inter-host communication support using global integrated memory (GIM) (Fig. 1).
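To put the bandwidth doubling in concrete terms, a rough back-of-the-envelope calculation is shown below. It assumes the published per-lane transfer rates of 32 GT/s for PCIe 5.0 and 64 GT/s for PCIe 6.0 and ignores encoding and FLIT overhead; the helper function is illustrative, not part of any CXL or PCIe API.

```python
# Rough comparison of PCIe 5.0 vs. PCIe 6.0 raw unidirectional bandwidth.
# Encoding/FLIT overhead is ignored to keep the arithmetic simple.

def raw_bandwidth_gbs(transfer_rate_gt: float, lanes: int) -> float:
    """Raw bandwidth in GB/s: (GT/s per lane) x lanes / 8 bits per byte."""
    return transfer_rate_gt * lanes / 8

pcie5_x16 = raw_bandwidth_gbs(32, 16)  # PCIe 5.0: 32 GT/s per lane (NRZ)
pcie6_x16 = raw_bandwidth_gbs(64, 16)  # PCIe 6.0: 64 GT/s per lane (PAM-4)

print(f"PCIe 5.0 x16: {pcie5_x16:.0f} GB/s")  # 64 GB/s
print(f"PCIe 6.0 x16: {pcie6_x16:.0f} GB/s")  # 128 GB/s
```

PAM-4 signals two bits per symbol instead of one, which is how PCIe 6.0 doubles the transfer rate without doubling the channel's symbol rate.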
CXL-attached memory is one of the primary uses for CXL. CXL-attached memory cards and modules typically include a controller and a large complement of DRAM. Hosts, including CPUs, FPGAs, and GPUs, can access this memory as part of main memory. It allows significantly more memory to be available to a compute environment than would be possible using local storage or proprietary memory-sharing interconnects. This disaggregation of compute and memory allows scalability of the overall compute environment.
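As a toy illustration of why disaggregation aids scalability, the sketch below models hosts drawing capacity from a shared memory pool and returning it when done, rather than being capped by fixed local DRAM. The `MemoryPool` class and the sizes are hypothetical and are not part of any CXL API.

```python
class MemoryPool:
    """Toy model of a shared, CXL-style disaggregated memory pool (not a real API)."""

    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations: dict[str, int] = {}

    def free_gb(self) -> int:
        """Capacity not currently claimed by any host."""
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, host: str, size_gb: int) -> bool:
        """Grant a host extra capacity if the pool can cover it."""
        if size_gb <= self.free_gb():
            self.allocations[host] = self.allocations.get(host, 0) + size_gb
            return True
        return False

    def release(self, host: str) -> None:
        """Return a host's pooled memory so other hosts can use it."""
        self.allocations.pop(host, None)

pool = MemoryPool(capacity_gb=1024)   # hypothetical 1-TB pool
pool.allocate("host-a", 512)          # a memory-hungry training job
pool.allocate("host-b", 256)
print(pool.free_gb())                 # 256
pool.release("host-a")                # capacity flows back to the pool
print(pool.free_gb())                 # 768
```

The point of the model: no single host needs to be provisioned for its worst-case footprint, because capacity migrates to whichever host needs it at the moment.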
CXL Sub-Protocols and Device Types
CXL defines three sub-protocols:
- CXL.io
- CXL.cache
- CXL.mem
CXL.io provides DMA and I/O enhancements over and above the standard PCIe definitions. The CXL.cache sub-protocol specifies the interaction between a host and a peripheral device, but with a cache-coherent mode of operation. CXL.mem defines cache-coherent load/store operations compatible with a typical processor.
The three device types include:
- Type 1 - CXL.io and CXL.cache support
- Type 2 - CXL.io, CXL.cache and CXL.mem support
- Type 3 - CXL.io and CXL.mem support
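The device-type-to-sub-protocol mapping above can be captured in a small lookup table. This Python sketch is for illustration only; the sub-protocol names mirror the standard's terminology, and the `supports` helper is a hypothetical convenience function.

```python
# Sub-protocols carried by each CXL device type, per the list above.
CXL_DEVICE_TYPES = {
    1: {"CXL.io", "CXL.cache"},              # Type 1: caching devices
    2: {"CXL.io", "CXL.cache", "CXL.mem"},   # Type 2: devices with their own memory
    3: {"CXL.io", "CXL.mem"},                # Type 3: memory expansion devices
}

def supports(device_type: int, sub_protocol: str) -> bool:
    """Check whether a given CXL device type carries a sub-protocol."""
    return sub_protocol in CXL_DEVICE_TYPES.get(device_type, set())

print(supports(3, "CXL.mem"))    # True: Type 3 devices expose memory
print(supports(3, "CXL.cache"))  # False: Type 3 devices omit CXL.cache
```

Note that CXL.io appears in every row: it is the common I/O baseline that all three device types share.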
CXL Standards and Architecture
These articles and videos address the CXL standard as well as the overall architecture. The CXL standard is based on PCI Express (PCIe), so we include details about that as well.
CXL Memory and Storage
CXL-attached memory is just one use of CXL, but it is one of the main ones at this point in time. This is why we break out some of the articles that address this aspect of CXL.
CXL Trends and Industry Insights
This section includes articles and interviews about CXL trends, ranging from how CXL is reshaping the data center to how it fits alongside other storage technologies.
CXL Implementations and Products
This section includes articles and videos about products and other CXL implementations. It is designed to provide a representative collection rather than an exhaustive list, as the number of CXL products has grown significantly since the standard's inception. CXL is now being delivered in a wide range of platforms.
CXL and the Data Center
CXL can be used for embedded and server applications, but its primary target is cloud and data-center environments that require massive amounts of memory and storage. CXL provides a scalable mechanism that gives applications direct access to cache-coherent memory.