
CXL for Memory and More

Oct. 18, 2024
Check out more Compute Express Link (CXL) articles and videos.

Compute Express Link (CXL) from the CXL Consortium is a cache-coherent interconnect based on PCIe that gives compute engines access to shared resources such as memory and solid-state storage. This TechXchange delves into how CXL works, where it's headed, and where it's being used today. It includes articles about selected products as well.

On this page, the CXL Overview section offers a short introduction to the CXL technology.

These are TechXchanges that drill down into aspects of CXL. 

TechXchange

CXL Standards and Architecture

This TechXchange examines the Compute Express Link (CXL) architecture and standards.

CXL Memory and Storage

One of the main targets for CXL is memory expansion.

CXL Trends and Industry Insights

Check out trends and insights for Compute Express Link technology.

CXL Implementations and Products

Here's a collection of Compute Express Link products that includes chips, boards, and modules.


CXL Overview

Compute Express Link (CXL) is an industry-standard, cache-coherent interconnect based on PCI Express (PCIe). CXL 1.0 was built on PCIe Gen 5. The standard, managed by the CXL Consortium, has improved over multiple releases. It typically targets large data-center and cloud compute environments, and it's especially useful for large artificial-intelligence and machine-learning (AI/ML) training environments that place memory in high demand.

The CXL Specification 3.0 was released in 2022 and uses PCIe 6.0 as the physical interface. PCIe 6.0 switched to PAM-4 encoding, which doubled the raw bandwidth over PCIe 5.0. The CXL 3.0 standard added new features such as fabric support with multi-level switching. It supports multiple device types per port and improves cache coherency with peer-to-peer DMA and memory sharing.
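The bandwidth doubling is easy to quantify. As a rough illustration only — raw signaling rates for a x16 link, ignoring encoding, FLIT, and protocol overhead:

```python
# Rough per-direction raw link bandwidth. Illustrative arithmetic only;
# encoding, FLIT, and protocol overhead are ignored.

def raw_bandwidth_gbps(transfer_rate_gt: float, lanes: int = 16) -> float:
    """Raw per-direction bandwidth in GB/s: GT/s * lanes / 8 bits per byte."""
    return transfer_rate_gt * lanes / 8

pcie5 = raw_bandwidth_gbps(32)   # PCIe 5.0: 32 GT/s (NRZ)
pcie6 = raw_bandwidth_gbps(64)   # PCIe 6.0: 64 GT/s (PAM-4)
print(pcie5, pcie6)              # 64.0 128.0 (GB/s per direction)
```

Doubling the per-lane transfer rate from 32 to 64 GT/s at the same lane count is what doubles the link bandwidth.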

CXL 3.1 added features like the Trusted-Execution-Environment Security Protocol (TSP). The CXL 3.1 standard also introduced the fabric manager API definition for a port-based routing (PBR) switch, as well as inter-host communication support using global integrated memory (GIM) (Fig. 1).

CXL-attached memory is one of the primary uses for CXL. CXL-attached memory cards and modules typically combine a controller with a large amount of DRAM. Host processors, as well as accelerators like FPGAs and GPUs, can access this DRAM as part of main memory. It makes significantly more memory available to a compute environment than would be possible using local memory or proprietary memory-sharing interconnects. This disaggregation of compute and memory allows scalability of the overall compute environment.
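From software's point of view, "part of main memory" means ordinary loads and stores. On Linux, CXL-attached memory may be exposed as a DAX character device that a process can map directly; the sketch below assumes a hypothetical device path and falls back to an anonymous mapping so the load/store pattern runs on any machine:

```python
# Minimal sketch: map memory into a process and use plain loads/stores.
# DAX_PATH is a hypothetical example; real systems may instead present
# CXL memory as a kernel-managed, CPU-less NUMA node.
import mmap
import os

DAX_PATH = "/dev/dax0.0"  # assumed CXL memory DAX device (illustrative)

def map_cxl_memory(length: int = 4096) -> mmap.mmap:
    """Map the DAX device if present, else an anonymous fallback region."""
    if os.path.exists(DAX_PATH):
        fd = os.open(DAX_PATH, os.O_RDWR)
        try:
            return mmap.mmap(fd, length)
        finally:
            os.close(fd)  # the mapping stays valid after the fd closes
    # Fallback for machines without CXL memory: anonymous mapping.
    return mmap.mmap(-1, length)

mem = map_cxl_memory()
mem[0:4] = b"CXL!"   # an ordinary store...
print(mem[0:4])      # ...and an ordinary load, just like local DRAM
mem.close()
```

The point of the sketch is that no special I/O path is involved: once mapped, CXL memory is addressed with the same load/store instructions as local DRAM.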

CXL Sub-Protocols and Device Types

CXL defines three sub-protocols:

  • CXL.io
  • CXL.cache
  • CXL.mem

CXL.io provides DMA and I/O enhancements over and above the standard PCIe definitions. The CXL.cache sub-protocol specifies the interaction between a host and a peripheral device, but with a cache-coherent mode of operation. CXL.mem defines cache-coherent load/store operations compatible with a typical processor.

The three device types include: 

  • Type 1 - CXL.io and CXL.cache support
  • Type 2 - CXL.io, CXL.cache and CXL.mem support
  • Type 3 - CXL.io and CXL.mem support
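The pairing of device types and sub-protocols above can be captured in a small lookup table. This is a hypothetical helper for illustration (the table contents come from the list above; the function and names are ours, not from the spec):

```python
# Which sub-protocols each CXL device type runs, per the list above.
# Example device classes in the comments are typical uses, not spec text.
DEVICE_TYPES = {
    1: {"CXL.io", "CXL.cache"},             # e.g., caching accelerators
    2: {"CXL.io", "CXL.cache", "CXL.mem"},  # e.g., accelerators with local memory
    3: {"CXL.io", "CXL.mem"},               # e.g., memory-expansion modules
}

def supports(device_type: int, protocol: str) -> bool:
    """Return True if the given CXL device type runs the given sub-protocol."""
    return protocol in DEVICE_TYPES.get(device_type, set())

print(supports(3, "CXL.cache"))  # False: memory expanders don't use CXL.cache
```

Note that CXL.io is common to all three types, which is why a Type 3 memory expander still enumerates like a PCIe device even though its data path is CXL.mem.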

Check Out More TechXchanges

Latest TechXchanges

Check out the newest TechXchanges on Electronic Design.

Electronic Design TechXchange by Category

Check out all our topic-focused TechXchange content collections.
