Memory Fabric Forum Blog
Emulating CXL Shared Memory Devices in QEMU
by Ryan Willis and Gregory Price. Overview: In this article, we will accomplish the following: building and installing a working branch of QEMU, and launching a pre-made QEMU lab with 2 hosts utilizing a shared memory device...
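As a rough sketch of the kind of setup the article walks through, the commands below build upstream QEMU and launch one guest with an emulated CXL type-3 device backed by a shared file (a second guest pointed at the same file would share the memory). This is an illustrative sketch only: the repository URL is upstream QEMU rather than the article's working branch, and the file path, IDs, and sizes are placeholders.

```shell
# Hypothetical sketch -- not the article's exact branch or lab config.
# Build QEMU with x86_64 system emulation (CXL emulation ships upstream).
git clone https://gitlab.com/qemu-project/qemu.git
cd qemu
./configure --target-list=x86_64-softmmu
make -j"$(nproc)"

# Launch a guest with a CXL type-3 volatile memory device whose backing
# file is opened share=on, so another VM can map the same region.
./build/qemu-system-x86_64 -machine q35,cxl=on -m 4G \
  -object memory-backend-file,id=cxl-mem0,mem-path=/tmp/cxl.raw,size=1G,share=on \
  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
  -device cxl-rp,port=0,bus=cxl.1,id=rp0,chassis=0,slot=2 \
  -device cxl-type3,bus=rp0,volatile-memdev=cxl-mem0,id=cxl-dev0 \
  -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G
```

The `share=on` property on the memory backend is what makes cross-VM sharing possible; without it each guest gets a private copy-on-write mapping of the file.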
CXL: A New Memory High-Speed Interconnect Fabric
CXL enables the disaggregation of memory from compute, which can dramatically improve the performance of data-intensive workloads.
Compute Express Link (CXL): What It Is and How It Works
Steve Scargall of MemVerge explains what CXL is, how it works, and why it is a game changer for various applications, such as AI/ML, HPC, databases, and analytics. He also highlights some industry trends for cloud computing and data center infrastructures that adopt CXL standardization.
CXL memory pools: Just how big can they be?
CXL 2.0 will enable memory pooling, which sounds great, if a bit vague. How big can the memory pools be? There is no firm answer yet, but we can take a stab at it. How does a petabyte sound? Or more?
How Compute Express Link (CXL) Can Boost Your AI/ML Performance
Steve Scargall of MemVerge discusses how Compute Express Link (CXL) addresses memory challenges in AI/ML, ensuring optimal performance for complex and data-intensive applications.
CXL-led big memory taking over from age of SAN
CXL 2.0 could enable external memory arrays in much the same way Fibre Channel paved the way for external SAN arrays in the mid-1990s.
Reducing Network Overhead with Compute Express Link (CXL)
CXL improves the performance and efficiency of computing and decreases TCO by streamlining and enhancing low-latency networking and memory coherency.
The Dawn of the CXL Era
As the first server processors that will officially support the CXL 1.1+ memory interconnect, AMD 4th Gen EPYC Processors mark the beginning of the CXL era. CXL (Compute Express Link) is a new industry standard that promises memory capacity and...