HMComp: Extending Near-Memory Capacity using Compression in Hybrid Memory
Paper in proceedings, 2024

Hybrid memories, especially those combining a first-tier near memory using High-Bandwidth Memory (HBM) and a second-tier far memory using DRAM, can realize a large, low-cost, high-bandwidth main memory. State-of-the-art hybrid memories typically use a flat hierarchy in which blocks are swapped between near and far memory based on bandwidth demands. However, this can incur significant overheads in metadata storage and traffic. While using a fixed-size near-memory cache and compressing data in near memory can help, precious near-memory capacity is still wasted by the cache and by the metadata needed to manage a compressed hybrid memory. This paper proposes HMComp, a flat hybrid-memory architecture in which compression techniques free up near-memory capacity to be used as a cache for far-memory data, cutting down swap traffic without sacrificing any memory capacity. Moreover, through a carefully crafted metadata layout, we show that metadata can be stored in less costly far memory, thus avoiding wasting any near-memory capacity. Overall, HMComp offers single-thread speedups of up to 22% (13% on average) and reduces swap traffic by up to 60% (41% on average) compared to flat hybrid-memory designs.

Memory Compression


Hybrid Memory

Memory Management


Qi Shao

Chalmers, Computer Science and Engineering, Computer Engineering

Angelos Arelakis

ZeroPoint Technologies

Per Stenström

ZeroPoint Technologies

Chalmers, Computer Science and Engineering, Computer and Network Systems

Proceedings of the International Conference on Supercomputing

9798400706103 (ISBN)

38th ACM International Conference on Supercomputing, ICS 2024, Kyoto, Japan




