Towards Large-Capacity and Cost-Effective Main Memories
Doctoral thesis, 2017
Large, multi-terabyte main memories per processor socket are instrumental in addressing the continuously growing performance demands of domains such as high-performance computing, databases, and big data. An important objective is to design large-capacity main memories in a way that maximizes their cost-effectiveness while minimizing the performance losses caused by the tradeoffs made to reduce cost. This thesis addresses a number of issues towards this objective.
First, parallel memory protocols, which are key to large main memories, have a limited number of pins. This implies that, to address future capacities, the protocols would have to multiplex the pins and transfer wider addresses over a greater number of cycles, hurting performance. This thesis contributes the concept of adaptive row addressing, comprising three techniques, as a general approach to minimizing the performance losses of such cost-effective parallel memory protocols and, in fact, making them as efficient as an idealized protocol with enough pins to transfer each address in a single cycle.
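
To make the pin-multiplexing cost concrete, the sketch below (a minimal illustration, not part of the thesis; the pin count and row-address widths are assumed values) computes how many cycles one row address needs when it must be split across a fixed set of address pins:

    import math

    # Assumed parameters: the pin count and row-address widths below are
    # illustrative, not taken from any specific memory protocol.
    ADDRESS_PINS = 18  # address pins available on the command/address bus

    def row_address_cycles(row_address_bits: int, pins: int = ADDRESS_PINS) -> int:
        """Cycles needed to transfer one row address when it is
        multiplexed over a fixed number of pins."""
        return math.ceil(row_address_bits / pins)

    # Growing row addresses as per-socket capacity scales.
    for row_bits in (17, 18, 24, 30):
        print(f"{row_bits:2d}-bit row address -> {row_address_cycles(row_bits)} cycle(s)")

As soon as the row address outgrows the pin count, every row activation pays for extra address-transfer cycles, which is the performance loss that adaptive row addressing targets.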
Second, emerging Storage-Class Memory (SCM) technologies can potentially revolutionize main memory design by enabling large-capacity and cost-effective hybrid main memories that combine DRAM and SCM. However, they add multiple dimensions to the design space of main memories, and detailed exploration of such design spaces solely by means of simulation or prototyping is inefficient. This thesis contributes Crystal, an analytic method for partitioning hybrid-memory area between DRAM and SCM at design time, and Rock, a framework for pruning the design spaces of hybrid memories. Crystal and Rock help system architects quickly and correctly identify the most promising design points for subsequent detailed evaluation.
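
As a loose illustration of the kind of design-time question an analytic partitioning method answers, the hypothetical sweep below splits a fixed area budget between DRAM and a denser SCM under a minimum-DRAM constraint. It is not Crystal's actual model; all densities, budgets, and constraints are invented for the example:

    # Assumed, illustrative numbers only.
    AREA_BUDGET = 100.0      # arbitrary area units
    DRAM_GB_PER_UNIT = 1.0   # assumed DRAM density (capacity per area unit)
    SCM_GB_PER_UNIT = 4.0    # assumed SCM density: denser but slower
    MIN_DRAM_GB = 16.0       # assumed minimum DRAM needed for acceptable performance

    def capacities(dram_area_fraction: float):
        """Capacities (GB) of each technology for a given area split."""
        dram_area = AREA_BUDGET * dram_area_fraction
        scm_area = AREA_BUDGET - dram_area
        return dram_area * DRAM_GB_PER_UNIT, scm_area * SCM_GB_PER_UNIT

    best = None
    for step in range(0, 101):
        f = step / 100
        dram_gb, scm_gb = capacities(f)
        if dram_gb < MIN_DRAM_GB:
            continue  # violates the DRAM-capacity constraint
        total = dram_gb + scm_gb
        if best is None or total > best[0]:
            best = (total, f, dram_gb, scm_gb)

    total, f, dram_gb, scm_gb = best
    print(f"best split: {f:.0%} of area to DRAM -> {dram_gb:.0f} GB DRAM + {scm_gb:.0f} GB SCM")

The point of the sketch is only that an analytic formulation lets such splits be evaluated in closed form or by a cheap sweep, instead of simulating every candidate design.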
Third, in hybrid main memories, DRAM is the limited resource, and co-running programs compete for it. Fair and, at the same time, high-performance management of such memories is an important and open issue. To avoid long operating-system overheads, this management has to be performed by hardware. This thesis contributes ProFess, a probabilistic hybrid main memory management framework for high performance and fairness. ProFess comprises two hardware-based mechanisms that cooperate to significantly improve fairness, performance, and energy efficiency compared to the state of the art.
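
The toy decision function below illustrates the general idea of probabilistic hybrid-memory management; it is not ProFess's actual mechanism, and the access counts and probability formula are assumptions made purely for illustration:

    import random

    def migrate_to_dram(scm_page_accesses: int, dram_victim_accesses: int) -> bool:
        """Decide whether an SCM-resident page should displace a DRAM-resident
        victim, with probability that grows with the estimated benefit.
        (Hypothetical policy, for illustration only.)"""
        benefit = scm_page_accesses - dram_victim_accesses
        if benefit <= 0:
            return False
        p = min(1.0, benefit / (scm_page_accesses + dram_victim_accesses))
        return random.random() < p

    # Example: a frequently accessed SCM page versus a cooler DRAM victim.
    print(migrate_to_dram(scm_page_accesses=120, dram_victim_accesses=30))

Randomizing such decisions, rather than using fixed thresholds, is one generic way for hardware to break ties among competing programs without expensive bookkeeping.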
Keywords: Fairness, Energy Efficiency, Hybrid Main Memory, Performance, Design-Space Exploration, Parallel Memory Protocols, Large-Capacity Local Memory, Hardware-Based Hybrid Memory Management, Cost-Effectiveness