Global Management of Cache Hierarchies
Paper in proceedings, 2010

Cache memories currently treat all blocks as if they were equally important. This assumption is not always valid: for instance, not all blocks deserve a place in the L1 cache. We therefore propose globalized block placement, a global placement algorithm that manages blocks in a cache hierarchy by deciding where in the hierarchy an incoming block should be placed. Our technique adapts its decisions to the access patterns of different blocks. The contributions of this paper are fourfold. First, we motivate our solution by demonstrating the importance of a globalized placement scheme. Second, we present a method to categorize cache block behavior into one of four categories. Third, we present one potential design exploiting this categorization. Finally, we demonstrate the performance of our design. The proposed scheme enhances overall system performance (IPC) by an average of 12% over a traditional LRU scheme while reducing traffic between the L1 and L2 caches by an average of 20% on the SPEC CPU benchmark suite. All of this is achieved with a table as small as 3 KBytes.

Keywords

memory hierarchies

resource management

Authors

Mohamed Zahran

City University of New York (CUNY)

Sally A McKee

Chalmers University of Technology, Department of Computer Science and Engineering (Computer Engineering)

7th ACM International Conference on Computing Frontiers (CF'10), Bertinoro, Italy, 17-19 May 2010

pp. 131-139
978-1-4503-0044-5 (ISBN)

Subject Categories

Computer and Information Science

Information Science

Other Electrical Engineering, Electronic Engineering, Information Engineering

DOI

10.1145/1787275.1787315
