Compression Techniques for Code Size and Data Bandwidth Reduction
Licentiate thesis, 2006
A challenge in the design of high-performance computer systems is how to transfer
data efficiently between main memory and the faster but smaller memory located on
the processor chip. Main memory holds both the program to be executed and the data
it needs to perform its tasks.
This thesis focuses on how compression techniques can be used to store programs
more compactly and on how data can be encoded so that the effective bandwidth
of the link between memory and the processor chip is improved.
In the first part of the thesis, different dictionary-based compression schemes are
evaluated and a new, flexible scheme for efficient compression and execution of compressed
programs is proposed. Since some instruction sequences are more common
than others, the memory needed for the program can be reduced with little hardware
overhead.
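
As a rough illustration of the general idea (not the thesis's actual scheme), the sketch below, in C, replaces instruction words that hit in a small dictionary of frequent instructions with one-byte indices and escapes the rest. The dictionary size, escape byte, and function names are illustrative assumptions.

/* Minimal sketch of dictionary-based code compression, assuming fixed
 * 32-bit instruction words and a small dictionary of the most frequent
 * instructions.  Sizes and names are illustrative only. */
#include <stdint.h>
#include <stddef.h>

#define DICT_SIZE 256              /* an 8-bit index covers the dictionary */

typedef struct {
    uint32_t entries[DICT_SIZE];   /* most frequent instruction words */
    size_t   count;
} dictionary_t;

/* Compress: emit a 1-byte index for dictionary hits, otherwise an
 * escape byte (0xFF) followed by the raw 32-bit instruction. */
size_t compress(const dictionary_t *d, const uint32_t *code, size_t n,
                uint8_t *out)
{
    size_t pos = 0;
    for (size_t i = 0; i < n; i++) {
        size_t hit = d->count;
        for (size_t j = 0; j < d->count; j++)
            if (d->entries[j] == code[i]) { hit = j; break; }

        if (hit < d->count && hit < 0xFF) {
            out[pos++] = (uint8_t)hit;        /* compressed instruction */
        } else {
            out[pos++] = 0xFF;                /* escape: uncompressed   */
            for (int b = 0; b < 4; b++)
                out[pos++] = (uint8_t)(code[i] >> (8 * b));
        }
    }
    return pos;                               /* compressed size in bytes */
}

Decompression is the inverse lookup and can be done at fetch time with a small on-chip table, which is where the low hardware overhead comes from.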
To transfer data efficiently, this thesis also analyzes and classifies the types
of value locality that can be exploited in the data. Current state-of-the-art compression techniques
are analyzed in the context of this categorization. Using this classification, I show
that techniques that target different types of locality can be combined
into a more efficient compression algorithm.
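
The sketch below illustrates, under assumed parameters, how two such techniques might be combined when encoding a single word for transfer: an index into a small table of globally frequent values when there is an exact match, and a short delta to the previously transferred word when the values are numerically close. The table size, delta range, and names are assumptions for illustration, not the scheme evaluated in the thesis.

/* Minimal sketch of combining two kinds of value locality when encoding
 * one 32-bit word: exact reuse of frequent values, and closeness to the
 * previously sent word.  Thresholds and table size are assumptions. */
#include <stdint.h>

#define FV_TABLE 8                 /* tiny table of frequent values (3-bit index) */

typedef enum { ENC_FREQUENT, ENC_DELTA, ENC_RAW } encoding_t;

/* Pick the cheapest encoding for one word. */
encoding_t choose_encoding(uint32_t value, uint32_t prev,
                           const uint32_t fv[FV_TABLE], int *index_or_delta)
{
    /* Exact reuse: value matches a frequent-value table entry. */
    for (int i = 0; i < FV_TABLE; i++) {
        if (fv[i] == value) {
            *index_or_delta = i;          /* send only a 3-bit index */
            return ENC_FREQUENT;
        }
    }
    /* Numerical closeness: value differs little from the previous word. */
    int64_t delta = (int64_t)value - (int64_t)prev;
    if (delta >= -128 && delta <= 127) {
        *index_or_delta = (int)delta;     /* send an 8-bit delta     */
        return ENC_DELTA;
    }
    *index_or_delta = 0;
    return ENC_RAW;                       /* fall back to 32 raw bits */
}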
Finally, I identify that current data-link compression schemes scale poorly with the number
of nodes in a multiprocessor system. By studying frequent value encoding in such a
framework, I show that in some configurations it is possible to reuse frequent values at
each node and achieve significantly better bandwidth reductions than in the baseline case.
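
To make the per-node idea concrete, the sketch below shows one possible form of frequent value encoding on a single link, assuming sender and receiver keep identical tables and update them in the same way on a miss so that indices stay consistent. The table size, FIFO replacement policy, and function names are illustrative assumptions, not the configurations studied in the thesis.

/* Minimal sketch of per-node frequent value encoding on a point-to-point
 * link.  Both ends keep identical tables; a table hit is sent as a short
 * index, a miss is sent raw and installed at both ends. */
#include <stdint.h>
#include <stdbool.h>

#define FV_ENTRIES 32

typedef struct {
    uint32_t value[FV_ENTRIES];
    unsigned next;                  /* FIFO replacement cursor */
} fv_table_t;

/* Sender side: encode a word as a table index if possible. */
bool fv_encode(fv_table_t *t, uint32_t word, uint8_t *index)
{
    for (unsigned i = 0; i < FV_ENTRIES; i++) {
        if (t->value[i] == word) { *index = (uint8_t)i; return true; }
    }
    /* Miss: the raw word is sent and inserted; the receiver inserts it
     * at the same position so both tables stay in sync. */
    t->value[t->next] = word;
    t->next = (t->next + 1) % FV_ENTRIES;
    return false;
}

/* Receiver side: either look up the index or install the raw word. */
uint32_t fv_decode(fv_table_t *t, bool hit, uint8_t index, uint32_t raw)
{
    if (hit)
        return t->value[index];
    t->value[t->next] = raw;
    t->next = (t->next + 1) % FV_ENTRIES;
    return raw;
}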
Keywords: static code compression, high performance, computer architecture, data link compression