Compressed Memory Hierarchy
Presenters: Dongrui SHE, Jianhua HUI
Slide 2
The research paper: "A Compressed Memory Hierarchy Using an
Indirect Index Cache" by Erik G. Hallnor and Steven K. Reinhardt,
Advanced Computer Architecture Laboratory, EECS Department,
University of Michigan
Introduction Two pressures motivate this work: the amount of cache
cannot be increased without bound, and memory bandwidth is a scarce
resource.
Slide 5
Applications of data compression First, adding a compressed main
memory system (Memory Expansion Technology, MXT). Second, storing
compressed data in the cache, so that data can also be transmitted
in compressed form between main memory and the cache.
Slide 6
A key challenge Management of variable-sized data blocks: a
128-byte block that compresses to 70 bytes leaves 58 bytes of its
frame unused.
Memory eXpansion Technology A server-class system with hardware-
compressed main memory, using the LZSS compression algorithm. For
most applications it achieves roughly two-to-one (2:1) compression,
and hardware compression of memory incurs a negligible performance
penalty.
Slide 9
Hardware organization Sector translation table: each entry holds
four physical addresses, each pointing to a 256-byte sector.
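The lookup through the sector translation table can be sketched as
below. This is a minimal illustration, not MXT's actual hardware:
the entry layout, field names, and the assumption that one entry
covers a 1 KB real-address block (4 sectors x 256 bytes) are all
hypothetical.

```c
#include <stdint.h>

#define SECTOR_SIZE       256  /* each sector holds 256 bytes */
#define SECTORS_PER_ENTRY 4    /* one table entry covers 1 KB of real addresses */

/* Hypothetical sector translation table entry: up to four physical
 * sector addresses; a well-compressed block may use fewer sectors. */
typedef struct {
    uint32_t sector_addr[SECTORS_PER_ENTRY];
} stt_entry;

/* Translate a real address into the physical address of the byte
 * inside whichever sector holds it. */
uint32_t stt_translate(const stt_entry *table, uint32_t real_addr)
{
    uint32_t block  = real_addr / (SECTOR_SIZE * SECTORS_PER_ENTRY);
    uint32_t offset = real_addr % (SECTOR_SIZE * SECTORS_PER_ENTRY);
    uint32_t sector = offset / SECTOR_SIZE;   /* which of the 4 pointers */
    return table[block].sector_addr[sector] + offset % SECTOR_SIZE;
}
```

Real address 300 falls in sector 1 of entry 0 at within-sector
offset 44, so it resolves to the second pointer plus 44.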
Cache compression Most existing designs target power savings and
keep conventional cache structures, so the storage freed by
compression helps only by not consuming power. To actually use the
space freed by compression, a new cache structure is needed.
Conventional Cache Structure A tag is statically associated with
one data block, so when the data is compressed, the freed portion
of that block cannot hold anything else.
Slide 14
Solution: Indirect Index Cache (IIC) A tag entry is not associated
with a particular data block; instead, it contains a pointer to a
data block.
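The indirection can be sketched as below. This is a simplified
software model, not the paper's hardware design: the structure
names, the linear tag scan, and the table sizes are assumptions for
illustration only.

```c
#include <stdint.h>

#define NUM_TAGS 8
#define MISS     (-1)

/* Hypothetical IIC tag entry: the tag matches on the address, and
 * data_idx points at whichever data block currently holds the line.
 * The tag is no longer tied to a fixed data slot. */
typedef struct {
    uint32_t tag;
    int      valid;
    int      data_idx;   /* index into a separate data-block array */
} iic_tag;

/* Fully associative lookup: check every tag, then follow the
 * pointer on a hit.  Returns the data-block index, or MISS. */
int iic_lookup(const iic_tag *tags, uint32_t addr_tag)
{
    for (int i = 0; i < NUM_TAGS; ++i)
        if (tags[i].valid && tags[i].tag == addr_tag)
            return tags[i].data_idx;
    return MISS;
}
```

Because the hit returns a pointer rather than a fixed slot, any tag
can be paired with any data block, which is what makes full
associativity (next slide) practical.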
Slide 15
IIC structure The cache can be fully associative
Slide 16
Extending the IIC to compressed data Each tag contains multiple
pointers to smaller data blocks.
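With smaller data blocks, the number of pointers a compressed line
consumes is just its compressed size rounded up to the sub-block
granularity. A minimal sketch, assuming (hypothetically) a 128-byte
line split into 32-byte sub-blocks:

```c
#define SUBBLOCK_SIZE 32   /* assumed sub-block granularity */
#define LINE_SIZE     128  /* uncompressed line size */

/* How many sub-block pointers a compressed line needs: compressed
 * size rounded up to a whole number of sub-blocks. */
int subblocks_needed(int compressed_bytes)
{
    return (compressed_bytes + SUBBLOCK_SIZE - 1) / SUBBLOCK_SIZE;
}
```

For example, a line that compresses to 70 bytes occupies 3
sub-blocks instead of 4, freeing one sub-block for another line.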
Slide 17
Software-managed replacement: Generational Replacement Blocks are
grouped into prioritized pools based on reference frequency; the
victim is chosen from the lowest-priority non-empty pool.
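The victim-selection step above can be sketched as a scan over the
pool occupancy counts. This is a toy illustration of the selection
rule only; the pool count and data layout are assumptions, not the
paper's implementation.

```c
/* Pools are ordered lowest priority first; the victim comes from
 * the first (lowest-priority) pool that still holds blocks. */
int victim_pool(const int *pool_count, int npools)
{
    for (int p = 0; p < npools; ++p)
        if (pool_count[p] > 0)
            return p;
    return -1;   /* every pool is empty: nothing to evict */
}
```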
Slide 18
Additional cost A compression/decompression engine, more space for
the tag entries, and extra resources for the replacement algorithm;
the area is roughly 13% larger.
Evaluation Over 50% gain with only 10% area overhead
Slide 22
Evaluation
Slide 23
Summary Advantages: increased effective capacity and bandwidth;
power savings from fewer memory accesses. Drawbacks: increased
hardware complexity; power consumption of the additional hardware.
Slide 24
Future work An overall power-consumption study; applying the
approach in embedded systems.