Hawkeye Cache Replacement
Available for licensing
- Calvin Lin, Ph.D., Computer Science
- Akanksha Jain, M.S., Center for Information Security
The performance of many modern applications is limited by the delay the CPU experiences when accessing data from main memory. Hardware caches play an important role in alleviating this memory bottleneck by reducing the average time it takes the CPU to access data.
The effectiveness of a cache is significantly influenced by its replacement policy, which decides which existing entry to evict to make room for a new one. Replacement algorithms have been the subject of considerable research. The optimal algorithm is Belady's, which relies on knowledge of future accesses to make replacement decisions. Because that future knowledge is unavailable, Belady's algorithm is infeasible in practice, and existing cache replacement solutions are based on simple heuristics that perform significantly worse.
The inventors have developed Hawkeye, a cache replacement policy that learns from Belady's optimal algorithm by applying it to past cache accesses and using the result to guide future replacement decisions. Unlike other policies, which rely on short-term information, Hawkeye exploits long-term information to deliver significant reductions in cache misses.
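Hawkeye's training signal is Belady's algorithm replayed on accesses that have already happened. As an offline oracle, that algorithm is simple to state: on a miss with a full cache, evict the line whose next use lies furthest in the future. The sketch below is illustrative (the trace, capacity, and function names are not from the patented design):

```python
def belady_victim(cache, trace, t):
    """Pick the cached line whose next use lies furthest in the future."""
    def next_use(line):
        for i in range(t + 1, len(trace)):
            if trace[i] == line:
                return i
        return float("inf")  # never reused again: the ideal victim
    return max(cache, key=next_use)

def simulate_belady(trace, capacity):
    """Replay a recorded access trace under Belady's policy; count misses."""
    cache, misses = set(), 0
    for t, line in enumerate(trace):
        if line in cache:
            continue  # hit
        misses += 1
        if len(cache) >= capacity:
            cache.remove(belady_victim(cache, trace, t))
        cache.add(line)
    return misses
```

Scanning the whole future on every miss makes this O(n^2) and usable only offline, which is exactly why Hawkeye applies it to the past rather than the future.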
- Great performance: High miss rate reduction
- Robust: Not restricted to a few access patterns
- Efficient: Hawkeye's hardware budget is comparable to the budget of the popular LRU (Least Recently Used) policy
- Highly accurate reconstruction of Belady's optimal solution using a novel liveness vector-based algorithm
- Sampling strategy captures long-term behavior within a limited hardware budget, which is necessary for accurately reconstructing Belady's optimal solution
- PC-based binary classifier learns the optimal caching behavior of past load instructions and predicts the behavior of future cache accesses issued by the same load instructions
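The two learning components above can be sketched as follows. The first function is a toy model of the liveness-vector idea: an access would hit under the optimal policy if its reuse interval can be kept in the cache without the number of live lines exceeding capacity at any point in between. The second is a toy per-PC predictor trained on those optimal decisions. Both are simplified illustrations under assumed sizes and names, not the patented implementation:

```python
def opt_hits(trace, capacity):
    """For each access, decide whether the optimal policy would have hit,
    using an occupancy (liveness) vector over each reuse interval."""
    occupancy = [0] * len(trace)   # lines kept live at each time step
    last_seen, hits = {}, []
    for t, line in enumerate(trace):
        prev = last_seen.get(line)
        if prev is not None and all(occupancy[i] < capacity
                                    for i in range(prev, t)):
            # The interval fits: keep the line cached across [prev, t).
            for i in range(prev, t):
                occupancy[i] += 1
            hits.append(True)
        else:
            hits.append(False)     # first access, or interval doesn't fit
        last_seen[line] = t
    return hits

class PCPredictor:
    """Per-PC saturating counters: trained on the optimal decisions,
    then used to predict whether a load's lines are cache-friendly."""
    def __init__(self, max_count=7):
        self.counters, self.max = {}, max_count
    def train(self, pc, opt_hit):
        c = self.counters.get(pc, self.max // 2)
        self.counters[pc] = min(self.max, c + 1) if opt_hit else max(0, c - 1)
    def predict(self, pc):
        return self.counters.get(pc, self.max // 2) > self.max // 2
```

Because predictions are indexed by the load instruction's PC rather than by individual cache lines, the classifier generalizes past optimal behavior to addresses it has never seen, which is what lets a small sampled history steer the full cache.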
Chip design companies (e.g., Intel, AMD, Oracle, Texas Instruments, ARM, NVIDIA); DRAM caches; software caches (e.g., memcached)
Proof of concept
- 1 U.S. patent application filed