Context Switch Aware Large TLB

Computing & Wireless: Computing Methods

Available for licensing

Inventors

  • Lizy John, Ph.D., Electrical and Computer Engineering
  • Yashwant Marathe
  • Jee Ho Ryoo, Electrical and Computer Engineering
  • Nagendra Gulur, Texas Instruments, Inc.

Background/unmet need

Computing in virtualized environments has become a common practice for many businesses. Hosting companies typically aim for lower operational costs by targeting high utilization of host machines, maintaining just enough machines to meet demand. In this scenario, frequent virtual machine context switches are common, resulting in increased translation lookaside buffer (TLB) miss rates (often by more than 5X when the number of contexts sharing a TLB doubles) and subsequent expensive page walks. Because each TLB miss in a virtualized environment initiates a two-dimensional (2-D) page walk, the data caches fill with page table entries (often more than 50% of their capacity), evicting potentially more useful data. Researchers at UT Austin have proposed a new method to address the problem of increased TLB miss rates and their adverse impact on data caches: a Context-Switch Aware Large TLB (CSALT).
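To put the page walk cost in perspective: with the usual four-level page tables in both the guest and the host, a single nested (2-D) walk can require up to (4 + 1) x (4 + 1) - 1 = 24 memory references, versus 4 for a native walk. The short sketch below simply reproduces that arithmetic; the level counts are standard x86-64 values and are not taken from the patent filing.

    def nested_walk_refs(guest_levels: int = 4, host_levels: int = 4) -> int:
        """Upper bound on memory references for a 2-D (nested) page walk: every
        guest page-table pointer, plus the final guest physical address, must
        itself be translated through the host page table."""
        return (guest_levels + 1) * (host_levels + 1) - 1

    print(nested_walk_refs())  # 24 references, versus 4 for a native (non-virtualized) walk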

Invention Description

The invention partitions the on-chip data caches to house translation entries (TLB entries and page table entries) alongside ordinary data. Partitioning is performed by a low-overhead cache partitioning algorithm that allocates capacity for translation entries according to demand. Frequently used translations that cannot be accommodated in the on-chip TLBs instead reside in the on-chip data caches, so most TLB misses hit in the data caches, reducing the average page walk latency. This is especially important in virtualized scenarios, where frequent context switching inflates TLB miss rates.
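Below is a minimal sketch of the kind of demand-driven partitioning described above, assuming a simple hit-counter heuristic over the ways of a set-associative cache. The class name WayPartitioner and the proportional-allocation policy are illustrative assumptions for this sketch, not the partitioning algorithm claimed in the patent application.

    class WayPartitioner:
        """Periodically re-splits the ways of a cache between data blocks and
        translation entries, giving more ways to whichever class is currently
        generating more hits (i.e., showing more demand)."""

        def __init__(self, total_ways: int, min_ways_per_class: int = 1):
            self.total_ways = total_ways
            self.min_ways = min_ways_per_class
            self.trans_ways = min_ways_per_class
            self.data_ways = total_ways - min_ways_per_class
            self.data_hits = 0
            self.trans_hits = 0

        def record_hit(self, is_translation: bool) -> None:
            # Hit counters stand in for the monitoring hardware of the interval.
            if is_translation:
                self.trans_hits += 1
            else:
                self.data_hits += 1

        def rebalance(self) -> None:
            """Called at the end of each monitoring interval: allocate ways in
            proportion to observed demand, keeping a floor for each class."""
            total = self.data_hits + self.trans_hits
            if total:
                trans_share = round(self.total_ways * self.trans_hits / total)
                self.trans_ways = max(self.min_ways,
                                      min(self.total_ways - self.min_ways, trans_share))
                self.data_ways = self.total_ways - self.trans_ways
            self.data_hits = self.trans_hits = 0


    if __name__ == "__main__":
        p = WayPartitioner(total_ways=16)
        # Simulate an interval dominated by translation-entry reuse, as after a VM context switch.
        for _ in range(300):
            p.record_hit(is_translation=True)
        for _ in range(100):
            p.record_hit(is_translation=False)
        p.rebalance()
        print(f"translation ways={p.trans_ways}, data ways={p.data_ways}")  # 12 and 4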

Benefits/Advantages

  • A majority of the page walk latency is replaced by the access latency of an on-chip data cache.
  • CSALT provides an 85% performance improvement over systems employing conventional L1/L2 TLBs.
  • In a detailed 8-core evaluation, CSALT provides a 25% improvement over systems employing a large L3 TLB.

Features

CSALT is an architecture that uses low-overhead cache partitioning algorithms to allocate cache capacity for translation entries housed alongside data in the on-chip data caches.

Market potential/applications

Computer technology companies.

Development Stage

Proof of concept

IP Status

  • 1 U.S. patent application filed
