CHAPTER 4
Fine-Grained Replacement Policies
Fine-Grained policies differentiate cache lines at the time of insertion, and this differentiation
is typically based on eviction information from previous lifetimes of similar cache lines. For
example, if a Fine-Grained policy learns that a line was evicted without being reused in its
previous lifetimes, then it can insert the line into the cache with low priority. By contrast, a
Coarse-Grained policy, such as LRU, will evict a line only after it has migrated from the MRU
position to the LRU position, so it forces the line to reside in the cache for a long period of
time—consuming precious cache space—just to determine that the line should be evicted. Thus,
by learning from the behavior of previous cache lines, Fine-Grained policies can make more
effective use of the cache.
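As an illustration of this idea, the sketch below models a toy fully associative cache that remembers whether each line was reused during its previous lifetime and, on re-insertion, assigns low priority to lines that previously died without reuse. All names here (ReuseAwareCache, the LOW/HIGH priority values) are illustrative assumptions, not structures from the text, and real policies track this state with far more compact hardware.

```python
# Illustrative sketch only: a toy cache that inserts a line with low
# priority if the line saw no reuse during its previous lifetime.

LOW, HIGH = 0, 3  # insertion priorities (assumed 2-bit scheme)

class ReuseAwareCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}    # addr -> current priority
        self.reused = {}   # addr -> hit during the current lifetime?
        self.history = {}  # addr -> reused in the *previous* lifetime?

    def access(self, addr):
        if addr in self.lines:       # hit: promote and record the reuse
            self.lines[addr] = HIGH
            self.reused[addr] = True
            return True
        if len(self.lines) >= self.capacity:
            self._evict()
        # Fine-grained insertion: a line that was previously evicted
        # without reuse enters at LOW priority; unknown lines get HIGH.
        self.lines[addr] = HIGH if self.history.get(addr, True) else LOW
        self.reused[addr] = False
        return False

    def _evict(self):
        # Evict the lowest-priority line and remember its outcome.
        victim = min(self.lines, key=self.lines.get)
        self.history[victim] = self.reused.pop(victim)
        del self.lines[victim]
```

With capacity 1, a line that is evicted without ever being hit re-enters the cache at LOW priority on its next miss, which is exactly the differentiation-at-insertion behavior described above.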
We divide Fine-Grained policies into three broad categories based on the metric they
use for predicting insertion priorities. The first category (Section 4.1) consists of solutions that
predict expected reuse intervals for incoming lines. The second category (Section 4.2) consists of
solutions that predict just a binary caching decision (cache-friendly vs. cache-averse). The third
category, which is much smaller than the other two, includes policies [Beckmann and Sanchez,
2017, Kharbutli and Solihin, 2005] that introduce novel prediction metrics.
Fine-Grained solutions have several other design dimensions. First, since it can be cum-
bersome to remember the caching behavior of individual lines across multiple cache lifetimes,
these policies learn caching behaviors for groups of lines. For example, many solutions group lines
based on the address (PC) of the instruction that loaded the line, because lines that are loaded
by the same PC tend to have similar caching behavior. Recent solutions look at more sophisti-
cated ways to group cache lines [Jiménez and Teran, 2017, Teran et al., 2016]. A second design
dimension is the amount of history that is used for learning cache behavior.
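A minimal sketch of PC-based grouping follows, assuming a commonly used design point: a table of saturating counters indexed by a hash of the load PC, trained on eviction outcomes and consulted at insertion time. The class name, table size, and 3-bit counter width are assumptions for illustration, not details from the text.

```python
# Illustrative sketch only: per-PC reuse predictor with saturating counters.

class PCPredictor:
    MAX = 7  # assumed 3-bit saturating counter

    def __init__(self, table_size=256):
        # Start every counter at the midpoint (weakly cache-friendly).
        self.size = table_size
        self.table = [self.MAX // 2] * table_size

    def _index(self, pc):
        return hash(pc) % self.size

    def train(self, pc, was_reused):
        # On eviction, nudge the loading PC's counter toward the outcome.
        i = self._index(pc)
        if was_reused:
            self.table[i] = min(self.table[i] + 1, self.MAX)
        else:
            self.table[i] = max(self.table[i] - 1, 0)

    def is_cache_friendly(self, pc):
        # At insertion, lines loaded by PCs with high counters get
        # high priority; the rest are treated as likely dead on arrival.
        return self.table[self._index(pc)] > self.MAX // 2
```

Because all lines loaded by the same PC share one counter, the predictor amortizes a few bits of state over many cache lines, which is what makes learning across lifetimes practical in hardware.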
Fine-Grained replacement policies have roots in two seemingly different contexts. One
line of work uses prediction to identify dead blocks—blocks that will not be used before being
evicted—that could be re-purposed for other uses. For example, one of the earliest motivations
for identifying dead blocks was to use them as prefetch buffers [Hu et al., 2002, Lai et al., 2001].
Another motivation was to turn off cache lines that are dead [Abella et al., 2005, Kaxiras et al.,
2001]. The second line of work generalizes hybrid re-reference interval policies [Jaleel et al.,
2010b] so that they are more learning based. Despite their different origins, these two lines of
research have converged to conceptually similar solutions.