Classification-Based Policies Classification-based policies classify incoming cache lines into
two categories: cache-friendly or cache-averse. The main idea is to preferentially evict
cache-averse lines to leave more space for cache-friendly lines, so cache-friendly lines are
inserted with higher priority than cache-averse lines. A secondary ordering, typically based
on recency, is maintained among cache-friendly lines. Classification-based policies are widely
regarded as state-of-the-art because (1) they can exploit a long history of past behavior to make
smarter decisions for the future, and (2) they can accommodate all kinds of cache access patterns. We
discuss these policies in Section 4.2.
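To make the eviction priority concrete, the following is a minimal Python sketch of a classification-based policy for a single cache set. It is an illustration under stated assumptions, not any specific published policy: the predict callable that labels lines cache-friendly or cache-averse is a hypothetical stand-in for the trained predictors that real policies (e.g., SHiP, Hawkeye) maintain.

```python
# Minimal sketch of a classification-based policy for one cache set.
# The predictor that labels lines is a hypothetical stand-in; real
# policies train it on past behavior (e.g., on PC-based signatures).

class Line:
    def __init__(self, tag, friendly, now):
        self.tag = tag
        self.friendly = friendly  # True = cache-friendly, False = cache-averse
        self.last_use = now       # timestamp for the secondary recency order

class ClassifyingSet:
    def __init__(self, ways, predict):
        self.ways = ways
        self.predict = predict    # callable tag -> bool (assumed predictor)
        self.lines = []
        self.clock = 0

    def access(self, tag):
        """Return True on a hit, False on a miss (with fill)."""
        self.clock += 1
        for line in self.lines:
            if line.tag == tag:              # hit: refresh recency
                line.last_use = self.clock
                return True
        if len(self.lines) == self.ways:     # miss in a full set: pick a victim
            # Preferentially evict cache-averse lines; among the candidates,
            # fall back to LRU (the secondary recency ordering).
            averse = [l for l in self.lines if not l.friendly]
            pool = averse if averse else self.lines
            self.lines.remove(min(pool, key=lambda l: l.last_use))
        self.lines.append(Line(tag, self.predict(tag), self.clock))
        return False

# Example: a toy predictor that deems even tags cache-friendly.
cache_set = ClassifyingSet(ways=4, predict=lambda tag: tag % 2 == 0)
```

Note that the sketch evicts cache-friendly lines only when no cache-averse line is resident, which is the sense in which cache-friendly lines receive higher priority.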
Reuse Distance-Based Policies Reuse distance-based policies predict detailed reuse dis-
tance information for incoming lines. Lines that exceed their expected reuse distance without
receiving hits are evicted from the cache. These policies can be viewed as predictive variants
of either Recency-Based Policies or Frequency-Based Policies, because both Recency-Based
and Frequency-Based policies implicitly estimate reuse distances (via recency and frequency,
respectively) by monitoring a cache line's reuse behavior while it is cache-resident. The explicit
prediction of reuse distances using historical information presents unique advantages and dis-
advantages, which we discuss in Section 4.1.
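As an illustration, the following Python sketch (again a hypothetical toy, not a specific published policy) evicts lines that have outlived their predicted reuse distance; the predict_rd callable stands in for whatever historical predictor supplies per-line reuse distances.

```python
# Minimal sketch of a reuse distance-based policy for one cache set.
# The per-line reuse distance prediction is a hypothetical stand-in for
# a predictor trained on previously observed reuse intervals.

class RDLine:
    def __init__(self, tag, predicted_rd, now):
        self.tag = tag
        self.deadline = now + predicted_rd  # expected time of next reuse

class ReuseDistanceSet:
    def __init__(self, ways, predict_rd):
        self.ways = ways
        self.predict_rd = predict_rd        # callable tag -> int (assumed)
        self.lines = []
        self.clock = 0

    def access(self, tag):
        """Return True on a hit, False on a miss (with fill)."""
        self.clock += 1
        for line in self.lines:
            if line.tag == tag:             # hit: re-predict the next reuse
                line.deadline = self.clock + self.predict_rd(tag)
                return True
        if len(self.lines) == self.ways:
            # Evict a line that exceeded its expected reuse distance without
            # receiving a hit; if none has expired, evict the line whose
            # predicted reuse lies furthest in the future.
            expired = [l for l in self.lines if l.deadline < self.clock]
            victim = expired[0] if expired else max(self.lines, key=lambda l: l.deadline)
            self.lines.remove(victim)
        self.lines.append(RDLine(tag, self.predict_rd(tag), self.clock))
        return False
```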
2.3 DESIGN CONSIDERATIONS
The primary goal of replacement policies is to increase cache hit rates, and many design factors
contribute toward achieving a higher hit rate. We outline three such factors as follows.
Granularity: At what granularity are lines distinguished at the time of insertion? Are all
cache lines treated the same, or are they assigned different priorities based on historical
information?
History: How much historical information does the replacement policy utilize in making
its decisions?
Access Patterns: How specialized is the replacement policy to certain access patterns? Is it
robust to changes in access patterns or to mixes of different access patterns?
Figure 2.2 summarizes our complete taxonomy and shows how different classes of re-
placement policies address these design factors. In general, as we move to the right of the
figure, which generally corresponds to newer policies, the trend is toward finer-grained solutions
that use longer histories and that can accommodate a wider variety of access patterns. The im-
portance of predicting cache behavior at fine granularity can be gauged by observing that the
top four solutions in the recent Cache Replacement Championship (2017) were all fine-grained.
Fine-grained predictions give state-of-the-art Fine-Grained policies two advantages. First, they
allow Fine-Grained policies to dedicate cache space only to lines that are most likely to benefit
from caching; by contrast, Coarse-Grained policies tend to repeatedly dedicate cache resources
to lines that do not yield any hits. Second, they allow Fine-Grained policies to dynamically
determine access patterns for different groups of lines; by contrast, Coarse-Grained policies
assume that the entire cache follows a uniform access pattern.
[Figure 2.2: A taxonomy of cache replacement policies. Replacement policies divide into
Coarse-Grained Policies, comprising Recency-based (LRU, SegLRU [Computer'94], EELRU
[Sigmetrics'99], LIP [ISCA'07], Shepherd Cache [MICRO'07], SRRIP [ISCA'10], BRRIP
[ISCA'10], PDP [MICRO'12], GIPPR [MICRO'13]), Frequency-based (LFU, FBR
[Sigmetrics'90]), and Hybrid (2Q [VLDB'94], LFRU [ToC'01], ARC [FAST'03], DIP
[ISCA'07], DRRIP [ISCA'10]) policies; and Fine-Grained Policies, comprising Economic
Value (EVA [HPCA'17]), Reuse Distance (Timekeeping [ISCA'02], AIP [ICCD'05], ETA
[ICCD'07], Leeway [PACT'17]), and Classification (DBCP [ISCA'01], SDBP [MICRO'10],
SHiP [MICRO'11], EAF [PACT'12], Hawkeye [ISCA'16], MPPPB [MICRO'17]) policies.
Moving rightward through the taxonomy corresponds to finer granularity, longer history, and
more access patterns.]
Among Fine-Grained policies, the trend is toward using longer histories, as the state-of-
the-art Fine-Grained policy [Jain and Lin, 2016] samples information from a history that is
8× the size of the cache (see Chapter 4). The use of long histories allows the policy to detect
cache-friendly lines that have long-term reuse, which would otherwise be obscured by a long
sequence of intervening cache-averse accesses.