3. COARSE-GRAINED REPLACEMENT POLICIES
be maintained either precisely or approximately [Handy, 1998] by associating with each line a
counter and updating it, as shown in Table 3.1.
Table 3.1: e LRU policy
LRU performs well when there is temporal locality of reference, that is, when data that
was used recently is likely to be reused in the near future. But it performs poorly for two types of
access patterns. First, it can be pessimal when the application's working set size exceeds the cache
size, leading to a phenomenon known as thrashing [Denning, 1968] (see Figure 3.1). Second,
LRU performs poorly in the presence of scans because it caches recently used scan data at the
expense of older lines that are more likely to be reused.
Thrashing access pattern: A, B, C, A, B, C

Access:       A        B        C          A          B          C
Result:       Miss     Miss     Miss       Miss       Miss       Miss
                                (Evict A)  (Evict B)  (Evict C)  (Evict A)
Cache after:  (A, _)   (A, B)   (C, B)     (C, A)     (B, A)     (B, C)
Figure 3.1: An example of thrashing. The LRU policy produces no cache hits when the access
pattern cycles through a working set (3 lines in this example) that is larger than the cache capacity
(2 lines in this case).
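The thrashing behavior above can be reproduced in a few lines. The following is a minimal Python sketch (not code from the original text) of an LRU cache, using an OrderedDict to track recency:

```python
from collections import OrderedDict

def simulate_lru(accesses, capacity):
    """Count hits for an LRU cache of the given capacity."""
    cache = OrderedDict()              # keys ordered from LRU (first) to MRU (last)
    hits = 0
    for line in accesses:
        if line in cache:
            hits += 1
            cache.move_to_end(line)    # promote to the MRU position on a hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict the LRU line
            cache[line] = None             # insert at the MRU position
    return hits

# The cyclic pattern of Figure 3.1: working set of 3 lines, capacity 2.
print(simulate_lru("ABCABC", 2))  # -> 0: every access misses
```

By contrast, a recency-friendly pattern such as "ABAB" with the same capacity scores hits on every reuse.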
3.1.1 VARIANTS OF LRU
We now describe some of the notable variants of LRU that detect and accommodate thrashing
or streaming accesses.
Most Recently Used (MRU) The MRU policy addresses thrashing by evicting new lines to
retain old lines. Thus, when the working set is larger than the cache, it is able to retain a portion
of the working set. For example, Figure 3.2 shows that for the thrashing access pattern shown
in Figure 3.1, MRU improves upon LRU's hit rate by caching a portion of the working set (in
this example, line A is never evicted). Table 3.2 shows that the MRU policy is identical to LRU,
except that it evicts the line at the MRU position instead of the LRU position.
Thrashing access pattern: A, B, C, A, B, C

Access:       A        B        C          A        B          C
Result:       Miss     Miss     Miss       Hit      Miss       Miss
                                (Evict B)           (Evict C)  (Evict B)
Cache after:  (A, _)   (A, B)   (A, C)     (A, C)   (A, B)     (A, C)

Figure 3.2: The MRU policy avoids thrashing by caching a portion of the working set. In this
example, the MRU policy is able to cache A even though the working set size exceeds the cache
capacity of two lines.
Table 3.2: e MRU policy
While MRU is ideal for thrashing accesses, it performs poorly in the presence of recency-friendly
accesses, and it adapts poorly to changes in the application's working set, as it is unlikely
to cache anything from the new working set.
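The contrast between the two policies can be sketched with a small Python model (an illustrative assumption, not code from the text). Note that a literal reading of Table 3.2, with hit promotion to the MRU position, yields a slightly different eviction sequence than Figure 3.2, but it illustrates the same point: MRU retains part of the working set and scores hits where LRU scores none.

```python
def simulate(accesses, capacity, evict_mru):
    """Count hits for a cache that evicts either the MRU or the LRU line.

    `stack` holds lines from MRU (front) to LRU (back); hits and
    insertions both place the line at the MRU position.
    """
    stack, hits = [], 0
    for line in accesses:
        if line in stack:
            hits += 1
            stack.remove(line)
        elif len(stack) >= capacity:
            if evict_mru:
                stack.pop(0)      # evict the line at the MRU position
            else:
                stack.pop()       # evict the line at the LRU position
        stack.insert(0, line)     # (re)insert at the MRU position
    return hits

pattern = "ABCABC"
print(simulate(pattern, 2, evict_mru=False))  # LRU: 0 hits
print(simulate(pattern, 2, evict_mru=True))   # MRU: 2 hits
```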
Early Eviction LRU (EELRU) The EELRU policy also avoids thrashing [Smaragdakis et al.,
1999]. The main idea is to detect cases where the working set size exceeds the cache size, at
which point a few lines are evicted early. Thus, early eviction discards a few randomly chosen
lines so that the remaining lines can be managed effectively by the LRU policy. More specifically,
EELRU evicts the LRU line when the working set fits in the cache, but it evicts the e-th most
recently used line when it observes that too many lines are being touched in a roughly cyclic
pattern that is larger than main memory.
Figure 3.3 shows that EELRU distinguishes among three regions of the recency axis. The
left endpoint of the recency axis represents the MRU line, and the right endpoint represents
the LRU line. The LRU memory region consists of the most recently used lines, and positions
e and M mark the beginning of the early eviction region and late eviction region, respectively. On
a miss, the EELRU policy either evicts the line at the least recently used position (late region)
or the line at the e-th position (early region).
To decide whether to use early eviction or LRU eviction, EELRU tracks the number of
hits received in each region. If the distribution is monotonically decreasing, then EELRU as-
sumes that there is no thrashing and evicts the LRU line. But if the distribution shows more hits
[Figure: a recency axis running from position 1 (the MRU page) to M (the main memory size),
with the early eviction point e in between; positions e and M are the potential eviction points
that bound the early and late regions.]

Figure 3.3: The EELRU replacement policy; based on [Smaragdakis et al., 1999].
Table 3.3: e early eviction LRU (EELRU) policy
Insertion Promotion Aging Victim Selection
MRU position MRU position Move 1 position toward
LRU
Choose between LRU or e
th
recently used line
in the late region than the early region, then EELRU evicts from the early region, which allows
lines from the late region to remain longer in the cache. Table 3.3 summarizes the operation of
EELRU.
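EELRU's adaptive decision is driven by this hit distribution over recency positions. The toy Python sketch below is a simplifying assumption rather than the published algorithm: instead of the full aged histogram, it watches for misses that land just beyond the resident lines (a small ghost list), which signals a cyclic pattern slightly larger than the cache, and then switches to early eviction at position e.

```python
class EELRUSketch:
    """Toy model of EELRU's two eviction points: the LRU position by
    default, or the e-th most recently used position once cyclic reuse
    beyond the cache size is detected via a small ghost list."""

    def __init__(self, capacity, e, ghost_depth=None):
        self.capacity = capacity
        self.e = e                            # early eviction point
        self.ghost_depth = ghost_depth or 2 * capacity
        self.stack = []                       # recency order, MRU first
        self.resident = set()                 # lines actually cached
        self.ghost_hits = 0                   # misses found in the ghost region

    def access(self, line):
        hit = line in self.resident
        if not hit and line in self.stack:
            self.ghost_hits += 1              # recently evicted: cyclic reuse
        if line in self.stack:
            self.stack.remove(line)
        self.stack.insert(0, line)            # promote/insert at MRU
        del self.stack[self.ghost_depth:]     # bound the ghost list
        if not hit:
            if len(self.resident) >= self.capacity:
                order = [l for l in self.stack if l in self.resident]
                # Early eviction at position e once thrashing is suspected,
                # otherwise plain LRU eviction.
                victim = order[self.e - 1] if self.ghost_hits else order[-1]
                self.resident.remove(victim)
            self.resident.add(line)
        return hit

cache = EELRUSketch(capacity=4, e=2)
hits = sum(cache.access(x) for x in "ABCDE" * 3)
print(hits)   # early eviction recovers hits where plain LRU would get none
```

On this cyclic pattern of five lines over a four-line cache, plain LRU would score zero hits, while the sketch begins hitting once early eviction kicks in.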
Segmented LRU (Seg-LRU) Segmented LRU handles scanning accesses by preferentially
retaining lines that have been accessed at least twice [Karedla et al., 1994]. Seg-LRU divides
the LRU stack into two logical segments (see Figure 3.4), namely, the probationary segment and
the protected segment. Incoming lines are added to the probationary segment and are promoted
to the protected segment when they receive a hit. Thus, lines in the protected segment have been
accessed at least twice, and scanning accesses are never promoted to the protected segment. On
an eviction, the least recently used line from the probationary segment is selected.
Table 3.4 summarizes the operation of Seg-LRU. New lines are inserted at the MRU po-
sition in the probationary segment, and on a cache hit, lines are promoted to the MRU position
in the protected segment. Because the protected segment is finite, a promotion to the protected
segment may force the migration of the LRU line in the protected segment to the MRU end
of the probationary segment, giving this line another chance to receive a hit before it is evicted
from the probationary segment. us, Seg-LRU can adapt to changes in the working set as old
lines eventually get demoted to the probationary segment.
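A compact Python sketch of these two segments might look like the following (segment sizes and names are illustrative assumptions; the original proposal [Karedla et al., 1994] tunes the segment split):

```python
from collections import OrderedDict

class SegLRU:
    """Sketch of Segmented LRU: a probationary and a protected segment."""

    def __init__(self, prob_size, prot_size):
        self.prob_size = prob_size
        self.prot_size = prot_size
        self.prob = OrderedDict()   # first key = LRU end, last key = MRU end
        self.prot = OrderedDict()

    def access(self, line):
        if line in self.prot:
            self.prot.move_to_end(line)         # already protected: refresh recency
            return True
        if line in self.prob:
            del self.prob[line]                  # second access: promote
            self._insert_protected(line)
            return True
        # Miss: insert at the MRU end of the probationary segment.
        if len(self.prob) >= self.prob_size:
            self.prob.popitem(last=False)        # evict the probationary LRU line
        self.prob[line] = None
        return False

    def _insert_protected(self, line):
        if len(self.prot) >= self.prot_size:
            # Demote the protected LRU line to the probationary MRU end,
            # giving it another chance before eviction.
            demoted, _ = self.prot.popitem(last=False)
            if len(self.prob) >= self.prob_size:
                self.prob.popitem(last=False)
            self.prob[demoted] = None
        self.prot[line] = None
```

A quick usage example shows the scan resistance: a twice-accessed line survives a scan that churns the probationary segment.

```python
cache = SegLRU(prob_size=2, prot_size=2)
cache.access("A"); cache.access("A")       # second access promotes A
for x in ("X1", "X2", "X3", "X4"):         # a scan fills only the probationary segment
    cache.access(x)
print(cache.access("A"))                   # True: A survived the scan
```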
Table 3.4: e Seg-LRU policy
Insertion Promotion Aging Victim Selection
MRU position in
probationary segment
MRU position in
protected segment
Increment recency
counter
LRU position from
probationary segment