In the preceding two chapters we saw how to store information in arrays and lists, and how to sort, search, and process that information using LINQ. Important as sequential lists and rectangular arrays are, they don’t accommodate every possible requirement you could have for storing and structuring data. So in this final chapter on working with collections, we’ll look at some of the other collection classes offered by the .NET Framework.
A dictionary is a collection that enables you to look up information associated with some kind of value. .NET calls this sort of collection a dictionary because it is reminiscent of a traditional printed dictionary: the information is structured to make it easy to find the entry for a particular word—if you know what word you’re looking for, you can find it very quickly even among tens of thousands of definitions. The information you find when you’ve looked up the word depends on the sort of dictionary you bought—it might provide a definition of the word, but other kinds exist, such as dictionaries of quotations, or of etymology.
Likewise, a .NET dictionary collection is structured to enable quick and easy lookup of entries. The syntax looks very similar to array access, but where you’d expect to see a number, the index can be something else, such as a string, as shown in Example 9-1.
Just as printed dictionaries vary in what you get when you look up a word, so can .NET dictionaries. The Dictionary<TKey, TValue> type in the System.Collections.Generic namespace is a generic type, letting you choose the type for both the key—the value used for the index—and the value associated with the index. (Note that there are some restrictions regarding the key type—see the sidebar on the next page.) Example 9-1, which models a traditional printed dictionary, uses strings for the index, and expects a string as the result, so myDictionary in that example would be defined as shown in Example 9-2.
Example 9-2. A dictionary with string keys and string values
Dictionary<string, string> myDictionary = new Dictionary<string, string>();
When you start using collection types that require multiple generic type arguments such as this, specifying the full type name in the variable declaration and then again in the constructor starts to look a bit verbose, so if you’re using a dictionary with a local variable, you might prefer to use the var keyword introduced in C# 3.0, as shown in Example 9-3.
Example 9-3. Avoiding repetitive strain injury with var
var myDictionary = new Dictionary<string, string>();
Remember, the var keyword simply asks C# to work out the variable’s type by looking at the expression you’re using to initialize the variable. So Example 9-3 is exactly equivalent to Example 9-2.
Just as arrays and other lists can be initialized with a list of values in braces, you can also provide an initializer list for a dictionary. As Example 9-4 shows, you must provide both a key and a value for each entry, so each entry is contained within nested braces to keep the key/value grouping clear.
Example 9-4. Dictionary initializer list
var myDictionary = new Dictionary<string, string>()
{
    { "dog", "Not a cat." },
    { "sea", "Big blue wobbly thing that mermaids live in." }
};
Storing individual items in a dictionary also uses an array-like syntax:
myDictionary["sea"] = "Big blue wobbly thing that mermaids live in.";
As you may already have guessed, dictionaries exploit the C# indexer feature we saw in Chapter 8.
Dictionaries are extremely useful tools, because the situations they deal with—anyplace something is associated with something else—crop up all the time. They are so widely used that it’s helpful to look at some common concrete examples.
Computer systems often use inscrutable identifiers where people would normally use a name. For example, imagine a computer system for managing patients in a hospital. This system would need to maintain a list of appointments, and it would be useful for the hospital’s reception staff to be able to tell arriving patients where to go for their appointment. So the system would need to know about things such as buildings and departments—radiography, physiotherapy, and so on.
The system will usually have some sort of unique identifier for entities such as these in order to avoid ambiguity and ensure data integrity. But users of the system will most likely want to know that an appointment is in the Dr. Marvin Munroe Memorial Building, rather than, say, the building with ID 49. So the user interface will need to convert the ID to text.
The information about which ID corresponds to which building typically belongs in a database, or possibly a configuration file. You wouldn’t want to bake the list into a big switch statement in the code, as that would make it hard to support multiple customers, and would also need a new software release anytime a building is built or renamed.
The user interface could just look up the name in the database every time it needs to display an appointment’s information, but there are a few problems with this. First, the computer running the UI might not have access to the database—if the UI is written as a client-side application in WPF or Windows Forms, there’s a good chance that it won’t, because a lot of companies put databases behind firewalls that restrict access even on the internal network. Second, even if the UI does have access, making a request to the server for each ID-based field takes time—in a form with several such fields, you could easily end up causing a noticeable delay. And that delay would be unnecessary for data that’s not expected to change from day to day.
A dictionary can provide a better solution here. When the application starts up, it can load a dictionary to go from an ID to a name. Translating an ID to a displayable name then becomes as simple as this:
string buildingName = buildingIdToNameMap[buildingId];
As for how you would load this dictionary in the first place, that’ll depend on where the data is stored. Whether it’s coming from a file, a database, a web service, or somewhere else, you can use LINQ to initialize a dictionary—we’ll see how to do that in “Dictionaries and LINQ.”
Dictionaries are often used to cache information that is slow to fetch or create. Information can be placed in a dictionary the first time it is loaded, allowing an application to avoid that cost the next time the information is required. For example, suppose a doctor is with a patient, and wants to look at information regarding recent tests or medical procedures the patient has undergone. This will typically involve requesting records from some server to describe these patient encounters, and you might find that the application can be made considerably more responsive by keeping hold of the records on the client side after they have been requested so that they don’t have to be looked up again and again as the doctor scrolls through a list of records.
This is a very similar idea to the lookup usage we described earlier, but there are two important differences. First, caches usually need some sort of policy that decides when to remove data from the cache—if we add every record we load into the cache and never clear out old ones, the cache will consume more memory over time, slowing the program down, which is the opposite of the intended effect. The appropriate policy for removing items from a cache will be application-specific. In a patient record viewing application, the best approach might be to clear the cache as soon as the doctor starts looking up information for a new patient, since that suggests the doctor has moved on to a new appointment and therefore won’t be looking at the previous patient’s details anytime soon. But that policy works only because of how this particular system is used—other systems may require other mechanisms. Another popular heuristic is to set an upper limit on either the number of entries or the total size of the data in a cache, and to remove items based on when they were last retrieved.
The second difference between using a dictionary to cache data rather than to look up preloaded data is that in caching scenarios, the data involved is often more dynamic. A list of known countries won’t change very often, but patient records might change while in use, particularly if the patient is currently in the hospital. So when caching copies of information locally in a dictionary, you need to have some way of dealing with the fact that the information in the cache might be stale. (For example, although caching patient records locally would be useful, your application might need to deal with the possibility that new test results become available while the patient is with the doctor.) As with removal policy, detection of stale data requires application-specific logic.
For example, some data might never change—accounting data usually works this way, because even when data is discovered to be incorrect, it’s not usually modified. Legal auditing requirements usually mean you have to fix problems by adding a new accounting record that corrects the old one. So for this sort of entry, you know that a cache of any given record will never be stale. (Newer records might exist that supersede the cached record you have, but the cache of that record will still be consistent with whatever is in the database for that entry.)
Sometimes it might be possible to perform a relatively cheap test to discover whether the cached record is consistent with the information on a server. HTTP supports this—a client can send a request with an If-Modified-Since header, containing the date at which the cached information was known to be up-to-date. If the server has no newer information, it sends a very short reply to confirm this, and will not send a new copy of the data. Web browser caches use this to make web pages you’ve previously visited load faster, while ensuring that you always see the most recent version of the page.
But you may simply have to guess. Sometimes the best staleness heuristic available to you might be something such as “If the cached record we have is more than 20 minutes old, let’s get a fresh copy from the server.” But you need to be careful with this approach. Guesswork can sometimes lead to a cache that offers no useful performance improvements, or which produces data that is too stale to be useful, or both.
Regardless of the precise details of your cache removal and staleness policies, the approach will look something like Example 9-5. (The Record type in this example is not a class library type, by the way. It’s just for illustration—it would be the class for whatever data you want to cache.)
Example 9-5. Using a dictionary for caching
class RecordCache
{
    private Dictionary<int, Record> cachedRecords =
        new Dictionary<int, Record>();

    public Record GetRecord(int recordId)
    {
        Record result;
        if (cachedRecords.TryGetValue(recordId, out result))
        {
            // Found item in cache, but is it stale?
            if (IsStale(result))
            {
                result = null;
            }
        }
        if (result == null)
        {
            result = LoadRecord(recordId);
            // Add newly loaded record to cache
            cachedRecords[recordId] = result;
        }

        DiscardAnyOldCacheEntries();

        return result;
    }

    private Record LoadRecord(int recordId)
    {
        ... Code to load the record would go here ...
    }

    private bool IsStale(Record result)
    {
        ... Code to work out whether the record is stale would go here ...
    }

    private void DiscardAnyOldCacheEntries()
    {
        ... Cache removal policy code would go here ...
    }
}
Notice that this code does not use the indexer to look up a cache entry. Instead, it uses a method called TryGetValue. You use this when you’re not sure whether the dictionary will contain the entry you’re looking for—in this case, the entry won’t be present the first time we look it up. (The dictionary would throw an exception if we used the indexer to look up a value that’s not present.) TryGetValue returns true if an entry for the given key was found, and false if not. Notice that its second argument uses the out qualifier—it uses this to return the item when it’s found, or a null reference when it’s not found.

You might be wondering why TryGetValue doesn’t just use a null return value to indicate that a record wasn’t found, rather than this slightly clumsy arrangement with a bool return value and an out argument. But that wouldn’t work with value types, which cannot be null. Dictionaries can hold either reference types or value types.
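A small sketch (not one of the chapter’s numbered examples) makes the value type case concrete: when the key is missing, TryGetValue returns false and sets the out argument to the value type’s default, rather than to null.

```csharp
using System;
using System.Collections.Generic;

class TryGetValueDemo
{
    static void Main()
    {
        var ages = new Dictionary<string, int> { { "Ian", 36 } };

        int age;
        bool found = ages.TryGetValue("Ian", out age);
        Console.WriteLine("{0}: {1}", found, age);   // True: 36

        // Missing key: no exception is thrown, found is false, and
        // age is set to default(int), i.e., 0 - not null, because
        // int is a value type and cannot be null.
        found = ages.TryGetValue("Arthur", out age);
        Console.WriteLine("{0}: {1}", found, age);   // False: 0
    }
}
```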
Another common use for a dictionary is when you want something that works like a property, but where the set of available properties is not necessarily fixed. For example, WCF is designed to send and receive messages over a wide range of network technologies, each of which may have its own unique characteristics. So WCF defines some normal properties and methods to deal with aspects of communication that are common to most scenarios, but also provides a dictionary of dynamic properties to handle transport-specific scenarios.
For example, if you are using WCF with HTTP-based communication, you might want your client code to be able to modify the User-Agent header. This header is specific to HTTP, and so WCF doesn’t provide a property for this as part of its programming model, because it wouldn’t do anything for most network protocols. Instead, you control this with a dynamic property, added via the WCF Message type’s Properties dictionary, as Example 9-6 shows.
Example 9-6. Setting a dynamic property on a WCF message
Message wcfMessage = CreateMessageSomehow();

HttpRequestMessageProperty reqProps = new HttpRequestMessageProperty();
reqProps.Headers.Add(HttpRequestHeader.UserAgent, "my user agent");
wcfMessage.Properties[HttpRequestMessageProperty.Name] = reqProps;
C# 4.0 introduces an alternative way to support dynamic properties, through the dynamic keyword (which we will describe in Chapter 18). This makes it possible to use normal C# property access syntax with properties whose availability is determined at runtime. So you might think dynamic makes dictionaries redundant. In practice, dynamic is normally used only when interacting with dynamic programming systems such as scripting languages, so it’s not based on .NET’s dictionary mechanisms.
The final common scenario we’ll look at for dictionaries is to provide efficient storage for a sparse array. A sparse array is indexed by an integer, like a normal array, but only a tiny fraction of its elements contain anything other than the default value. For a numeric element type that would mean the array is mostly zeros, while for a reference type it would be mostly nulls.
As an example of where this might be useful, consider a spreadsheet. When you create a new spreadsheet, it appears to be a large expanse of cells. But it’s not really storing information for every cell. I just ran Microsoft Excel, pressed Ctrl-G to go to a particular cell, typed in $XFD$1000000, and then entered a value for that cell. This goes to the 16,384th column (which is as wide as Excel 2007 can go), and the 1 millionth row. Yet despite spanning more than 16 billion cells, the file is only 8 KB. And that’s because it doesn’t really contain all the cells—it only stores information for the cells that contain something.
The spreadsheet is sparse—it is mostly empty. And it uses a representation that makes efficient use of space when the data is sparse.
If you try to create a rectangular array with 16,384 columns and 1 million rows, you’ll get an exception, because such an array would exceed the 2 GB upper size limit that .NET 4 imposes on any single array. A newly created array always contains default values for all of its elements, so the information it contains is always sparse to start with—sparseness is a characteristic of the data, rather than of the storage mechanism—but the fact that we simply cannot create a new, empty array this large demonstrates that a normal array doesn’t store sparse information efficiently.
There is no built-in type designed specifically for storing sparse data, but we can use a dictionary to make such a thing. Example 9-7 uses a dictionary to provide storage for a single-dimensional sparse array of double elements. It uses long as the index argument type to enable the array to grow to a logical size that is larger than would be possible with an int, which tops out at around 2.1 billion.
Example 9-7. A sparse array of numbers
class SparseArray
{
    private Dictionary<long, double> nonEmptyValues =
        new Dictionary<long, double>();

    public double this[long index]
    {
        get
        {
            double result;
            nonEmptyValues.TryGetValue(index, out result);
            return result;
        }
        set
        {
            nonEmptyValues[index] = value;
        }
    }
}
Notice that this example doesn’t bother to check the return value from TryGetValue. That’s because when it fails to find the entry, it sets the result to the default value, and in the case of a double, that means 0. And 0 is what we want to return for an entry whose value has not been set yet.
The following code uses the SparseArray class:
SparseArray big = new SparseArray();
big[0] = 123;
big[10000000000] = 456;
Console.WriteLine(big[0]);
Console.WriteLine(big[2]);
Console.WriteLine(big[10000000000]);
This sets the value of the first element, and also the element with an index of 10 billion—this simply isn’t possible with an ordinary array. And yet it works fine here, with minimal memory usage. The code prints out values for three indexes, including one that hasn’t been set. Here are the results:
123
0
456
Reading the value that hasn’t been set returns the default value of 0, as required.
Some arrays will be sparser than others, and there will inevitably come a point of insufficient sparseness at which this dictionary-based approach will end up being less efficient than simply using a large array. It’s hard to predict where the dividing line between the two techniques will fall, as it will depend on factors such as the type and quantity of data involved, and the range of index values. As with any implementation choice made on the grounds of efficiency, you should compare the performance against the simpler approach to find out whether you’re getting the benefit you hoped for.
The examples we’ve seen so far have all used the Dictionary type defined in the System.Collections.Generic namespace. But that’s not the only dictionary. As we saw a couple of chapters ago, the IEnumerable<T> type lets us write polymorphic code that can work with any sequential collection class. We can do the same with a dictionary—the .NET Framework class library defines an interface called IDictionary<TKey, TValue>, which is reproduced in Example 9-8.
Example 9-8. IDictionary<TKey, TValue>
namespace System.Collections.Generic
{
    public interface IDictionary<TKey, TValue> :
        ICollection<KeyValuePair<TKey, TValue>>,
        IEnumerable<KeyValuePair<TKey, TValue>>,
        IEnumerable
    {
        void Add(TKey key, TValue value);
        bool ContainsKey(TKey key);
        bool Remove(TKey key);
        bool TryGetValue(TKey key, out TValue value);
        TValue this[TKey key] { get; set; }
        ICollection<TKey> Keys { get; }
        ICollection<TValue> Values { get; }
    }
}
You can see the indexer—TValue this[TKey]—and the TryGetValue method that we already looked at. But as you can see, dictionaries also implement other useful standard features.
The Add method adds a new entry to the dictionary. This might seem redundant because you can add new entries with the indexer, but the difference is that the indexer will happily overwrite an existing value. But if you call Add, you are declaring that you believe this to be a brand-new entry, so the method will throw an exception if the dictionary already contained a value for the specified key.
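A quick sketch (not one of the chapter’s numbered examples) contrasting the two: the indexer silently overwrites, while Add throws an ArgumentException for a duplicate key.

```csharp
using System;
using System.Collections.Generic;

class AddVersusIndexer
{
    static void Main()
    {
        var defs = new Dictionary<string, string>();

        defs["dog"] = "Not a cat.";
        defs["dog"] = "Man's best friend.";   // Indexer: quietly overwrites

        try
        {
            // Add: throws, because an entry for "dog" already exists
            defs.Add("dog", "A four-legged friend.");
        }
        catch (ArgumentException)
        {
            Console.WriteLine("Add threw: 'dog' is already present");
        }

        Console.WriteLine(defs["dog"]);   // Man's best friend.
    }
}
```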
There are members for helping you discover what’s already in the dictionary—you can get a list of all the keys and values from the Keys and Values properties. Both of these implement ICollection<T>, which is a specialized version of IEnumerable<T> that adds in useful members such as Count, Contains, and CopyTo.
Notice also that IDictionary<TKey, TValue> derives from IEnumerable<KeyValuePair<TKey, TValue>>. This means it’s possible to enumerate through the contents of a dictionary with a foreach loop. The KeyValuePair<TKey, TValue> items returned by the enumeration just package the key and associated value into a single struct. We could add the method in Example 9-9 to the class in Example 9-7, in order to print out just those elements with a nondefault value.
Example 9-9. Iterating through a dictionary’s contents
public void ShowArrayContents()
{
    foreach (var item in nonEmptyValues)
    {
        Console.WriteLine("Key: '{0}', Value: '{1}'", item.Key, item.Value);
    }
}
Remember, the presence of IEnumerable<T> is all that LINQ to Objects needs, so we can use dictionaries with LINQ.
Because all IDictionary<TKey, TValue> implementations are also enumerable, we can run LINQ queries against them. Given the RecordCache class in Example 9-5, we might choose to implement the cache item removal policy as shown in Example 9-10.
Example 9-10. LINQ query with dictionary source
private void DiscardAnyOldCacheEntries()
{
    // Calling ToList() on source in order to query a copy
    // of the enumeration, to avoid exceptions due to calling
    // Remove in the foreach loop that follows.
    var staleKeys = from entry in cachedRecords.ToList()
                    where IsStale(entry.Value)
                    select entry.Key;
    foreach (int staleKey in staleKeys)
    {
        cachedRecords.Remove(staleKey);
    }
}
But it’s also possible to create new dictionaries with LINQ queries. Example 9-11 illustrates how to use the standard ToDictionary LINQ operator.
Example 9-11. LINQ’s ToDictionary operator
IDictionary<int, string> buildingIdToNameMap =
    MyDataSource.Buildings.ToDictionary(
        building => building.ID,
        building => building.Name);
This example presumes that MyDataSource is some data source class that provides a queryable collection containing a list of buildings. Since this information would typically be stored in a database, you would probably use a database LINQ provider such as LINQ to Entities or LINQ to SQL. The nature of the source doesn’t greatly matter, though—the mechanism for extracting the results into a dictionary object is the same in any case. The ToDictionary operator needs to be told how to extract the key from each item in the sequence. Here we’ve provided a lambda expression that retrieves the ID property—again, this property would probably be generated by a database mapping tool such as the ones provided by the Entity Framework or LINQ to SQL. (We will be looking at data access technologies in a later chapter.) This example supplies a second lambda, which chooses the value—here we pick the Name property. This second lambda is optional—if you don’t provide it, ToDictionary will just use the entire source item from the stream as the value—so in this example, leaving out the second lambda would cause ToDictionary to return an IDictionary<int, Building> (where Building is whatever type of object MyDataSource.Buildings provides).
The code in Example 9-11 produces the same result as this:
var buildingIdToNameMap = new Dictionary<int, string>();
foreach (var building in MyDataSource.Buildings)
{
    buildingIdToNameMap.Add(building.ID, building.Name);
}
HashSet<T> is a collection of distinct values. If you add the same value twice, it will ignore the second Add, allowing any given value to be added only once. You could use this to ensure uniqueness—for example, imagine an online chat server. If you wanted to make sure that usernames are unique, you could maintain a HashSet<string> of usernames used so far, and check that a new user’s chosen name isn’t already in use by calling the hash set’s Contains method.
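For instance, the uniqueness check might look like this sketch (the class and method names here are our own invention, purely for illustration):

```csharp
using System.Collections.Generic;

class ChatServer
{
    private HashSet<string> usernamesInUse = new HashSet<string>();

    // Returns true if the name was free and has now been claimed.
    public bool TryRegisterUser(string username)
    {
        // HashSet<T>.Add returns false if the value was already
        // present, so the call doubles as the uniqueness check -
        // no separate Contains call is needed.
        return usernamesInUse.Add(username);
    }
}
```

Note the design choice here: because Add reports whether the value was actually added, the check-then-add happens as a single operation rather than a Contains call followed by an Add.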
You might notice that List<T> offers a Contains method, and so with a little extra code, you could implement a uniqueness check using List<T>. However, HashSet<T> uses the same hash-code-based fast lookup as a dictionary, so HashSet<T> will be faster for large sets than List<T>.
HashSet<T> was added in .NET 3.5. Prior to that, people tended to use a dictionary with nothing useful in the value as a way of getting fast hash-code-based uniqueness testing.
.NET 4 adds SortedSet<T>, which is very similar to HashSet<T>, but adds the feature that if you iterate through the items in the set, they will come out in order. (You can provide an IComparer<T> to define the required order, or you can use self-ordering types.) Obviously, you could achieve the same effect by applying the OrderBy LINQ operator to a HashSet<T>, but SortedSet<T> sorts the items as they are added, meaning that they’re already sorted by the time you want to iterate over them.
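A brief sketch of the difference in behavior: items added to a SortedSet<T> in arbitrary order come back out sorted when enumerated, and duplicates are still ignored, since it is a set.

```csharp
using System;
using System.Collections.Generic;

class SortedSetDemo
{
    static void Main()
    {
        var departments = new SortedSet<string>
        {
            "physiotherapy", "radiography", "cardiology"
        };
        departments.Add("radiography");   // Duplicate: ignored

        // Enumeration yields the items in sorted order, with no
        // separate sorting step required.
        foreach (string name in departments)
        {
            Console.WriteLine(name);
        }
        // cardiology
        // physiotherapy
        // radiography
    }
}
```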
Both HashSet<T> and SortedSet<T> offer various handy set-based methods. You can determine whether an IEnumerable<T> is a subset of a set (i.e., all its elements are also found in the set) with the IsSubsetOf method, for example. The available methods are defined by the common ISet<T> interface, reproduced in Example 9-12.
Example 9-12. ISet<T>
namespace System.Collections.Generic
{
    public interface ISet<T> : ICollection<T>, IEnumerable<T>, IEnumerable
    {
        bool Add(T item);
        void ExceptWith(IEnumerable<T> other);
        void IntersectWith(IEnumerable<T> other);
        bool IsProperSubsetOf(IEnumerable<T> other);
        bool IsProperSupersetOf(IEnumerable<T> other);
        bool IsSubsetOf(IEnumerable<T> other);
        bool IsSupersetOf(IEnumerable<T> other);
        bool Overlaps(IEnumerable<T> other);
        bool SetEquals(IEnumerable<T> other);
        void SymmetricExceptWith(IEnumerable<T> other);
        void UnionWith(IEnumerable<T> other);
    }
}
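A sketch of a couple of these methods in use. Since the parameters are typed as IEnumerable<T>, a plain array works as the argument; note also that the With methods modify the set in place rather than returning a new set.

```csharp
using System;
using System.Collections.Generic;

class SetOperations
{
    static void Main()
    {
        var departments = new HashSet<string>
        {
            "radiography", "physiotherapy", "cardiology"
        };

        // Any IEnumerable<string> can be tested against the set.
        string[] wanted = { "radiography", "cardiology" };
        Console.WriteLine(departments.IsSupersetOf(wanted));   // True

        // UnionWith modifies the existing set in place.
        departments.UnionWith(new[] { "oncology" });
        Console.WriteLine(departments.Count);   // 4
    }
}
```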
Queue<T> is a handy collection type for processing entities on a first come, first served basis. For example, some doctors’ general practice surgeries operate an appointment-free system. Since the time taken to see each patient in these scenarios can vary wildly depending on the problem at hand, seeing patients in turn can end up being a lot more efficient than allocating fixed-length appointment slots.
We could model this with a Queue<Patient> (where Patient is some class defined by our application). When a patient arrives, she would be added to a queue by calling its Enqueue method:
private Queue<Patient> waitingPatients = new Queue<Patient>();
...
public void AddPatientToQueue(Patient newlyArrivedPatient)
{
    waitingPatients.Enqueue(newlyArrivedPatient);
}
When a doctor has finished seeing one patient and is ready to see the next, the Dequeue method will return the patient who has been in the queue longest, and will then remove that patient from the queue:
Patient nextPatientToSee = waitingPatients.Dequeue();
While this example perfectly matches how Queue<T> works, you probably wouldn’t use a Queue<T> here in practice. You’d want to handle crashes and power failures gracefully in this application, which means that in practice, you’d probably store the list of waiting patients in a database, along with something such as a ticket number indicating their place in the queue.
In-memory queues tend to show up more often in multithreaded servers for keeping track of outstanding work. But since we haven’t gotten to either the networking or the threading chapters yet, an example along those lines would be premature.
Queue<T> implements IEnumerable<T>, so you can use LINQ queries across items in the whole queue. It also provides a Count property (via the nongeneric ICollection interface), so you can discover whether the queue is currently empty.
Queue<T> operates in strict first in, first out (FIFO) order, which is to say that Dequeue will return items in exactly the same order in which they were added with Enqueue. That might be fine for general practice, but it wouldn’t work so well for the emergency room.
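The FIFO ordering is easy to see in a minimal sketch:

```csharp
using System;
using System.Collections.Generic;

class FifoDemo
{
    static void Main()
    {
        var queue = new Queue<string>();
        queue.Enqueue("first patient");
        queue.Enqueue("second patient");
        queue.Enqueue("third patient");

        // Dequeue returns items in exactly the order they were added.
        while (queue.Count > 0)
        {
            Console.WriteLine(queue.Dequeue());
        }
        // first patient
        // second patient
        // third patient
    }
}
```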
If you’ve ever had to visit a hospital emergency room, you’ll know that waiting in a queue is one of the defining features of the experience unless you were either very lucky or very unlucky. If you were lucky, the queue will have been empty and you will not have had to wait. Alternatively, if you were unlucky, your condition may have been sufficiently perilous that you got to jump to the head of the queue.
In medical emergencies, a triage system will be in place to work out where each arriving patient should go in the queue. A similar pattern crops up in other scenarios—frequent fliers with gold cards may be allocated standby seats at the last minute even though others have been waiting for hours; celebrities might be able to walk right into a restaurant for which the rest of us have to book a table weeks in advance.
The LinkedList<T> class is able to model these sorts of scenarios. At its simplest, you could use it like a Queue<T>—call AddLast to add an item to the back of the queue (as Enqueue would), and RemoveFirst to take the item off the head of the queue (as Dequeue would). But you can also add an item to the front of the queue with AddFirst. Or you can add items anywhere you like in the queue with the AddBefore and AddAfter methods. Example 9-13 uses this to place new patients into the queue.
Example 9-13. Triage in action
private LinkedList<Patient> waitingPatients = new LinkedList<Patient>();
...
LinkedListNode<Patient> current = waitingPatients.First;
while (current != null)
{
    if (current.Value.AtImminentRiskOfDeath)
    {
        current = current.Next;
    }
    else
    {
        break;
    }
}

if (current == null)
{
    waitingPatients.AddLast(newPatient);
}
else
{
    waitingPatients.AddBefore(current, newPatient);
}
This code adds the new patient after all those patients in the queue whose lives appear to be at immediate risk, but ahead of all other patients—the patient is presumably either quite unwell or a generous hospital benefactor. (Real triage is a little more complex, of course, but you still insert items into the list in the same way, no matter how you go about choosing the insertion point.)
Note the use of LinkedListNode<T>—this is how LinkedList<T> presents the queue’s contents. It allows us not only to see the item in the queue, but also to navigate back and forth through the queue with the Next and Previous properties.
Whereas Queue<T> operates a FIFO order, Stack<T> operates a last in, first out (LIFO) order. Looking at this from a queuing perspective, it seems like the height of unfairness—latecomers get priority over those who arrived early. However, there are some situations in which this topsy-turvy ordering can make sense.
A performance characteristic of most computers is that they tend to be able to work faster with data they’ve processed recently than with data they’ve not touched lately. CPUs have caches that provide faster access to data than a computer’s main memory can support, and these caches typically operate a policy where recently used data is more likely to stay in the cache than data that has not been touched recently.
If you’re writing a server-side application, you may consider throughput to be more important than fairness—the total rate at which you process work may matter more than how long any individual work item takes to complete. In this case, a LIFO order may make the most sense—work items that were only just put into a queue are much more likely to still live in the CPU’s cache than those that were queued up ages ago, and so you’ll get better throughput during high loads if you process newly arrived items first. Items that have sat in the queue for longer will just have to wait for a lull.
Like Queue<T>, Stack<T> offers a method to add an item, and one to remove it. It calls these Push and Pop, respectively. They are very similar to the queue’s Enqueue and Dequeue, except they both work off the same end of the list. (You could get the same effect using a LinkedList<T>, and always calling AddFirst and RemoveFirst.)
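A minimal sketch of the LIFO ordering:

```csharp
using System;
using System.Collections.Generic;

class LifoDemo
{
    static void Main()
    {
        var stack = new Stack<int>();
        stack.Push(1);
        stack.Push(2);
        stack.Push(3);

        // Pop returns the most recently pushed item first.
        Console.WriteLine(stack.Pop());    // 3
        // Peek inspects the top item without removing it.
        Console.WriteLine(stack.Peek());   // 2
        Console.WriteLine(stack.Pop());    // 2
    }
}
```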
A stack could also be useful for managing navigation history. The Back button in a browser works in LIFO order—the first page it shows you is the last one you visited. (And if you want a Forward button, you could define a second stack—each time the user goes Back, Push the current page onto the Forward stack. Then if the user clicks Forward, Pop a page from the Forward stack, and Push the current page onto the Back stack.)
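Here is a sketch of that two-stack scheme (the class and member names are ours, purely for illustration):

```csharp
using System.Collections.Generic;

class NavigationHistory
{
    private Stack<string> back = new Stack<string>();
    private Stack<string> forward = new Stack<string>();
    private string currentPage;

    public NavigationHistory(string startPage)
    {
        currentPage = startPage;
    }

    public string CurrentPage { get { return currentPage; } }

    public void Visit(string page)
    {
        back.Push(currentPage);
        // Visiting a new page invalidates any Forward history,
        // just as it does in a real browser.
        forward.Clear();
        currentPage = page;
    }

    public void GoBack()
    {
        if (back.Count == 0) { return; }
        forward.Push(currentPage);
        currentPage = back.Pop();
    }

    public void GoForward()
    {
        if (forward.Count == 0) { return; }
        back.Push(currentPage);
        currentPage = forward.Pop();
    }
}
```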
The .NET Framework class library provides various useful collection classes. We saw List<T> in an earlier chapter, which provides a simple resizable linear list of items. Dictionaries store entries by associating them with keys, providing fast key-based lookup. HashSet<T> and SortedSet<T> manage sets of unique items, with optional ordering. Queues, linked lists, and stacks each manage a queue of items, offering various strategies for how the order of addition relates to the order in which items come out of the queue.