26.2 Overview
to be considerably more limited, and players are more likely to play the game
without resetting for hours or even days on end. Thus, the dreaded “uptime
crash” is almost unavoidable without special precautions, careful planning, and
perhaps a custom memory management strategy that attempts to avoid fragmentation automatically.
For further clarification of how fragmentation can occur within a program,
see Figure 26.1. Fragmentation can be demonstrated in three simple operations:
allocate twice and free once. If a large allocation is immediately followed by a
small one, then once the large block is released back to the system, the small
allocation sits between two free regions, and the heap has a fragmentation ratio
of roughly 0.5. If the next allocation request is slightly larger than 50 percent
of the total free memory, it will fail, even though that much memory is free in
aggregate.
Most decent memory managers have allocation strategies that react differently
based on the size of the allocation. Often, there is a specific handler for tiny
allocations (under 256 bytes, for instance) as well as a general allocation method
for everything larger. The primary idea is to reduce the per-block overhead of
very small allocations, which tend to represent the lion's share of allocations in
most C/C++ programs. One happy consequence is that, because those small
allocations come from their own pool rather than the general heap, they can
never split a large free block in half and cause fragmentation there.
By extending this understanding to the design of a memory manager, it is
possible to reduce external fragmentation at some small expense of internal
fragmentation. To illuminate by exaggeration: if you round up every allocation
to the largest size you'll ever allocate, external fragmentation ceases to matter,
because no allocation can ever fail due to a fragmented heap, and any block that
gets freed can always satisfy any allocation that may come in the future. Of
course, all of your memory would be exhausted long before that could happen!
Preposterous as it is, scaling the idea back and applying it only to smaller
allocations, where rounding up the size wastes comparatively little, makes the
concept viable. The trick is to make it fast and to limit the amount of internal
fragmentation (i.e., wasted space) the strategy produces.
Our approach is to make a very fast small block allocator (SBA) for allocations
under 256 bytes, a reasonably fast medium block allocator (MBA) for allocations
that are larger than 256 bytes but smaller than a large allocation, and a large
block allocator (LBA) for any allocation of at least 4096 bytes. As mentioned
above, the code on the website is templatized, so these numbers can be modified
trivially for your purposes, and when set to equal sizes, you can completely
remove the SBA, the MBA, or both.