Caching

When some of your application functions take too long to compute, a useful technique to consider is caching. Caching is nothing but saving return values for future reference. The result of a function or method that is expensive to run can be cached as long as:

  • The function is deterministic and the results have the same value every time, given the same input
  • The return value of the function continues to be useful and valid for some period of time (the nondeterministic case)

In other words, a deterministic function always returns the same result for the same set of arguments, whereas a nondeterministic one returns results that may vary over time. Such an approach usually greatly reduces the time of computation and allows you to save a lot of computing resources.

The most important requirement for any caching solution is to have storage that allows you to retrieve saved values significantly faster than it takes to calculate them. Good candidates for caching are usually:

  • Results from callables that query databases
  • Results from callables that render static values, such as file content, web requests, or PDF rendering
  • Results from deterministic callables that perform complex calculations
  • Global mappings that keep track of values with expiration times, such as web session objects
  • Results that need to be accessed often and quickly

Another important use case for caching is saving results from third-party APIs served over the Web. This may greatly improve application performance by cutting out network latency, but it also allows you to save money if you are billed for every request to such an API.

Depending on your application architecture, a cache can be implemented in many ways and with various levels of complexity, and complex applications can use different approaches at different levels of the application architecture stack. Sometimes a cache may be as simple as a single global data structure (usually a dict) kept in the process memory. In other situations, you may want to set up a dedicated caching service that runs on carefully tailored hardware. This section will provide you with basic information on the most popular caching approaches, guide you through the usual use cases, and point out the common pitfalls.

Deterministic caching

Deterministic functions are the easiest and safest use case for caching. Deterministic functions always return the same value if given exactly the same input, so generally you can store their result indefinitely. The only limitation is the size of storage you use for caching. The simplest way to cache such results is to put them into process memory because it is usually the fastest place to retrieve data from. Such a technique is often called memoization.

Memoization is very useful when optimizing recursive functions that may evaluate the same inputs multiple times. We already discussed recursive implementation for the Fibonacci sequence in Chapter 7, Python Extensions in Other Languages. Back then, we tried to improve the performance of our program with C and Cython. Now we will try to achieve the same goal by simpler means—with the help of caching. But before we do that, let's recall the code for the fibonacci() function:

def fibonacci(n):
    """ Return nth Fibonacci sequence number computed recursively
    """
    if n < 2:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

As we can see, fibonacci() is a recursive function that calls itself twice if the input value is two or more. This makes it highly inefficient. The runtime complexity is O(2^n), and its execution creates a very deep and vast call tree. For large values, this function takes extremely long to execute, and for sufficiently large inputs it will exceed the maximum recursion limit of the Python interpreter.

If you take a closer look at Figure 3, which presents an example call tree, you will see that it evaluates many of the intermediate results multiple times. A lot of time and resources could be saved if we could reuse some of these values.

Figure 3 Call tree for fibonacci(5) execution

A simple memoization attempt would be to store results of the previous runs in a dictionary and retrieve them if they are available. Both the recursive calls in the fibonacci() function are contained in a single line of code:

return fibonacci(n - 1) + fibonacci(n - 2)

We know that Python evaluates instructions from left to right. This means that, in this situation, the call to the function with the higher argument value will be executed before the call to the one with the lower argument. Thanks to this, we can provide memoization by constructing a very simple decorator:

def memoize(function):
    """ Memoize the call to single-argument function
    """
    call_cache = {}

    def memoized(argument):
        try:
            return call_cache[argument]
        except KeyError:
            return call_cache.setdefault(argument, function(argument))

    return memoized


@memoize
def fibonacci(n):
    """ Return nth Fibonacci sequence number computed recursively
    """
    if n < 2:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

We used the dictionary in the closure of the memoize() decorator as simple storage for cached values. Saving and retrieving a value in that data structure has an average O(1) complexity, so this greatly reduces the overall complexity of the memoized function. Every unique function call will be evaluated only once. The call tree of the updated function is presented in Figure 4. Without going into mathematical proofs, we can visually deduce that, without changing the core of the fibonacci() function, we reduced its complexity from the very expensive O(2^n) to linear O(n).

Figure 4 A call tree for fibonacci(5) execution with memoization
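
If you want to verify the speedup yourself, a quick check with the standard timeit module might look like the following sketch (it assumes the memoized fibonacci() defined above is already in scope, and exact timings will of course vary between machines):

import timeit

# with memoization, even the first evaluation of fibonacci(30) finishes in
# a fraction of a millisecond because every unique argument is computed once
print(timeit.timeit("fibonacci(30)", globals=globals(), number=1))

# the original, non-memoized version needs over a million recursive calls
# for the same input and takes orders of magnitude longer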

The implementation of our memoize() decorator is, of course, not perfect. It worked well for that simple example, but it definitely isn't a reusable piece of software. If you need to memoize functions with multiple arguments or want to limit the size of your cache, you need something more generic. Luckily, the Python standard library provides a very simple and reusable utility that can be used in most cases when you need to cache the results of deterministic functions in memory. It is the lru_cache(maxsize, typed) decorator from the functools module. The name comes from the LRU cache, which stands for least recently used. The additional parameters allow for finer control over memoization behavior:

  • maxsize: This sets the maximum size of the cache. The None value means no limit at all.
  • typed: If set to True, arguments of different types that compare as equal (for example, 1 and 1.0) are cached separately

The usage of lru_cache in our Fibonacci sequence example would be as follows:

from functools import lru_cache


@lru_cache(None)
def fibonacci(n):
    """ Return nth Fibonacci sequence number computed recursively
    """
    if n < 2:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
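
Functions wrapped with lru_cache() additionally expose the cache_info() and cache_clear() methods, which let you inspect and reset the cache and are handy for verifying that memoization really takes effect:

fibonacci(30)
print(fibonacci.cache_info())
# CacheInfo(hits=28, misses=31, maxsize=None, currsize=31)

fibonacci.cache_clear()  # empties the cache if you ever need to start fresh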

Nondeterministic caching

The caching of nondeterministic functions is much trickier than memoization. Because every execution of such a function may give different results, it is usually impossible to use previous values for an arbitrarily long amount of time. What you need to do is decide for how long a cached value can be considered valid. After a defined period of time passes, the stored results are considered stale and the cache needs to be refreshed with a new value.

Nondeterministic functions that are usually subject to caching very often depend on some external state that is hard to track inside your application code. Typical examples of such components are:

  • Relational databases and generally any type of structured data storage engine
  • Third-party services accessible through network connection (web APIs)
  • Filesystems

So, in other words, nondeterministic caching is used in any situation where you temporarily want to use precomputed results without being sure whether they represent a state that is consistent with the state of other system components (usually, a backing service).

Note that such an implementation of caching is obviously a trade-off, and so it is somewhat related to the techniques we featured in the Using architectural trade-offs section. If you give up on running part of your code every time and instead use results saved in the past, you risk using data that has become stale or that represents an inconsistent state of your system. This way, you trade correctness and/or completeness for speed and performance.

Of course, such caching is efficient only as long as the time needed to interact with the cache is less than the time the cached function takes. If it's faster to simply recalculate the value, by all means do so! That's why a cache should be set up only if it's worth it; setting it up properly has a cost of its own.

The actual things that are cached are usually the whole results of interactions with other components of your system. If you want to save time and resources when communicating with a database, it is worth caching expensive queries. If you want to reduce the number of I/O operations, you may want to cache the content of files that are accessed very often (configuration files, for instance).
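
As a minimal sketch of the latter idea, the following function (the names and the cache layout are only illustrative) keeps file contents in a module-level dictionary and rereads a file only when its modification time changes:

import os

# a module-level dictionary mapping paths to (mtime, content) pairs
_file_cache = {}


def read_config(path):
    """ Return file content, rereading it only when the file changes
    """
    mtime = os.path.getmtime(path)
    cached = _file_cache.get(path)
    if cached is not None and cached[0] == mtime:
        return cached[1]

    with open(path) as config_file:
        content = config_file.read()

    _file_cache[path] = (mtime, content)
    return content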

Techniques for caching nondeterministic functions are actually very similar to those used in caching deterministic ones. The most notable difference is that they usually require the option to invalidate cached values based on their age. This means that the lru_cache() decorator from the functools module has very limited use in such situations. It should not be too hard to extend this function to provide the expiration feature, but I will leave it as an exercise for you.
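
As a starting point for that exercise, here is one possible sketch: a variant of our earlier memoize() decorator that stores a timestamp next to every result and recomputes values once they are older than a given number of seconds (the names and the very naive eviction strategy are only illustrative):

import time
from functools import wraps


def memoize_with_ttl(ttl_seconds):
    """ Memoize single-argument calls, expiring entries after ttl_seconds
    """
    def decorator(function):
        call_cache = {}

        @wraps(function)
        def memoized(argument):
            entry = call_cache.get(argument)
            if entry is not None:
                result, stored_at = entry
                if time.monotonic() - stored_at < ttl_seconds:
                    return result

            result = function(argument)
            call_cache[argument] = (result, time.monotonic())
            return result

        return memoized
    return decorator


@memoize_with_ttl(ttl_seconds=60)
def get_user_profile(user_id):
    ...  # imagine an expensive database query here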

Cache services

We said that nondeterministic caching can be implemented using local process memory, but it is actually rarely done that way. This is because local process memory has very limited utility as cache storage in large applications.

If you run into a situation where nondeterministic caching is your preferred solution to a performance problem, you usually need something more than that. Nondeterministic caching is usually a must-have solution when you need to serve data or services to multiple users at the same time. If that is the case, then sooner or later you will need to ensure that users can be served concurrently. While local memory provides a way to share data between multiple threads, threads may not be the best concurrency model for every application. This model does not scale well, so you will eventually need to run your application as multiple processes.

If you are lucky enough, you may need to run your application on hundreds or thousands of machines. If you stored cached values in local memory, your cache would need to be duplicated in every process that requires it. That is not only a total waste of resources: if every process has its own cache, which is already a trade-off between speed and consistency, how can you guarantee that all caches are consistent with each other?

Consistency across subsequent requests is a serious concern, especially for web applications with distributed backends. In complex distributed systems, it is extremely hard to ensure that a user will always be consistently served by the same process hosted on the same machine. It is, of course, doable to some extent, but once you solve that problem, ten others will pop up.

If you are making an application that will need to serve multiple concurrent users, then the best way to handle a nondeterministic cache is to use some dedicated service for that. With tools such as Redis or Memcached, you allow all your application processes to share the same cached results. This both reduces the usage of precious computing resources and saves you from problems caused by having multiple independent and inconsistent caches.

Memcached

If you want to be serious about caching, Memcached is a very popular and battle-tested solution. This cache server is used by big applications such as Facebook or Wikipedia to scale their websites. Besides simple caching features, it has clustering capabilities that make it possible to set up a highly efficient distributed cache system in no time.

The server itself is Unix-based, but it can be driven from any platform and from many programming languages. There are many Python clients that differ slightly from each other, but the basic usage is usually the same. The simplest interaction with Memcached almost always consists of three methods:

  • set(key, value): This saves the value for the given key
  • get(key): This gets the value for the given key if it exists
  • delete(key): This deletes the value under the given key if it exists

Here is an example of integration with Memcached using one of the popular Python packages, pymemcache:

from pymemcache.client.base import Client

# set up a Memcached client connecting to localhost on port 11211
client = Client(('localhost', 11211))

# cache some value under some key and expire it after 10 seconds
client.set('some_key', 'some_value', expire=10)

# retrieve value for the same key
result = client.get('some_key')

One of the downsides of Memcached is that it is designed to store values as strings or binary blobs, and this isn't compatible with every native Python type. Actually, it is directly compatible with only one: strings. This means that more complex types need to be serialized in order to be successfully stored in Memcached. A common serialization choice for simple data structures is JSON. Here is an example of using JSON serialization with pymemcache:

import json
from pymemcache.client.base import Client

def json_serializer(key, value):
    if type(value) == str:
        return value, 1
    return json.dumps(value), 2

def json_deserializer(key, value, flags):
    if flags == 1:
        return value
    if flags == 2:
        return json.loads(value)
    raise Exception("Unknown serialization format")

client = Client(('localhost', 11211), serializer=json_serializer,
                deserializer=json_deserializer)
client.set('key', {'a':'b', 'c':'d'})
result = client.get('key')

Another problem that is very common when working with any caching service based on the key/value storage principle is how to choose key names.

For cases when you cache simple function invocations that have basic parameters, the problem is usually simple. You can convert the function name and its arguments to strings and concatenate them together. The only thing you need to take care of is making sure there are no collisions between keys created for different functions if you use the cache in many parts of your application.

A more problematic case is when cached functions have complex arguments consisting of dictionaries or custom classes. In that case, you need to find a way to convert such invocation signatures to cache keys in a consistent manner.

The last problem is that Memcached, like many other caching services, does not tend to like very long key strings. Usually, the shorter, the better. Long keys may either reduce performance or just not fit the hardcoded service limits. For instance, if you cache whole SQL queries, the query strings themselves are generally good unique identifiers that could be used as keys. On the other hand, complex queries are generally too long to be stored in typical caching services such as Memcached. A common practice is to calculate an MD5, SHA, or other hash of the key material and use that as the cache key instead. The Python standard library has a hashlib module that provides implementations of a few popular hash algorithms.
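
A minimal sketch of such a key-building helper (the function name and the exact format are only illustrative, and repr() is good enough only for basic argument types) could look as follows:

import hashlib


def make_cache_key(function_name, *args, **kwargs):
    """ Build a deterministic cache key from a call signature
    """
    raw_key = '%s:%r:%r' % (function_name, args, sorted(kwargs.items()))
    # Memcached keys must be short (250 bytes by default) and must not
    # contain whitespace, so unwieldy keys are replaced with their hash
    if len(raw_key) > 250 or ' ' in raw_key:
        digest = hashlib.sha256(raw_key.encode('utf-8')).hexdigest()
        return '%s:%s' % (function_name, digest)
    return raw_key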

Remember that calculating a hash comes at a price. However, sometimes it is the only viable solution, and it is also a very useful technique when dealing with complex types that need to be used to create cache keys. One important thing to be careful about when using hashing functions is hash collisions. No hash function guarantees that collisions will never occur, so always be sure to know the probability and be mindful of such risks.
