The idea behind memoization is to cache previous results to avoid recomputing them. We'll use considerably more memory, but we can also dramatically speed up performance by avoiding computation.
An ordinary function doesn't have a place to cache previous results. A function is not expected to be stateful. A callable object, however, can be stateful. It can include a cache of previous results.
The following is a memoized version of our Power callable object:
class Power5:
    def __init__(self) -> None:
        self.memo: dict[tuple[int, int], int] = {}

    def __call__(self, x: int, n: int) -> int:
        if (x, n) not in self.memo:
            if n == 0:
                self.memo[x, n] = 1
            elif n % 2 == 1:
                self.memo[x, n] = self.__call__(x, n - 1) * x
            elif n % 2 == 0:
                t = self.__call__(x, n // 2)
                self.memo[x, n] = t * t
            else:
                raise Exception("Logic Error")
        return self.memo[x, n]

pow5: IntExp = Power5()
We revised our algorithm to work with a self.memo cache. This is initialized to an empty mapping. In the __call__() method, the cache is checked for previously computed answers.
If the parameter values have been requested previously, the cached result is returned and no computation is performed. This is the big speedup that we spoke of earlier.
Otherwise, the parameter values are not present in the cache. In this missing-value case, the result must be computed and saved. The three rules for computing the fast exponent are used to get and put values in the cache. This assures us that future calculations will be able to exploit the cached values.
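We can see both effects at work by calling the object and then inspecting its cache. The following sketch restates the Power5 class (without the defensive else branch) so it runs on its own; the printed keys show that the fast-exponent recursion seeded the cache with intermediate powers as well as the final answer:

```python
# Self-contained sketch of the Power5 class, with a peek at its cache.
class Power5:
    def __init__(self) -> None:
        self.memo: dict[tuple[int, int], int] = {}

    def __call__(self, x: int, n: int) -> int:
        if (x, n) not in self.memo:
            if n == 0:
                self.memo[x, n] = 1
            elif n % 2 == 1:
                self.memo[x, n] = self.__call__(x, n - 1) * x
            else:
                t = self.__call__(x, n // 2)
                self.memo[x, n] = t * t
        return self.memo[x, n]

pow5 = Power5()
print(pow5(2, 10))        # 1024
# The recursion cached every intermediate (x, n) pair it touched:
print(sorted(pow5.memo))  # [(2, 0), (2, 1), (2, 2), (2, 4), (2, 5), (2, 10)]
```

A second call to pow5(2, 10), or to any of the cached intermediate pairs, returns immediately from self.memo with no recursion at all.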
The importance of memoization can't be stressed enough. The reduction in computation can be dramatic. It is commonly done by replacing a slow, expensive function with a callable object.