Sometimes we may need to store the return values of expensive function calls so that when the function is called again with the same argument(s) we can avoid recomputing the result and simply return the stored value. This practice is known as caching or memoization.
The functools module in the standard library offers the @lru_cache decorator, a powerful memoization tool.
Memoization with functools.lru_cache
The @lru_cache decorator can be used to wrap a function and store its return values. This can save time on future function calls, as the return value for a previously used argument can be quickly looked up in the cache instead of re-running the function.
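As a minimal sketch, using a hypothetical slow_square() function to stand in for an expensive computation:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)
def slow_square(x):
    # Simulate an expensive computation (hypothetical example)
    time.sleep(1)
    return x * x

print(slow_square(4))  # takes about a second: the result is computed and cached
print(slow_square(4))  # returns almost instantly: the cached value is reused
```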
```python
@lru_cache(maxsize=128, typed=False)
def function_to_cache(*args):
    # statements
    ...
```
| Parameter | Description |
| --- | --- |
| maxsize | The maximum number of return values to store in the cache. If exceeded, the least recently used value is discarded. |
| typed | If True, arguments of different types are cached separately even if they are equal in value. For example, f(5.0) and f(5) will be cached separately with distinct results. |
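A small sketch of how typed=True affects caching, using a hypothetical double() function:

```python
from functools import lru_cache

@lru_cache(maxsize=128, typed=True)
def double(x):
    print(f"computing double({x!r})")
    return x * 2

double(5)    # computed and cached under the int key 5
double(5.0)  # computed again: with typed=True, 5.0 is cached separately from 5
double(5)    # cache hit: nothing is printed
```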
The least recently used (LRU) approach ensures that old entries are removed from the cache if the preset size is exceeded.
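As a minimal sketch of such a setup, assuming a naive recursive fibonacci() timed with timeit and partial:

```python
from functools import lru_cache, partial
import timeit

@lru_cache(maxsize=128)
def fibonacci(n):
    # Naive recursive Fibonacci: exponential time without caching
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Time the first call (cache empty) and a repeated call (cache populated)
print(timeit.timeit(partial(fibonacci, 50), number=1))
print(timeit.timeit(partial(fibonacci, 50), number=1))
```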
In the above example:
- We imported lru_cache and partial from the functools module, along with the timeit module for measuring the execution time before and after the cache is populated.
- We defined the fibonacci() function and decorated it with the @lru_cache decorator. The naive recursive fibonacci(), as defined above, is infamous for its exponential time complexity.
- The remaining statements measure the time it takes for the fibonacci() function to execute when given 50 as the nth value.
- As a result, the Fibonacci value for each argument is computed only on the first call, which is why that call takes longer. On subsequent calls the cached value is returned, resulting in far less execution time.
Get cache info
The @lru_cache decorator allows us to access cache information through the cache_info() method. The method returns a CacheInfo object containing statistics about the cached function: the number of cache hits, the number of cache misses, the maximum size of the cache, and its current size.
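For example, a small self-contained sketch of inspecting the cache statistics (the exact numbers may differ):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(50)
print(fibonacci.cache_info())
# e.g. CacheInfo(hits=48, misses=51, maxsize=128, currsize=51)
```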
Clear the cache
The @lru_cache decorator also provides the cache_clear() method, which can be used to clear the cached data. This is useful if you want to reset the cache and start fresh with new entries.
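Continuing the Fibonacci sketch above, clearing the cache resets both the stored values and the statistics:

```python
fibonacci(50)
print(fibonacci.cache_info())  # cache is populated
fibonacci.cache_clear()
print(fibonacci.cache_info())  # CacheInfo(hits=0, misses=0, maxsize=128, currsize=0)
```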
Conclusion
Memoization is a software optimization technique that stores and returns the result of a function call based on its parameters. It is most useful when the function is deterministic, meaning the same set of arguments always yields the same output.
The @lru_cache() decorator in the functools module is an efficient in-memory cache for optimizing functions that are expensive to execute. The functools module is part of the standard library, so no extra installation is needed.