API Reference Guide

Cache API

Falcon-Caching - a caching module for the Falcon web framework

class falcon_caching.AsyncCache(config: Dict[str, Any])

This is the central class for the caching

You need to initialize this object to setup the attributes of the caching and then supply the object’s middleware to the Falcon app.

Parameters:config (dict of str: str) – Cache config settings
cache

An initialized ‘CACHE_TYPE’ cache from the backends.

Type:BaseCache
cache_args

Optional list passed during the cache class instantiation.

Type:list of str
cache_options

Optional dictionary passed during the cache class instantiation.

Type:dict of str: str
config

Cache config settings

Type:dict of str: str
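
A minimal sketch of wiring an AsyncCache into a Falcon ASGI app. The 'simple' CACHE_TYPE and the 'time-based' CACHE_EVICTION_STRATEGY values below are illustrative assumptions; consult the configuration documentation for the settings your deployment needs:

import falcon.asgi
from falcon_caching import AsyncCache

# Assumed config values for illustration only
cache = AsyncCache(config={
    'CACHE_TYPE': 'simple',
    'CACHE_EVICTION_STRATEGY': 'time-based',
})

class ThingsResource:
    @cache.cached(timeout=600)  # cache this responder's output for 10 minutes
    async def on_get(self, req, resp):
        resp.media = {'message': 'expensive result'}

# Register the cache middleware so cached responses are served by it
app = falcon.asgi.App(middleware=cache.middleware)
app.add_route('/things', ThingsResource())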

add(*args, **kwargs) → bool

It adds a given key and value to the cache, but only if no record with such a key already exists.

cached(timeout: int)

This is the decorator used to decorate a resource class or the requested method of the resource class.

clear() → bool

It clears all cache records - if the CACHE_KEY_PREFIX config attribute is used then it only removes keys starting with that prefix, otherwise it flushes the whole database.

dec(*args, **kwargs) → Optional[int]

It decrements and returns the value of a numerical cache record. Only works for Redis and Redis Sentinel!

delete(*args, **kwargs) → bool

It deletes the cached record based on the provided key.

delete_many(*args, **kwargs) → bool

It deletes all cached records matching the list of keys provided.

delete_memoized(f, *args, **kwargs)

Deletes the specified function's caches, based on the given parameters. If parameters are given, only the versions memoized with those parameters will be erased. Otherwise all versions of the cache will be forgotten. Example:

@cache.memoize(50)
def random_func():
    return random.randrange(1, 50)
@cache.memoize()
def param_func(a, b):
    return a+b+random.randrange(1, 50)
>>> random_func()
43
>>> random_func()
43
>>> cache.delete_memoized(random_func)
>>> random_func()
16
>>> param_func(1, 2)
32
>>> param_func(1, 2)
32
>>> param_func(2, 2)
47
>>> cache.delete_memoized(param_func, 1, 2)
>>> param_func(1, 2)
13
>>> param_func(2, 2)
47

delete_memoized is also smart about instance methods vs class methods. When passing an instance method, it will only clear the cache related to that instance of that object (object uniqueness can be overridden by defining the __repr__ method, such as a user id). When passing a class method, it will clear all caches across all instances of that class. Example:

class Adder(object):
    @cache.memoize()
    def add(self, b):
        return b + random.random()
>>> adder1 = Adder()
>>> adder2 = Adder()
>>> adder1.add(3)
3.23214234
>>> adder2.add(3)
3.60898509
>>> cache.delete_memoized(adder1.add)
>>> adder1.add(3)
3.01348673
>>> adder2.add(3)
3.60898509
>>> cache.delete_memoized(Adder.add)
>>> adder1.add(3)
3.53235667
>>> adder2.add(3)
3.72341788
Parameters:
  • fname – The memoized function.
  • *args – A list of positional parameters used with the memoized function.
  • **kwargs – A dict of named parameters used with the memoized function.

Note

Falcon-Caching uses inspect to order kwargs into positional args when the function is memoized. If you pass a function reference into fname, Falcon-Caching will be able to place the args/kwargs in the proper order, and delete the positional cache. However, if delete_memoized is just called with the name of the function, be sure to pass in potential arguments in the same order as defined in your function as args only, otherwise Falcon-Caching will not be able to compute the same cache key and delete all memoized versions of it.

Note

Falcon-Caching maintains an internal random version hash for the function. Using delete_memoized will only swap out the version hash, causing the memoize function to recompute results and put them into another key. This leaves any computed caches for this memoized function within the caching backend. It is recommended to use a very high timeout with memoize if using this function, so that when the version hash is swapped, the old cached results would eventually be reclaimed by the caching backend.

delete_memoized_verhash(f, *args)

Delete the version hash associated with the function.

Warning

Performing this operation could leave keys behind that have been created with this version hash. It is up to the application to make sure that all keys that may have been created with this version hash at least have timeouts so they will not sit orphaned in the cache backend.

get(*args, **kwargs) → Any

It returns the value for the given key from the cache.

get_dict(*args, **kwargs) → Dict[Any, Any]

It returns the keys and values as a dictionary for all requested keys.

get_many(*args, **kwargs) → List[Any]

It returns the list of values matching the list of keys.

has(*args, **kwargs) → bool

It determines if the given key is in the cache.

inc(*args, **kwargs) → Optional[int]

It increments and returns the value of a numerical cache record. Only works for Redis and Redis Sentinel!

memoize(timeout=None, make_name=None, unless=None, forced_update=None, response_filter=None, hash_method=<built-in function openssl_md5>, cache_none=False)

Use this to cache the result of a function, taking its arguments into account in the cache key. See Memoization for background. Example:

@cache.memoize(timeout=50)
def big_foo(a, b):
    return a + b + random.randrange(0, 1000)
>>> big_foo(5, 2)
753
>>> big_foo(5, 3)
234
>>> big_foo(5, 2)
753
The returned decorated function now has three function attributes
assigned to it:
    **uncached**
        The original undecorated function. Readable only.
    **cache_timeout**
        The cache timeout value for this function.
        For a custom value to take effect, this must be
        set before the function is called.
        Readable and writable.
    **make_cache_key**
        A function used in generating the cache_key used.
        Readable and writable.
Parameters:
  • timeout – Default None. If set to an integer, will cache for that amount of time. Unit of time is in seconds.
  • make_name – Default None. If set, this is a function that accepts a single argument, the function name, and returns a new string to be used as the function name. If not set then the function name is used.
  • unless – Default None. A callable; if it returns true, the caching facilities are bypassed entirely.
  • forced_update – Default None. If this callable returns true, the cached value will be updated regardless of whether the cache has expired or not. Useful for background renewal of cached functions.
  • response_filter – Default None. If not None, the callable is invoked after the cached function evaluation, and is given one argument, the response content. If the callable returns False, the content will not be cached. Useful to prevent caching of code 500 responses.
  • hash_method – Default hashlib.md5. The hash method used to generate the keys for cached results.
  • cache_none – Default False. If set to True, adds a key-exists check when cache.get returns None. This will likely lead to wrongly returned None values in concurrent situations and is not recommended to use.
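
A hedged sketch of combining the unless and response_filter callables; skip_caching and cache_only_real_results are illustrative helper names, not part of the library, and the config keys are again assumptions:

import random
from falcon_caching import Cache

cache = Cache(config={'CACHE_TYPE': 'simple',
                      'CACHE_EVICTION_STRATEGY': 'time-based'})  # assumed config

def skip_caching():
    # return True to bypass the caching facilities entirely (used for `unless`)
    return False

def cache_only_real_results(value):
    # return False to prevent caching of empty results (used for `response_filter`)
    return value is not None

@cache.memoize(
    timeout=300,
    unless=skip_caching,
    response_filter=cache_only_real_results,
)
def lookup_score(user_id):
    # stand-in for an expensive computation or remote call
    return random.randrange(0, 100)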
middleware

Falcon middleware integration

set(*args, **kwargs) → bool

It stores the given key and value in the cache.

set_many(*args, **kwargs) → bool

It stores multiple records based on the dictionary of keys and values provided.

class falcon_caching.Cache(config: Dict[str, Any])

This is the central class for the caching

You need to initialize this object to setup the attributes of the caching and then supply the object’s middleware to the Falcon app.

Parameters:config (dict of str: str) – Cache config settings
cache

An initialized ‘CACHE_TYPE’ cache from the backends.

Type:BaseCache
cache_args

Optional list passed during the cache class instantiation.

Type:list of str
cache_options

Optional dictionary passed during the cache class instantiation.

Type:dict of str: str
config

Cache config settings

Type:dict of str: str

add(*args, **kwargs) → bool

It adds a given key and value to the cache, but only if no record with such a key already exists.

static cached(timeout: int)

This is the decorator used to decorate a resource class or the requested method of the resource class.
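
A minimal sketch of the synchronous usage: the decorator applied to a WSGI resource's responder, with the cache middleware registered on the app (the config values are illustrative assumptions):

import falcon
from falcon_caching import Cache

cache = Cache(config={
    'CACHE_TYPE': 'simple',                   # assumed backend for illustration
    'CACHE_EVICTION_STRATEGY': 'time-based',
})

class ReportResource:
    @cache.cached(timeout=60)  # cache this responder's output for one minute
    def on_get(self, req, resp):
        resp.media = {'report': 'expensive aggregation'}

app = falcon.App(middleware=cache.middleware)
app.add_route('/report', ReportResource())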

clear() → bool

It clears all cache records - if the CACHE_KEY_PREFIX config attribute is used then it only removes keys starting with that prefix, otherwise it flushes the whole database.

dec(*args, **kwargs) → Optional[int]

It decrements and returns the value of a numerical cache record. Only works for Redis and Redis Sentinel!

delete(*args, **kwargs) → bool

It deletes the cached record based on the provided key.

delete_many(*args, **kwargs) → bool

It deletes all cached records matching the list of keys provided.

delete_memoized(f, *args, **kwargs)

Deletes the specified function's caches, based on the given parameters. If parameters are given, only the versions memoized with those parameters will be erased. Otherwise all versions of the cache will be forgotten. Example:

@cache.memoize(50)
def random_func():
    return random.randrange(1, 50)
@cache.memoize()
def param_func(a, b):
    return a+b+random.randrange(1, 50)
>>> random_func()
43
>>> random_func()
43
>>> cache.delete_memoized(random_func)
>>> random_func()
16
>>> param_func(1, 2)
32
>>> param_func(1, 2)
32
>>> param_func(2, 2)
47
>>> cache.delete_memoized(param_func, 1, 2)
>>> param_func(1, 2)
13
>>> param_func(2, 2)
47

delete_memoized is also smart about instance methods vs class methods. When passing an instance method, it will only clear the cache related to that instance of that object (object uniqueness can be overridden by defining the __repr__ method, such as a user id). When passing a class method, it will clear all caches across all instances of that class. Example:

class Adder(object):
    @cache.memoize()
    def add(self, b):
        return b + random.random()
>>> adder1 = Adder()
>>> adder2 = Adder()
>>> adder1.add(3)
3.23214234
>>> adder2.add(3)
3.60898509
>>> cache.delete_memoized(adder1.add)
>>> adder1.add(3)
3.01348673
>>> adder2.add(3)
3.60898509
>>> cache.delete_memoized(Adder.add)
>>> adder1.add(3)
3.53235667
>>> adder2.add(3)
3.72341788
Parameters:
  • fname – The memoized function.
  • *args – A list of positional parameters used with the memoized function.
  • **kwargs – A dict of named parameters used with the memoized function.

Note

Falcon-Caching uses inspect to order kwargs into positional args when the function is memoized. If you pass a function reference into fname, Falcon-Caching will be able to place the args/kwargs in the proper order, and delete the positional cache. However, if delete_memoized is just called with the name of the function, be sure to pass in potential arguments in the same order as defined in your function as args only, otherwise Falcon-Caching will not be able to compute the same cache key and delete all memoized versions of it.

Note

Falcon-Caching maintains an internal random version hash for the function. Using delete_memoized will only swap out the version hash, causing the memoize function to recompute results and put them into another key. This leaves any computed caches for this memoized function within the caching backend. It is recommended to use a very high timeout with memoize if using this function, so that when the version hash is swapped, the old cached results would eventually be reclaimed by the caching backend.

delete_memoized_verhash(f, *args)

Delete the version hash associated with the function.

Warning

Performing this operation could leave keys behind that have been created with this version hash. It is up to the application to make sure that all keys that may have been created with this version hash at least have timeouts so they will not sit orphaned in the cache backend.

get(*args, **kwargs) → Any

It returns the value for the given key from the cache.

get_dict(*args, **kwargs) → Dict[Any, Any]

It returns the keys and values as a dictionary for all requested keys.

get_many(*args, **kwargs) → List[Any]

It returns the list of values matching the list of keys.

has(*args, **kwargs) → bool

It determines if the given key is in the cache.

inc(*args, **kwargs) → Optional[int]

It increments and returns the value of a numerical cache record. Only works for Redis and Redis Sentinel!

memoize(timeout=None, make_name=None, unless=None, forced_update=None, response_filter=None, hash_method=<built-in function openssl_md5>, cache_none=False)

Use this to cache the result of a function, taking its arguments into account in the cache key. See Memoization for background. Example:

@cache.memoize(timeout=50)
def big_foo(a, b):
    return a + b + random.randrange(0, 1000)
>>> big_foo(5, 2)
753
>>> big_foo(5, 3)
234
>>> big_foo(5, 2)
753
The returned decorated function now has three function attributes
assigned to it:
    **uncached**
        The original undecorated function. Readable only.
    **cache_timeout**
        The cache timeout value for this function.
        For a custom value to take effect, this must be
        set before the function is called.
        Readable and writable.
    **make_cache_key**
        A function used in generating the cache_key used.
        Readable and writable.
Parameters:
  • timeout – Default None. If set to an integer, will cache for that amount of time. Unit of time is in seconds.
  • make_name – Default None. If set, this is a function that accepts a single argument, the function name, and returns a new string to be used as the function name. If not set then the function name is used.
  • unless – Default None. A callable; if it returns true, the caching facilities are bypassed entirely.
  • forced_update – Default None. If this callable returns true, the cached value will be updated regardless of whether the cache has expired or not. Useful for background renewal of cached functions.
  • response_filter – Default None. If not None, the callable is invoked after the cached function evaluation, and is given one argument, the response content. If the callable returns False, the content will not be cached. Useful to prevent caching of code 500 responses.
  • hash_method – Default hashlib.md5. The hash method used to generate the keys for cached results.
  • cache_none – Default False. If set to True, adds a key-exists check when cache.get returns None. This will likely lead to wrongly returned None values in concurrent situations and is not recommended to use.
middleware

Falcon middleware integration

set(*args, **kwargs) → bool

It stores the given key and value in the cache.

set_many(*args, **kwargs) → bool

It stores multiple records based on the dictionary of keys and values provided.

Backends

BaseCache

class falcon_caching.backends.base.BaseCache(default_timeout=300)

Base class for the cache systems. All the cache systems implement this API or a superset of it.

Parameters:default_timeout – The default timeout (in seconds) that is used if no timeout is specified on set(). A timeout of 0 indicates that the cache never expires.
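
As a rough illustration of this contract, a custom backend can subclass BaseCache and override the core methods. The dictionary-backed sketch below ignores timeouts entirely and is only meant to show the shape of the API, not to be a usable backend:

from falcon_caching.backends.base import BaseCache

class DictCache(BaseCache):
    # Toy backend: an in-process dict that ignores timeouts

    def __init__(self, default_timeout=300):
        super().__init__(default_timeout)
        self._store = {}

    def get(self, key):
        return self._store.get(key)

    def set(self, key, value, timeout=None):
        self._store[key] = value
        return True

    def add(self, key, value, timeout=None):
        if key in self._store:
            return False
        return self.set(key, value, timeout)

    def delete(self, key):
        return self._store.pop(key, None) is not None

    def has(self, key):
        return key in self._store

    def clear(self):
        self._store.clear()
        return True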
add(key, value, timeout=None)

Works like set() but does not overwrite the values of already existing keys.

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Same as set(), but also False for already existing keys.

Return type:

boolean

clear()

Clears the cache. Keep in mind that not all caches support completely clearing the cache.

Returns:Whether the cache has been cleared.
Return type:boolean
dec(key, delta=1)

Decrements the value of a key by delta. If the key does not yet exist it is initialized with -delta.

For supporting caches this is an atomic operation.

Parameters:
  • key – the key to decrement.
  • delta – the delta to subtract.
Returns:

The new value or None for backend errors.

delete(key)

Delete key from the cache.

Parameters:key – the key to delete.
Returns:Whether the key existed and has been deleted.
Return type:boolean
delete_many(*keys)

Deletes multiple keys at once.

Parameters:keys – The function accepts multiple keys as positional arguments.
Returns:Whether all given keys have been deleted.
Return type:boolean
get(key)

Look up key in the cache and return the value for it.

Parameters:key – the key to be looked up.
Returns:The value if it exists and is readable, else None.
get_dict(*keys)

Like get_many() but return a dict:

d = cache.get_dict("foo", "bar")
foo = d["foo"]
bar = d["bar"]
Parameters:keys – The function accepts multiple keys as positional arguments.
get_many(*keys)

Returns a list of values for the given keys. For each key an item in the list is created:

foo, bar = cache.get_many("foo", "bar")

Has the same error handling as get().

Parameters:keys – The function accepts multiple keys as positional arguments.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check
inc(key, delta=1)

Increments the value of a key by delta. If the key does not yet exist it is initialized with delta.

For supporting caches this is an atomic operation.

Parameters:
  • key – the key to increment.
  • delta – the delta to add.
Returns:

The new value or None for backend errors.

set(key, value, timeout=None)

Add a new key/value to the cache (overwrites value, if key already exists in the cache).

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

True if key has been updated, False for backend errors. Pickling errors, however, will raise a subclass of pickle.PickleError.

Return type:

boolean

set_many(mapping, timeout=None)

Sets multiple keys and values from a mapping.

Parameters:
  • mapping – a mapping with the keys/values to set.
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Whether all given keys have been set.

Return type:

boolean

NullCache

class falcon_caching.backends.NullCache(default_timeout=300)

A cache that doesn’t cache. This can be useful for unit testing.

Parameters:default_timeout – a dummy parameter that is ignored but exists for API compatibility with other caches.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check

SimpleCache

class falcon_caching.backends.SimpleCache(threshold=500, default_timeout=300, ignore_errors=False)

Simple memory cache for single process environments. This class exists mainly for the development server and is not 100% thread safe. It tries to use as many atomic operations as possible and no locks for simplicity but it could happen under heavy load that keys are added multiple times.

Parameters:
  • threshold – the maximum number of items the cache stores before it starts deleting some.
  • default_timeout – the default timeout that is used if no timeout is specified on set(). A timeout of 0 indicates that the cache never expires.
  • ignore_errors – If set to True, the delete_many() method will ignore any errors that occurred during the deletion process. However, if it is set to False it will stop on the first error. Defaults to False.
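
Used directly (outside the Cache/AsyncCache wrapper), the backend behaves roughly as in this sketch:

from falcon_caching.backends import SimpleCache

cache = SimpleCache(threshold=500, default_timeout=300)

cache.set('greeting', 'hello', timeout=60)   # stored for 60 seconds
cache.add('greeting', 'ignored')             # returns False: key already exists
print(cache.get('greeting'))                 # 'hello'
print(cache.has('missing'))                  # False
cache.delete('greeting')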
add(key, value, timeout=None)

Works like set() but does not overwrite the values of already existing keys.

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Same as set(), but also False for already existing keys.

Return type:

boolean

delete(key)

Delete key from the cache.

Parameters:key – the key to delete.
Returns:Whether the key existed and has been deleted.
Return type:boolean
get(key)

Look up key in the cache and return the value for it.

Parameters:key – the key to be looked up.
Returns:The value if it exists and is readable, else None.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check
set(key, value, timeout=None)

Add a new key/value to the cache (overwrites value, if key already exists in the cache).

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

True if key has been updated, False for backend errors. Pickling errors, however, will raise a subclass of pickle.PickleError.

Return type:

boolean

FileSystemCache

class falcon_caching.backends.FileSystemCache(cache_dir, threshold=500, default_timeout=300, mode=384, hash_method=<built-in function openssl_md5>, ignore_errors=False)

A cache that stores the items on the file system. This cache depends on being the only user of the cache_dir. Make absolutely sure that nobody but this cache stores files there or otherwise the cache will randomly delete files therein.

Parameters:
  • cache_dir – the directory where cache files are stored.
  • threshold – the maximum number of items the cache stores before it starts deleting some. A threshold value of 0 indicates no threshold.
  • default_timeout – the default timeout that is used if no timeout is specified on set(). A timeout of 0 indicates that the cache never expires.
  • mode – the file mode wanted for the cache files, default 0600
  • hash_method – Default hashlib.md5. The hash method used to generate the filename for cached results.
  • ignore_errors – If set to True, the delete_many() method will ignore any errors that occurred during the deletion process. However, if it is set to False it will stop on the first error. Defaults to False.
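
A minimal sketch of direct use; the temporary directory below stands in for a real, dedicated cache directory, which (as noted above) nothing else should write to:

import tempfile
from falcon_caching.backends import FileSystemCache

cache_dir = tempfile.mkdtemp(prefix='falcon-cache-')  # placeholder directory
cache = FileSystemCache(cache_dir=cache_dir, threshold=500, default_timeout=300)

cache.set('page:/things', b'rendered body', timeout=120)
print(cache.get('page:/things'))  # b'rendered body' until the timeout expires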
add(key, value, timeout=None)

Works like set() but does not overwrite the values of already existing keys.

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Same as set(), but also False for already existing keys.

Return type:

boolean

clear()

Clears the cache. Keep in mind that not all caches support completely clearing the cache.

Returns:Whether the cache has been cleared.
Return type:boolean
delete(key, mgmt_element=False)

Delete key from the cache.

Parameters:key – the key to delete.
Returns:Whether the key existed and has been deleted.
Return type:boolean
get(key)

Look up key in the cache and return the value for it.

Parameters:key – the key to be looked up.
Returns:The value if it exists and is readable, else None.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check
set(key, value, timeout=None, mgmt_element=False)

Add a new key/value to the cache (overwrites value, if key already exists in the cache).

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

True if key has been updated, False for backend errors. Pickling errors, however, will raise a subclass of pickle.PickleError.

Return type:

boolean

RedisCache

class falcon_caching.backends.Redis(host='localhost', port=6379, password=None, db=0, default_timeout=300, key_prefix=None, **kwargs)

Uses the Redis key-value store as a cache backend.

The first argument can be either a string denoting address of the Redis server or an object resembling an instance of a redis.Redis class.

Note: Python Redis API already takes care of encoding unicode strings on the fly.

Parameters:
  • host – the address of the Redis server or an object whose API is compatible with the official Python Redis client (redis-py).
  • port – port number on which Redis server listens for connections.
  • password – password authentication for the Redis server.
  • db – the db (zero-based numeric index) on the Redis server to connect to.
  • default_timeout – the default timeout that is used if no timeout is specified on set(). A timeout of 0 indicates that the cache never expires.
  • key_prefix – A prefix that should be added to all keys.

Any additional keyword arguments will be passed to redis.Redis.
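
A sketch of instantiating the backend directly; the connection details are placeholders, and inc()/dec() are shown because (per the Cache API above) counters are only supported on the Redis backends:

from falcon_caching.backends import Redis

cache = Redis(
    host='localhost',        # placeholder connection details
    port=6379,
    db=0,
    default_timeout=300,
    key_prefix='myapp_',     # every key is stored as 'myapp_<key>'
)

cache.set('user:42', {'name': 'Ada'}, timeout=600)
cache.inc('hits')            # initialized to 1 if the key did not exist
cache.inc('hits', delta=5)   # 6
cache.dec('hits')            # 5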

add(key, value, timeout=None)

Works like set() but does not overwrite the values of already existing keys.

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Same as set(), but also False for already existing keys.

Return type:

boolean

clear()

Clears the cache. Keep in mind that not all caches support completely clearing the cache.

Returns:Whether the cache has been cleared.
Return type:boolean
dec(key, delta=1)

Decrements the value of a key by delta. If the key does not yet exist it is initialized with -delta.

For supporting caches this is an atomic operation.

Parameters:
  • key – the key to decrement.
  • delta – the delta to subtract.
Returns:

The new value or None for backend errors.

delete(key)

Delete key from the cache.

Parameters:key – the key to delete.
Returns:Whether the key existed and has been deleted.
Return type:boolean
delete_many(*keys)

Deletes multiple keys at once.

Parameters:keys – The function accepts multiple keys as positional arguments.
Returns:Whether all given keys have been deleted.
Return type:boolean
dump_object(value)

Dumps an object into a string for Redis. By default it serializes integers as a regular string and pickle-dumps everything else.

get(key)

Look up key in the cache and return the value for it.

Parameters:key – the key to be looked up.
Returns:The value if it exists and is readable, else None.
get_many(*keys)

Returns a list of values for the given keys. For each key an item in the list is created:

foo, bar = cache.get_many("foo", "bar")

Has the same error handling as get().

Parameters:keys – The function accepts multiple keys as positional arguments.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check
inc(key, delta=1)

Increments the value of a key by delta. If the key does not yet exist it is initialized with delta.

For supporting caches this is an atomic operation.

Parameters:
  • key – the key to increment.
  • delta – the delta to add.
Returns:

The new value or None for backend errors.

load_object(value)

The reversal of dump_object(). This might be called with None.

set(key, value, timeout=None)

Add a new key/value to the cache (overwrites value, if key already exists in the cache).

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

True if key has been updated, False for backend errors. Pickling errors, however, will raise a subclass of pickle.PickleError.

Return type:

boolean

set_many(mapping, timeout=None)

Sets multiple keys and values from a mapping.

Parameters:
  • mapping – a mapping with the keys/values to set.
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Whether all given keys have been set.

Return type:

boolean

Note: some operations are only supported when redis-py >= 3.0.0 and the Redis server version is greater than 4.

RedisSentinelCache

class falcon_caching.backends.RedisSentinel(sentinels=None, master=None, password=None, db=0, default_timeout=300, key_prefix=None, **kwargs)

Uses the Redis key-value store as a cache backend.

Unlike the plain Redis backend, the connection is established through a list of Redis Sentinel addresses and the name of the monitored master (see the sentinels and master parameters below).

Note: Python Redis API already takes care of encoding unicode strings on the fly.

Parameters:
  • sentinels – A list or a tuple of Redis sentinel addresses.
  • master – The name of the master server in a sentinel configuration.
  • password – password authentication for the Redis server.
  • db – the db (zero-based numeric index) on the Redis server to connect to.
  • default_timeout – the default timeout that is used if no timeout is specified on set(). A timeout of 0 indicates that the cache never expires.
  • key_prefix – A prefix that should be added to all keys.

Any additional keyword arguments will be passed to redis.sentinel.Sentinel.
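
A sketch of instantiation with assumed sentinel addresses and master name:

from falcon_caching.backends import RedisSentinel

cache = RedisSentinel(
    sentinels=[('10.0.0.1', 26379), ('10.0.0.2', 26379)],  # placeholder addresses
    master='mymaster',       # name of the sentinel-monitored master
    db=0,
    default_timeout=300,
    key_prefix='myapp_',
)

cache.set('token', 'abc123', timeout=60)
print(cache.get('token'))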

UWSGICache

class falcon_caching.backends.UWSGICache(default_timeout=300, cache='')

Implements the cache using uWSGI’s caching framework.

Note

This class cannot be used when running under PyPy, because the uWSGI API implementation for PyPy is lacking the needed functionality.

Parameters:
  • default_timeout – The default timeout in seconds.
  • cache – The name of the caching instance to connect to, for example: mycache@localhost:3031. Defaults to an empty string, which means uWSGI will cache in the local instance. If the cache is in the same instance as the Falcon app, you only have to provide the name of the cache.
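
For example (the cache name is a placeholder, and this only works inside a uWSGI process with a configured cache):

from falcon_caching.backends import UWSGICache

# An empty string targets the cache of the local uWSGI instance;
# 'mycache@localhost:3031' is a placeholder for a remote named cache.
cache = UWSGICache(default_timeout=300, cache='mycache@localhost:3031')

cache.set('rendered-page', '<html>...</html>', timeout=30)
print(cache.has('rendered-page'))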
add(key, value, timeout=None)

Works like set() but does not overwrite the values of already existing keys.

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Same as set(), but also False for already existing keys.

Return type:

boolean

clear()

Clears the cache. Keep in mind that not all caches support completely clearing the cache.

Returns:Whether the cache has been cleared.
Return type:boolean
delete(key)

Delete key from the cache.

Parameters:key – the key to delete.
Returns:Whether the key existed and has been deleted.
Return type:boolean
get(key)

Look up key in the cache and return the value for it.

Parameters:key – the key to be looked up.
Returns:The value if it exists and is readable, else None.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check
set(key, value, timeout=None)

Add a new key/value to the cache (overwrites value, if key already exists in the cache).

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

True if key has been updated, False for backend errors. Pickling errors, however, will raise a subclass of pickle.PickleError.

Return type:

boolean

MemcachedCache

class falcon_caching.backends.MemcachedCache(servers=None, default_timeout=300, key_prefix=None)

A cache that uses memcached as backend.

The first argument can either be an object that resembles the API of a memcache.Client or a tuple/list of server addresses. In the event that a tuple/list is passed, Falcon-Caching tries to import the best available memcache library.

This cache looks into the following packages/modules to find bindings for memcached:

  • pylibmc
  • google.appengine.api.memcached
  • memcached
  • libmc

Implementation notes: This cache backend works around some limitations in memcached to simplify the interface. For example unicode keys are encoded to utf-8 on the fly. Methods such as get_dict() return the keys in the same format as passed. Furthermore all get methods silently ignore key errors to not cause problems when untrusted user data is passed to the get methods which is often the case in web applications.

Parameters:
  • servers – a list or tuple of server addresses or alternatively a memcache.Client or a compatible client.
  • default_timeout – the default timeout that is used if no timeout is specified on set(). A timeout of 0 indicates that the cache never expires.
  • key_prefix – a prefix that is added before all keys. This makes it possible to use the same memcached server for different applications. Keep in mind that clear() will also clear keys with a different prefix.
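
A sketch of direct instantiation against a single memcached server (the address is a placeholder):

from falcon_caching.backends import MemcachedCache

cache = MemcachedCache(
    servers=['127.0.0.1:11211'],  # placeholder memcached address
    default_timeout=300,
    key_prefix='myapp_',          # lets several applications share one server
)

cache.set_many({'a': 1, 'b': 2}, timeout=120)
print(cache.get_dict('a', 'b'))   # {'a': 1, 'b': 2}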
add(key, value, timeout=None)

Works like set() but does not overwrite the values of already existing keys.

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Same as set(), but also False for already existing keys.

Return type:

boolean

clear()

Clears the cache. Keep in mind that not all caches support completely clearing the cache.

Returns:Whether the cache has been cleared.
Return type:boolean
dec(key, delta=1)

Decrements the value of a key by delta. If the key does not yet exist it is initialized with -delta.

For supporting caches this is an atomic operation.

Parameters:
  • key – the key to decrement.
  • delta – the delta to subtract.
Returns:

The new value or None for backend errors.

delete(key)

Delete key from the cache.

Parameters:key – the key to delete.
Returns:Whether the key existed and has been deleted.
Return type:boolean
delete_many(*keys)

Deletes multiple keys at once.

Parameters:keys – The function accepts multiple keys as positional arguments.
Returns:Whether all given keys have been deleted.
Return type:boolean
get(key)

Look up key in the cache and return the value for it.

Parameters:key – the key to be looked up.
Returns:The value if it exists and is readable, else None.
get_dict(*keys)

Like get_many() but return a dict:

d = cache.get_dict("foo", "bar")
foo = d["foo"]
bar = d["bar"]
Parameters:keys – The function accepts multiple keys as positional arguments.
get_many(*keys)

Returns a list of values for the given keys. For each key an item in the list is created:

foo, bar = cache.get_many("foo", "bar")

Has the same error handling as get().

Parameters:keys – The function accepts multiple keys as positional arguments.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check
import_preferred_memcache_lib(servers)

Returns an initialized memcache client. Used by the constructor.

inc(key, delta=1)

Increments the value of a key by delta. If the key does not yet exist it is initialized with delta.

For supporting caches this is an atomic operation.

Parameters:
  • key – the key to increment.
  • delta – the delta to add.
Returns:

The new value or None for backend errors.

set(key, value, timeout=None)

Add a new key/value to the cache (overwrites value, if key already exists in the cache).

Parameters:
  • key – the key to set
  • value – the value for the key
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

True if key has been updated, False for backend errors. Pickling errors, however, will raise a subclass of pickle.PickleError.

Return type:

boolean

set_many(mapping, timeout=None)

Sets multiple keys and values from a mapping.

Parameters:
  • mapping – a mapping with the keys/values to set.
  • timeout – the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 indicates that the cache never expires.
Returns:

Whether all given keys have been set.

Return type:

boolean

SASLMemcachedCache

class falcon_caching.backends.SASLMemcachedCache(servers=None, default_timeout=300, key_prefix=None, username=None, password=None, **kwargs)

SpreadSASLMemcachedCache

class falcon_caching.backends.SpreadSASLMemcachedCache(*args, **kwargs)

A simple subclass of the SASLMemcachedCache client that will spread the value across multiple keys if it is bigger than a given threshold.

Spreading requires using pickle to store the value, which can significantly impact the performance.

delete(key)

Delete key from the cache.

Parameters:key – the key to delete.
Returns:Whether the key existed and has been deleted.
Return type:boolean
delete_many(*keys)

Deletes multiple keys at once.

Parameters:keys – The function accepts multiple keys as positional arguments.
Returns:Whether all given keys have been deleted.
Return type:boolean
get(key, chunk=True)

Get a cached value.

Parameters:chunk – If set to False, it will return a cached value that is spread across multiple keys.
has(key)

Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend.

This method is optional and may not be implemented on all caches.

Parameters:key – the key to check
set(key, value, timeout=None, chunk=True)

Set a value in the cache, potentially spreading it across multiple keys.

Parameters:
  • key – The cache key.
  • value – The value to cache.
  • timeout – The timeout after which the cache will be invalidated.
  • chunk – If set to False, then spreading across multiple keys is disabled. This can be faster, but it will fail if the value is bigger than the chunks. It requires you to get back the object by specifying that it is not spread.
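
Assuming an already configured instance, the chunk flag is used symmetrically on set() and get(), roughly as in this sketch (server address and SASL credentials are placeholders):

from falcon_caching.backends import SpreadSASLMemcachedCache

cache = SpreadSASLMemcachedCache(servers=['127.0.0.1:11211'],  # placeholder server
                                 username='user', password='secret')

big_value = 'x' * (2 * 1024 * 1024)                     # larger than a single memcached item

cache.set('blob', big_value, timeout=300, chunk=True)   # spread across several keys
value = cache.get('blob')                               # reassembled transparently

cache.set('small', 'tiny', timeout=300, chunk=False)    # stored unchunked (faster)
small = cache.get('small', chunk=False)                 # read back with the matching flag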

AsyncBackends

BaseCache

NullCache

SimpleCache

FileSystemCache

RedisCache

RedisSentinelCache

MemcachedCache