Forked
This fork adds a monotonic clock and support for custom sleep functions, equivalent to litl#150. This allows the module to be used with alternate async event loops such as Trio.
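As an illustrative sketch only: the keyword names sleep and clock below are assumptions about this fork's API, shown to convey the idea of plugging in Trio's primitives rather than a confirmed signature.

import backoff
import trio

# Hypothetical usage sketch: the 'sleep' and 'clock' keyword names are
# assumptions about this fork's API, not confirmed signatures.
@backoff.on_exception(backoff.expo, OSError, max_time=60,
                      sleep=trio.sleep, clock=trio.current_time)
async def fetch_data():
    ...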
Function decoration for backoff and retry
This module provides function decorators which can be used to wrap a function such that it will be retried until some condition is met. It is meant to be of use when accessing unreliable resources with the potential for intermittent failures, e.g. network resources and external APIs. Somewhat more generally, it may also be of use for dynamically polling resources for externally generated content.
Decorators support both regular functions for synchronous code and asyncio's coroutines for asynchronous code.
Since Kenneth Reitz's requests module has become a de facto standard for synchronous HTTP clients in Python, the networking examples below are written using it, but it is in no way required by the backoff module.
The on_exception decorator is used to retry when a specified exception is raised. Here's an example using exponential backoff when any requests exception is raised:
@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException)
def get_url(url):
    return requests.get(url)
The decorator will also accept a tuple of exceptions for cases where the same backoff behavior is desired for more than one exception type:
@backoff.on_exception(backoff.expo,
                      (requests.exceptions.Timeout,
                       requests.exceptions.ConnectionError))
def get_url(url):
    return requests.get(url)
Give Up Conditions
Optional keyword arguments can specify conditions under which to give up.
The keyword argument max_time specifies the maximum amount of total time in seconds that can elapse before giving up.
@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_time=60)
def get_url(url):
    return requests.get(url)
Keyword argument max_tries specifies the maximum number of calls to make to the target function before giving up.
@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_tries=8,
                      jitter=None)
def get_url(url):
    return requests.get(url)
In some cases the raised exception instance itself may need to be inspected in order to determine if it is a retryable condition. The giveup keyword arg can be used to specify a function which accepts the exception and returns a truthy value if the exception should not be retried:
def fatal_code(e):
    return 400 <= e.response.status_code < 500

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_time=300,
                      giveup=fatal_code)
def get_url(url):
    return requests.get(url)
By default, when a giveup event occurs, the exception in question is re-raised, so code calling an on_exception-decorated function may still need to do exception handling. This behavior can optionally be disabled using the raise_on_giveup keyword argument.
In the code below, requests.exceptions.RequestException will not be raised when giveup occurs. Note that the decorated function will return None in this case, regardless of the logic in the on_exception handler.
def fatal_code(e):
    return 400 <= e.response.status_code < 500

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_time=300,
                      raise_on_giveup=False,
                      giveup=fatal_code)
def get_url(url):
    return requests.get(url)
This is useful for non-mission critical code where you still wish to retry the code inside of backoff.on_exception but wish to proceed with execution even if all retries fail.
The on_predicate decorator is used to retry when a particular condition is true of the return value of the target function. This may be useful when polling a resource for externally generated content.
Here's an example which uses a fibonacci sequence backoff when the return value of the target function is the empty list:
@backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)
def poll_for_messages(queue):
    return queue.get()
Extra keyword arguments are passed when initializing the wait generator, so the max_value param above is passed as a keyword arg when initializing the fibo generator.
When not specified, the predicate param defaults to the falsey test, so the above can more concisely be written:
@backoff.on_predicate(backoff.fibo, max_value=13)
def poll_for_message(queue):
    return queue.get()
More simply, a function which continues polling every second until it gets a non-falsey result could be defined like this:
@backoff.on_predicate(backoff.constant, jitter=None, interval=1)
def poll_for_message(queue):
    return queue.get()
The jitter is disabled in order to keep the polling frequency fixed.
You can also use the backoff.runtime generator to make use of the return value or thrown exception of the decorated method. For example, to use the value in the Retry-After header of the response:
@backoff.on_predicate(
    backoff.runtime,
    predicate=lambda r: r.status_code == 429,
    value=lambda r: int(r.headers.get("Retry-After")),
    jitter=None,
)
def get_url(url):
    return requests.get(url)
A jitter algorithm can be supplied with the jitter keyword arg to either of the backoff decorators. This argument should be a function accepting the original unadulterated backoff value and returning its jittered counterpart.
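For illustration, a custom jitter function might look like the following sketch; tenth_jitter is a hypothetical name, not part of the backoff API.

import random

import backoff
import requests

# Hypothetical custom jitter: perturb the raw backoff value by up to +/-10%.
def tenth_jitter(value):
    return value * random.uniform(0.9, 1.1)

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      jitter=tenth_jitter)
def get_url(url):
    return requests.get(url)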
As of version 1.2, the default jitter function backoff.full_jitter implements the 'Full Jitter' algorithm as defined in the AWS Architecture Blog's Exponential Backoff And Jitter post. Note that with this algorithm, the time yielded by the wait generator is actually the maximum amount of time to wait.
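Conceptually, the 'Full Jitter' strategy draws the actual wait uniformly between zero and the raw backoff value; a minimal sketch of the idea (full_jitter_sketch is an illustrative name, not the library's implementation):

import random

# Sketch of the 'Full Jitter' idea: the value from the wait generator is an
# upper bound, and the actual sleep is drawn uniformly below it.
def full_jitter_sketch(value):
    return random.uniform(0, value)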
Previous versions of backoff defaulted to adding some random number of milliseconds (up to 1s) to the raw sleep value. If desired, this behavior is now available as backoff.random_jitter.
The backoff decorators may also be combined to specify different backoff behavior for different cases:
@backoff.on_predicate(backoff.fibo, max_value=13)
@backoff.on_exception(backoff.expo,
                      requests.exceptions.HTTPError,
                      max_time=60)
@backoff.on_exception(backoff.expo,
                      requests.exceptions.Timeout,
                      max_time=300)
def poll_for_message(queue):
    return queue.get()
The decorator functions on_exception and on_predicate are generally evaluated at import time. This is fine when the keyword args are passed as constant values, but suppose we want to consult a dictionary with configuration options that only become available at runtime. The relevant values are not available at import time. Instead, decorator functions can be passed callables which are evaluated at runtime to obtain the value:
def lookup_max_time():
    # pretend we have a global reference to 'app' here
    # and that it has a dictionary-like 'config' property
    return app.config["BACKOFF_MAX_TIME"]

@backoff.on_exception(backoff.expo,
                      ValueError,
                      max_time=lookup_max_time)
Both backoff decorators optionally accept event handler functions using the keyword arguments on_success, on_backoff, and on_giveup. This may be useful in reporting statistics or performing other custom logging.
Handlers must be callables with a unary signature accepting a dict argument. This dict contains the details of the invocation. Valid keys include:
- target: reference to the function or method being invoked
- args: positional arguments to func
- kwargs: keyword arguments to func
- tries: number of invocation tries so far
- elapsed: elapsed time in seconds so far
- wait: seconds to wait (on_backoff handler only)
- value: value triggering backoff (on_predicate decorator only)
A handler which prints the details of the backoff event could be implemented like so:
def backoff_hdlr(details):
    print("Backing off {wait:0.1f} seconds after {tries} tries "
          "calling function {target} with args {args} and kwargs "
          "{kwargs}".format(**details))

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      on_backoff=backoff_hdlr)
def get_url(url):
    return requests.get(url)
Multiple handlers per event type
In all cases, iterables of handler functions are also accepted, which are called in turn. For example, you might provide a simple list of handler functions as the value of the on_backoff keyword arg:
@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      on_backoff=[backoff_hdlr1, backoff_hdlr2])
def get_url(url):
    return requests.get(url)
Getting exception info
In the case of the on_exception decorator, all on_backoff and on_giveup handlers are called from within the except block for the exception being handled. Therefore exception info is available to the handler functions via the Python standard library, specifically sys.exc_info() or the traceback module. The exception is also available at the exception key in the details dict passed to the handlers.
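For example, a handler that inspects the active exception might look like this sketch (exc_hdlr is a hypothetical name):

import sys

import backoff
import requests

# Sketch: on_backoff handlers run inside the except block, so sys.exc_info()
# reflects the exception that triggered the retry.
def exc_hdlr(details):
    exc_type, exc_value, _ = sys.exc_info()
    print("Retrying after {}: {}".format(exc_type.__name__, exc_value))
    # The same exception instance is also available in the details dict:
    assert details['exception'] is exc_value

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      on_backoff=exc_hdlr)
def get_url(url):
    return requests.get(url)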
Asynchronous code
Backoff supports asynchronous execution in Python 3.5 and above.
To use backoff in asynchronous code based on asyncio you simply need to apply backoff.on_exception or backoff.on_predicate to coroutines.
You can also use coroutines for the on_success, on_backoff, and on_giveup event handlers, with the interface otherwise being identical.
The following examples use the aiohttp asynchronous HTTP client/server library.
@backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)
async def get_url(url):
    async with aiohttp.ClientSession(raise_for_status=True) as session:
        async with session.get(url) as response:
            return await response.text()
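The event handlers may likewise be coroutines; a minimal sketch (log_backoff is a hypothetical handler name):

import aiohttp
import backoff

# Sketch: in asynchronous code, event handlers may themselves be coroutines.
async def log_backoff(details):
    print("Backing off {wait:0.1f} seconds after {tries} tries".format(**details))

@backoff.on_exception(backoff.expo, aiohttp.ClientError,
                      max_time=60, on_backoff=log_backoff)
async def get_url(url):
    async with aiohttp.ClientSession(raise_for_status=True) as session:
        async with session.get(url) as response:
            return await response.text()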
Logging configuration
By default, backoff and retry attempts are logged to the 'backoff' logger. This logger is configured with a NullHandler, so there will be no output unless you configure a handler. Programmatically, this might be accomplished with something as simple as:
logging.getLogger('backoff').addHandler(logging.StreamHandler())
The default logging level is INFO, which corresponds to logging anytime a retry event occurs. If you would instead like to log only when a giveup event occurs, set the logger level to ERROR.
logging.getLogger('backoff').setLevel(logging.ERROR)
It is also possible to specify an alternate logger with the logger keyword argument. If a string value is specified the logger will be looked up by name.
@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      logger='my_logger')
# ...
A Logger (or LoggerAdapter) object may also be specified directly.
my_logger = logging.getLogger('my_logger')
my_handler = logging.StreamHandler()
my_logger.addHandler(my_handler)
my_logger.setLevel(logging.ERROR)

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      logger=my_logger)
# ...
Default logging can be disabled altogether by specifying logger=None. In this case, if desired, alternative logging behavior could be defined by using custom event handlers.
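For instance, one might combine logger=None with an on_giveup handler so that only final failures are reported; a sketch (giveup_hdlr and the 'my_app' logger name are hypothetical):

import logging

import backoff
import requests

my_logger = logging.getLogger('my_app')

# Sketch: with default logging disabled, report giveup events ourselves.
def giveup_hdlr(details):
    my_logger.error("Gave up calling %s after %d tries",
                    details['target'], details['tries'])

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_tries=5,
                      logger=None,
                      on_giveup=giveup_hdlr)
def get_url(url):
    return requests.get(url)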