pip install Flask-Limiter

Quick start

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(
    app,
    key_func=get_remote_address,
    default_limits=["200 per day", "50 per hour"]
)

@app.route("/slow")
@limiter.limit("1 per day")
def slow():
    return ":("

@app.route("/medium")
@limiter.limit("1/second", override_defaults=False)
def medium():
    return ":|"

@app.route("/fast")
def fast():
    return ":)"

@app.route("/ping")
@limiter.exempt
def ping():
    return "PONG"

The above Flask app will have the following rate limiting characteristics:

  • Rate limiting by remote_address of the request

  • A default rate limit of 200 per day and 50 per hour applied to all routes.

  • The slow route, which has an explicit rate limit decorator, will bypass the default rate limits and only allow 1 request per day.

  • The medium route inherits the default limits and adds on a decorated limit of 1 request per second.

  • The ping route will be exempt from any default rate limits.


The built-in Flask static file routes are also exempt from rate limits.

Every time a request exceeds the rate limit, the view function will not be called; instead, a 429 HTTP error will be raised.

The Flask-Limiter extension

The extension can be initialized with the flask.Flask application in the usual ways.

Using the constructor

from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

limiter = Limiter(app, key_func=get_remote_address)

Deferred app initialization using init_app

limiter = Limiter(key_func=get_remote_address)
limiter.init_app(app)

Rate Limit Domain

Each Limiter instance is initialized with a key_func that returns the bucket each request is placed in when evaluating whether it is within the rate limit.


Earlier versions of Flask-Limiter defaulted the rate limiting domain to the requesting user's IP address retrieved via the flask_limiter.util.get_ipaddr() function. This behavior is deprecated (since version 0.9.2) as it can be susceptible to IP spoofing with certain environment setups (more details at github issue #41 & flask apps and ip spoofing).

It is now recommended to explicitly provide a keying function as part of the Limiter initialization (Rate Limit Key Functions). Two utility methods are still provided: flask_limiter.util.get_remote_address() and the deprecated flask_limiter.util.get_ipaddr().

Please refer to Deploying an application behind a proxy for an example.


The decorators made available as instance methods of the Limiter instance are Limiter.limit(), Limiter.shared_limit(), Limiter.exempt() and Limiter.request_filter().

Limiter.limit()

There are a few ways of using this decorator depending on your preference and use-case.

Single decorator

The limit string can be a single limit or a delimiter-separated string:

@app.route("...")
@limiter.limit("100/day;10/hour;1/minute")
def my_route():
    ...
Multiple decorators

The limit string can be a single limit or a delimiter-separated string or a combination of both:

@app.route("...")
@limiter.limit("100/day")
@limiter.limit("10/hour")
@limiter.limit("1/minute")
def my_route():
    ...
Custom keying function

By default rate limits are applied based on the key function that the Limiter instance was initialized with. You can implement your own function to retrieve the key to rate limit by when decorating individual routes. Take a look at Rate Limit Key Functions for some examples.

def my_key_func():
    ...

@app.route("...")
@limiter.limit("100/day", my_key_func)
def my_route():
    ...


The key function is called from within a flask request context.

Dynamically loaded limit string(s)

There may be situations where the rate limits need to be retrieved from sources external to the code (database, remote api, etc…). This can be achieved by providing a callable to the decorator:

from flask import current_app

def rate_limit_from_config():
    return current_app.config.get("CUSTOM_LIMIT", "10/s")

@app.route("...")
@limiter.limit(rate_limit_from_config)
def my_route():
    ...


The provided callable will be called for every request on the decorated route. For expensive retrievals, consider caching the response.


The callable is called from within a flask request context during the before_request phase.
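Since the callable runs on every request, one way to cache an expensive retrieval is a small time-based cache. A minimal sketch, assuming a 60-second TTL and a hypothetical `load_limit_from_db` source (neither is part of Flask-Limiter):

```python
import time

def ttl_cached(ttl_seconds, loader):
    """Wrap an expensive limit loader so it is re-run at most once per TTL."""
    state = {"value": None, "expires_at": 0.0}

    def cached_loader():
        now = time.monotonic()
        if now >= state["expires_at"]:
            state["value"] = loader()  # expensive retrieval (database, api, ...)
            state["expires_at"] = now + ttl_seconds
        return state["value"]

    return cached_loader

# hypothetical expensive source of the limit string
def load_limit_from_db():
    return "10/second"

rate_limit_from_db = ttl_cached(60, load_limit_from_db)
# @limiter.limit(rate_limit_from_db) would now hit the database at most
# once per minute instead of on every request.
```

Passing `rate_limit_from_db` to the decorator keeps the per-request cost down to a clock read and a dictionary lookup.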

Exemption conditions

Each limit can be exempted when given conditions are fulfilled. These conditions can be specified by supplying a callable as an exempt_when argument when defining the limit:

@app.route("...")
@limiter.limit("100/day", exempt_when=lambda: current_user.is_admin)
def expensive_route():
    ...

Limiter.shared_limit() is for scenarios where a rate limit should be shared by multiple routes (for example, when you want to protect routes using the same resource with an umbrella rate limit).

Named shared limit

mysql_limit = limiter.shared_limit("100/hour", scope="mysql")

@app.route("...")
@mysql_limit
def r1():
    ...

@app.route("...")
@mysql_limit
def r2():
    ...

Dynamic shared limit: when a callable is passed as scope, the return value of the function will be used as the scope. Note that the callable takes one argument: a string representing the request endpoint.

def host_scope(endpoint_name):
    return request.host

host_limit = limiter.shared_limit("100/hour", scope=host_scope)

@app.route("...")
@host_limit
def r1():
    ...

@app.route("...")
@host_limit
def r2():
    ...


Shared rate limits provide the same conveniences as individual rate limits

  • Can be chained with other shared limits or individual limits

  • Accept keying functions

  • Accept callables to determine the rate limit value


The Limiter.exempt() decorator simply marks a route or blueprint as being exempt from any rate limits.


The Limiter.request_filter() decorator marks a function as a filter for requests that are going to be tested for rate limits. If any of the request filters returns True, no rate limiting will be performed for that request. This mechanism can be used to create custom whitelists.

@limiter.request_filter
def header_whitelist():
    return request.headers.get("X-Internal", "") == "true"

@limiter.request_filter
def ip_whitelist():
    return request.remote_addr == "127.0.0.1"

In the above example, any request that contains the header X-Internal: true or originates from localhost will not be rate limited.


The following Flask configuration values are honored by Limiter. If the corresponding value is also passed in through the Limiter constructor, the constructor value takes precedence.


RATELIMIT_GLOBAL

Deprecated since version 0.9.4: Use RATELIMIT_DEFAULT instead.


RATELIMIT_DEFAULT

A comma (or some other delimiter) separated string that will be used to apply a default limit on all routes. If not provided, the default limits can be passed to the Limiter constructor as well (the values passed to the constructor take precedence over those in the config). See Rate limit string notation for details.


RATELIMIT_DEFAULTS_PER_METHOD

Whether default limits are applied per method, per route, or as a combination of all methods per route.


RATELIMIT_DEFAULTS_EXEMPT_WHEN

A function that should return a truthy value if the default rate limit(s) should be skipped for the current request. This callback is called in the flask request context before_request phase.


RATELIMIT_DEFAULTS_DEDUCT_WHEN

A function that should return a truthy value if a deduction should be made from the default rate limit(s) for the current request. This callback is called in the flask request context after_request phase.


RATELIMIT_APPLICATION

A comma (or some other delimiter) separated string that will be used to apply limits to the application as a whole (i.e. shared by all routes).


RATELIMIT_STORAGE_URL

A storage location conforming to the scheme in Storage scheme. A basic in-memory storage can be used by specifying memory://, though this should probably never be used in production. Some supported backends include:

  • Memcached: memcached://host:port

  • Redis: redis://host:port

  • GAE Memcached: gaememcached://host:port

For specific examples and requirements of supported backends please refer to Storage scheme.


RATELIMIT_STORAGE_OPTIONS

A dictionary of extra options to pass to the storage implementation upon initialization. (Useful if you're subclassing limits.storage.Storage to create a custom storage backend.)


RATELIMIT_STRATEGY

The rate limiting strategy to use. See Rate limiting strategies for details.


RATELIMIT_HEADERS_ENABLED

Enables returning Rate-limiting Headers. Defaults to False.


RATELIMIT_ENABLED

Overall kill switch for rate limits. Defaults to True.


RATELIMIT_HEADER_LIMIT

Header for the current rate limit. Defaults to X-RateLimit-Limit.


RATELIMIT_HEADER_RESET

Header for the reset time of the current rate limit. Defaults to X-RateLimit-Reset.


RATELIMIT_HEADER_REMAINING

Header for the number of requests remaining in the current rate limit. Defaults to X-RateLimit-Remaining.


RATELIMIT_HEADER_RETRY_AFTER

Header for when the client should retry the request. Defaults to Retry-After.


RATELIMIT_HEADER_RETRY_AFTER_VALUE

Allows configuration of how the value of the Retry-After header is rendered. One of http-date or delta-seconds (RFC 2616).


RATELIMIT_SWALLOW_ERRORS

Whether to allow failures while attempting to perform a rate limit, such as errors with downstream storage. Setting this value to True will effectively disable rate limiting for requests where an error has occurred.


RATELIMIT_IN_MEMORY_FALLBACK_ENABLED

True/False. If enabled, an in-memory rate limiter will be used as a fallback when the configured storage is down. Note that, when used in combination with RATELIMIT_IN_MEMORY_FALLBACK, the original rate limits will not be inherited and the values provided in RATELIMIT_IN_MEMORY_FALLBACK will be used instead.


RATELIMIT_IN_MEMORY_FALLBACK

A comma (or some other delimiter) separated string that will be used when the configured storage is down.


RATELIMIT_KEY_PREFIX

Prefix that is prepended to each stored rate limit key. This can be useful when using a shared storage for multiple applications or rate limit domains.

Rate limit string notation

Rate limits are specified as strings following the format:

[count] [per|/] [n (optional)] [second|minute|hour|day|month|year]

You can combine multiple rate limits by separating them with a delimiter of your choice. Examples:


  • 10 per hour

  • 10/hour

  • 10/hour;100/day;2000 per year

  • 100/day, 500/7days
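As an illustration of the notation (a sketch only; Flask-Limiter's actual parser lives in the limits package, and abbreviated units such as "10/s" are not handled here), a single limit string can be converted into a (count, window-in-seconds) pair like this:

```python
import re

# seconds per base unit of the notation
GRANULARITIES = {
    "second": 1, "minute": 60, "hour": 3600,
    "day": 86400, "month": 30 * 86400, "year": 365 * 86400,
}

def parse_limit(limit):
    """Parse e.g. '10 per hour', '10/hour' or '500/7days' into (count, seconds)."""
    match = re.match(
        r"\s*(\d+)\s*(?:per|/)\s*(\d+)?\s*(second|minute|hour|day|month|year)s?\s*",
        limit,
    )
    if not match:
        raise ValueError("malformed limit string: %r" % limit)
    count, multiple, unit = match.groups()
    return int(count), int(multiple or 1) * GRANULARITIES[unit]

parse_limit("10 per hour")   # -> (10, 3600)
parse_limit("500/7days")     # -> (500, 604800)
```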


If rate limit strings provided to the Limiter.limit() decorator are malformed and can't be parsed, the decorated route will fall back to the default rate limit(s) and an ERROR log message will be emitted. Refer to Logging for more details on capturing this information. Malformed default rate limit strings will, however, raise an exception as they are evaluated early enough to not cause disruption to a running application.

Rate limiting strategies

Flask-Limiter comes with three different rate limiting strategies built-in. Pick the one that works for your use-case by specifying it in your flask config as RATELIMIT_STRATEGY (one of fixed-window, fixed-window-elastic-expiry, or moving-window), or as a constructor keyword argument. The default configuration is fixed-window.

Fixed Window

This is the most memory-efficient strategy to use as it maintains one counter per resource and rate limit. It does however have its drawbacks as it allows bursts within each window, thus allowing an 'attacker' to bypass the limits. The effects of these bursts can be partially circumvented by enforcing multiple granularities of windows per resource.

For example, if you specify a 100/minute rate limit on a route, this strategy will allow 100 hits in the last second of one window and 100 more in the first second of the next window. To ensure that such bursts are managed, you could add a second rate limit of 2/second on the same route.
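The window arithmetic can be sketched in a few lines (a toy model, not the implementation in the limits package): each counter is keyed by resource plus the start of the current window, so two hits one second apart can land in different windows and each see a fresh count:

```python
import time
from collections import defaultdict

class FixedWindow:
    """Toy fixed-window counter: one counter per (key, window start)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = defaultdict(int)

    def hit(self, key, now=None):
        now = time.time() if now is None else now
        window_start = int(now // self.window) * self.window
        bucket = (key, window_start)
        if self.counters[bucket] >= self.limit:
            return False  # over the limit for this window
        self.counters[bucket] += 1
        return True

# 100/minute allows a burst of up to 200 hits straddling a window boundary:
fw = FixedWindow(100, 60)
late = sum(fw.hit("ip", now=59.5) for _ in range(100))   # last second of window 0
early = sum(fw.hit("ip", now=60.5) for _ in range(100))  # first second of window 1
# late == 100 and early == 100 -> 200 hits within one second of wall time
```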

Fixed Window with Elastic Expiry

This strategy works almost identically to the Fixed Window strategy with the exception that each hit results in the extension of the window. This strategy works well for creating large penalties for breaching a rate limit.

For example, if you specify a 100/minute rate limit on a route and it is being attacked at the rate of 5 hits per second for 2 minutes - the attacker will be locked out of the resource for an extra 60 seconds after the last hit. This strategy helps circumvent bursts.
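The lock-out behavior follows from one change to the fixed-window sketch above: every hit pushes the window's expiry forward (again a toy single-key model, not the limits package implementation):

```python
import time

class FixedWindowElasticExpiry:
    """Toy fixed window whose expiry is extended on every hit."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.count = 0
        self.expires_at = 0.0

    def hit(self, now=None):
        now = time.time() if now is None else now
        if now >= self.expires_at:  # window expired: start a fresh one
            self.count = 0
        self.expires_at = now + self.window  # every hit extends the expiry
        if self.count >= self.limit:
            return False
        self.count += 1
        return True

# An attacker who keeps hitting the limit keeps extending the window, and
# stays locked out until a full quiet window has passed since the last hit.
```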

Moving Window


The moving window strategy is only implemented for the redis and in-memory storage backends. The strategy requires using a list with fast random access which is not very convenient to implement with a memcached storage.

This strategy is the most effective for preventing bursts from bypassing the rate limit, as the window for each limit is not fixed at the start and end of each time unit (i.e. N/second for a moving window means N in the last 1000 milliseconds). There is however a higher memory cost associated with this strategy as it requires N items to be maintained in memory per resource and rate limit.
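Both the burst resistance and the per-key memory cost are visible in a sketch of the idea (a toy single-key model, not the limits package implementation): one timestamp is kept per accepted hit until it ages out of the window:

```python
import time
from collections import deque

class MovingWindow:
    """Toy moving window: keeps one timestamp per accepted hit."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = deque()  # up to `limit` timestamps held in memory

    def hit(self, now=None):
        now = time.time() if now is None else now
        while self.hits and self.hits[0] <= now - self.window:
            self.hits.popleft()  # drop hits older than the window
        if len(self.hits) >= self.limit:
            return False
        self.hits.append(now)
        return True

# 100/minute: the boundary burst that fools the fixed window is rejected
# here, because the window always covers the last 60 seconds.
mw = MovingWindow(100, 60)
for _ in range(100):
    mw.hit(now=59.5)
mw.hit(now=60.5)   # -> False: 100 hits already in the last 60 seconds
```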

Rate-limiting Headers

If the configuration is enabled, information about the rate limit with respect to the route being requested will be added to the response headers. Since multiple rate limits can be active for a given route - the rate limit with the lowest time granularity will be used in the scenario when the request does not breach any rate limits.


X-RateLimit-Limit

The total number of requests allowed for the active window.


X-RateLimit-Remaining

The number of requests remaining in the active window.


X-RateLimit-Reset

UTC seconds since epoch when the window will be reset.


Retry-After

Seconds to retry after, or the HTTP date when the rate limit will be reset. The way the value is presented depends on the configuration value set in RATELIMIT_HEADER_RETRY_AFTER_VALUE and defaults to delta-seconds.
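The two renderings named by RATELIMIT_HEADER_RETRY_AFTER_VALUE map onto the standard library as follows (a sketch of the header formats only, not Flask-Limiter internals; `retry_after_value` is a hypothetical helper name):

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def retry_after_value(reset_in_seconds, style="delta-seconds", now=None):
    """Render a Retry-After value as delta-seconds or as an HTTP-date."""
    if style == "delta-seconds":
        return str(reset_in_seconds)
    now = now or datetime.now(timezone.utc)
    # http-date per RFC 2616, e.g. 'Sun, 06 Nov 1994 08:49:37 GMT'
    return format_datetime(now + timedelta(seconds=reset_in_seconds), usegmt=True)

retry_after_value(30)   # -> '30'
```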


Enabling the headers has an additional cost with certain storage / strategy combinations.

  • Memcached + Fixed Window: an extra key per rate limit is stored to calculate X-RateLimit-Reset

  • Redis + Moving Window: an extra call to redis is involved during every request to calculate X-RateLimit-Remaining and X-RateLimit-Reset

The header names can be customised if required by either using the flask configuration (Configuration) values or by setting the header_mapping property of the Limiter as follows:

from flask_limiter import Limiter, HEADERS

limiter = Limiter()
limiter.header_mapping = {
    HEADERS.LIMIT: "X-My-Limit",
    HEADERS.RESET: "X-My-Reset",
    HEADERS.REMAINING: "X-My-Remaining"
}

# or by only partially specifying the overrides
limiter.header_mapping[HEADERS.LIMIT] = 'X-My-Limit'


Rate Limit Key Functions

You can easily customize your rate limits to be based on any characteristic of the incoming request. Both the Limiter constructor and the Limiter.limit() decorator accept a keyword argument key_func that should return a string (or an object that has a string representation).

Rate limiting a route by current user (using Flask-Login):

@limiter.limit("1 per day", key_func=lambda: current_user.username)
def test_route():
    return "42"

Rate limiting all requests by country:

from flask import request, Flask
import GeoIP
gi = GeoIP.open("GeoLiteCity.dat", GeoIP.GEOIP_INDEX_CACHE | GeoIP.GEOIP_CHECK_CACHE)

def get_request_country():
    return gi.record_by_name(request.remote_addr)['region_name']

app = Flask(__name__)
limiter = Limiter(app, default_limits=["10/hour"], key_func=get_request_country)

Custom Rate limit exceeded responses

The default configuration results in an abort(429) being called every time a rate limit is exceeded for a particular route. The exceeded limit is added to the response and results in a response body that looks something like:

<title>429 Too Many Requests</title>
<h1>Too Many Requests</h1>
<p>1 per 1 day</p>

If you want to configure the response, you can register an error handler for the 429 error code in a manner similar to the following example, which returns a json response instead:

@app.errorhandler(429)
def ratelimit_handler(e):
    return make_response(
        jsonify(error="ratelimit exceeded %s" % e.description),
        429
    )

Customizing rate limits based on response

For scenarios where the decision to count the current request towards a rate limit can only be made after the request has completed, a callable that accepts the current flask.Response object as its argument can be provided to the Limiter.limit() or Limiter.shared_limit() decorators through the deduct_when keyword argument. A truthy response from the callable will result in a deduction from the rate limit.

As an example, to only count non-200 responses towards the rate limit:

@app.route("...")
@limiter.limit(
    "1/second",
    deduct_when=lambda response: response.status_code != 200
)
def route():
    ...


All requests will be tested for the rate limit and rejected accordingly if the rate limit is already hit. The provision of the deduct_when argument only changes whether the request will count towards depleting the rate limit.

Using Flask Pluggable Views

If you are using a class-based approach to defining view functions, the regular method of decorating a view function to apply a per-route rate limit will not work. You can add rate limits to your view classes using the following approach.

app = Flask(__name__)
limiter = Limiter(app, key_func=get_remote_address)

class MyView(flask.views.MethodView):
    decorators = [limiter.limit("10/second")]
    def get(self):
        return "get"

    def put(self):
        return "put"


This approach is limited to either sharing the same rate limit for all http methods of a given flask.views.View, or applying the declared rate limit independently for each http method (to accomplish this, pass True to the per_method keyword argument of Limiter.limit()). Alternatively, the limit can be restricted to only certain http methods by passing them as a list to the methods keyword argument.

The above approach has been tested with sub-classes of flask.views.View, flask.views.MethodView and flask.ext.restful.Resource.

Rate limiting all routes in a flask.Blueprint

Limiter.limit(), Limiter.shared_limit() & Limiter.exempt() can all be applied to flask.Blueprint instances as well. In the following example the login Blueprint has a special rate limit applied to all its routes, while the doc Blueprint is exempt from all rate limits. The regular Blueprint follows the default rate limits.

app = Flask(__name__)
login = Blueprint("login", __name__, url_prefix = "/login")
regular = Blueprint("regular", __name__, url_prefix = "/regular")
doc = Blueprint("doc", __name__, url_prefix = "/doc")

@doc.route("/")
def doc_index():
    return "doc"

@regular.route("/")
def regular_index():
    return "regular"

@login.route("/")
def login_index():
    return "login"

limiter = Limiter(app, default_limits=["1/second"], key_func=get_remote_address)
limiter.limit("2 per minute")(login)  # illustrative special limit for the login blueprint
limiter.exempt(doc)

app.register_blueprint(doc)
app.register_blueprint(login)
app.register_blueprint(regular)



Logging

Each Limiter instance has a logger instance variable that is by default not configured with a handler. You can add your own handler to obtain log messages emitted by flask_limiter.

Simple stdout handler:

import logging
import sys

limiter = Limiter(app, key_func=get_remote_address)
limiter.logger.addHandler(logging.StreamHandler(sys.stdout))

Reusing all the handlers of the logger instance of the flask.Flask app:

app = Flask(__name__)
limiter = Limiter(app, key_func=get_remote_address)
for handler in app.logger.handlers:
    limiter.logger.addHandler(handler)

Custom error messages

Limiter.limit() & Limiter.shared_limit() can be provided with an error_message argument to override the default n per x error message that is returned to the calling client. The error_message argument can either be a simple string or a callable that returns one.

app = Flask(__name__)
limiter = Limiter(app, key_func=get_remote_address)

def error_handler():
    return app.config.get("DEFAULT_ERROR_MESSAGE")

@app.route("/")
@limiter.limit("1/second", error_message='chill!')
def index():
    ...

@app.route("/ping")
@limiter.limit("10/second", error_message=error_handler)
def ping():
    ...

Deploying an application behind a proxy

If your application is behind a proxy and you are using werkzeug 0.9 or newer, you can use the werkzeug.contrib.fixers.ProxyFix fixer to reliably get the remote address of the user, while protecting your application against ip spoofing via headers.

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address
from werkzeug.contrib.fixers import ProxyFix

app = Flask(__name__)
# for example if the request goes through one proxy
# before hitting your application server
app.wsgi_app = ProxyFix(app.wsgi_app, num_proxies=1)
limiter = Limiter(app, key_func=get_remote_address)



class flask_limiter.Limiter(app=None, key_func=None, global_limits=[], default_limits=[], default_limits_per_method=False, default_limits_exempt_when=None, default_limits_deduct_when=None, application_limits=[], headers_enabled=False, strategy=None, storage_uri=None, storage_options={}, auto_check=True, swallow_errors=False, in_memory_fallback=[], in_memory_fallback_enabled=False, retry_after=None, key_prefix='', enabled=True)[source]

Bases: object

The Limiter class initializes the Flask-Limiter extension.

  • app – flask.Flask instance to initialize the extension with.

  • default_limits (list) – a variable list of strings or callables returning strings denoting global limits to apply to all routes. See Rate limit string notation for more details.

  • default_limits_per_method (bool) – whether default limits are applied per method, per route or as a combination of all method per route.

  • default_limits_exempt_when (function) – a function that should return True/False to decide if the default limits should be skipped

  • default_limits_deduct_when (function) – a function that receives the current flask.Response object and returns True/False to decide if a deduction should be made from the default rate limit(s)

  • application_limits (list) – a variable list of strings or callables returning strings for limits that are applied to the entire application (i.e a shared limit for all routes)

  • key_func (function) – a callable that returns the domain to rate limit by.

  • headers_enabled (bool) – whether X-RateLimit response headers are written.

  • strategy (str) – the strategy to use. Refer to Rate limiting strategies

  • storage_uri (str) – the storage location. Refer to Configuration

  • storage_options (dict) – kwargs to pass to the storage implementation upon instantiation.

  • auto_check (bool) – whether to automatically check the rate limit in the before_request chain of the application. default True

  • swallow_errors (bool) – whether to swallow errors when hitting a rate limit. An exception will still be logged. default False

  • in_memory_fallback (list) – a variable list of strings or callables returning strings denoting fallback limits to apply when the storage is down.

  • in_memory_fallback_enabled (bool) – simply falls back to in memory storage when the main storage is down and inherits the original limits.

  • key_prefix (str) – prefix prepended to rate limiter keys.


check()[source]

check the limits for the current request




exempt(obj)[source]

decorator to mark a view or all views in a blueprint as exempt from rate limits.


init_app(app)[source]

  • app – flask.Flask instance to rate limit.

limit(limit_value, key_func=None, per_method=False, methods=None, error_message=None, exempt_when=None, override_defaults=True, deduct_when=None)[source]

decorator to be used for rate limiting individual routes or blueprints.

  • limit_value – rate limit string or a callable that returns a string. Rate limit string notation for more details.

  • key_func (function) – function/lambda to extract the unique identifier for the rate limit. defaults to remote address of the request.

  • per_method (bool) – whether the limit is sub categorized into the http method of the request.

  • methods (list) – if specified, only the methods in this list will be rate limited (default: None).

  • error_message – string (or callable that returns one) to override the error message used in the response.

  • exempt_when (function) – function/lambda used to decide if the rate limit should be skipped.

  • override_defaults (bool) – whether the decorated limit overrides the default limits. (default: True)

  • deduct_when (function) – a function that receives the current flask.Response object and returns True/False to decide if a deduction should be done from the rate limit


request_filter(fn)[source]

decorator to mark a function as a filter to be executed to check if the request is exempt from rate limiting.


reset()[source]

resets the storage if it supports being reset

shared_limit(limit_value, scope, key_func=None, error_message=None, exempt_when=None, override_defaults=True, deduct_when=None)[source]

decorator to be applied to multiple routes sharing the same rate limit.

  • limit_value – rate limit string or a callable that returns a string. See Rate limit string notation for more details.

  • scope – a string or callable that returns a string for defining the rate limiting scope.

  • key_func (function) – function/lambda to extract the unique identifier for the rate limit. defaults to remote address of the request.

  • error_message – string (or callable that returns one) to override the error message used in the response.

  • exempt_when (function) – function/lambda used to decide if the rate limit should be skipped.

  • override_defaults (bool) – whether the decorated limit overrides the default limits. (default: True)

  • deduct_when (function) – a function that receives the current flask.Response object and returns True/False to decide if a deduction should be done from the rate limit


exception flask_limiter.RateLimitExceeded(limit)[source]

Bases: werkzeug.exceptions.TooManyRequests

exception raised when a rate limit is hit.

The exception results in abort(429) being called.



flask_limiter.util.get_ipaddr()[source]

Returns the IP address for the current request (or 127.0.0.1 if none found) based on the X-Forwarded-For headers.

Deprecated since version 0.9.2.


flask_limiter.util.get_remote_address()[source]

Returns the IP address for the current request (or 127.0.0.1 if none found).



Release Date: 2020-05-21

  • Bug Fix

    • Ensure headers provided explicitly by setting _header_mapping take precedence over configuration values.


Release Date: 2020-05-20

  • Features

    • Add new deduct_when argument that accepts a function to decorated limits to conditionally perform depletion of a rate limit (Pull Request 248)

    • Add new default_limits_deduct_when argument to Limiter constructor to conditionally perform depletion of default rate limits

    • Add default_limits_exempt_when argument that accepts a function to allow skipping the default limits in the before_request phase

  • Bug Fix

    • Fix handling of storage failures during after_request phase.

  • Code Quality

    • Use github-actions instead of travis for CI

    • Use pytest instead of nosetests

    • Add docker configuration for test dependencies

    • Increase code coverage to 100%

    • Ensure flake8 compliance


Release Date: 2020-02-26

  • Bug fix

    • Syntax error in version 1.2.0 when application limits are provided through configuration file (Issue 241)


Release Date: 2020-02-25

  • Add override_defaults argument to decorated limits to allow combining defaults with decorated limits.

  • Add configuration parameter RATELIMIT_DEFAULTS_PER_METHOD to control whether defaults are applied per method.

  • Add support for in memory fallback without override (Pull Request 236)

  • Bug fix

    • Ensure defaults are enforced when decorated limits are skipped (Issue 238)


Release Date: 2019-10-02


Release Date: 2017-12-08

  • Bug fix

    • Duplicate rate limits applied via application limits (Issue 108)


Release Date: 2017-11-06

  • Improved documentation for handling ip addresses for applications behind proxies (Issue 41)

  • Execute rate limits for decorated routes in decorator instead of before_request (Issue 67)

  • Bug Fix

    • Python 3.5 Errors (Issue 82)

    • RATELIMIT_KEY_PREFIX configuration constant not used (Issue 88)

    • Can’t use dynamic limit in default_limits (Issue 94)

    • Retry-After header always zero when using key prefix (Issue 99)


Release Date: 2017-08-18

  • Upgrade versioneer


Release Date: 2017-07-26

  • Add support for key prefixes


Release Date: 2017-05-01

  • Implemented application wide shared limits


Release Date: 2016-03-14

  • Allow reset of limiter storage if available


Release Date: 2016-03-04

  • Deprecation warning for default key_func get_ipaddr

  • Support for Retry-After header


Release Date: 2015-11-21

  • Re-expose enabled property on Limiter instance.


Release Date: 2015-11-13

  • In-memory fallback option for unresponsive storage

  • Rate limit exemption option per limit


Release Date: 2015-10-05

  • Bug fix for reported issues of missing (limits) dependency upon installation.


Release Date: 2015-10-03

  • Documentation tweaks.


Release Date: 2015-09-17

  • Remove outdated files from egg


Release Date: 2015-08-06

  • Fixed compatibility with latest version of Flask-Restful


Release Date: 2015-06-07

  • No functional change


Release Date: 2015-04-02

  • Bug fix for case sensitive methods whitelist for limits decorator


Release Date: 2015-03-20

  • Hotfix for dynamic limits with blueprints

  • Undocumented feature to pass storage options to underlying storage backend.


Release Date: 2015-03-02

  • methods keyword argument for limits decorator to specify specific http methods to apply the rate limit to.


Release Date: 2015-02-16


Release Date: 2015-02-03

  • Use Werkzeug TooManyRequests as the exception raised when available.


Release Date: 2015-01-30

  • Bug Fix

    • Fix for version comparison when monkey patching Werkzeug (Issue 24)


Release Date: 2015-01-09

  • Refactor core storage & ratelimiting strategy out into the limits package.

  • Remove duplicate hits when stacked rate limits are in use and a rate limit is hit.


Release Date: 2015-01-09

  • Refactoring of RedisStorage for extensibility (Issue 18)

  • Bug fix: Correct default setting for enabling rate limit headers. (Issue 22)


Release Date: 2014-10-21

  • Bug fix

    • Fix for responses slower than rate limiting window. (Issue 17.)


Release Date: 2014-10-01

  • Bug fix: in memory storage thread safety


Release Date: 2014-08-31

  • Support for manually triggering rate limit check


Release Date: 2014-08-26

  • Header name overrides


Release Date: 2014-07-13


Release Date: 2014-07-11

  • per http method rate limit separation (Recipe)

  • documentation improvements


Release Date: 2014-06-24


Release Date: 2014-06-13


Release Date: 2014-06-13

  • Bug fix

    • Werkzeug < 0.9 Compatibility (Issue 6.)


Release Date: 2014-06-12

  • Hotfix : use HTTPException instead of abort to play well with other extensions.


Release Date: 2014-06-12

  • Allow configuration overrides via extension constructor


Release Date: 2014-06-04

  • Improved implementation of moving-window X-RateLimit-Reset value.


Release Date: 2014-05-28


Release Date: 2014-05-26

  • Bug fix

    • Memory leak when using Limiter.storage.MemoryStorage (Issue 4.)

  • Improved test coverage


Release Date: 2014-02-20

  • Strict version requirement on six

  • documentation tweaks


Release Date: 2014-02-19

  • improved logging support for multiple handlers

  • allow callables to be passed to Limiter.limit decorator to dynamically load rate limit strings.

  • add a global kill switch in flask config for all rate limits.

  • Bug fixes

    • default key function for rate limit domain wasn’t accounting for X-Forwarded-For header.


Release Date: 2014-02-18

  • add new decorator to exempt routes from limiting.

  • Bug fixes

    • versioneer.py wasn’t included in manifest.

    • configuration string for strategy was out of sync with docs.


Release Date: 2014-02-15

  • python 2.6 support via counter backport

  • source docs.


Release Date: 2014-02-15

  • Implemented configurable strategies for rate limiting.

  • Bug fixes

    • better locking for in-memory storage

    • multi threading support for memcached storage


Release Date: 2014-02-14

  • Bug fixes

    • fix initializing the extension without an app

    • don’t rate limit static files


Release Date: 2014-02-13

  • first release.