Rate Limit Key Functions#

You can easily customize your rate limits to be based on any characteristic of the incoming request. Both the Limiter constructor and the limit() decorator accept a keyword argument key_func that should return a string (or an object that has a string representation).

Rate limiting a route by current user (using Flask-Login):

@app.route("/test")
@limiter.limit("1 per day", key_func=lambda: current_user.username)
def test_route():
    return "42"

Rate limiting all requests by country:

from flask import request, Flask
from flask_limiter import Limiter
import GeoIP

# load the GeoIP city database (the path and flags will vary with your setup)
gi = GeoIP.open("GeoLiteCity.dat", GeoIP.GEOIP_MEMORY_CACHE)

def get_request_country():
    return gi.record_by_name(request.remote_addr)['region_name']

app = Flask(__name__)
limiter = Limiter(get_request_country, app=app, default_limits=["10/hour"])

Custom Rate limit exceeded responses#

The default configuration results in a RateLimitExceeded exception being raised, which halts any further processing of the request and returns a response with status `429`.

The exceeded limit is added to the response, resulting in a response body that looks something like:

<title>429 Too Many Requests</title>
<h1>Too Many Requests</h1>
<p>1 per 1 day</p>

For all routes that are rate limited#

If you want to configure the response you can register an error handler for the 429 error code in a manner similar to the following example, which returns a json response instead:

@app.errorhandler(429)
def ratelimit_handler(e):
    return make_response(
        jsonify(error=f"ratelimit exceeded {e.description}"),
        429
    )

New in version 2.6.0.

The same effect can be achieved by using the on_breach parameter when initializing the Limiter. If the callback passed to this parameter returns an instance of Response that response will be the one embedded into the RateLimitExceeded exception that is raised.

For example:

from flask import make_response, render_template
from flask_limiter import Limiter, RequestLimit
from flask_limiter.util import get_remote_address

def default_error_responder(request_limit: RequestLimit):
    return make_response(
        render_template("my_ratelimit_template.tmpl", request_limit=request_limit),
        429
    )

limiter = Limiter(
    get_remote_address,
    app=app,
    on_breach=default_error_responder
)


If you have specified both an on_breach callback and registered a callback using the errorhandler() decorator, the one registered for 429 errors will still be called and could end up ignoring the response returned by the on_breach callback.

There may be legitimate reasons to do this (for example if your application raises 429 errors by itself or through another middleware).

This can be managed in the callback registered with errorhandler() by checking if the incoming error has a canned response and using that instead of building a new one:

@app.errorhandler(429)
def careful_ratelimit_handler(error):
    return error.get_response() or make_response(
        jsonify(error=f"ratelimit exceeded {error.description}"),
        429
    )


Changed in version 2.8.0: Any errors encountered when calling an on_breach callback will be re-raised unless swallow_errors is set to True.
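A sketch of a side-effect-only on_breach callback: the callback name and log message are illustrative, and the wiring is shown in comments because it needs a running app. Returning None means the default (or errorhandler-provided) response is still used.

```python
import logging

def audit_breach(request_limit) -> None:
    # Side-effect-only callback: log the breach and return None so the
    # normal 429 handling still produces the response.
    logging.getLogger("flask-limiter").warning("rate limit breached: %s", request_limit)

# Hypothetical wiring; with swallow_errors=True an exception raised inside
# audit_breach would be suppressed instead of surfacing to the client:
# limiter = Limiter(
#     get_remote_address,
#     app=app,
#     on_breach=audit_breach,
#     swallow_errors=True,
# )
```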

For specific rate limit decorated routes#

New in version 2.6.0.

If the objective is to only customize rate limited error responses for certain rate limited routes this can be achieved in a similar manner as above, through the on_breach parameter of the rate limit decorator.

Following the example from above where the extension was initialized with an on_breach callback, the index route below declares its own on_breach callback which, instead of rendering a template, returns a JSON response (with a 200 status code):

app = Flask(__name__)
limiter = Limiter(
    get_remote_address,
    app=app,
    on_breach=default_error_responder
)

def index_ratelimit_error_responder(request_limit: RequestLimit):
    return jsonify({"error": "rate_limit_exceeded"})

@app.route("/")
@limiter.limit("10/minute", on_breach=index_ratelimit_error_responder)
def index():
    ...

The above example also demonstrates a subtle implementation detail: the response from the on_breach callback passed to the limit() decorator (if provided) takes priority over the response from the on_breach callback passed to the Limiter constructor, if there is one.

Customizing the cost of a request#

By default whenever a request is served a cost of 1 is charged for each rate limit that applies within the context of that request.

There may be situations where a different value should be used.

The limit() and shared_limit() decorators both accept a cost parameter which accepts either a static int or a callable that returns an int.

As an example, the following configuration results in a double penalty whenever the condition in my_cost_function holds:

from flask import request, current_app

def my_cost_function() -> int:
    if ...:  # some condition
        return 2
    return 1

@app.route("/")
@limiter.limit("100/second", cost=my_cost_function)
def root():
    ...

A similar approach can be used for default and application level limits by providing a cost function to the Limiter constructor via the default_limits_cost or application_limits_cost parameters.
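As a sketch, a default-limit cost function might double-charge anonymous clients. The X-Api-Key header and the helper names below are illustrative, not part of flask-limiter; the policy is kept in a pure helper so it is easy to test, and the constructor wiring is shown in comments since it needs an app.

```python
def cost_for(api_key) -> int:
    # pure policy helper: clients without a key pay double
    return 1 if api_key else 2

def default_cost() -> int:
    from flask import request  # needs an active request context
    return cost_for(request.headers.get("X-Api-Key"))

# Hypothetical wiring:
# limiter = Limiter(
#     get_remote_address,
#     app=app,
#     default_limits=["1000/hour"],
#     default_limits_cost=default_cost,
# )
```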

Customizing rate limits based on response#

For scenarios where the decision to count the current request towards a rate limit can only be made after the request has completed, a callable that accepts the current flask.Response object as its argument can be provided to the limit() or shared_limit() decorators through the deduct_when keyword argument. A truthy response from the callable will result in a deduction from the rate limit.

As an example, to count only non-200 responses towards the rate limit:

@app.route("/")
@limiter.limit(
    "1/second",
    deduct_when=lambda response: response.status_code != 200
)
def route():
    ...

deduct_when can also be provided for default limits by providing the default_limits_deduct_when parameter to the Limiter constructor.


All requests will be tested for the rate limit and rejected accordingly if the rate limit is already hit. The provision of the deduct_when argument only changes whether the request will count towards depleting the rate limit.
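A minimal sketch of such a predicate for default_limits_deduct_when; the function name and threshold are illustrative, and the wiring is commented out since it requires an app:

```python
def deduct_on_error(response) -> bool:
    # Only error responses deplete the default limits; successful requests
    # are still tested against the limit but never charged.
    return response.status_code >= 400

# Hypothetical wiring:
# limiter = Limiter(
#     get_remote_address,
#     app=app,
#     default_limits=["100/hour"],
#     default_limits_deduct_when=deduct_on_error,
# )
```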

Rate limiting Class-based Views#

If you are taking a class-based approach to defining views, the recommended way to add decorators (see Class-based Views in the Flask documentation) is to add the limit() decorator to the decorators attribute of your view subclass, as shown in the example below:

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app)

class MyView(flask.views.MethodView):
    decorators = [limiter.limit("10/second")]

    def get(self):
        return "get"

    def put(self):
        return "put"


This approach is limited to either sharing the same rate limit for all http methods of a given flask.views.View or applying the declared rate limit independently for each http method (to accomplish this, pass in True to the per_method keyword argument to limit()). Alternatively, the limit can be restricted to only certain http methods by passing them as a list to the methods keyword argument.
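The two variants described above can be sketched as follows; this assumes the limiter instance from the previous example, and the limit strings are illustrative:

```python
import flask.views

class PerMethodView(flask.views.MethodView):
    # per_method=True: each HTTP method gets its own independent window
    decorators = [limiter.limit("10/second", per_method=True)]

class GetOnlyView(flask.views.MethodView):
    # methods=["GET"]: only GET requests are limited; PUT is unaffected
    decorators = [limiter.limit("10/second", methods=["GET"])]
```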

Rate limiting all routes in a Blueprint#


Blueprint instances registered on another blueprint rather than on the main Flask instance were not considered until version 2.3.0. Effectively they neither inherited the rate limits explicitly registered on the parent Blueprint nor were they exempt from rate limits if the parent had been marked exempt. (See #326, and the Nested Blueprints section below.)

limit(), shared_limit() & exempt() can all be applied to flask.Blueprint instances as well. In the following example the login Blueprint has a special rate limit applied to all its routes, while the doc Blueprint is exempt from all rate limits. The regular Blueprint follows the default rate limits.

app = Flask(__name__)
login = Blueprint("login", __name__, url_prefix="/login")
regular = Blueprint("regular", __name__, url_prefix="/regular")
doc = Blueprint("doc", __name__, url_prefix="/doc")

@doc.route("/")
def doc_index():
    return "doc"

@regular.route("/")
def regular_index():
    return "regular"

@login.route("/")
def login_index():
    return "login"

limiter = Limiter(get_remote_address, app=app, default_limits=["1/second"])

limiter.limit("2/hour")(login)
limiter.exempt(doc)

app.register_blueprint(doc)
app.register_blueprint(login)
app.register_blueprint(regular)


Nested Blueprints#

New in version 2.3.0.

Nested Blueprints require some special considerations.

Exempting routes in nested Blueprints#

Expanding the example from the Flask documentation:

parent = Blueprint('parent', __name__, url_prefix='/parent')
child = Blueprint('child', __name__, url_prefix='/child')

parent.register_blueprint(child)
limiter.exempt(parent)

Routes under the child blueprint do not automatically get exempted by default and have to be marked exempt explicitly. This behavior is to maintain backward compatibility and can be opted out of by adding DESCENDENTS to flags when calling Limiter.exempt():

from flask_limiter import ExemptionScope

limiter.exempt(
    parent,
    flags=ExemptionScope.DEFAULT | ExemptionScope.APPLICATION | ExemptionScope.DESCENDENTS
)

Explicitly setting limits / exemptions on nested Blueprints#

By combining the override_defaults parameter when explicitly declaring limits on Blueprints with the flags parameter when exempting Blueprints via exempt(), the resolution of inherited and descendant limits within the scope of a Blueprint can be controlled.

Here’s a slightly involved example:

from flask_limiter import ExemptionScope

limiter = Limiter(
    get_remote_address,
    app=app,
    default_limits=["100/hour"],
    application_limits=["100/minute"]
)

parent = Blueprint('parent', __name__, url_prefix='/parent')
child = Blueprint('child', __name__, url_prefix='/child')
grandchild = Blueprint('grandchild', __name__, url_prefix='/grandchild')
health = Blueprint('health', __name__, url_prefix='/health')

parent.register_blueprint(child)
child.register_blueprint(grandchild)
app.register_blueprint(parent)
app.register_blueprint(health)

limiter.limit("2/minute")(parent)
limiter.limit("1/second", override_defaults=False)(child)
limiter.limit("10/minute")(grandchild)
limiter.exempt(
    health,
    flags=ExemptionScope.DEFAULT | ExemptionScope.APPLICATION | ExemptionScope.ANCESTORS
)


Effectively this means:

  1. Routes under parent will override the application defaults and will be limited to 2 per minute

  2. Routes under child will respect both the parent and the application defaults and effectively be limited to at most 1 per second, 2 per minute and 100 per hour

  3. Routes under grandchild will not inherit either the limits from child or parent or the application defaults and allow 10 per minute

  4. All calls to /health/ will be exempt from all limits (including any limits that would otherwise be inherited from the Blueprints it is nested under due to the addition of the ANCESTORS flag).


Only calls to /health will be exempt from the application-wide global limit of 100/minute.


Logging#

Each Limiter instance has a registered Logger named flask-limiter that is by default not configured with a handler.

This can be configured according to your needs:

import logging

limiter_logger = logging.getLogger("flask-limiter")

# force DEBUG logging
limiter_logger.setLevel(logging.DEBUG)

# restrict to only error level
limiter_logger.setLevel(logging.ERROR)

# add a filter (SomeFilter is a placeholder for your own logging.Filter)
limiter_logger.addFilter(SomeFilter())

# etc ..

Custom error messages#

limit() & shared_limit() can be provided with an error_message argument to override the default n per x error message that is returned to the calling client. The error_message argument can either be a simple string or a callable that returns one.

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app)

def error_handler():
    return app.config.get("DEFAULT_ERROR_MESSAGE")

@app.route("/")
@limiter.limit("1/second", error_message='chill!')
def index():
    ...

@app.route("/ping")
@limiter.limit("10/second", error_message=error_handler)
def ping():
    ...

Custom rate limit headers#

Though you can get pretty far configuring the standard rate limiting headers with the configuration parameters described under Rate-limiting Headers, this may not be sufficient for your use case.

For such cases you can access the current_limit property from the Limiter instance from anywhere within a request context.

As an example you could leave the built in header population disabled and add your own with an after_request() hook:

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app)

@app.route("/")
@limiter.limit("1/second")
def index():
    return "42"

@app.after_request
def add_headers(response):
    if limiter.current_limit:
        response.headers["RemainingLimit"] = limiter.current_limit.remaining
        response.headers["ResetAt"] = limiter.current_limit.reset_at
        response.headers["MaxRequests"] = limiter.current_limit.limit.amount
        response.headers["WindowSize"] = limiter.current_limit.limit.get_expiry()
        response.headers["Breached"] = limiter.current_limit.breached
    return response

This will result in headers along the lines of:

< RemainingLimit: 0
< ResetAt: 1641691205
< MaxRequests: 1
< WindowSize: 1
< Breached: True

Deploying an application behind a proxy#

If your application is behind a proxy and you are using werkzeug 0.9+, you can use the werkzeug.middleware.proxy_fix.ProxyFix fixer to reliably obtain the remote address of the user, while protecting your application against IP spoofing via headers.

from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# for example if the request goes through one proxy
# before hitting your application server
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1)
limiter = Limiter(get_remote_address, app=app)
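To illustrate why x_for must match your topology, the selection rule ProxyFix applies to X-Forwarded-For can be sketched as a pure function (this is an illustration of the behavior, not werkzeug's actual implementation): each trusted proxy appends one value, and the x_for-th value from the right is taken as the client address.

```python
def client_ip(x_forwarded_for: str, x_for: int) -> str:
    # X-Forwarded-For is a comma-separated list; each proxy in the chain
    # appends the peer address it saw, so with x_for trusted proxies the
    # client address is the x_for-th entry from the right.
    hops = [h.strip() for h in x_forwarded_for.split(",")]
    return hops[-x_for]
```

Setting x_for larger than the number of proxies you actually control lets clients spoof their address by sending the header themselves, which is exactly what ProxyFix guards against.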