API Reference

Cap Class

fastapicap.Cap

Cap()

Singleton-style Redis connection manager for Cap.

This class provides a shared, async Redis connection for all rate limiter instances. It is not meant to be instantiated; use the classmethod init_app to initialize the connection.

Attributes:

redis (Optional[Redis]): The shared aioredis Redis connection instance.

Example

Cap.init_app("redis://localhost:6379/0")

Now Cap.redis can be used by all limiters.

The constructor is intentionally blocked: calling Cap() always raises, so the class is used through its class methods only.

Raises:

RuntimeError: Always, to enforce singleton usage.

Source code in fastapicap/connection.py
def __init__(self) -> None:
    """
    Prevent instantiation of Cap.

    Raises:
        RuntimeError: Always, to enforce singleton usage.
    """
    raise RuntimeError("Use class methods only; do not instantiate Cap.")

init_app classmethod

init_app(redis_url)

Initialize the shared Redis connection for Cap.

Parameters:

redis_url (str): The Redis connection URL. Required.
Example

Cap.init_app("redis://localhost:6379/0")

Source code in fastapicap/connection.py
@classmethod
def init_app(cls, redis_url: str) -> None:
    """
    Initialize the shared Redis connection for Cap.

    Args:
        redis_url (str): The Redis connection URL.

    Example:
        Cap.init_app("redis://localhost:6379/0")
    """
    cls.redis = aioredis.from_url(redis_url, decode_responses=True)
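
A minimal startup sketch: the FastAPI lifespan wiring shown here is illustrative (any hook that runs before the first request works); only Cap.init_app itself comes from fastapicap.

from contextlib import asynccontextmanager

from fastapi import FastAPI

from fastapicap import Cap


@asynccontextmanager
async def lifespan(app: FastAPI):
    # Initialize the shared Redis connection once, before serving requests.
    Cap.init_app("redis://localhost:6379/0")
    yield


app = FastAPI(lifespan=lifespan)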

Strategy Classes

fastapicap.RateLimiter

RateLimiter(limit, seconds=0, minutes=0, hours=0, days=0, key_func=None, on_limit=None, prefix='cap')

Bases: BaseLimiter

Implements a Fixed Window rate limiting algorithm.

This limiter restricts the number of requests within a fixed time window. When a new window starts, the counter resets to zero. All requests within the same window consume from the same counter.

Parameters:

limit (int): The maximum number of requests allowed within the defined window. Must be a positive integer. Required.
seconds (int): The number of seconds defining the window size. Can be combined with minutes, hours, or days. Defaults to 0.
minutes (int): The number of minutes defining the window size. Defaults to 0.
hours (int): The number of hours defining the window size. Defaults to 0.
days (int): The number of days defining the window size. Defaults to 0.
key_func (Optional[Callable[[Request], str]]): An asynchronous or synchronous function that extracts a unique key from the request. Defaults to client IP and path.
on_limit (Optional[Callable[[Request, Response, int], None]]): An asynchronous or synchronous function called when the rate limit is exceeded. Defaults to raising HTTP 429.
prefix (str): Redis key prefix for all limiter keys. Defaults to "cap".

Attributes:

limit (int): The maximum requests allowed per window.
window_ms (int): The calculated window size in milliseconds.
lua_script (str): The Lua script used for fixed window logic in Redis.

Raises:

ValueError: If the limit is not positive or if the calculated window_ms is not positive (i.e., all time units are zero).

Source code in fastapicap/strategy/fixed_window.py
def __init__(
    self,
    limit: int,
    seconds: int = 0,
    minutes: int = 0,
    hours: int = 0,
    days: int = 0,
    key_func: Optional[Callable[[Request], str]] = None,
    on_limit: Optional[Callable[[Request, Response, int], None]] = None,
    prefix: str = "cap",
) -> None:
    super().__init__(key_func=key_func, on_limit=on_limit, prefix=prefix)
    self.limit = limit
    self.window_ms = (
        (seconds * 1000)
        + (minutes * 60 * 1000)
        + (hours * 60 * 60 * 1000)
        + (days * 24 * 60 * 60 * 1000)
    )
    self.lua_script = FIXED_WINDOW
    self.prefix: str = f"{prefix}::{self.__class__.__name__}"
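
A short usage sketch, attaching the limiter to a route as a FastAPI dependency. The route, limit, and Redis URL are illustrative; this assumes Cap.init_app has already been called.

from fastapi import Depends, FastAPI

from fastapicap import Cap, RateLimiter

Cap.init_app("redis://localhost:6379/0")

app = FastAPI()

# Allow at most 5 requests per client IP + path in each fixed 1-minute window.
limiter = RateLimiter(limit=5, minutes=1)


@app.get("/ping", dependencies=[Depends(limiter)])
async def ping():
    return {"message": "pong"}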

__call__ async

__call__(request, response)

Applies the rate limiting logic to the incoming request. This method interacts with Redis to increment a counter within the current time window and checks whether the limit has been exceeded.

Parameters:

request (Request): The incoming FastAPI request object.
response (Response): The FastAPI response object. This can be modified by the on_limit handler if needed.

Raises:

HTTPException: By default, if the rate limit is exceeded, BaseLimiter._default_on_limit raises an HTTPException with status code 429. Custom on_limit functions may raise other exceptions or handle the response differently.

Source code in fastapicap/strategy/fixed_window.py
async def __call__(self, request: Request, response: Response):
    """
    Apply the rate limiting logic to the incoming request. It interacts with Redis to
    increment a counter within the current time window and checks if the
    limit has been exceeded.

    Args:
        request (Request): The incoming FastAPI request object.
        response (Response): The FastAPI response object. This can be
            modified by the `on_limit` handler if needed.

    Raises:
        HTTPException: By default, if the rate limit is exceeded,
            `BaseLimiter._default_on_limit` will raise an `HTTPException`
            with status code 429. Custom `on_limit` functions may raise
            other exceptions or handle the response differently.
    """
    redis = self._ensure_redis()
    await self._ensure_lua_sha(self.lua_script)
    key: str = await self._safe_call(self.key_func, request)
    full_key = f"{self.prefix}:{key}"
    result = await redis.evalsha(
        self.lua_sha, 1, full_key, str(self.limit), str(self.window_ms)
    )
    allowed = result == 0
    retry_after = int(result / 1000) if not allowed else 0
    if not allowed:
        await self._safe_call(self.on_limit, request, response, retry_after)

fastapicap.SlidingWindowRateLimiter

SlidingWindowRateLimiter(limit, seconds=0, minutes=0, hours=0, days=0, key_func=None, on_limit=None, prefix='cap')

Bases: BaseLimiter

Implements an Approximated Sliding Window rate limiting algorithm.

This algorithm provides a more accurate and smoother rate limiting experience than a simple Fixed Window, while being more memory-efficient than a pure log-based sliding window. It works by maintaining counters for the current fixed window and the immediately preceding fixed window. The effective count for the sliding window is then calculated as a weighted sum of the requests in the previous window and the current window, based on how much of the previous window still "slides" into the current view.
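
The weighted count described above can be sketched as follows. This is a standard formulation of the approximated sliding window; the library's Lua script may differ in detail.

def estimated_count(prev_count: int, curr_count: int, elapsed_ms: int, window_ms: int) -> float:
    # Fraction of the previous window that still overlaps the sliding window.
    overlap = (window_ms - elapsed_ms) / window_ms
    # Weighted sum: part of the previous window's requests plus all current ones.
    return prev_count * overlap + curr_count


# Example: 40 requests last window, 10 so far, 15 s into a 60 s window:
# estimated_count(40, 10, 15_000, 60_000) == 40 * 0.75 + 10 == 40.0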

Parameters:

limit (int): The maximum number of requests allowed within the defined sliding window. Must be a positive integer. Required.
seconds (int): The number of seconds defining the size of the individual fixed window segments that make up the sliding window. This value, combined with the others, determines window_ms. Defaults to 0.
minutes (int): The number of minutes defining the window segment size. Defaults to 0.
hours (int): The number of hours defining the window segment size. Defaults to 0.
days (int): The number of days defining the window segment size. Defaults to 0.
key_func (Optional[Callable[[Request], str]]): An asynchronous or synchronous function that extracts a unique key from the request. It should accept a fastapi.Request object and return a str. Defaults to client IP and path.
on_limit (Optional[Callable[[Request, Response, int], None]]): An asynchronous or synchronous function called when the rate limit is exceeded. It should accept a fastapi.Request, a fastapi.Response, and an int (retry_after seconds), and should not return a value. Defaults to raising HTTP 429.
prefix (str): Redis key prefix for all limiter keys. Defaults to "cap".

Attributes:

limit (int): The maximum requests allowed within the sliding window.
window_ms (int): The calculated size of a single fixed window segment in milliseconds (e.g., seconds=60 gives 60000 ms). The sliding window itself covers a period equivalent to window_ms.
lua_script (str): The Lua script used for the approximated sliding window logic in Redis.

Raises:

ValueError: If the limit is not positive or if the calculated window_ms is not positive (i.e., all time units are zero).

Note

This implementation relies on a Redis Lua script to atomically manage and count requests within the current and previous fixed window segments. The limiter converts the script's result into an approximate retry_after value in whole seconds, indicating when the next request might be allowed.

Source code in fastapicap/strategy/sliding_window.py
def __init__(
    self,
    limit: int,
    seconds: int = 0,
    minutes: int = 0,
    hours: int = 0,
    days: int = 0,
    key_func: Optional[Callable[[Request], str]] = None,
    on_limit: Optional[Callable[[Request, Response, int], None]] = None,
    prefix: str = "cap",
):
    super().__init__(key_func=key_func, on_limit=on_limit, prefix=prefix)
    self.limit = limit
    if limit <= 0:
        raise ValueError("Limit must be a positive integer.")
    self.window_ms = (
        (seconds * 1000)
        + (minutes * 60 * 1000)
        + (hours * 60 * 60 * 1000)
        + (days * 24 * 60 * 60 * 1000)
    )
    self.lua_script = SLIDING_WINDOW
    self.prefix: str = f"{prefix}::{self.__class__.__name__}"

__call__ async

__call__(request, response)

Applies the approximated sliding window rate limiting logic to the incoming request.

This method is the core of the rate limiter. It interacts with Redis to increment counters for the current and previous window segments and checks if the estimated count within the sliding window exceeds the limit.

Parameters:

request (Request): The incoming FastAPI request object.
response (Response): The FastAPI response object. This can be modified by the on_limit handler if needed.

Raises:

HTTPException: By default, if the rate limit is exceeded, BaseLimiter._default_on_limit raises an HTTPException with status code 429. Custom on_limit functions may raise other exceptions or handle the response differently.

Source code in fastapicap/strategy/sliding_window.py
async def __call__(self, request: Request, response: Response):
    """
    Applies the approximated sliding window rate limiting logic to the incoming request.

    This method is the core of the rate limiter. It interacts with Redis to
    increment counters for the current and previous window segments and
    checks if the estimated count within the sliding window exceeds the limit.

    Args:
        request (Request): The incoming FastAPI request object.
        response (Response): The FastAPI response object. This can be
            modified by the `on_limit` handler if needed.

    Raises:
        HTTPException: By default, if the rate limit is exceeded,
            `BaseLimiter._default_on_limit` will raise an `HTTPException`
            with status code 429. Custom `on_limit` functions may raise
            other exceptions or handle the response differently.
    """
    redis = self._ensure_redis()
    await self._ensure_lua_sha(self.lua_script)
    key: str = await self._safe_call(self.key_func, request)
    now_ms = int(time.time() * 1000)
    curr_window_start = now_ms - (now_ms % self.window_ms)
    prev_window_start = curr_window_start - self.window_ms
    curr_key = f"{self.prefix}:{key}:{curr_window_start}"
    prev_key = f"{self.prefix}:{key}:{prev_window_start}"
    result = await redis.evalsha(
        self.lua_sha,
        2,
        curr_key,
        prev_key,
        str(curr_window_start),
        str(self.window_ms),
        str(self.limit),
    )
    allowed = result == 0
    retry_after = int(result / 1000) if not allowed else 0
    if not allowed:
        await self._safe_call(self.on_limit, request, response, retry_after)

fastapicap.TokenBucketRateLimiter

TokenBucketRateLimiter(capacity, tokens_per_second=0, tokens_per_minute=0, tokens_per_hour=0, tokens_per_day=0, key_func=None, on_limit=None, prefix='cap')

Bases: BaseLimiter

Implements the Token Bucket rate limiting algorithm.

The Token Bucket algorithm models a bucket to which tokens are added continuously at a fixed refill_rate. Each request consumes one token: if a token is available, the request is processed and the token is removed; if the bucket is empty, the request is denied (or queued). The capacity defines the maximum number of tokens the bucket can hold, allowing bursts of traffic up to that size. This makes the algorithm well suited to controlling the average request rate while still permitting bursts.

Parameters:

capacity (int): The maximum number of tokens the bucket can hold; this determines the maximum burst size allowed. Must be a positive integer. Required.
tokens_per_second (float): The rate at which tokens are added to the bucket, in tokens per second. If combined with other tokens_per_* arguments, they are summed to determine the total refill_rate. Defaults to 0.
tokens_per_minute (float): The token refill rate in tokens per minute. Defaults to 0.
tokens_per_hour (float): The token refill rate in tokens per hour. Defaults to 0.
tokens_per_day (float): The token refill rate in tokens per day. Defaults to 0.
key_func (Optional[Callable[[Request], str]]): An asynchronous or synchronous function that extracts a unique key from the request. It should accept a fastapi.Request object and return a str. Defaults to client IP and path.
on_limit (Optional[Callable[[Request, Response, int], None]]): An asynchronous or synchronous function called when the rate limit is exceeded. It should accept a fastapi.Request, a fastapi.Response, and an int (retry_after seconds), and should not return a value. Defaults to raising HTTP 429.
prefix (str): Redis key prefix for all limiter keys. Defaults to "cap".

Attributes:

capacity (int): The configured maximum bucket capacity.
refill_rate (float): The total calculated token refill rate in tokens per millisecond.
lua_script (str): The Lua script used for token bucket logic in Redis.

Raises:

ValueError: If the capacity is not positive, or if the total calculated refill_rate is not positive. This ensures a valid configuration for the token bucket.

Source code in fastapicap/strategy/token_bucket.py
def __init__(
    self,
    capacity: int,
    tokens_per_second: float = 0,
    tokens_per_minute: float = 0,
    tokens_per_hour: float = 0,
    tokens_per_day: float = 0,
    key_func: Optional[Callable[[Request], str]] = None,
    on_limit: Optional[Callable[[Request, Response, int], None]] = None,
    prefix: str = "cap",
):
    super().__init__(key_func=key_func, on_limit=on_limit, prefix=prefix)
    if capacity <= 0:
        raise ValueError("Capacity must be a positive integer.")

    self.capacity = capacity
    total_tokens = (
        tokens_per_second
        + tokens_per_minute / 60
        + tokens_per_hour / 3600
        + tokens_per_day / 86400
    )
    self.refill_rate = total_tokens / 1000
    self.lua_script = TOKEN_BUCKET
    self.prefix: str = f"{prefix}::{self.__class__.__name__}"

    if self.refill_rate <= 0:
        raise ValueError(
            "Refill rate must be positive."
            "Check your tokens_per_second/minute/hour/day arguments."
        )
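
A worked example of how the refill arguments combine; the values below are illustrative.

from fastapicap import TokenBucketRateLimiter

# Burst of up to 20 requests, refilling at
# 2 tokens/second + 60 tokens/minute = 3 tokens/second overall.
limiter = TokenBucketRateLimiter(
    capacity=20,
    tokens_per_second=2,
    tokens_per_minute=60,
)

# Internally: total_tokens = 2 + 60/60 = 3 tokens per second,
# so refill_rate = 3 / 1000 = 0.003 tokens per millisecond.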

__call__ async

__call__(request, response)

Applies the Token Bucket rate limiting logic to the incoming request.

This method is the core of the rate limiter. It interacts with Redis to simulate token consumption and bucket refill, determining if the request is allowed.

Parameters:

request (Request): The incoming FastAPI request object.
response (Response): The FastAPI response object. This can be modified by the on_limit handler if needed.

Raises:

HTTPException: By default, if the rate limit is exceeded, BaseLimiter._default_on_limit raises an HTTPException with status code 429. Custom on_limit functions may raise other exceptions or handle the response differently.

Source code in fastapicap/strategy/token_bucket.py
async def __call__(self, request: Request, response: Response):
    """
    Applies the Token Bucket rate limiting logic to the incoming request.

    This method is the core of the rate limiter. It interacts with Redis to simulate
    token consumption and bucket refill, determining if the request is allowed.

    Args:
        request (Request): The incoming FastAPI request object.
        response (Response): The FastAPI response object. This can be
            modified by the `on_limit` handler if needed.

    Raises:
        HTTPException: By default, if the rate limit is exceeded,
            `BaseLimiter._default_on_limit` will raise an `HTTPException`
            with status code 429. Custom `on_limit` functions may raise
            other exceptions or handle the response differently.
    """
    redis = self._ensure_redis()
    await self._ensure_lua_sha(self.lua_script)
    key: str = await self._safe_call(self.key_func, request)
    full_key = f"{self.prefix}:{key}"
    now = int(time.time() * 1000)
    result = await redis.evalsha(
        self.lua_sha,
        1,
        full_key,
        str(self.capacity),
        str(self.refill_rate),
        str(now),
    )
    allowed = result == 0
    retry_after = int(result) // 1000 if not allowed else 0
    if not allowed:
        await self._safe_call(self.on_limit, request, response, retry_after)

fastapicap.LeakyBucketRateLimiter

LeakyBucketRateLimiter(capacity, leaks_per_second=0, leaks_per_minute=0, leaks_per_hour=0, leaks_per_day=0, key_func=None, on_limit=None, prefix='cap')

Bases: BaseLimiter

Implements the Leaky Bucket rate limiting algorithm.

The Leaky Bucket algorithm models traffic flow like water in a bucket. Requests are "drops" added to the bucket. If the bucket overflows (exceeds capacity), new requests are rejected. "Water" (requests) leaks out of the bucket at a constant rate, making space for new requests. This algorithm is known for producing a smooth, constant output rate of requests, which helps in preventing bursts from overwhelming downstream services.

Parameters:

capacity (int): The maximum capacity of the bucket, i.e., the maximum number of requests that can be held in the bucket before new requests are rejected. Must be a positive integer. Required.
leaks_per_second (float): The rate at which requests "leak" (are processed) from the bucket, in requests per second. If combined with other leaks_per_* arguments, they are summed. Defaults to 0.
leaks_per_minute (float): The leak rate in requests per minute. Defaults to 0.
leaks_per_hour (float): The leak rate in requests per hour. Defaults to 0.
leaks_per_day (float): The leak rate in requests per day. Defaults to 0.
key_func (Optional[Callable[[Request], str]]): An asynchronous or synchronous function that extracts a unique key from the request. It should accept a fastapi.Request object and return a str. Defaults to client IP and path.
on_limit (Optional[Callable[[Request, Response, int], None]]): An asynchronous or synchronous function called when the rate limit is exceeded. It should accept a fastapi.Request, a fastapi.Response, and an int (retry_after seconds), and should not return a value. Defaults to raising HTTP 429.
prefix (str): Redis key prefix for all limiter keys. Defaults to "cap".

Attributes:

capacity (int): The configured maximum bucket capacity.
leak_rate (float): The total calculated leak rate in requests per millisecond.
lua_script (str): The Lua script used for leaky bucket logic in Redis.

Raises:

ValueError: If the capacity is not positive, or if the total calculated leak_rate is not positive. This ensures a valid configuration for the leaky bucket.

Source code in fastapicap/strategy/leaky_bucket.py
def __init__(
    self,
    capacity: int,
    leaks_per_second: float = 0,
    leaks_per_minute: float = 0,
    leaks_per_hour: float = 0,
    leaks_per_day: float = 0,
    key_func: Optional[Callable[[Request], str]] = None,
    on_limit: Optional[Callable[[Request, Response, int], None]] = None,
    prefix: str = "cap",
):
    super().__init__(key_func=key_func, on_limit=on_limit, prefix=prefix)
    self.capacity = capacity
    if capacity <= 0:
        raise ValueError("Capacity must be a positive integer.")
    total_leaks = (
        leaks_per_second
        + leaks_per_minute / 60
        + leaks_per_hour / 3600
        + leaks_per_day / 86400
    )
    self.leak_rate = total_leaks / 1000
    self.lua_script = LEAKY_BUCKET
    self.prefix: str = f"{prefix}::{self.__class__.__name__}"
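
A brief configuration sketch (illustrative values): the bucket absorbs short bursts up to capacity while requests drain at the combined leak rate.

from fastapicap import LeakyBucketRateLimiter

# Hold at most 10 queued "drops"; drain them at 1 request/second,
# so internally leak_rate = 1 / 1000 = 0.001 requests per millisecond.
limiter = LeakyBucketRateLimiter(capacity=10, leaks_per_second=1)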

__call__ async

__call__(request, response)

Applies the leaky bucket rate limiting logic to the incoming request.

This method is the core of the rate limiter. It interacts with Redis to simulate adding a "drop" to the bucket and checks if it overflows.

Parameters:

request (Request): The incoming FastAPI request object.
response (Response): The FastAPI response object. This can be modified by the on_limit handler if needed.

Raises:

HTTPException: By default, if the rate limit is exceeded, BaseLimiter._default_on_limit raises an HTTPException with status code 429. Custom on_limit functions may raise other exceptions or handle the response differently.

Source code in fastapicap/strategy/leaky_bucket.py
async def __call__(self, request: Request, response: Response):
    """
    Applies the leaky bucket rate limiting logic to the incoming request.

    This method is the core of the rate limiter. It interacts with Redis to simulate
    adding a "drop" to the bucket and checks if it overflows.

    Args:
        request (Request): The incoming FastAPI request object.
        response (Response): The FastAPI response object. This can be
            modified by the `on_limit` handler if needed.

    Raises:
        HTTPException: By default, if the rate limit is exceeded,
            `BaseLimiter._default_on_limit` will raise an `HTTPException`
            with status code 429. Custom `on_limit` functions may raise
            other exceptions or handle the response differently.
    """
    redis = self._ensure_redis()
    await self._ensure_lua_sha(self.lua_script)
    key: str = await self._safe_call(self.key_func, request)
    full_key = f"{self.prefix}:{key}"
    now = int(time.time() * 1000)
    result = await redis.evalsha(
        self.lua_sha,
        1,
        full_key,
        str(self.capacity),
        str(self.leak_rate),
        str(now),
    )
    allowed = result == 0
    retry_after = (int(result) + 999) // 1000 if not allowed else 0
    if not allowed:
        await self._safe_call(self.on_limit, request, response, retry_after)

fastapicap.GCRARateLimiter

GCRARateLimiter(burst, tokens_per_second=0, tokens_per_minute=0, tokens_per_hour=0, tokens_per_day=0, key_func=None, on_limit=None, prefix='cap')

Bases: BaseLimiter

Implements the Generic Cell Rate Algorithm (GCRA) for rate limiting.

GCRA is a popular algorithm that controls the rate of events by tracking the "Theoretical Arrival Time" (TAT) of the next allowed event. It's often used for API rate limiting as it provides a smooth, burstable rate.

This limiter allows for a burst of requests up to burst capacity, and then enforces a steady rate defined by tokens_per_second, tokens_per_minute, tokens_per_hour, or tokens_per_day.
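
A hedged sketch of one common GCRA formulation in terms of the theoretical arrival time (TAT); the library's Lua script implements this logic atomically in Redis and may differ in its details.

def gcra_allow(now_ms: float, tat_ms: float, period_ms: float, burst: int) -> tuple[bool, float]:
    # TAT is the earliest time the next conforming request was expected.
    tat_ms = max(tat_ms, now_ms)
    # Allow the request if it does not run further ahead than the burst allowance.
    if tat_ms - now_ms <= burst * period_ms:
        return True, tat_ms + period_ms  # allowed; advance TAT by one period
    return False, tat_ms  # rejected; TAT unchanged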

Parameters:

burst (int): The maximum number of additional requests that can be served instantly (i.e., the "burst" capacity beyond the steady rate). This defines how many requests can be handled without delay if the system has been idle. Required.
tokens_per_second (float): The steady rate of tokens allowed per second. If combined with other tokens_per_* arguments, they are summed. Defaults to 0.
tokens_per_minute (float): The steady rate of tokens allowed per minute. Defaults to 0.
tokens_per_hour (float): The steady rate of tokens allowed per hour. Defaults to 0.
tokens_per_day (float): The steady rate of tokens allowed per day. Defaults to 0.
key_func (Optional[Callable[[Request], str]]): An asynchronous function that extracts a unique key from the request. This key identifies the subject being rate-limited (e.g., client IP, user ID). If None, BaseLimiter._default_key_func (client IP + path) is used.
on_limit (Optional[Callable[[Request, Response, int], None]]): An asynchronous function called when the rate limit is exceeded. It receives the request, the response object, and the retry_after value in seconds. If None, BaseLimiter._default_on_limit (which raises an HTTPException 429) is used.
prefix (str): A string prefix for all Redis keys used by this limiter. Defaults to "cap".

Attributes:

burst (int): The configured burst capacity.
tokens_per_second (float): The total calculated steady rate in tokens per second.
period (float): The calculated time period (in milliseconds) between allowed tokens.
lua_script (str): The Lua script used for GCRA logic in Redis.

Raises:

ValueError: If the total calculated tokens_per_second is not positive. This ensures that a meaningful rate limit is defined.

Note

The GCRA_LUA script handles the core rate-limiting logic in Redis, ensuring atomic operations. The retry_after value returned by the Lua script (if a limit is hit) indicates the number of milliseconds until the next request would be allowed.

Source code in fastapicap/strategy/gcra.py
def __init__(
    self,
    burst: int,
    tokens_per_second: float = 0,
    tokens_per_minute: float = 0,
    tokens_per_hour: float = 0,
    tokens_per_day: float = 0,
    key_func: Optional[Callable[[Request], str]] = None,
    on_limit: Optional[Callable[[Request, Response, int], None]] = None,
    prefix: str = "cap",
):
    super().__init__(key_func=key_func, on_limit=on_limit, prefix=prefix)
    self.burst = burst
    total_tokens_per_second = (
        tokens_per_second
        + tokens_per_minute / 60
        + tokens_per_hour / 3600
        + tokens_per_day / 86400
    )
    if total_tokens_per_second <= 0:
        raise ValueError(
            "At least one of tokens_per_second, tokens_per_minute, "
            "tokens_per_hour, or tokens_per_day must be positive."
        )

    self.tokens_per_second = total_tokens_per_second
    self.period = 1000.0 / self.tokens_per_second
    self.lua_script = GCRA_LUA
    self.prefix: str = f"{prefix}::{self.__class__.__name__}"
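
A brief configuration sketch (illustrative values) showing how the steady rate maps to period:

from fastapicap import GCRARateLimiter

# Steady rate of 2 requests/second (period = 1000 / 2 = 500 ms between tokens),
# with up to 5 extra requests allowed instantly after an idle spell.
limiter = GCRARateLimiter(burst=5, tokens_per_second=2)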

__call__ async

__call__(request, response)

Executes the GCRA rate-limiting logic for the incoming request.

This method is designed to be used as a FastAPI dependency or decorator. It interacts with Redis to check if the request is allowed based on the configured GCRA parameters. If the limit is exceeded, it calls the on_limit handler.

Parameters:

request (Request): The incoming FastAPI request object.
response (Response): The FastAPI response object. This can be modified by the on_limit handler if needed.

Raises:

HTTPException: By default, if the rate limit is exceeded, BaseLimiter._default_on_limit raises an HTTPException with status code 429. Custom on_limit functions may raise other exceptions or handle the response differently.

Source code in fastapicap/strategy/gcra.py
async def __call__(self, request: Request, response: Response):
    """
    Executes the GCRA rate-limiting logic for the incoming request.

    This method is designed to be used as a FastAPI dependency or decorator.
    It interacts with Redis to check if the request is allowed based on
    the configured GCRA parameters. If the limit is exceeded, it calls
    the `on_limit` handler.

    Args:
        request (Request): The incoming FastAPI request object.
        response (Response): The FastAPI response object. This can be
            modified by the `on_limit` handler if needed.

    Raises:
        HTTPException: By default, if the rate limit is exceeded,
            `BaseLimiter._default_on_limit` will raise an `HTTPException`
            with status code 429. Custom `on_limit` functions may raise
            other exceptions or handle the response differently.
    """
    redis = self._ensure_redis()
    await self._ensure_lua_sha(self.lua_script)
    key: str = await self._safe_call(self.key_func, request)
    full_key = f"{self.prefix}:{key}"
    now = int(time.time() * 1000)
    result = await redis.evalsha(
        self.lua_sha,
        1,
        full_key,
        str(self.burst),
        str(self.tokens_per_second / 1000),  # tokens/ms
        str(self.period),
        str(now),
    )
    allowed = result[0] == 1
    retry_after = int(result[1]) if not allowed else 0
    if not allowed:
        await self._safe_call(self.on_limit, request, response, retry_after)

fastapicap.SlidingWindowLogRateLimiter

SlidingWindowLogRateLimiter(limit, window_seconds=0, window_minutes=0, window_hours=0, window_days=0, key_func=None, on_limit=None, prefix='cap')

Bases: BaseLimiter

Implements a Sliding Window (Log-based) rate limiting algorithm.

This is the most accurate form of the sliding window algorithm. It works by storing a timestamp for every request made by a client within a Redis sorted set. When a new request comes in, the algorithm first removes all timestamps that fall outside the current sliding window. Then, it counts the number of remaining timestamps within the window. If this count is below the limit, the request is allowed, and its timestamp is added to the set. This method ensures precise rate limiting as the window truly "slides" over time.

Parameters:

limit (int): The maximum number of requests allowed within the defined sliding window. Must be a positive integer. Required.
window_seconds (int): The number of seconds defining the size of the sliding window. Can be combined with minutes, hours, or days. Defaults to 0.
window_minutes (int): The number of minutes defining the window size. Defaults to 0.
window_hours (int): The number of hours defining the window size. Defaults to 0.
window_days (int): The number of days defining the window size. Defaults to 0.
key_func (Optional[Callable[[Request], str]]): An asynchronous or synchronous function that extracts a unique key from the request. It should accept a fastapi.Request object and return a str. Defaults to client IP and path.
on_limit (Optional[Callable[[Request, Response, int], None]]): An asynchronous or synchronous function called when the rate limit is exceeded. It should accept a fastapi.Request, a fastapi.Response, and an int (retry_after seconds), and should not return a value. Defaults to raising HTTP 429.
prefix (str): Redis key prefix for all limiter keys. Defaults to "cap".

Attributes:

limit (int): The maximum requests allowed within the sliding window.
window_seconds (int): The total calculated window size in seconds.
lua_script (str): The Lua script used for the log-based sliding window logic in Redis.

Raises:

ValueError: If the limit is not positive or if the calculated window_seconds is not positive (i.e., all time units are zero).

Note

This implementation uses Redis sorted sets (ZADD, ZREMRANGEBYSCORE, ZCARD) to store and manage request timestamps, ensuring atomic operations for accurate rate limiting.
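
For intuition, the sorted-set bookkeeping can be sketched in plain redis-py calls as below. This is illustrative only; the library performs these steps atomically inside its Lua script, and the key expiry shown here is an added assumption, not a documented behavior.

import time

import redis.asyncio as aioredis


async def sliding_log_allow(redis: aioredis.Redis, key: str, limit: int, window_ms: int) -> bool:
    now = int(time.time() * 1000)
    # Drop timestamps that have fallen out of the sliding window.
    await redis.zremrangebyscore(key, 0, now - window_ms)
    # Count what remains inside the window.
    count = await redis.zcard(key)
    if count < limit:
        # Record this request's timestamp and allow it.
        await redis.zadd(key, {str(now): now})
        await redis.pexpire(key, window_ms)
        return True
    return False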

Source code in fastapicap/strategy/sliding_window_log.py
def __init__(
    self,
    limit: int,
    window_seconds: int = 0,
    window_minutes: int = 0,
    window_hours: int = 0,
    window_days: int = 0,
    key_func: Optional[Callable[[Request], str]] = None,
    on_limit: Optional[Callable[[Request, Response, int], None]] = None,
    prefix: str = "cap",
):
    super().__init__(key_func=key_func, on_limit=on_limit, prefix=prefix)
    self.limit = limit
    if limit <= 0:
        raise ValueError("Limit must be a positive integer.")
    self.window_seconds = (
        window_seconds
        + window_minutes * 60
        + window_hours * 3600
        + window_days * 86400
    )
    if self.window_seconds <= 0:
        raise ValueError(
            "Window must be positive (set seconds, minutes, hours, or days)"
        )
    self.lua_script = SLIDING_LOG_LUA
    self.prefix: str = f"{prefix}::{self.__class__.__name__}"
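
A brief configuration sketch (illustrative values) showing how the window arguments combine:

from fastapicap import SlidingWindowLogRateLimiter

# At most 100 requests per key in any rolling 1-minute window
# (window_seconds + 60 * window_minutes = 60 seconds total).
limiter = SlidingWindowLogRateLimiter(limit=100, window_minutes=1)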

__call__ async

__call__(request, response)

Applies the log-based sliding window rate limiting logic to the incoming request.

This method is the core of the rate limiter. It interacts with Redis to manage request timestamps within a sorted set and checks if the total count within the current sliding window exceeds the configured limit.

Parameters:

request (Request): The incoming FastAPI request object.
response (Response): The FastAPI response object. This can be modified by the on_limit handler if needed.

Raises:

HTTPException: By default, if the rate limit is exceeded, BaseLimiter._default_on_limit raises an HTTPException with status code 429. Custom on_limit functions may raise other exceptions or handle the response differently.

Source code in fastapicap/strategy/sliding_window_log.py
async def __call__(self, request: Request, response: Response):
    """
    Applies the log-based sliding window rate limiting logic to the incoming request.

    This method is the core of the rate limiter. It interacts with Redis to
    manage request timestamps within a sorted set and checks if the total
    count within the current sliding window exceeds the configured limit.

    Args:
        request (Request): The incoming FastAPI request object.
        response (Response): The FastAPI response object. This can be
            modified by the `on_limit` handler if needed.

    Raises:
        HTTPException: By default, if the rate limit is exceeded,
            `BaseLimiter._default_on_limit` will raise an `HTTPException`
            with status code 429. Custom `on_limit` functions may raise
            other exceptions or handle the response differently.
    """
    redis = self._ensure_redis()
    await self._ensure_lua_sha(self.lua_script)
    key: str = await self._safe_call(self.key_func, request)
    full_key = f"{self.prefix}:{key}"
    now = int(time.time() * 1000)
    window_ms = self.window_seconds * 1000
    result = await redis.evalsha(
        self.lua_sha,
        1,
        full_key,
        str(now),
        str(window_ms),
        str(self.limit),
    )
    allowed = result == 1
    retry_after = 0 if allowed else int(result)
    if not allowed:
        await self._safe_call(self.on_limit, request, response, retry_after)