Timeout
You can define an optional timeout in milliseconds, after which the request will be allowed to pass regardless of the current limit. This can be useful if you don’t want network issues to cause your application to reject requests.
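A minimal configuration sketch, assuming Redis credentials are read from the environment and a sliding-window limit of 10 requests per 10 seconds:

```ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// If Redis does not respond within the timeout, the request is allowed to pass.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  timeout: 1000, // milliseconds
});
```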
Block until ready
In case you don’t want to reject a request immediately but would rather wait until it can be processed, we also provide the blockUntilReady method. It is similar to the limit method: it takes an identifier and returns the same response. However, if the current limit has already been exceeded, it will automatically wait until the next window starts and try again. Setting the timeout parameter (in milliseconds) causes the returned Promise to resolve in a finite amount of time.
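A usage sketch, assuming the ratelimit instance from above; the identifier and the work done after admission are placeholders:

```ts
async function handleRequest(identifier: string) {
  // Wait at most 30 seconds for the limit to reset instead of rejecting immediately.
  const { success } = await ratelimit.blockUntilReady(identifier, 30_000);

  if (!success) {
    // The timeout elapsed before the request could be admitted.
    return "Unable to process request at this time";
  }

  // ... carry out the rate-limited work here ...
  return "Request processed";
}
```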
Ephemeral Cache
For extreme load or denial of service attacks, it might be too expensive to call Redis for every incoming request just to find out that it should be blocked because the identifier has already exceeded the limit. You can use an ephemeral in-memory cache by passing the ephemeralCache option:
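A sketch of the configuration, reusing the same assumed sliding-window limiter as above:

```ts
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  // Identifiers that are already over the limit are tracked in this in-memory Map,
  // so repeated requests from them are rejected without another round trip to Redis.
  ephemeralCache: new Map(),
});
```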
Using multiple limits
Sometimes you might want to apply different limits to different users. For example, you might want to allow 10 requests per 10 seconds for free users, but 60 requests per 10 seconds for paid users. Here’s how you could do that:
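A sketch using one limiter per plan; the prefixes and the plan-selection helper are assumptions, chosen to keep the Redis keys for each plan separate:

```ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const limiters = {
  free: new Ratelimit({
    redis: Redis.fromEnv(),
    prefix: "ratelimit:free", // assumed prefix to separate free-tier keys
    limiter: Ratelimit.slidingWindow(10, "10 s"),
  }),
  paid: new Ratelimit({
    redis: Redis.fromEnv(),
    prefix: "ratelimit:paid", // assumed prefix to separate paid-tier keys
    limiter: Ratelimit.slidingWindow(60, "10 s"),
  }),
};

// Pick the limiter based on the user's plan.
async function checkLimit(identifier: string, plan: "free" | "paid") {
  return limiters[plan].limit(identifier);
}
```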
MultiRegion replicated ratelimiting
Using a single Redis instance has the downside of providing low latencies only to the part of your userbase closest to the deployed database. That’s why we also built MultiRegionRatelimit, which replicates the state across multiple Redis databases and offers lower latencies to more of your users.
MultiRegionRatelimit does this by checking the current limit in the closest database and returning immediately. Only afterwards is the state asynchronously replicated to the other databases, leveraging CRDTs. Due to the nature of distributed systems, there is no way to guarantee that the configured ratelimit is never exceeded by a small margin. This is the tradeoff for reduced global latency.
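A configuration sketch with one Upstash Redis database per region; the URLs and tokens are placeholders, and the sliding-window limit is an assumption:

```ts
import { MultiRegionRatelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// One Redis database per region; credentials are placeholders.
const ratelimit = new MultiRegionRatelimit({
  redis: [
    new Redis({ url: "<REGION_1_REST_URL>", token: "<REGION_1_TOKEN>" }),
    new Redis({ url: "<REGION_2_REST_URL>", token: "<REGION_2_TOKEN>" }),
  ],
  limiter: MultiRegionRatelimit.slidingWindow(10, "10 s"),
});

// The limit call is the same as for the single-region Ratelimit.
const { success } = await ratelimit.limit("identifier");
```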