API Rate Limiting — Part 2

Last time we discussed rate limiting and two strategies for implementing it, the Leaky bucket and Token bucket algorithms. In this part we will go through two more strategies: the Fixed window and Sliding window algorithms.

You can check out Part 1 here.

Fixed Window

In this algorithm we record, for each user, the count of requests within each time interval. For example, with a rate limit of 1 request per minute: user_1 has 1 request counted against the 10:00 minute and 1 request counted against the 10:01 minute. If a new request arrives from the same user at 10:01, we check that specific minute's counter, see it has already reached the threshold, and drop the request.

For example, at a rate of 1 request per minute:

// we will keep track of the request count for every time interval
// (minute); in the worst case we'll have one entry per minute.
=> memory is empty
// a new request arrives during the 10:01 minute,
// we look for this minute's entry, don't find it, so we create it
// with a value of 1 and pass the request ✔
=> "user_1_1490000000": 1
// another request arrives during the same 10:01 minute,
// we check this minute's count, find it's already 1 (the limit
// per minute), so the request gets dropped ❌
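
To make the idea concrete, here is a minimal in-memory sketch in Python. The class and variable names are hypothetical, and a real deployment would more likely keep the counters in a shared store such as Redis rather than a local dict:

import time

# minimal in-memory fixed window limiter (illustrative sketch only)
class FixedWindowLimiter:
    def __init__(self, limit_per_minute):
        self.limit = limit_per_minute
        self.counters = {}  # key: "user_minute", value: request count

    def allow(self, user_id):
        minute = int(time.time()) // 60           # current window
        key = f"{user_id}_{minute}"
        count = self.counters.get(key, 0)
        if count >= self.limit:
            return False                          # threshold reached, drop ❌
        self.counters[key] = count + 1            # record and pass ✔
        return True

limiter = FixedWindowLimiter(limit_per_minute=1)
print(limiter.allow("user_1"))  # True
print(limiter.allow("user_1"))  # False, same minute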

Drawbacks

In addition to using more memory than the token bucket, in the worst case this approach can allow double the limit within a time span shorter than the window.

Example of the latter issue: two requests arrive at 10:30:59 and 10:31:00. We count them against two separate minutes, yet in reality they happened within two consecutive seconds.

In my opinion it’s not a big deal :)

Sliding Window

Here we store a timestamp for each new request: each user has a hash map holding all of their timestamps along with the occurrence count of each.

On each new request we store the new timestamp for that user if it doesn't exist yet; if it does, we just increment its count.

To enforce the limit, on each new request we sum all request counts within the past time interval (for example, the last minute), then decide whether the threshold has been exceeded and either pass or drop the request.

// assume the threshold is 4 per minute
// this is an example of the structure for a user's map that has 2
// timestamps; counting that user's requests in the last minute
// gives 3 in this case
=> "user_1": {
  "1480000000": 1,
  "1480000030": 2
}
// a new request arrives in the same minute; the total count in the
// last minute is 3, below the threshold, so we pass the request ✔
=> "user_1": {
  "1480000000": 1,
  "1480000030": 2,
  "1480000040": 1
}
// another request arrives in the same minute; the total count in the
// last minute is now 4, we've reached the threshold, so drop the request ❌
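
Here is a minimal sketch of that idea, again with hypothetical names and an in-memory map of timestamp to count per user. Old entries are pruned on each call so the map doesn't grow forever:

import time
from collections import defaultdict

# minimal in-memory sliding window limiter (illustrative sketch only)
class SlidingWindowLimiter:
    def __init__(self, limit, window_seconds=60):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = defaultdict(dict)  # user -> {timestamp: count}

    def allow(self, user_id):
        now = int(time.time())
        entries = self.timestamps[user_id]
        # drop timestamps that fell out of the window
        for ts in list(entries):
            if ts <= now - self.window:
                del entries[ts]
        # sum the counts of all requests in the past window
        if sum(entries.values()) >= self.limit:
            return False                          # threshold reached, drop ❌
        entries[now] = entries.get(now, 0) + 1    # record and pass ✔
        return True

limiter = SlidingWindowLimiter(limit=4)
for _ in range(5):
    print(limiter.allow("user_1"))  # True four times, then False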

— — — — — — — — — —

It's not always about the receiver (server side); sometimes you want to make your audience (the developers) aware that their requests are rate limited, right?

It’s up to you whether to let users know you have rate limited their requests. If you are building a public API, you should let your users know it is rate limited so they can handle the error; some API maintainers even provide sample code showing how to handle that error, for example with exponential backoff retries.

In other cases, when spammers or attackers are abusing your application, you may not want to let them know you are rate limiting so they won't work around it; you can still return status 200 telling them everything is OK while silently not processing their requests 😉.
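
A rough sketch of that silent approach, using a hypothetical handle_request wrapper around whatever limiter you use; abusive callers still see a 200, but their work never runs:

def handle_request(user_id, process, limiter):
    # pretend success for rate-limited callers without doing the work
    if not limiter.allow(user_id):
        return {"status": 200, "body": "ok"}
    return {"status": 200, "body": process()}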

If you decide to return errors, here are some standards for backing off requests.

Backoff

Notify your developers that they are rate limited, at minimum with the 429 Too Many Requests status code.

Sometimes you may want to help your developers further by returning the timestamp at which they can retry the request and have it processed.

One API that does this job well is GitHub's; they return values like these to help developers back off and organize their retries:

"limit": 60,
"remaining": 54,
"reset": 1587335374
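
On the client side you might use such values (or the plain 429 status) to schedule retries. A rough sketch, assuming a hypothetical fetch_with_backoff helper built on the requests library; the X-RateLimit-Reset header name mirrors GitHub's convention, but your API may expose the reset timestamp differently:

import time
import requests

def fetch_with_backoff(url, max_retries=5):
    """Retry on 429 using the server's reset timestamp if available,
    otherwise fall back to exponential backoff."""
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code != 429:
            return response
        reset = response.headers.get("X-RateLimit-Reset")
        if reset:
            wait = max(int(reset) - int(time.time()), 0)
        else:
            wait = 2 ** attempt  # exponential backoff fallback
        time.sleep(wait)
    raise RuntimeError("rate limited: retries exhausted")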

I enjoyed writing about this topic. Good luck building your API 💪 💪


Please leave a comment if you have any thoughts about the topic.

Thanks for reading ❤️