Play nicely: working with rate-limited APIs efficiently w/.NET, async

Everyone and their dog has an API nowadays – but many of those APIs are rate-limited in order to stop DDoS attacks and protect against overwhelming resource usage on the server.

Here I present a fairly simple way to write code which automatically respects rate limits on APIs. This came about as a part of a small tool we developed to import a fairly large set of data from a heavily rate-limited API.

This solution in particular:

  1. Is implemented in .NET, though it could be easily ported to other technologies
  2. Doesn’t wait for a “rate-limited” response from the server before attempting to spam it again: it respects the limit and schedules calls in a permitted fashion
  3. Uses an asynchronous programming model effectively to enable highly efficient, highly parallel code
  4. Allows you to write application/API client code carefree, safe in the knowledge that you won’t trip the limiter

Great, so let’s get started. The trick is to keep track of the next request slot, an instant in time representing the next allowable point where an API call can be made without tripping the limit.

The ground rules of the algorithm:

  1. When making a request, the next request slot is checked.  If it’s in the future, wait until that time.  If not, don’t.
  2. Update the next request slot:
    1. If it’s the first request, or the slot was in the past, set it to the current time, plus the required interval
    2. Otherwise, add the required interval to it
  3. Make the request!
  4. If you happen to get an HTTP 429 (Too Many Requests) – for whatever reason – go back to the start and wait for the next slot

That’s it! This approach only really becomes practical with the async programming model, since we’d otherwise be holding up a thread for each parallel request. That’s no good!

Note: the “required interval” is the time which must elapse between requests in order to adhere to the rate limit. E.g. for a 10-requests-per-second API, this interval would be 100ms. You might want to build a slight safety factor into this too.
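For instance (a minimal sketch; the 10-requests-per-second limit and 5% safety factor here are just example numbers):

```csharp
using System;

// Required interval for an assumed limit of 10 requests per second,
// padded with a 5% safety factor so we stay comfortably under the limit.
double requestsPerSecond = 10;
TimeSpan interval = TimeSpan.FromSeconds(1.05 / requestsPerSecond);

Console.WriteLine(interval.TotalMilliseconds); // ≈ 105
```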

In order to make the code work well for concurrent/parallel calls, some locking is needed around the first two steps (checking and updating the slot). The code below should serve as a good explanation or starting point:
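What follows is a minimal sketch rather than a definitive implementation: a SemaphoreSlim serves as the lock, Task.Delay provides the non-blocking wait, and the class and member names are illustrative.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Illustrative rate limiter: every caller claims the next request slot,
// waits (asynchronously) until that instant, then makes its call.
public class RateLimiter
{
    private readonly TimeSpan _interval;                          // required interval between requests
    private readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);
    private DateTimeOffset _nextSlot = DateTimeOffset.MinValue;   // next allowable request time

    public RateLimiter(TimeSpan interval) => _interval = interval;

    public async Task<HttpResponseMessage> SendAsync(Func<Task<HttpResponseMessage>> apiCall)
    {
        while (true)
        {
            TimeSpan delay;

            // Steps 1 & 2: check and update the next request slot under a lock,
            // so that parallel callers each claim a distinct slot.
            await _lock.WaitAsync();
            try
            {
                var now = DateTimeOffset.UtcNow;
                delay = _nextSlot > now ? _nextSlot - now : TimeSpan.Zero;
                _nextSlot = (_nextSlot > now ? _nextSlot : now) + _interval;
            }
            finally
            {
                _lock.Release();
            }

            // Wait for our slot without tying up a thread.
            if (delay > TimeSpan.Zero)
                await Task.Delay(delay);

            // Step 3: make the request.
            var response = await apiCall();

            // Step 4: HTTP 429 (Too Many Requests) anyway? Go back and take a new slot.
            if (response.StatusCode != (HttpStatusCode)429)
                return response;

            response.Dispose();
        }
    }
}
```

Used from many tasks at once, the application code stays happily oblivious to the limit. For example (the endpoint is a placeholder):

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// 10 requests per second, with a small safety margin built in.
var limiter = new RateLimiter(TimeSpan.FromMilliseconds(105));
var http = new HttpClient();

var tasks = Enumerable.Range(1, 50).Select(id =>
    limiter.SendAsync(() => http.GetAsync($"https://api.example.com/items/{id}")));

var responses = await Task.WhenAll(tasks);
Console.WriteLine($"Fetched {responses.Length} responses without tripping the limiter.");
```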

So – now you can rest easy knowing that you won’t take down your API provider’s infrastructure while trying to harvest their juicy, juicy data.

Featured image by Greyson Joralemon on Unsplash
