jjice · 2 days ago
I had this in my read later and finally had the chance to get to it.

The first few solutions are fine for probably most cases. The dismissal of the fixed window solution comes with no real explanation of why we wouldn't want it.

> The first downside is a client could submit all his requests for a given second at the end of that second, and then submit all his requests for the next second at the start of that next second, which is not what we want.

I'm okay with that. My current company gives customers API limits per minute. Do you want to fill that up in ten seconds or across the full minute? Doesn't matter, we let you do it. I assume plenty of services acknowledge that this is fine for their use case too.
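To make the tradeoff concrete, here's a minimal in-memory sketch of a fixed-window counter (illustrative only; the Redis version keys INCR + EXPIRE on a per-window key, and the class/parameter names here are made up). It shows exactly the "burst at the boundary" behavior the article objects to: a client can spend the whole budget at the end of one window and again at the start of the next.

```python
import time

class FixedWindowLimiter:
    """Illustrative fixed-window limiter: one counter per (key, window)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (key, window index) -> request count

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        bucket = (key, int(now // self.window))  # which window are we in?
        self.counts[bucket] = self.counts.get(bucket, 0) + 1
        return self.counts[bucket] <= self.limit

# Boundary burst: 5 requests at t=59s and 5 more at t=61s all pass a
# "5 per minute" limit, because t=61 falls into a fresh window.
lim = FixedWindowLimiter(limit=5, window_seconds=60)
end_of_window = [lim.allow("client", now=59) for _ in range(5)]
start_of_next = [lim.allow("client", now=61) for _ in range(5)]
```

Whether that's a bug or a feature depends on your product; as the parent says, plenty of per-minute API limits accept it.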

> We needed to use a transaction to do something that’s supposedly trivial for a given database product

Why is the use of a transaction bad? It's not explained.

I've written quite a few Redis based rate limiters for services that get a good amount of traffic and I've never had an issue. I'm not writing APIs that get AWS or OpenAI levels of traffic, but most people aren't. Redis has never hiccuped on me with rate limits in a way that I've ever noticed. Plus the code was written within an hour or two each time, along with tests.

And then, as many of the other comments here have pointed out, this is from a rate limits as a service company, so that explains the empty attacks on Redis for this use case. Not to knock this product - it may be incredible for some serious rate limiting use cases, but most people using Redis for their rate limits don't have that.

I also wouldn't be surprised if having a hosted ElastiCache cluster in AWS is cheaper than their service while providing the same benefit for the majority of companies' scale, while also avoiding vendor lock in (because I can get Redis anywhere I turn). Hell, at a previous place I worked we also threw other unrelated cache data in the same ElastiCache cluster without thinking about it, because Redis is so convenient and powerful when it's appropriate.

theamk · 15 days ago
This blog is kinda strange because it presents a working solution, and then immediately dismisses it with:

> We wanted a rate limiter and now we’re learning a niche programming language just to execute some code in a database that has nothing relevant to our task but a roundabout way of storing an int into RAM.

If this seems like unfair criticism, read the blog title: it's "Ratelimitly’s Official Engineering Blog". What you are supposed to do is (1) get scared by the complexity of the task, then (2) contact the author's startup and buy rate-limiting-as-a-service (the pricing is "contact us", BTW)

cchance · 14 days ago
LMFAO so this is another advert-as-blogpost share
whycombagator · 15 days ago
This is not a very thorough article. It is more a critique of rate limiting approaches and of the difficulty, for the author, of implementing them with Redis.

Here is an excerpt from the token bucket section:

> It seems there’s no way to do a token bucket with the main Redis primitives such as SET, EXPIRE, and INCR. It would require two variables and the client has to read one before choosing how to update the second, which would require pausing the whole database while a client carries this out. So what people do is execute code in Redis. This is done using modules or scripts. Using a module here is dubious: You’re now just loading a C program (shared library) into Redis; why not just load it into an actual computer? So let’s look at “scripts” instead. Scripts are pieces of Lua code executed in Redis.

[shows a brief example]

> Now it’s probably time to ask ourselves why we are here. We wanted a rate limiter and now we’re learning a niche programming language just to execute some code in a database that has nothing relevant to our task but a roundabout way of storing an int into RAM. This concludes my quest to discover how or why rate limiters are implemented in Redis.

This highlights the overall technical depth of the article and should inform you of how authoritative it should be considered.

Magmalgebra · 15 days ago
I wish the article talked more about why people use Redis as a rate limiter and why alternatives might be superior. Anecdotally I see the following play out repeatedly:

1) You probably already have Redis running

2) Adding a "good enough" rate limiter is easy

3) Faster solutions are usually more work to maintain given modern skillsets

If you are a b2b SaaS company odds are your company will exceed 10 billion in market cap looong before Redis rate limiting is a meaningful bottleneck.

theamk · 15 days ago
The superior alternative is clearly the author's startup. Who doesn't love a 3rd-party, cloud-hosted service in the critical dependency chain of every page of your website?
tbrownaw · 15 days ago
Me. I don't love that at all.
pokstad · 15 days ago
Exactly. Don’t boil the ocean searching for a perfect solution. Create solutions that match their requirements and nothing more.
biimugan · 15 days ago
Presumably the superior solution is the product that bears the same name as this blog post. Which I take it is in the process of being released since I can't find many technical details about it.
manbash · 15 days ago
I tend to agree, but in a more general and broad sense. That is, rate limiting is always a deployment-specific feature. If it's for limiting user requests, then it should be a component of the ingress/API gateway and be just as robust.
eurleif · 15 days ago
>Now it’s probably time to ask ourselves why we are here. We wanted a rate limiter and now we’re learning a niche programming language just to execute some code in a database that has nothing relevant to our task but a roundabout way of storing an int into RAM.

I've found it quite useful to have a central slice of RAM that different processes, including ones running on different machines, can execute scripts against over the network.

If you have Web servers running across different machines, you don't want rate limiter state to be local to those machines. A client shouldn't be able to make extra requests because the load balancer sends its traffic to a different machine. So that necessitates a central slice of RAM, accessed over the network.
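The undercounting problem is easy to simulate (a toy sketch, not real infrastructure; the counter dicts stand in for per-server vs. shared Redis state, and all names are invented). With per-server counters behind a round-robin load balancer, a client effectively gets N servers' worth of quota; a single shared counter does not have this problem.

```python
LIMIT = 10  # intended per-client limit

def serve(counters, server_id, client):
    """Count one request against this server's view of the client."""
    key = (server_id, client)
    counters[key] = counters.get(key, 0) + 1
    return counters[key] <= LIMIT

# Local state: 40 requests round-robined across 2 servers.
# Each server only sees 20 requests, so each allows 10 -> 20 total.
local = {}
allowed_local = sum(serve(local, i % 2, "client") for i in range(40))

# Shared state: every server increments the same central counter -> 10 total.
shared = {}
allowed_shared = sum(serve(shared, "shared", "client") for i in range(40))
```

Doubling the effective limit with just two servers is why the state has to live in one place, reachable over the network.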

Maybe a dedicated rate limiting service? It might perform better, but I've used redis token bucket rate limiting, implemented via a Lua script, at scale without much issue. And what about the foundational things redis provides, like persistence and failover? Or what if you don't just want rate limiting; you want (for example) mutexes[0] too? You can implement all of that and more, of course, and I'm not saying that's always a bad idea, but at some point you may risk running into Greenspun's Tenth Rule (but for redis instead of Common Lisp). There's something to be said for making the central-slice-of-networked-RAM part a platform, which you can build on with scripts to do whatever you need.

[0] https://redis.io/docs/latest/develop/clients/patterns/distri...

pcl · 15 days ago
At my previous employer, we used Redis for publicly-documented rate limits, but we applied them after our rate limiting infrastructure already evaluated defensive infrastructure-oriented rate limits.

We generally found that this worked out fine. Customer-facing limits were numbers measured in hours or days; infra limits were measured in seconds or minutes.

Certainly you wouldn’t want to rely on a central db for infra-oriented rate limiting, in any case.

jauntywundrkind · 15 days ago
From the redis docs:

  local current
  current = redis.call("incr",KEYS[1])
  if current == 1 then
      redis.call("expire",KEYS[1],1)
  end
https://redis.io/docs/latest/commands/incr/

There's so many valid good & serious reasons why we are so scared, so afraid as engineers. The feeling that chaos grows in around us, the fear of opening doors to complexity we might not be able to return from.

But we also let ourselves be scared of so many ghost tales. Use some Lua! Go ahead! Maybe there are places where being afraid & limiting ourselves makes sense, but for 99% of people: some 'yes we can' attitude is amazing. We are the empowered capable doers, us engineers. Let it rip!

(I do also want a society respectful & deference to the massive complexity of the universe about us. Uncertainty abounds! But it should not be a cudgel.)

ltbarcly3 · 15 days ago
The article is literally written by someone who admits, within the first paragraph, that they don't know much about redis.

We used redis for a rate limiter for years and it never caused any issues at all. You just have to understand how redis works a little and make sure your algorithm scales. You really can't get much cheaper than token bucket.
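For reference, the token bucket really is cheap: a minimal in-memory sketch of the algorithm (not the Redis Lua version discussed above; class and parameter names are illustrative) needs just two numbers of state per key, because refill is computed lazily from elapsed time.

```python
import time

class TokenBucket:
    """Minimal token bucket: state is just (tokens, last-seen timestamp)."""

    def __init__(self, capacity, refill_per_sec, now=None):
        self.capacity = capacity
        self.rate = refill_per_sec
        self.tokens = capacity  # start full
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Lazily refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In the Redis version, that read-modify-write over two values is the step that has to be atomic, which is exactly what the Lua script (or a transaction) buys you.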