Readit News
jjice commented on Don't use Redis as a rate limiter   medium.com/ratelimitly/wh... · Posted by u/5pl1n73r
jjice · 2 days ago
I had this in my read later and finally had the chance to get to it.

The first few solutions are fine for probably most cases. The dismissal of the fixed window solution comes with no explanation of why we wouldn't want it.

> The first downside is a client could submit all his requests for a given second at the end of that second, and then submit all his requests for the next second at the start of that next second, which is not what we want.

I'm okay with that. My current company gives customers API limits per minute. Do you want to fill that up in ten seconds or across the full minute? Doesn't matter, we let you do it. I assume plenty of services acknowledge that this is fine for their use case too.

> We needed to use a transaction to do something that’s supposedly trivial for a given database product

Why is the use of a transaction bad? It's not explained.

I've written quite a few Redis based rate limiters for services that get a good amount of traffic and I've never had an issue. I'm not writing APIs that get AWS or OpenAI levels of traffic, but most people aren't. Redis has never hiccuped on me with rate limits in a way that I've ever noticed. Plus the code was written within an hour or two each time, along with tests.
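The fixed-window approach being dismissed is simple enough to sketch. A minimal illustration of the algorithm, assuming a plain dict as a stand-in for Redis (in production the counter would live in Redis via `INCR` plus `EXPIRE` on a per-client, per-window key); the class and key names here are made up for the example:

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter sketch. A dict stands in for Redis so the
    algorithm is runnable locally; in Redis this would be INCR + EXPIRE on
    a key like "rl:{client}:{window_index}"."""

    def __init__(self, limit, window_seconds, clock=time.time):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.counts = {}  # (client, window_index) -> request count

    def allow(self, client):
        # Every request in the same window maps to the same counter key.
        window_index = int(self.clock() // self.window)
        key = (client, window_index)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.limit

# 3 requests per 60-second window, with a fake clock for demonstration.
now = [0.0]
limiter = FixedWindowLimiter(limit=3, window_seconds=60, clock=lambda: now[0])
assert all(limiter.allow("client-a") for _ in range(3))
assert not limiter.allow("client-a")  # 4th request in the same window denied
now[0] = 61.0                         # next window: counter resets
assert limiter.allow("client-a")
```

This also shows the boundary behavior the article complains about: a client can spend its whole budget at the end of one window and again at the start of the next, which, as noted above, is fine for many per-minute quota schemes.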

And then, as many of the other comments here have pointed out, this is from a rate limits as a service company, so that explains the empty attacks on Redis for this use case. Not to knock this product - it may be incredible for some serious rate limiting use cases, but most people using Redis for their rate limits don't have that.

I also wouldn't be surprised if having a hosted ElastiCache cluster in AWS is cheaper than their service while providing the same benefit for the majority of companies' scale, while also avoiding vendor lock in (because I can get Redis anywhere I turn). Hell, at a previous place I worked we also threw other unrelated cache data in the same ElastiCache cluster without thinking about it, because Redis is so convenient and powerful when it's appropriate.

jjice commented on Waymo granted permit to begin testing in New York City   cnbc.com/2025/08/22/waymo... · Posted by u/achristmascarl
TulliusCicero · 2 days ago
It's fascinating seeing all the comments elsewhere anytime Waymo starts testing in another city along the lines of, "ah, but how will they handle X, Y, and Z here?? Checkmate, robots!" despite having already launched service in several other cities.

Granted, NYC is the biggest city in the US, so maybe that sort of reaction is more reasonable there than when people in Dallas or Boston do it.

jjice · 2 days ago
NYC (at least the parts I've spent a bit of time in) is pretty grid like with fairly simple roads. The drivers are the hard part :)

I am excited to see them tackle Boston at some point because of how strange some of those roads are. The first time I ever visited, I came to an intersection that was all one-ways with something like seven entry/exit points. My GPS said turn left, but there were three paths I'd consider left. Thank god I was walking.

And I don't really have much doubt, because Waymo's rollout plan seems to have been solid, but I'm just interested to see how well they tackle different cities.

jjice commented on What is going on right now?   catskull.net/what-the-hel... · Posted by u/todsacerdoti
xnorswap · 3 days ago
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick; this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, you can with legacy stuff get in a situation where "everyone knows" that the foo setting is actually the setting for Frob, but with an LLM it'll happily try to configure Frob or worse, try to implement Foo from scratch.

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes ( no matter how apologetic it gets ).

jjice · 3 days ago
It's so upsetting to see people take the powerful tool that is an LLM and pretend it's a solution for everything. It's not. They're awesome at a lot of things, but they need a user with the context and knowledge to know when to apply them or direct them differently.

The amount of absolutely shit LLM code I've reviewed at work is so sad, especially because I know the LLM could've written much better code if the prompter had done a better job. The user needs to know whether a problem is viable for an LLM to solve, and will often need to make some manual changes anyway. When we pretend an LLM can do it all, it creates slop.

I just had a coworker a few weeks ago produce a simple function that wrapped a DB query (normal so far), but wrote 250 lines of tests for it. All the code was clearly LLM generated (the comments explaining the most mundane code were the biggest giveaway). The tests tested nothing. They mocked the ORM and then tested the return of the mock. We were testing that the mocking framework worked? I told him I didn't think the tests added much value since the function was so simple and that we could remove them. He said he thought they provided value, with no explanation, and merged the code.
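The antipattern described above (mock the ORM, then assert on the mock's own return value) looks roughly like this. A minimal sketch with made-up names; the function and ORM interface here are hypothetical, not the coworker's actual code:

```python
from unittest.mock import MagicMock

# Hypothetical function under "test": a thin wrapper around one ORM query.
def get_user(orm, user_id):
    return orm.query("users").get(user_id)

# The antipattern: mock the ORM, configure its return value...
orm = MagicMock()
orm.query.return_value.get.return_value = {"id": 1, "name": "alice"}

# ...then assert we got back exactly what we just configured. This "passes"
# regardless of what get_user does with the result -- it only verifies that
# MagicMock returns what it was told to return.
result = get_user(orm, 1)
assert result == {"id": 1, "name": "alice"}
```

The assertion can never catch a real regression because nothing real is executed; only the mocking framework is exercised.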

Now fast forward to the other day and I run into the rest of the code again and now it's sinking in how bad the other LLM code was. Not that it's wrong, but it's poorly designed and full of bloat.

I have no issue with the LLM - they can do some incredible things and they're a powerful tool in the tool belt, but they are to be used in conjunction with a human that knows what they're doing (at least in the context of programming).

Kind of a rant, but I absolutely see a future where some code bases are well maintained and properly built, while others have tacked on years of vibe-coded trash that now only an LLM can even understand. And the thing that will decide which direction a code base goes in will be the engineers involved.

jjice commented on What services or apps did you see abroad and wonder: why don't we have them?    · Posted by u/ekusiadadus
ukoki · 6 days ago
In London contactless payment via credit/debit/Apple/Android has automatic daily and weekly caps

https://tfl.gov.uk/fares/find-fares/capping

Depending on the frequency of travel, it can be cheaper to get season tickets though

jjice · 3 days ago
New York's MTA does this as well. You just need to make sure it's the same card being used.

https://omny.info/fares

jjice commented on What services or apps did you see abroad and wonder: why don't we have them?    · Posted by u/ekusiadadus
Nextgrid · 6 days ago
The Swiss public transport ticketing system. Their app uses location services to automatically determine your fare, so you don’t need to buy tickets in advance: https://www.sbb.ch/en/travel-information/apps/sbb-mobile/eas....

As a bonus there are no ticket barriers so no queues and no overheads of maintaining those machines.

jjice · 3 days ago
I'm a bit confused: how does this differ from something like the Tube in London, where I tap on and tap off and it charges me based on the entry and exit points and whatever the appropriate "zone" travel was?
jjice commented on What services or apps did you see abroad and wonder: why don't we have them?    · Posted by u/ekusiadadus
trillic · 4 days ago
As does Chicago. A credit card or any tap-to-pay card on your phone works for tapping in and out.
jjice · 3 days ago
Boston was the last major city I saw to actually add this in the US (every other major city metro I've seen has it). Boston got it last Spring or Summer if I recall. The MBTA is a nightmare, but at least I don't have to reload my Charlie Card anymore...
jjice commented on Ask HN: What to do with a pure mathematics degree?    · Posted by u/mareoclasico
mareoclasico · 4 days ago
Yeah, already touched the usual stuff: C++, Python, R...

But the question is what to do with that! haha

jjice · 4 days ago
Unfortunately, I think you have a lot of options. Having a lot of options makes it harder to choose what to spend time on and dive into. Might be worth some exploration into the different sides of software development and see if anything catches your eye.

Best of luck!

jjice commented on Margin debt surges to record high   advisorperspectives.com/d... · Posted by u/pera
jjice · 4 days ago
I'm sure trading on margin is somehow a net positive for the economy at large for reasons I can't understand, but god damn does it feel like a bad idea to see huge spikes of debt propping up what already feels like an absurdly out-of-balance market.

I don't know how to actually tell if the market is overvalued, but man, when I see Palantir with a PE of like 500, Tesla almost 200, and Apple around 35, I can't help but think there's too much hype.

But I have literally no idea. Macroeconomics is way out of my wheelhouse, and I'm usually wrong.

Here's to an index fund...

jjice commented on Pixel 10 Phones   blog.google/products/pixe... · Posted by u/gotmedium
KoolKat23 · 4 days ago
You can already ask Gemini those questions on your phone.

This is more popping up magically before you needed to ask.

Both are great (when they work).

jjice · 4 days ago
Oh really? I switched to an iPhone at the end of last year (for non-AI reasons), so I may be missing out. Is this an on-device model, or does it still dispatch to hosted Gemini? Either way, I'd imagine Gemini has a great integration with Calendar and Gmail.
jjice commented on Pixel 10 Phones   blog.google/products/pixe... · Posted by u/gotmedium
pornel · 4 days ago
The Nano model is 3.2B parameters at 4bit quantization. This is quite small compared to what you get from hosted chatbots, and even compared to open-weights models runnable on desktops.

It's cool to have something like this available locally anyway, but don't expect it to have reasoning capabilities. At this size it's going to be naive and prone to hallucinations. It's going to be more like a natural language regex and a word association game.

jjice · 4 days ago
The big win for those small local models to me isn't knowledge based (I'll leave that to the large hosted models), but more so a natural language interface that can then dispatch to tool calls and summarize results. I think this is where they have the opportunity to shine. You're totally right that these are going to be awful for knowledge.
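That dispatch-and-summarize pattern can be sketched in a few lines. A toy illustration, assuming the model emits a structured tool call; the tool names and the call format here are entirely made up, not any real Gemini Nano API:

```python
# Hypothetical local tools the small model could dispatch to.
def get_weather(city):
    return f"Sunny in {city}"

def set_timer(minutes):
    return f"Timer set for {minutes} minutes"

TOOLS = {"get_weather": get_weather, "set_timer": set_timer}

def dispatch(tool_call):
    """Execute a structured tool call of the form {"name": ..., "args": {...}}.
    In the pattern described above, a small on-device model would translate
    "what's the weather in Boston?" into this structure; the actual knowledge
    lives in the tool, not in the model's weights."""
    name, args = tool_call["name"], tool_call["args"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**args)

assert dispatch({"name": "get_weather", "args": {"city": "Boston"}}) == "Sunny in Boston"
```

The point is that the language model only needs to be good at mapping natural language to one of a handful of structured calls, which is a far easier job than answering from knowledge.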

u/jjice

Karma: 5450 · Cake day: November 1, 2019