jbryu commented on Ask HN: Why is Cloudflare sending my US traffic to London?    · Posted by u/jbryu
eastdakota · 4 months ago
Free customers are served from the nearest location where we have capacity. If we’re capacity constrained then free customers will be the first to be rerouted to another facility with capacity. That typically only happens for a very narrow window during any day. It has nothing to do with load to your particular site. It has to do with a region’s capacity and a group of customers (e.g., FREE PRO BIZ ENTERPRISE).
jbryu · 4 months ago
Whoa, did not expect the CEO of Cloudflare to comment here! Thanks for the response. The extended periods of high latency were concerning, but I did some more digging and saw that your team is aware of this and working on it: https://www.answeroverflow.com/m/1409539854747963523 Hoping things work out!
jbryu commented on Ask HN: Why is Cloudflare sending my US traffic to London?    · Posted by u/jbryu
dc396 · 4 months ago
User experience is always at the whim of ISP agreements unless you are paying for point-to-point links.

Sounds like you're experiencing the vagaries of somebody (maybe Cloudflare, maybe some other ISP Cloudflare is peering with) doing traffic engineering, probably to reduce congestion on particular paths. The recommendation to go with the Pro plan is likely just the first step; the next step is to open a ticket and get them to fix it -- that's what you're paying them for.

Dropping Cloudflare is, of course, an option as most of the security stuff they do can be handled by competent security folks, but you (may?) need to find someone similar if your site is at risk of DDoS.

jbryu · 4 months ago
Thanks for the response. After doing some more digging it looks like this is a known issue at Cloudflare and they're actively working on it: https://www.answeroverflow.com/m/1409539854747963523
jbryu commented on Things that helped me get out of the AI 10x engineer imposter syndrome   colton.dev/blog/curing-yo... · Posted by u/coltonv
lordnacho · 4 months ago
I'm getting a lot of side-quest productivity out of AI. There's always a bunch of things I could do, but they are tedious. Yet they are still things I wish I could get done. Those kinds of things AI is fantastic at. Building a mock, making tests, abstracting a few things into libraries, documentation.

So it's not like I'm delivering features in one day that would have taken two weeks. But I am delivering features in two weeks that have a bunch of extra niceties attached to them. Reality being what it is, we often release things before they are perfect. Now things are a bit closer to perfect when they are released.

I hope some of that extra work that's done reduces future bug-finding sessions.

jbryu · 4 months ago
Side-quest productivity is a great way to put it... It does feel like AI effectively enables the opposite of "death by a thousand cuts" (life by a thousand bandaids?)
jbryu commented on Modern Node.js Patterns   kashw1n.com/blog/nodejs-2... · Posted by u/eustoria
mcv · 4 months ago
Do you need an LLM for this? I've made my own in-house fork of a Java library without any LLM help. I needed apache.poi's excel handler to stream, which poi only supports in one direction. Someone had written a poi-compatible library that streamed in the other direction, but it had dependencies incompatible with mine. So I made my own fork with dependencies that worked for me. That got me out of mvn dependency hell.

Of course I'd rather not maintain my own fork of something that always should have been part of poi, but this was better than maintaining an impossible mix of dependencies.

jbryu · 4 months ago
For forking and changing a few things here and there, I could see how there might be less of a need for LLMs, especially if you know what you're doing. But in my case I didn't actually fork `ts-rest`; I built a much smaller custom abstraction from the ground up, and I don't consider myself a top-tier dev. In this case LLMs provided a lot more value, not necessarily because the problem was overly difficult but more so because of the time saved. Had LLMs not existed, I probably would never have considered doing this, as the opportunity cost would have felt too high (i.e. DX work vs. critical user-facing work). I estimate the task would have taken me ~2 weeks or more without LLMs, whereas with LLMs it only took a few days.

I do feel we're heading in a direction where building in-house will become more common than defaulting to 3rd party dependencies—strictly because the opportunity costs have decreased so much. I also wonder how code sharing and open source libraries will change in the future. I can see a world where instead of uploading packages for others to plug into their projects, maintainers will instead upload detailed guides on how to build and customize the library yourself. This approach feels very LLM friendly to me. I think a great example of this is with `lucia-auth`[0] where the maintainer deprecated their library in favour of creating a guide. Their decision didn't have anything to do with LLMs, but I would personally much rather use a guide like this alongside AI (and I have!) rather than relying on a 3rd party dependency whose future is uncertain.

[0] https://lucia-auth.com/

jbryu commented on Modern Node.js Patterns   kashw1n.com/blog/nodejs-2... · Posted by u/eustoria
jbryu · 5 months ago
Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715

I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident enough to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` set up and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped-down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good, and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated TypeScript types. I would also watch out for tree-shaking and accidental client-side zod imports if you decide to go down this route.
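To illustrate the tree-shaking concern, here's a dependency-free sketch of the kind of shared contract file I mean (names like `CreateUserBody` are made up, and a real version would use zod schemas, which is exactly the runtime code you don't want leaking into the client bundle):

```typescript
// contract.ts — shared between server and client (hand-rolled validator,
// no zod; purely to illustrate the shape, names are hypothetical)
export type CreateUserBody = { name: string; age: number };

// The runtime validator lives server-side. The client should pull in
// only the type, via `import type { CreateUserBody } from "./contract"`,
// so bundlers can tree-shake the runtime code out of the browser bundle.
export function isCreateUserBody(v: unknown): v is CreateUserBody {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return typeof o.name === "string" && typeof o.age === "number";
}

console.log(isCreateUserBody({ name: "Ada", age: 36 })); // → true
console.log(isCreateUserBody({ name: "Ada" })); // → false
```

With a schema library the pattern is the same: export `z.infer<...>` types for the client and keep the schema imports confined to server code.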

I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.

jbryu · 4 months ago
nvm I'm dumb lol, `ts-rest` does support express v5: https://github.com/ts-rest/ts-rest/pull/786. Don't listen to my misinformation above!!

I would say this oversight was a blessing in disguise though, I really do appreciate minimizing dependencies. If I could go back in time knowing what I know now, I still would've gone down the same path.

jbryu commented on Modern Node.js Patterns   kashw1n.com/blog/nodejs-2... · Posted by u/eustoria
exhaze · 5 months ago
Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of the libraries that do compile-time + runtime validation via zod/JSON Schema. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type-safety correctness to compile time.

Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.


jbryu commented on Ask HN: Why does my Node.js multiplayer game lag at 500 players with low CPU?    · Posted by u/jbryu
bravesoul2 · 6 months ago
Try cluster mode? I.e. use all cores.

Anyway please follow up or blog when you solve it. Sounds interesting.
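For context, "cluster mode" here means something like Node's built-in `node:cluster` module: the primary forks workers and shares listening sockets between them. A minimal sketch (capped at 2 workers just to keep the demo small; a real setup would fork one per core and start the actual server in each worker):

```typescript
import cluster from "node:cluster";
import { availableParallelism } from "node:os";

if (cluster.isPrimary) {
  // Fork workers; the primary distributes incoming connections to them.
  const workers = Math.min(availableParallelism(), 2);
  for (let i = 0; i < workers; i++) cluster.fork();
} else {
  // Each worker would start the real server here (http/socket.io listener).
  console.log(`worker ${process.pid} online`);
}
```
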

jbryu · 6 months ago
Unintuitively, fewer cores ended up being the fix... I did a small writeup here: https://news.ycombinator.com/item?id=44436679
jbryu commented on Ask HN: Why does my Node.js multiplayer game lag at 500 players with low CPU?    · Posted by u/jbryu
jjice · 6 months ago
Node gives access to event loop utilization stats that may be of value.

    import { performance } from 'node:perf_hooks'
    // returns { idle, active, utilization }
    performance.eventLoopUtilization()
See the docs for how it works and how to derive some value from it.

We had a similar situation where our application was heavily IO bound (very little CPU), which caused some initial confusion around the slowdown. We ended up adding better metrics around IO and the event loop, which led us to batch-dequeue our jobs in a more reasonable way and made the entire application much more effective.
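For anyone reading along: the ELU counters are cumulative since process start, so the useful pattern is to diff two snapshots. A rough sketch (the busy loop just stands in for real work):

```typescript
import { performance } from "node:perf_hooks";

// Take a baseline snapshot of event loop utilization.
const before = performance.eventLoopUtilization();

// Stand-in for real work: block the event loop for ~50ms.
const until = Date.now() + 50;
while (Date.now() < until) { /* spin */ }

// Passing the baseline yields the utilization over just that interval.
const delta = performance.eventLoopUtilization(before);
// delta.utilization is a 0..1 ratio of active vs. idle loop time;
// sampling this periodically is a decent "is my loop saturated?" metric.
console.log(delta.utilization.toFixed(2));
```
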

If you crack the nut on this issue, I'd love to see an update comment detailing what the issue and solution was!

jbryu · 6 months ago
Nut has been cracked! https://news.ycombinator.com/item?id=44436679

And yeah, I've been using prom-client's `collectDefaultMetrics()` so far to see event loop metrics, but it looks like node:perf_hooks might provide more detailed output... thanks for sharing

jbryu commented on Ask HN: Why does my Node.js multiplayer game lag at 500 players with low CPU?    · Posted by u/jbryu
jbryu · 6 months ago
Big thanks to everyone who commented so far, I wasn't able to reply to everyone (busy trying to fix the issue!) but grateful for everyone's insights.

I ended up figuring out a fix but it's a little embarrassing... Optimizing certain parts of socket.io helped a little (e.g. installing bufferutil: https://www.npmjs.com/package/bufferutil), but the biggest performance gain I found was actually going from 2 Node.js containers on a single server to just 1! To be exact, I was able to go from ~500 concurrent players on a single server to ~3000+. I feel silly, because had I been load-testing with 1 container from the start, I would've clearly seen the performance loss when scaling up to 2 containers. Instead I went on a wild goose chase trying to fix things that had nothing to do with the real issue[0].

In the end it seems like the bottleneck was indeed happening at the NIC/OS layer rather than the application layer. Apparently the NIC/OS prefers to deal with a single process screaming `n` packets at it rather than `x` processes screaming `n/x` packets. In fact it seems like the bigger `x` is, the worse performance degrades. Perhaps something to do with context switching, but I'm not 100% sure. Unfortunately given my lacking infra/networking knowledge this wasn't intuitive to me at all - it didn't occur to me that scaling down could actually improve performance!

Overall a frustrating but educational experience. Again, thanks to everyone who helped along the way!

TLDR: premature optimization is the root of all evil

[0] Admittedly AI let me down pretty bad here. So far I've found AI to be an incredible learning and scaffolding tool, but most of my LLM experiences have been in domains I feel comfortable in. This time around though, it was pretty sobering to realize that I had been effectively punked by AI multiple times over. The hallucination trap is very real when working in domains outside your comfort zone, and I think I would've been able to debug more effectively had I relied more on hard metrics.
