I very much dislike such features in a runtime or app.
The "proper" place to solve this, is in the OS. Where it has been solved, including all the inevitable corner cases, already.
Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem dies it solve that hasn't been solved by other people?
For years, I heard it's better to use cron, because the problem was already solved the right way(tm). My experience with cron has been about a dozen difficult fixes in production: cron not running / not running with the right permissions / errors lost without being logged / ... Changing or upgrading OSes became a problem. I've since switched to a small node script with a basic scheduler in it, and I've had ZERO issues in 7 years. My devs happily add entries to the scheduler without bothering me. We even added consistency checks, asserts, scheduled one-time execution tasks, ... and now multi-server scheduling.
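For flavor, the kind of tiny in-process scheduler described above can be just a few lines; the `every` helper and its API are invented for illustration:

```javascript
// A tiny in-process scheduler: the `every` helper is invented, not a real library.
const jobs = [];

function every(ms, name, fn) {
  jobs.push({ name, ms }); // registry, handy for consistency checks / asserts
  const timer = setInterval(async () => {
    try {
      await fn();
    } catch (err) {
      // Unlike a misconfigured cron entry, failures land in *our* logs.
      console.error(`[scheduler] ${name} failed:`, err);
    }
  }, ms);
  timer.unref(); // don't keep the process alive just for the schedule
  return timer;
}
```

Devs add entries by calling `every(60_000, 'sync-users', syncUsers)` from application code, so scheduling lives in the repo and ships with deploys.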
Deployments that need to configure OSes in a particular way are difficult (the existence of docker, kubernetes and snap are symptoms of this difficulty). It requires a high level of privilege to do so. Upgrades and rollbacks are challenging, if ever done. And OSes often don't provide a solution once you go beyond a single machine.
If "npm start" can restrain the permissions to what it should be for the given version of the code, I will use it and I'll be happy.
This is a nice idea, but what do you do when the OS tooling is not that good? macOS is a good example, they have OS level sandboxing [0], but the docs are practically nonexistent and the only way to figure it out is to read a bunch of blog posts by people who struggled with it before you. Baking it into Node means that at least theoretically you get the same thing out of the box on every OS.
[0] https://www.karltarvas.com/macos-app-sandboxing-via-sandbox-...
Except the OS hasn't actually solved it. Any program you can run can access arbitrary files of yours, and it's quite difficult to actually control that access even if you want to limit the blast radius of your own software. Seriously - what tooling would you use? Write eBPF to act as a mini ad-hoc hypervisor, or enforce difficult-to-write policies via SELinux? That only works if you're the admin of the machine, which isn't necessarily the same person writing the software they want to code defensively.
Also, modern software security is really taking a look at strengthening software against supply-chain vulnerabilities. That looks less like a traditional OS and more like a capabilities model, where you start with a set of limited permissions and, even within the same address space, it's difficult to obtain a new permission unless you're explicitly given a handle to it (arguably that's how all permissions should work, top to bottom).
How would you do this in a native fashion? I mean I believe you (chroot jail I think it was?), but not everyone runs on *nix systems, and perhaps more importantly, not all Node developers know or want to know much about the underlying operating system. Which is to their detriment, of course, but a lot of people are "stuck" in their ecosystem. This is arguably even worse in the Java ecosystem, but it's considered a selling point (write once run anywhere on the JVM, etc).
> What problem does it solve that hasn't been solved by other people?
Nothing, except perhaps for "portability" arguments.
Java has had security managers and access restrictions built in, but it never worked very well (and is quite cumbersome to use in practice). And there have been lots of bypasses over the years, patchwork fixes, etc.
Tbh, the OS is the only real security you can trust, as it's as low a level as any application would typically go (unless you end up in driver/kernel space, like those anti-virus/anti-cheat/crowdstrike apps).
But platform vendors always want to NIH and make their platform slightly easier while still presenting a similar level of security.
This is my thought on using dotenv libraries. The app shouldn’t have to load environment variables, only read them. Using a dotenv function/plugin like in omz is far more preferable.
How would you solve this at the OS level across Linux, macOS and Windows?
I've been trying to figure out a good way to do this for my Python projects for a couple of years now. I don't yet trust any of the solutions I've come up with - they are inconsistent with each other and feel very prone to me making mistakes, due to their inherent complexity and lack of documentation that I trust.
> Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem does it solve that hasn't been solved by other people?
Whilst this is (effectively) an Argument From Authority, what makes you assume the Node team haven't considered this? They're famously conservative about implementing anything that adds indirection or layers. And they're very *nix focused.
I am pretty sure they've considered "I could just run this script under a different user"
(I would assume it's there because the Permissions API covers many resources and side effects, some of which would be difficult to reproduce across OSes, but I don't have the original proposal to look at and verify)
OS level checks will inevitably work differently on different OSes and different versions. Having a check like this in the app binary itself means you can have a standard implementation regardless of the OS running the app.
I often hear similar arguments for or against database-level security rules. Row-level security, for example, is a really powerful feature and in my opinion is worth using when you can. Using RLS doesn't mean you skip checking authorization rules at the API level, though: you check authorization in your business logic _and_ in the database.
Putting network restrictions in the application layer also causes awkward issues for the org structures of many enterprises.
For example, the problem of "one micro service won't connect to another" was traditionally an ops / environments / SRE problem. But now the app development team has to get involved, just in case someone's used one of these new restrictions. Or those other teams need to learn about node.
This is non-consensual devops being forced upon us, where everyone has to learn everything.
How many apps do you think have properly set users and access rights limited to only what they need? In production? And even if that percentage were high, how about developers' machines, where people run node scripts which might import who knows what? It is possible to have it running safely, but I doubt a high percentage of people do. A feature like this can increase that percentage.
Genuine question, as I've not invested much into understanding this. What features of the OS would enable these kinds of network restrictions? Basic googling/asking AI points me in the direction of things that seem a lot more difficult in general, unless using something like AppArmor, at which point it seems like you're not quite in OS land anymore.
Path restrictions look simple, but they're very difficult to implement correctly.
PHP used to have (actually, still has) an "open_basedir" setting to restrict where a script could read or write, but people found out a number of ways to bypass that using symlinks and other shenanigans. It took a while for the devs to fix the known loopholes. Looks like node has been going through a similar process in the last couple of years.
Similarly, I won't be surprised if someone can use DNS tricks to bypass --allow-net restrictions in some way. Probably not worth a vulnerability in its own right, but it could be used as one of the steps in a targeted attack. So don't trust it too much, and always practice defense in depth!
Last time a major runtime tried implementing such restrictions on VM level, it was .NET - and it took that idea from Java, which did it only 5 years earlier.
In both Java and .NET VMs today, this entire facility is deprecated because they couldn't make it secure enough.
I wouldn't trust it to be done right. It's like a bank trusting that all their customers will do the right thing. If you want MAC (as opposed to DAC), do it in the kernel like it's supposed to be; use AppArmor or SELinux. And both of those methods will allow you to control way more than just which files you can read / write.
Yeah, but you see, this requires being deployed alongside the application somehow, with the help of the ops team. Whereas changing the command line is under the control of the application developer.
I don't understand this sort of complaint. Would you prefer that they never worked on this support at all? What exactly is your point? Airing trust issues?
The killer upgrade here isn’t ESM. It’s Node baking fetch + AbortController into core. Dropping axios/node-fetch trimmed my Lambda bundle and shaved about 100 ms off cold-start latency. If you’re still npm i axios out of habit, 2025 Node is your cue to drop the training wheels.
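A minimal sketch of that built-in pairing (Node 18+); the helper name and the timeout default are my own:

```javascript
// Native fetch with a timeout via AbortController -- no axios/node-fetch needed.
async function getJSON(url, { timeoutMs = 5000 } = {}) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return await res.json();
  } finally {
    clearTimeout(timer); // don't leak the timer on the happy path
  }
}
```

On Node 17.3+ you can also shorten this with `AbortSignal.timeout(ms)` instead of a manual controller.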
Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of all the compile + runtime zod/json schema-based validation sets of libraries out there. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type safety correctness to compile time.
Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.
Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715
I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.
I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.
I've been impressed with Hono's zod Validator [1] and the type-safe "RPC" clients [2] you can get from it. Most of my usage of Hono has been in Deno projects, but it seems like it has good support on Node and Bun, too.
[1] https://hono.dev/docs/guides/validation#zod-validator-middle...
[2] https://hono.dev/docs/guides/rpc#client
Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?
Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.
I migrated from ts-rest to Effect/HttpApi. It's an incredible ecosystem, and Effect/Schema has overtaken my domain layer. Definitely a learning curve though.
While true, in practice you'd only write this code once as a utility function; compare two extra bits of code in your own utility function vs loading 36 kB worth of JS.
Yeah, that's the classic bundle size vs DX trade-off. Fetch definitely requires more boilerplate. The manual response.ok check and double await is annoying. For Lambda where I'm optimizing for cold starts, I'll deal with it, but for regular app dev where bundle size matters less, axios's cleaner API probably wins for me.
Except you might want different error handling for different error codes. For example, our validation errors return a JSON object as well but with 422.
So treating "get a response" and "get data from a response" separately works out well for us.
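A hedged sketch of that split in practice, using native fetch; the 422 contract mirrors the one described above, and the function name is invented:

```javascript
// "Get a response" and "get data from a response" handled as separate steps,
// so different status codes can carry different payloads.
async function submit(url, body) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (res.status === 422) {
    // Validation errors also come back as JSON, just with a 422 status.
    const details = await res.json();
    throw Object.assign(new Error('validation failed'), { details });
  }
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```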
There has to be something wrong with a tech stack (Node + Lambda) that adds 100ms latency for some requests, just to gain the capability [1] to send out HTTP requests within an environment that almost entirely communicates via HTTP requests.
[1] convenient capability - otherwise you'd use XMLHttpRequest
1. This is not 100ms latency for requests. It's 100ms latency for the init of a process that loads this code. And this was specifically in the context of a Lambda function that may only have 128MB RAM and like 0.25vCPU. A hello world app written in Java that has zero imports and just prints to stdout would have higher init latency than this.
2. You don't need to use axios. The main value was that it provides a unified API that could be used across runtimes and has many convenient abstractions. There were plenty of other lightweight HTTP libs that were more convenient than the stdlib 'http' module.
Interceptors (and extensions in general) are the killer feature for axios still. Fetch is great for scripts, but I wouldn't build an application on it entirely; you'll be rewriting a lot or piecing together other libs.
Right?! I think a lot of devs got stuck in the axios habit from before Node 18 when fetch wasn't built-in. Plus axios has that batteries included feel with interceptors, auto-JSON parsing, etc. But for most use cases, native fetch + a few lines of wrapper code beats dragging in a whole dependency.
This is all very good news. I just got an alert about a vulnerability in a dependency of axios (it's an older project). Getting rid of these dependencies is a much more attractive solution than merely upgrading them.
As a library author, it's the opposite: while fetch() is amazing, ESM has been a painful but definitely worthwhile upgrade. It has all the things the author describes.
Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc so I can see how ESM would be the bigger story from your perspective.
Those... are not mutually exclusive as killer upgrade. No longer having to use a nonsense CJS syntax is absolutely also a huge deal.
Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.
With node:fetch you're going to have to write a wrapper for error handling/logging/retries etc. in any app/service of size. After a while, we ended up with something axios/got-like anyway that we had to fix a bunch of bugs in.
It has always astonished me that platforms did not have first class, native "http client" support. Pretty much every project in the past 20 years has needed such a thing.
Also, "fetch" is lousy naming considering most API calls are POST.
That's a category error. "Fetch" just refers to making a request; POST is the method, or HTTP verb, used when making the request. If you're really keen, you could roll your own.
Node was created with first-class native http server and client support. Wrapper libraries can smooth out some rough edges with the underlying api as well as make server-side js (Node) look/work similar to client-side js (Browser).
Undici is solid. Being the engine behind Node's fetch is huge. The performance gains are real and having it baked into core means no more dependency debates. Plus, it's got some great advanced features (connection pooling, streams) if you need to drop down from the fetch API. Best of both worlds.
It kills me that I keep seeing axios being used instead of fetch, it is like people don't care, copy-paste existing projects as starting point and that is it.
I have a "ascii.txt" file ready to copy/paste the "book emoji" block chars to prepend my logs. It makes logs less noisy. HN can't display them, so I'll have to link to page w/ them: https://www.piliapp.com/emojis/books/
I tried node:test and I feel this is very useful for tiny projects and library authors who need to cut down on 3rd party dependencies, but it's just too barebones for larger apps and node:assert is a bit of a toy, so at a minimum you want to pull in a more full-fledged assertion library. vitest "just works", however, and paves over a lot of TypeScript config malarkey. Jest collapsed under its own weight.
As someone who eschewed jest and others for years for the simplicity of mocha, I still appreciate the design decision of mocha to keep the assertions library separate from the test harness. Which is to point out that chai [1] is still a great assertions library and only an assertions library.
[1] https://www.chaijs.com/
(I haven't had much problem with TypeScript config in node:test projects, but partly because "type": "module" and using various versions of "erasableSyntaxOnly" and its strict-flag and linter predecessors, some of which were good ideas in ancient mocha testing, too.)
Eh, the Node test stuff is pretty crappy, and the Node people aren't interested in improving it. Try it for a few weeks before diving headfirst into it, and you'll see what I mean (and then if you go to file about those issues, you'll see the Node team not care).
I just looked at the documentation and it seems there's some pretty robust mocking and even custom test reporters. Definitely sounds like a great addition. As you suggest, I'll temper my enthusiasm until I actually try it out.
Nice post! There's a lot of stuff here that I had no idea was in built-in already.
I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.
Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]
[1]: https://nodejs.org/api/single-executable-applications.html
[2]: https://brr.fyi/posts/engineering-for-slow-internet
I have a blog post[1] and accompanying repo[2] that shows how to use SEA to build a binary (and compares it to bun and deno) and strip it down to 67mb (for me, depends on the size of your local node binary).
[1]: https://notes.billmill.org/programming/javascript/Making_a_s...
[2]: https://github.com/llimllib/node-esbuild-executable#making-a...
Yeah, many people here are saying this is AI written. Possibly entirely.
It says: "You can now bundle your Node.js application into a single executable file", but doesn't actually provide the command to create the binary. Something like:
The LLM made this sound so epic: "The node: prefix is more than just a convention—it’s a clear signal to both developers and tools that you’re importing Node.js built-ins rather than npm packages. This prevents potential conflicts and makes your code more explicit about its dependencies."
Agreed. It's surprising to see this sort of slop on the front page, but perhaps it's still worthwhile as a way to stimulate conversation in the comments here?
Here, I'd hazard that 15% of front-page posts in July couldn't pass an "avoids well-known LLM shibboleths" check.
Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.
Last year I thought it'd be beaten back by social convention (i.e. if you could show it was LLM output, it'd make people look stupid, so there was a disincentive to do this).
The latest round of releases is smart enough, and has diffused enough, that we have seemingly reached a moment where most people don't know the latest round of "tells" and it passes their Turing test, so there's not enough shame attached to prevent it from becoming a substantial portion of content.
I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.
And the second half of the article was composed of entirely 4 item lists.
Matteo Collina says that the node fetch under the hood is the fetch from the undici node client [0]; and that also, because it needs to generate WHATWG web streams, it is inherently slower than the alternative — undici request [1].
[0] - https://www.youtube.com/watch?v=cIyiDDts0lo
[1] - https://blog.platformatic.dev/http-fundamentals-understandin...
I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.
I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.
I really wish ESM was easier to adopt. But we're halfway through 2025 and there are still compatibility issues with it. And it just gets even worse now that so many packages are going ESM only. You get stuck having to choose what to cut out. I write my code in TS using ESM syntax, but still compile down to CJS as the build target for my sanity.
In many ways, this debacle is reminiscent of the Python 2 to 3 cutover. I wish we had started with bidirectional import interop and dual module publications with graceful transitions instead of this cold turkey "new versions will only publish ESM" approach.
The "proper" place to solve this, is in the OS. Where it has been solved, including all the inevitable corner cases, already.
Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem dies it solve that hasn't been solved by other people?
Deployments that need to configure OSes in a particular way are difficult (the existence of docker, kubernetes, snap are symptoms of this difficulty). It requires a high level of privilege to do so. Upgrades and rollbacks are challenging, if ever done. OSes sometimes don't provide solution when we go beyond one hardware.
If "npm start" can restrain the permissions to what it should be for the given version of the code, I will use it and I'll be happy.
That's a cool feature. Using jlink for creating custom JVMs does something similar.
That's a good feature. What you are saying is still true though, using the OS for that is the way to go.
e.g. https://go.dev/blog/osroot
The following seems cleaner than either of your examples. But I'm sure I've missed the point.
I share this at the risk of embarrassing myself, in the hope of being educated.
You can obviously do that with fetch, but it is more fragmented and requires more boilerplate.
That said, there are npm packages that are ridiculously obsolete and overused.
`const { styleText } = require('node:util');`
Docs: https://nodejs.org/api/util.html#utilstyletextformat-text-op...
Using a library which handles that (and a thousand other quirks) makes much more sense.
Also, I'm guessing if I pipe your logs to a file you'll still write escapes into it? Why not just make life easier?
1. Node has built in test support now: looks like I can drop jest!
2. Node has built in watch support now: looks like I can drop nodemon!
At the end it's just tests; the syntax might be more verbose, but LLMs write it anyway ;-)
I hope you can appreciate how utterly insane this sounds to anyone outside of the JS world. Good on you for reducing the size, but my god…
Look up dialectical hedging. Dead AI giveaway.
It does tell you that if even 95% of HN can't tell, then 99% of the public can't tell. Which is pretty incredible.
The forest is darkening, and quickly.
Hoisting/import order, especially when trying to mock in tests.
Whether or not to include extensions, and which extension to use, .js vs .ts.