simonw · 25 days ago
Whoa, I didn't know about this:

  # Run with restricted file system access
  node --experimental-permission \
    --allow-fs-read=./data --allow-fs-write=./logs app.js
  
  # Network restrictions
  node --experimental-permission \
    --allow-net=api.example.com app.js
Looks like they were inspired by Deno. That's an excellent feature. https://docs.deno.com/runtime/fundamentals/security/#permiss...

berkes · 24 days ago
I very much dislike such features in a runtime or app.

The "proper" place to solve this, is in the OS. Where it has been solved, including all the inevitable corner cases, already.

Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem does it solve that hasn't been solved by other people?

batmansmk · 24 days ago
For years, I heard it's better to use cron, because the problem was already solved the right way(tm). My experience with cron has been about a dozen difficult fixes in production: cron not running / not running with the right permissions / errors lost without being logged / ... Changing / upgrading OSes became a problem. I since switched to a small node script with a basic scheduler in it, and I've had ZERO issues in 7 years. My devs happily add entries in the scheduler without bothering me. We even added consistency checks, asserts, scheduled one-time execution tasks, ... and now multi-server scheduling.

Deployments that need to configure OSes in a particular way are difficult (docker, kubernetes and snap are symptoms of this difficulty). It requires a high level of privilege to do so. Upgrades and rollbacks are challenging, if ever done. OSes sometimes don't provide a solution when we go beyond a single machine.

If "npm start" can restrain the permissions to what it should be for the given version of the code, I will use it and I'll be happy.

Etheryte · 24 days ago
This is a nice idea, but what do you do when the OS tooling is not that good? macOS is a good example, they have OS level sandboxing [0], but the docs are practically nonexistent and the only way to figure it out is to read a bunch of blog posts by people who struggled with it before you. Baking it into Node means that at least theoretically you get the same thing out of the box on every OS.

[0] https://www.karltarvas.com/macos-app-sandboxing-via-sandbox-...

vlovich123 · 24 days ago
Except the OS hasn't actually solved it. Any program you can run can access arbitrary files of yours, and it's quite difficult to actually control that access even if you want to limit the blast radius of your own software. Seriously - what would you even use? Go write eBPF to act as a mini ad-hoc hypervisor, or enforce difficult-to-write policies via SELinux? That only even works if you're the admin of the machine, which isn't necessarily the same person writing the software they want to code defensively.

Also, modern software security is really taking a look at strengthening software against supply chain vulnerabilities. That looks less like a traditional OS and more like a capabilities model, where you start with a set of limited permissions and, even within the same address space, it's difficult to obtain a new permission unless you're explicitly given a handle to it (arguably that's how all permissions should work, top to bottom).

Cthulhu_ · 24 days ago
How would you do this in a native fashion? I mean I believe you (chroot jail I think it was?), but not everyone runs on *nix systems, and perhaps more importantly, not all Node developers know or want to know much about the underlying operating system. Which is to their detriment, of course, but a lot of people are "stuck" in their ecosystem. This is arguably even worse in the Java ecosystem, but it's considered a selling point (write once run anywhere on the JVM, etc).
chii · 24 days ago
> What problem does it solve that hasn't been solved by other people?

nothing. Except for "portability" arguments perhaps.

Java has had security managers and access restrictions built in, but it never worked very well (and is quite cumbersome to use in practice). And there have been lots of bypasses over the years, patchwork fixes, etc.

Tbh, the OS is the only real security you could trust, as it's as low a level as any application would typically go (unless you end up in driver/kernel space, like those anti-virus/anti-cheat/crowdstrike apps).

But platform vendors always want to NIH and make their platform slightly easier while still presenting a similar level of security.

hk1337 · 24 days ago
> The "proper" place to solve this, is in the OS.

This is my thought on using dotenv libraries. The app shouldn't have to load environment variables, only read them. Using a dotenv function/plugin like the one in omz is far preferable.

simonw · 24 days ago
How would you solve this at the OS level across Linux, macOS and Windows?

I've been trying to figure out a good way to do this for my Python projects for a couple of years now. I don't yet trust any of the solutions I've come up with - they are inconsistent with each other and feel very prone to me making mistakes due to their inherent complexity and lack of documentation that I trust.

jbreckmckye · 24 days ago
> Why reinvent this wheel, adding complexity, bug-surface, maintenance burden and whatnot to your project? What problem does it solve that hasn't been solved by other people?

Whilst this is (effectively) an Argument From Authority, what makes you assume the Node team haven't considered this? They're famously conservative about implementing anything that adds indirection or layers. And they're very *nix focused.

I am pretty sure they've considered "I could just run this script under a different user"

(I would assume it's there because the Permissions API covers many resources and side effects, some of which would be difficult to reproduce across OSes, but I don't have the original proposal to look at and verify)

_heimdall · 24 days ago
OS level checks will inevitably work differently on different OSes and different versions. Having a check like this in the app binary itself means you can have a standard implementation regardless of the OS running the app.

I often hear similar arguments for or against database-level security rules. Row level security, for example, is a really powerful feature and in my opinion is worth using when you can. Using RLS doesn't mean you skip checking authorization rules at the API level though; you check authorization in your business logic _and_ in the database.

spacebanana7 · 24 days ago
Putting network restrictions in the application layer also causes awkward issues for the org structures of many enterprises.

For example, the problem of "one micro service won't connect to another" was traditionally an ops / environments / SRE problem. But now the app development team has to get involved, just in case someone's used one of these new restrictions. Or those other teams need to learn about node.

This is non-consensual devops being forced upon us, where everyone has to learn everything.

brulard · 24 days ago
How many apps do you think have properly set users and access rights restricted to only what they need? In production? Even if that percentage were high, how about developers' machines, where people run node scripts that might import who knows what? It is possible to run all this safely, but I doubt a high percentage of people do. A feature like this can increase that percentage.
quaunaut · 24 days ago
Genuine question, as I've not invested much into understanding this. What features of the OS would enable these kinds of network restrictions? Basic googling/asking AI points me in the direction of things that seem a lot more difficult in general, unless using something like AppArmor, at which point it seems like you're not quite in OS land anymore.
epolanski · 24 days ago
What's there to dislike? They don't replace the restrictions at OS level, they add to it.
cies · 24 days ago
In Deno you can make a runtime that cannot even access the filesystem.

That's a cool feature. Using jlink for creating custom JVMs does something similar.

That's a good feature. What you are saying is still true though, using the OS for that is the way to go.

afiori · 24 days ago
The pragmatic reason is that the runtime should have more permissions than the code; e.g. in node, require('fs') likely reads files in system folders.
kijin · 24 days ago
Path restrictions look simple, but they're very difficult to implement correctly.

PHP used to have (actually, still has) an "open_basedir" setting to restrict where a script could read or write, but people found out a number of ways to bypass that using symlinks and other shenanigans. It took a while for the devs to fix the known loopholes. Looks like node has been going through a similar process in the last couple of years.

Similarly, I won't be surprised if someone can use DNS tricks to bypass --allow-net restrictions in some way. Probably not worth calling a vulnerability in its own right, but it could be used as one of the steps in a targeted attack. So don't trust it too much, and always practice defense in depth!

int_19h · 24 days ago
The last time a major runtime tried implementing such restrictions at the VM level, it was .NET - and it took that idea from Java, which did it only 5 years earlier.

In both Java and .NET VMs today, this entire facility is deprecated because they couldn't make it secure enough.

arccy · 24 days ago
I believe that the various OSes have implemented appropriate syscalls such as openat to support it

e.g. https://go.dev/blog/osroot

tankenmate · 24 days ago
I wouldn't trust it to be done right. It's like a bank trusting that all their customers will do the right thing. If you want MAC (as opposed to DAC), do it in the kernel like it's supposed to be; use apparmor or selinux. And both of those methods will allow you to control way more than just which files you can read / write.
bombela · 24 days ago
Yeah, but you see, this requires being deployed alongside the application somehow, with the help of the ops team. Whereas changing the command line is under the control of the application developer.
tracker1 · 24 days ago
Just because you have a safe doesn't mean the lock on the front door is useless.
motorest · 24 days ago
> I wouldn't trust it to be done right.

I don't understand this sort of complaint. Would you prefer that they never worked on this support at all? Exactly what's your point? Airing trust issues?

captn3m0 · 24 days ago
Can't seem to find an official docs link for allow-net, only blog posts.
tchetwin · 24 days ago
https://github.com/nodejs/node/pull/58517 - I think the `semver-major` and the timing mean that it might not be available until v25, around October
farkin88 · 25 days ago
The killer upgrade here isn’t ESM. It’s Node baking fetch + AbortController into core. Dropping axios/node-fetch trimmed my Lambda bundle and shaved about 100 ms off cold-start latency. If you’re still npm i axios out of habit, 2025 Node is your cue to drop the training wheels.
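
For illustration, a minimal sketch of the pattern I mean, using only built-ins (the endpoint URL is made up; AbortSignal.timeout() is an even shorter option on recent Node versions):

  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5_000);

  try {
    // Native fetch, no axios/node-fetch import needed on Node 18+
    const res = await fetch('https://api.example.com/items', { signal: controller.signal });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    console.log(await res.json());
  } finally {
    clearTimeout(timer);
  }
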
andai · 24 days ago
16 years after launch, the JS runtime centered around network requests now supports network requests out of the box.
snickerdoodle12 · 24 days ago
Obviously it supported network requests; the fetch API didn't even exist back then, and XMLHttpRequest, which was the standard at the time, is insane.
jbreckmckye · 24 days ago
What a strange comment. You could always do network calls. Fetch is an API that has similar semantics across browser and server, using Promises.
pmbanugo · 24 days ago
tell me you're not a Node.js developer :)
epolanski · 24 days ago
Node always had a lower level http module.
exhaze · 25 days ago
Tangential, but thought I'd share since validation and API calls go hand-in-hand: I'm personally a fan of using `ts-rest` for the entire stack since it's the leanest of all the compile + runtime zod/json schema-based validation sets of libraries out there. It lets you plug in whatever HTTP client you want (personally, I use bun, or fastify in a node env). The added overhead is totally worth it (for me, anyway) for shifting basically all type safety correctness to compile time.

Curious what other folks think and if there are any other options? I feel like I've searched pretty exhaustively, and it's the only one I found that was both lightweight and had robust enough type safety.

jbryu · 25 days ago
Just last week I was about to integrate `ts-rest` into a project for the same reasons you mentioned above... before I realized they don't have express v5 support yet: https://github.com/ts-rest/ts-rest/issues/715

I think `ts-rest` is a great library, but the lack of maintenance didn't make me feel confident to invest, even if I wasn't using express. Have you ever considered building your own in-house solution? I wouldn't necessarily recommend this if you already have `ts-rest` setup and are happy with it, but rebuilding custom versions of 3rd party dependencies actually feels more feasible nowadays thanks to LLMs. I ended up building a stripped down version of `ts-rest` and am quite happy with it. Having full control/understanding of the internals feels very good and it surprisingly only took a few days. Claude helped immensely and filled a looot of knowledge gaps, namely with complicated Typescript types. I would also watch out for treeshaking and accidental client zod imports if you decide to go down this route.

I'm still a bit in shock that I was even able to do this, but yeah building something in-house is definitely a viable option in 2025.

WorldMaker · 24 days ago
I've been impressed with Hono's zod Validator [1] and the type-safe "RPC" clients [2] you can get from it. Most of my usage of Hono has been in Deno projects, but it seems like it has good support on Node and Bun, too.

[1] https://hono.dev/docs/guides/validation#zod-validator-middle...

[2] https://hono.dev/docs/guides/rpc#client

farkin88 · 25 days ago
Type safety for API calls is huge. I haven't used ts-rest but the compile-time validation approach sounds solid. Way better than runtime surprises. How's the experience in practice? Do you find the schema definition overhead worth it or does it feel heavy for simpler endpoints?
avandekleut · 25 days ago
Also want to shout out ts-rest. We have a typescript monorepo where the backend and frontend import the api contract from a shared package, making frontend integration both type-safe and dead simple.
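
For anyone who hasn't seen it, a rough sketch of what such a shared contract can look like (route and field names here are made up):

  import { initContract } from '@ts-rest/core';
  import { z } from 'zod';

  const c = initContract();

  // Illustrative contract; both the server router and the client are typed from this
  export const postsContract = c.router({
    getPost: {
      method: 'GET',
      path: '/posts/:id',
      responses: {
        200: z.object({ id: z.string(), title: z.string() }),
        404: z.object({ message: z.string() }),
      },
    },
  });
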
bjacobso · 24 days ago
I migrated from ts-rest to Effect/HttpApi. It's an incredible ecosystem, and Effect/Schema has overtaken my domain layer. Definitely a learning curve though.
cassepipe · 25 days ago
For what it's worth, happy user of ts-rest here. Best solution I landed upon so far.
tanduv · 25 days ago
I never really liked the syntax of fetch: the need to await response.json() and to implement additional error handling -

  async function fetchDataWithAxios() {
    try {
      const response = await axios.get('https://jsonplaceholder.typicode.com/posts/1');
      console.log('Axios Data:', response.data);
    } catch (error) {
      console.error('Axios Error:', error);
    }
  }



  async function fetchDataWithFetch() {
    try {
      const response = await fetch('https://jsonplaceholder.typicode.com/posts/1');

      if (!response.ok) { // Check if the HTTP status is in the 200-299 range
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      const data = await response.json(); // Parse the JSON response
      console.log('Fetch Data:', data);
    } catch (error) {
      console.error('Fetch Error:', error);
    }
  }

Cthulhu_ · 24 days ago
While true, in practice you'd only write this code once as a utility function; compare two extra bits of code in your own utility function vs loading 36 kB worth of JS.
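
Something like this hypothetical helper, written once, covers the boilerplate from the example above (name and error shape are arbitrary):

  // One-off utility; throws on non-2xx so callers can branch on the status
  async function getJSON(url, options = {}) {
    const response = await fetch(url, options);
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return response.json();
  }

  // const data = await getJSON('https://jsonplaceholder.typicode.com/posts/1');
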
farkin88 · 25 days ago
Yeah, that's the classic bundle size vs DX trade-off. Fetch definitely requires more boilerplate. The manual response.ok check and double await is annoying. For Lambda where I'm optimizing for cold starts, I'll deal with it, but for regular app dev where bundle size matters less, axios's cleaner API probably wins for me.
freeopinion · 24 days ago
I somehow don't get your point.

The following seems cleaner than either of your examples. But I'm sure I've missed the point.

  fetch(url).then(r=>r.ok ? r.json() : Promise.reject(r.status))
  .then(
    j=>console.log('Fetch Data:', j),
    e=>console.log('Fetch Error:', e)
  );
I share this at the risk of embarrassing myself in the hope of being educated.

fmorel · 24 days ago
Except you might want different error handling for different error codes. For example, our validation errors return a JSON object as well but with 422.

So treating "get a response" and "get data from a response" separately works out well for us.

stevage · 24 days ago
I usually write it like:

    const data = await fetch(url).then(r => r.json())

But it's very easy obviously to wrap the syntax into whatever ergonomics you like.

hliyan · 24 days ago
There has to be something wrong with a tech stack (Node + Lambda) that adds 100ms latency for some requests, just to gain the capability [1] to send out HTTP requests within an environment that almost entirely communicates via HTTP requests.

[1] convenient capability - otherwise you'd use XMLHttpRequest

bilalq · 24 days ago
1. This is not 100ms latency for requests. It's 100ms latency for the init of a process that loads this code. And this was specifically in the context of a Lambda function that may only have 128MB RAM and like 0.25vCPU. A hello world app written in Java that has zero imports and just prints to stdout would have higher init latency than this.

2. You don't need to use axios. The main value was that it provides a unified API that could be used across runtimes and has many convenient abstractions. There were plenty of other lightweight HTTP libs that were more convenient than the stdlib 'http' module.

yawnxyz · 25 days ago
node fetch is WAY better than axios (easier to use/understand, simpler); didn't really know people were still using axios
Raed667 · 25 days ago
I do miss the axios extensions tho, it was very easy to add rate-limits, throttling, retry strategies, cache, logging ..

You can obviously do that with fetch but it is more fragmented and more boilerplate

reactordev · 25 days ago
You still see axios used in amateur tutorials and stuff on dev.to and similar sites. There’s also a lot of legacy out there.
ramesh31 · 24 days ago
Interceptors (and extensions in general) are the killer feature for axios still. Fetch is great for scripts, but I wouldn't build an application on it entirely; you'll be rewriting a lot or piecing together other libs.
farkin88 · 25 days ago
Right?! I think a lot of devs got stuck in the axios habit from before Node 18 when fetch wasn't built-in. Plus axios has that batteries included feel with interceptors, auto-JSON parsing, etc. But for most use cases, native fetch + a few lines of wrapper code beats dragging in a whole dependency.
mcv · 25 days ago
This is all very good news. I just got an alert about a vulnerability in a dependency of axios (it's an older project). Getting rid of these dependencies is a much more attractive solution than merely upgrading them.
benoau · 25 days ago
axios got discontinued years ago I thought, nobody should still be using it!
looshch · 24 days ago
what about interceptors?
franciscop · 25 days ago
As a library author it's the opposite: while fetch() is amazing, ESM has been a painful but definitely worthwhile upgrade. It has all the things the author describes.
farkin88 · 25 days ago
Interesting to get a library author's perspective. To be fair, you guys had to deal with the whole ecosystem shift: dual package hazards, CJS/ESM compatibility hell, tooling changes, etc so I can see how ESM would be the bigger story from your perspective.
TheRealPomax · 25 days ago
Those... are not mutually exclusive as killer upgrade. No longer having to use a nonsense CJS syntax is absolutely also a huge deal.

Web parity was "always" going to happen, but the refusal to add ESM support, and then when they finally did, the refusal to have a transition plan for making ESM the default, and CJS the fallback, has been absolutely grating for the last many years.

8n4vidtmkvmk · 24 days ago
Especially since it seems perfectly possible to support both simultaneously. Bun does it. If there's an edge case, I still haven't hit it.
moogly · 24 days ago
With Node's fetch you're going to have to write a wrapper for error handling/logging/retries etc. in any app/service of size. After a while, we ended up with something axios/got-like anyway that we had to fix a bunch of bugs in.
larsnystrom · 24 days ago
And AFAIK there is still no upload progress with fetch.
pbreit · 25 days ago
It has always astonished me that platforms did not have first class, native "http client" support. Pretty much every project in the past 20 years has needed such a thing.

Also, "fetch" is lousy naming considering most API calls are POST.

rendall · 25 days ago
That's a category error. Fetch just refers to making a request; POST is the method, or HTTP verb, used when making the request. If you're really keen, you could roll your own

  const post = (url) => fetch(url, {method:"POST"})

catlifeonmars · 25 days ago
“Most” is doing a lot of heavy lifting here. I use plenty of APIs that are GET
tracker1 · 24 days ago
Node was created with first-class native http server and client support. Wrapper libraries can smooth out some rough edges with the underlying api as well as make server-side js (Node) look/work similar to client-side js (Browser).

vinnymac · 25 days ago
Undici in particular is very exciting as a built-in request library, https://undici.nodejs.org
farkin88 · 25 days ago
Undici is solid. Being the engine behind Node's fetch is huge. The performance gains are real and having it baked into core means no more dependency debates. Plus, it's got some great advanced features (connection pooling, streams) if you need to drop down from the fetch API. Best of both worlds.
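
Dropping down looks roughly like this (a sketch; the URL and pool size are made up):

  import { request, Agent } from 'undici';

  // Explicit connection pool instead of the default global dispatcher
  const agent = new Agent({ connections: 10 });

  const { statusCode, body } = await request('https://api.example.com/items', {
    dispatcher: agent,
  });
  console.log(statusCode, await body.json());
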
jedwards1211 · 24 days ago
This has been the case for quite a while; most of the things in this article aren't brand new.
pjmlp · 24 days ago
It kills me that I keep seeing axios being used instead of fetch. It's like people don't care: they copy-paste existing projects as a starting point and that's it.
bilekas · 24 days ago
Maybe I'm wrong and it's been updated, but doesn't axios support progress indicators out of the box, and isn't it just generally cleaner?

That said, there are npm packages that are ridiculously obsolete and overused.

synergy20 · 25 days ago
axios works for both node and browser in production code, not sure if fetch can do as much as axios in browser though

vinnymac · 25 days ago
You no longer need to install chalk or picocolors either, you can now style text yourself:

`const { styleText } = require('node:util');`

Docs: https://nodejs.org/api/util.html#utilstyletextformat-text-op...
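
Minimal usage per those docs (newer versions also check whether the output stream actually supports colors before emitting escape codes):

  const { styleText } = require('node:util');

  // Accepts a single format or an array of formats
  console.log(styleText('green', 'build succeeded'));
  console.log(styleText(['bold', 'red'], 'build failed'));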

austin-cheney · 24 days ago
I never needed those. I would just have an application wide object property like:

            text: {
                angry    : "\u001b[1m\u001b[31m",
                blue     : "\u001b[34m",
                bold     : "\u001b[1m",
                boldLine : "\u001b[1m\u001b[4m",
                clear    : "\u001b[24m\u001b[22m",
                cyan     : "\u001b[36m",
                green    : "\u001b[32m",
                noColor  : "\u001b[39m",
                none     : "\u001b[0m",
                purple   : "\u001b[35m",
                red      : "\u001b[31m",
                underline: "\u001b[4m",
                yellow   : "\u001b[33m"
            }
And then you can call that directly like:

    `${vars.text.green}whatever${vars.text.none}`;

xbbdbd · 24 days ago
This is the problem with people trying to be clever. Now you output escape sequences regardless of terminal setting.

Using a library which handles that (and a thousand other quirks) makes much more sense.

tkzed49 · 24 days ago
I think the widely-implemented terminal escape sequences are well-known at this point, but I don't see why I'd want to copy this into every project.

Also, I'm guessing if I pipe your logs to a file you'll still write escapes into it? Why not just make life easier?

fuzzythinker · 24 days ago
I have an "ascii.txt" file ready to copy/paste the "book emoji" block chars to prepend to my logs. It makes logs less noisy. HN can't display them, so I'll have to link to a page w/ them: https://www.piliapp.com/emojis/books/
vinnymac · 23 days ago
To be fair, I do this when writing shell scripts by hand. But if it’s built in, why would I?
tyleo · 25 days ago
This is great. I learned several things reading this that I can immediately apply to my small personal projects.

1. Node has built in test support now: looks like I can drop jest!

2. Node has built in watch support now: looks like I can drop nodemon!
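
A minimal sketch of what that looks like (file name is arbitrary): run it with `node --test`, and add `--watch` to re-run on changes.

  // test/example.test.js
  import { test } from 'node:test';
  import assert from 'node:assert/strict';

  test('adds numbers', () => {
    assert.equal(1 + 2, 3);
  });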

moogly · 24 days ago
I tried node:test and I feel this is very useful for tiny projects and library authors who need to cut down on 3rd party dependencies, but it's just too barebones for larger apps and node:assert is a bit of a toy, so at a minimum you want to pull in a more full-fledged assertion library. vitest "just works", however, and paves over a lot of TypeScript config malarkey. Jest collapsed under its own weight.
WorldMaker · 24 days ago
As someone who eschewed jest and others for years for the simplicity of mocha, I still appreciate the design decision of mocha to keep the assertions library separate from the test harness. Which is to point out that chai [1] is still a great assertions library and only an assertions library.

(I haven't had much problem with TypeScript config in node:test projects, but partly because "type": "module" and using various versions of "erasableSyntaxOnly" and its strict-flag and linter predecessors, some of which were good ideas in ancient mocha testing, too.)

[1] https://www.chaijs.com/

hungryhobbit · 25 days ago
Eh, the Node test stuff is pretty crappy, and the Node people aren't interested in improving it. Try it for a few weeks before diving headfirst into it, and you'll see what I mean (and if you go and file issues about those problems, you'll see the Node team not care).
tejohnso · 24 days ago
I just looked at the documentation and it seems there's some pretty robust mocking and even custom test reporters. Definitely sounds like a great addition. As you suggest, I'll temper my enthusiasm until I actually try it out.
upcoming-sesame · 25 days ago
Still, I would rather use that than import mocha, chai, Sinon, and istanbul.

At the end of the day it's just tests; the syntax might be more verbose, but LLMs write it anyway ;-)

mcv · 24 days ago
Could you expand on the shortcomings of Node test compared to jest?
pavel_lishin · 25 days ago
I still like jest, if only because I can use `jest-extended`.
vinnymac · 25 days ago
If you haven't tried vitest I highly recommend giving it a go. It is compatible with `jest-extended` and most of the jest matcher libraries out there.
fleebee · 25 days ago
Nice post! There's a lot of stuff here that I had no idea was in built-in already.

I tried making a standalone executable with the command provided, but it produced a .blob which I believe still requires the Node runtime to run. I was able to make a true executable with postject per the Node docs[1], but a simple Hello World resulted in a 110 MB binary. This is probably a drawback worth mentioning.

Also, seeing those arbitrary timeout limits I can't help but think of the guy in Antarctica who had major headaches about hardcoded timeouts.[2]

[1]: https://nodejs.org/api/single-executable-applications.html

[2]: https://brr.fyi/posts/engineering-for-slow-internet
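
For reference, the rough sequence from those docs on Linux (the .blob is an intermediate artifact that gets injected into a copy of the node binary; macOS and Windows need extra signing steps):

  echo '{ "main": "hello.js", "output": "sea-prep.blob" }' > sea-config.json
  node --experimental-sea-config sea-config.json   # produces sea-prep.blob
  cp "$(command -v node)" hello                    # copy the node binary itself
  npx postject hello NODE_SEA_BLOB sea-prep.blob \
    --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2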

llimllib · 25 days ago
I have a blog post[1] and accompanying repo[2] that shows how to use SEA to build a binary (and compares it to bun and deno) and strip it down to 67mb (for me, depends on the size of your local node binary).

[1]: https://notes.billmill.org/programming/javascript/Making_a_s...

[2]: https://github.com/llimllib/node-esbuild-executable#making-a...

sgarland · 25 days ago
> 67 MB binary

I hope you can appreciate how utterly insane this sounds to anyone outside of the JS world. Good on you for reducing the size, but my god…

xdennis · 24 days ago
Yeah, many people here are saying this is AI written. Possibly entirely.

It says: "You can now bundle your Node.js application into a single executable file", but doesn't actually provide the command to create the binary. Something like:

    npx postject hello NODE_SEA_BLOB sea-prep.blob \
        --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2

llamasushi · 21 days ago
Yeah, this one line gave it away for me: "you’re not just writing contemporary code—you’re building applications that are more maintainable..."

Look up dialectical hedging. Dead AI giveaway.

refulgentis · 25 days ago
The LLM made this sound so epic: "The node: prefix is more than just a convention—it’s a clear signal to both developers and tools that you’re importing Node.js built-ins rather than npm packages. This prevents potential conflicts and makes your code more explicit about its dependencies."
Hackbraten · 25 days ago
Also no longer having to use an IIFE for top-level await is allegedly a „game changer.“
wavemode · 25 days ago
so in other words, it's a convention

bashtoni · 25 days ago
Agreed. It's surprising to see this sort of slop on the front page, but perhaps it's still worthwhile as a way to stimulate conversation in the comments here?
jmkni · 25 days ago
I learned quite a few new things from this, I don't really care if OP filtered it through an LLM before publishing it
jjani · 24 days ago
I too find it unreadable, I guess that's the downside of working on this stuff every day, you get to really hate seeing it.

It does tell you that if even 95% of HN can't tell, then 99% of the public can't tell. Which is pretty incredible.

refulgentis · 25 days ago
I have an increasing feeling of doom re: this.

The forest is darkening, and quickly.

Here, I'd hazard that 15% of front page posts in July couldn't pass an "avoids well-known LLM shibboleths" check.

Yesterday night, about 30% of my TikTok for you page was racist and/or homophobic videos generated by Veo 3.

Last year I thought it'd be beaten back by social convention (i.e. if you could show it was LLM output, it'd make people look stupid, so there was a disincentive to do this).

The latest round of releases was smart enough, and has diffused enough, that seemingly we have reached a moment where most people don't know the latest round of "tells" and it passes their Turing test, so there's not enough shame attached to prevent it from becoming a substantial portion of content.

I commented something similar re: slop last week, but made the mistake of including a side thing about Markdown-formatting. Got downvoted through the floor and a mod spanking, because people bumrushed to say that was mean, they're a new user so we should be nicer, also the Markdown syntax on HN is hard, also it seems like English is their second language.

And the second half of the article was composed entirely of 4-item lists.

azangru · 25 days ago
Matteo Collina says that the node fetch under the hood is the fetch from the undici node client [0]; and that also, because it needs to generate WHATWG web streams, it is inherently slower than the alternative — undici request [1].

[0] - https://www.youtube.com/watch?v=cIyiDDts0lo

[1] - https://blog.platformatic.dev/http-fundamentals-understandin...

vinnymac · 25 days ago
If anyone is curious how they are measuring these are the benchmarks: https://github.com/nodejs/undici/blob/main/benchmarks/benchm...

I did some testing on an M3 Max Macbook Pro a couple of weeks ago. I compared the local server benchmark they have against a benchmark over the network. Undici appeared to perform best for local purposes, but Axios had better performance over the network.

I am not sure why that was exactly, but I have been using Undici with great success for the last year and a half regardless. It is certainly production ready, but often requires some thought about your use case if you're trying to squeeze out every drop of performance, as is usual.

bilalq · 24 days ago
I really wish ESM was easier to adopt. But we're halfway through 2025 and there are still compatibility issues with it. And it just gets even worse now that so many packages are going ESM only. You get stuck having to choose what to cut out. I write my code in TS using ESM syntax, but still compile down to CJS as the build target for my sanity.

In many ways, this debacle is reminiscent of the Python 2 to 3 cutover. I wish we had started with bidirectional import interop and dual module publications with graceful transitions instead of this cold turkey "new versions will only publish ESM" approach.
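
Concretely, that setup is roughly just this (a sketch; values are illustrative):

  // tsconfig.json: author in ESM syntax, emit CJS for compatibility
  {
    "compilerOptions": {
      "module": "CommonJS",
      "moduleResolution": "Node",
      "target": "ES2022",
      "esModuleInterop": true,
      "outDir": "dist"
    }
  }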

mirkodrummer · 24 days ago
Bun proved both worlds can work together; the mess is all on Node.js's shoulders, and we, the devs, get blamed for not adopting.
capt_obvious_77 · 24 days ago
Can you elaborate on the compatibility issues you ran into, with ESM, please? Are they related to specific libs or use-cases?
plopz · 24 days ago
The two I find most annoying:

Hoisting/import order, especially when trying to mock tests.
Whether or not to include extensions, and which extension to use, .js vs .ts.
Whether or not to include extensions, and which extension to use, .js vs .ts.