This is great. I had an idea to use named iframes and targeted forms for simple, server-rendered pages with built-in style-scoped widgets, without leaning into complex JS client-side. But, I never simplified it well nor expressed a polished and elegant realization of that idea, as this htmz looks to me to be.
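For anyone who hasn't played with that corner of the platform: the core of the named-iframe idea fits in a few lines of plain HTML, with no JS at all. A minimal sketch (all names and URLs here are illustrative, not htmz's API):

    <!-- a visible, named iframe serves as the widget's output pane -->
    <iframe name="results-pane" src="/widgets/search"></iframe>

    <!-- the form submits to the server and the response lands in the
         named iframe -->
    <form action="/widgets/search" target="results-pane" method="get">
      <input name="q">
      <button>Search</button>
    </form>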
A reminder to never give up on good ideas, to focus on excellence, to refine an idea all the way to completion, and to communicate it well!
Also the comments here:
- This is a great hack and shows how close the browser is to offering SPA natively.
- This is a glorious demonstration of someone really understanding the platform.
- Simple and powerful, as the vanilla web should be. Thank you for this (small) gem :)
- This is a neat hack, I like it :). Thanks for sharing.
are exactly what I hoped to hear reflected about my creation, and are totally on point for what this type of thing should be. Close to the web-grain, using the given material of the web in the best way possible. Fuck yeah! :)
Thank you for being a genius! :)
And for inspiring about what's possible! :)
P.S - also your communication and marketing skills are top notch! I think the way you have communicated this elegant technical thing, from the choice of name, API, examples, copy -- is just awesome. I learn from it! :)
I am a little bit confused because your comments seem to imply initially that htmz is written by someone other than you, and then later that you wrote htmz.
Who are you and what is your relationship with htmz and its creators? Please be honest and refrain from violating federal law and FTC guidelines in your response.
You seem to have misinterpreted "[those comments] are exactly what I hoped to hear reflected about my creation". They weren't saying "I enjoyed hearing those comments about htmz, which is a thing I created" - they were saying "those comments are what I _had_ hoped to hear about my (unreleased and unnamed) creation, which indicates that htmz is a good implementation of a similar idea that I had"
I don't think this person is implying that at all (nor do I think that anyone needs to be trotting out federal law and FTC guidelines here :).
"I had an idea to use named iframes [...] But, I never simplified it well nor expressed a polished and elegant realization of that idea, as this htmz looks to me to be. [...] the comments here [...] are exactly what I hoped to hear reflected about my creation, and are totally on point for what this type of thing should be."
Seems pretty clear to me. This person had a similar idea but didn't complete it, and finds the flavor of appreciative "nice hack" energy (as opposed to "this is enterprise" or "this is a revolution" energy, I guess) appropriate for the project; it's the type of feedback they would have wanted to hear had they completed their own project.
> Who are you and what is your relationship with htmz and its creators? Please be honest and refrain from violating federal law and FTC guidelines in your response.
My name is John Galt and I wrote this library. When you learn who I really am, then you'll understand.
That's a great hack and it shows how close the browser is to offering SPA natively.
Just a few attributes and we could avoid the iframe.
It's probably more useful for proving a point than as an actual day-to-day tool. And the point seems to be: htmx is too much trouble for what it offers. We just need HTML native ajax.
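Purely as a thought experiment, "HTML native ajax" might look something like this (these attributes are hypothetical; no browser implements anything like them today):

    <!-- hypothetical: fetch the URL and swap the response into the
         element matched by the selector, instead of navigating -->
    <a href="/posts/2" target="#main" swap="innerHTML">Next page</a>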
I'm the creator of htmx and think this is a great library/snippet. Much closer to what htmx-like functionality in HTML would/should look like in that it is following existing norms (iframes, the target attribute) much more closely than htmx.
From a practical perspective, a lot of the bulk of htmx is bound up in things like history support, collecting inputs, a lot of callbacks/events to allow people to plug into things, etc. I expect a lot of htmx-like libraries will come out now that it has some traction: it's not a super complicated idea, and many of them will pick smaller, more targeted subsets of functionality to implement. That's a good thing: the ideas of hypermedia are more important than my particular implementation.
I am now officially emotionally invested in htmx so I have to justify investing my time/energy in htmx instead of something else, so all alternatives to htmx are stupid and their existence make me very angry.
Sorry if I came across as dismissive, htmx is a much needed counterpoint to React's dominance and a great contribution to the industry.
And I hope its core idea (Ajax from HTML with results loading inside a target element) will be adopted as a Web standard.
Hi Carson! I've been using htmx sprinkled with hyperscript and I find using your tools very enjoyable (though I found myself fighting a bit with hyperscript at the beginning; once you get past that and into the mindset and the hs way, things are easier).
Thanks for these tools! I also wanted to take the opportunity to ask you about something you either mentioned, commented on, or said in a podcast.
You said that htmx might not be the tool to solve all problems (or certain kinds of apps). I'm asking because I think it's also great to hear when not to use something to solve a problem. So, in your opinion, what kinds of web apps might htmx not be a good fit for? And in that case, what alternate tools/approaches do you suggest? Once again, thanks a lot for htmx & hyperscript!
I'm going to be doing a lot of web pages in HTMX in the next couple of years, and it will be much easier to develop/debug than JavaScript.
Regarding the size: I would guess that if htmz were extended to have the same features as htmx, it would end up similar in size? Would it make sense to modularize htmx, so that you only pay for what you really use, to support adding features without necessarily increasing the download size?
Kind of makes me wish there were an htmx-lite of sorts. Like an 80/20 solution at 20% of the size, which could be extended to the full-blown package with extensions.
Have you thought about moving more features to extensions, like was done with the web sockets feature?
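For what it's worth, htmx extensions already have a small, documented surface. A sketch of the shape (check the htmx extension docs for the full interface; the extension name and hook body here are made up):

    // register an extension with htmx, then enable it with hx-ext="log-requests"
    htmx.defineExtension('log-requests', {
      onEvent: function (name, evt) {
        if (name === 'htmx:beforeRequest') console.log('request', evt.detail);
      }
    });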
Yes, this was a response to htmx. It was a half-parody, half-"I wanna make it work" project.
Like https://github.com/vilgacx/aki
I would fear for anyone who wants to use this in production, BUT I would love for someone to get inspired and use the concepts rather than the actual code. Hmm, maybe I should write a disclaimer...
It's funny, I stumbled on a similar use for iframes a few years ago that I did put into production. I needed to have a rather large SPA for employees hosted locally - that is, on a server on a local network in retail stores, not accessible from the web or unless you're on the same network as the server. The employee had to be able to load it on a tablet (but wouldn't necessarily know the server's local, dynamically allocated IP address without checking). And it had to be deployable without any complicated local DNS setups, router changes, etc.
I solved it by writing a discovery service... a wrapper, accessible on a public (non-SSL) web page, that basically opens a pile of hidden iframes and uses them to war-dial local IP addresses until it finds the app, then replaces the page's content with the winning frame's. Amazingly, this janky thing has held up ;)
Anyway, nice work, and a cool idea!
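For readers trying to picture the war-dial trick, a rough sketch (the subnet, path, and detection logic are all invented; the real service surely differs):

    // point hidden iframes at candidate LAN addresses; the first one that
    // loads the app wins and takes over the page
    for (let i = 2; i < 255; i++) {
      const frame = document.createElement('iframe');
      frame.hidden = true;
      frame.src = `http://192.168.1.${i}/app/`;
      // caveat: iframes fire "load" even for error pages, so a real
      // implementation needs an extra check that the app actually answered
      frame.onload = () => { window.location.href = frame.src; };
      document.body.appendChild(frame);
    }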
Love it! I think this idea has some legs, in that a programmer can build their own f****k, bundling only the pieces that they actually use. I don't see why it shouldn't be used in a production environment... other than that someone on the internet disapproves, a fear many of us suffer from. It's a simple idea & can be easily managed in a codebase.
In the spirit of breaking apart HTMX piece by piece, I created hyop (Hypermedia/Hydration Operation). It weighs in at 61 B but uses the "hyop" attribute. I know, I know, I'm bad... but at least I save bytes.
https://github.com/hyopjs/hyop
I'm going to use some of your concepts, with credit of course... like the snippetware idea among others.
Why? How would it be different from using htmx?
It sounds like "probably, yes" to adding a disclaimer, if only because I rather took it at face value when I saw it posted here on HN, and bookmarked it for investigation later...
HTML native ajax is the right approach, and as I understand it the htmx devs fully support that, but I don't think this demonstrates that htmx is too much trouble for what it offers. It offers considerably more than what's possible here, e.g. DOM morphing, animations, graceful degradation, a rich event model, server-sent events, etc.
You know... we were doing something very similar 20 years ago for a stats dashboard web app: reloading only the DIVs that needed new server-generated content. We didn't even bother to recreate the DOM; we just assigned the loaded content to innerHTML directly.
Perl on the back end and some very tiny JS on the front end. Do I need to tell you that this worked like an absolute charm and was blazing fast? The only considerable downside was that a lot of traffic was indeed going back and forth. But 20 years later, latency is much lower and traffic and CPUs are much cheaper, and I am very happy to see more and more people realize this bare-bones approach was actually a good thing to consider (the author lists the downsides).
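In modern terms the whole trick is a few lines; back then it would have been XMLHttpRequest rather than fetch (the endpoint and element id below are illustrative):

    // load a server-rendered fragment and drop it straight into the DIV
    fetch('/stats/cpu')
      .then(res => res.text())
      .then(html => { document.getElementById('cpu-panel').innerHTML = html; });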
To me, such an approach is much more web-native than the abomination UI frameworks that try to reinvent presentation, marginalizing the browser into nothing more than a drawing surface. And guess what: their primary goal is to save on network latency, which is going down, down, down with every year anyway.
The htmz/htmx approach is indeed much simpler and easier to live with in a large project. It is really sad that we have put so much logic in the front end in recent years...
I think there's a pretty strong argument at this point for this kind of replacing DOM with a response behavior being part of the platform.
I think the first step would be an element that lets you load external content into the page declaratively. There's a spec issue open for this: https://github.com/whatwg/html/issues/2791
And my custom element implementation of the idea: https://www.npmjs.com/package/html-include-element
Then HTML could support these elements being targets of links.
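If I'm reading the package right, usage is along these lines (treat the details as an assumption and check its README):

    <!-- declaratively include a server-rendered partial in the page -->
    <html-include src="/partials/news.html"></html-include>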
When I read your comment I thought "hang on, I remember this discussion from years ago". Looks like the issue indeed dates back to 2017. I remember the idea being shelved over concerns about handling things like scripts and stylesheets, especially in the context of HTML imports (which were the next big thing back when Google's Web Components advocates harassed other JS frameworks for not being Polymer, until HTML imports had to be shelved and Polymer itself ended up shelved in favor of the next big thing Google pushed out).
iframe would be that kind of element, if only they hadn't added the brain-dead srcdoc attribute to it for showing inline content.
I think some of the proposals I see in that thread are kind of interesting, but doing it properly requires a new MIME type for HTML fragments, e.g. text/html-fragment. This is arguably what htmx should also be using for fragments.
I do kind of like the idea of adding this as an attribute to some existing elements, and they show the inner content by default until the target is loaded. Basically like adding an iframe-like rendering mode to existing elements. Then you could implement htmx's hx-boost behaviour just by setting this mode on the body element and using the htmz trick of links naming the body as a target.
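As a purely hypothetical sketch of that (the attribute is invented; today only htmz's iframe trick gives you anything similar):

    <!-- hypothetical: an iframe-like rendering mode on an existing element;
         the inner content shows until a targeted response replaces it -->
    <body id="main" loads="inline">
      <a href="/page2#main">Page 2</a>
    </body>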
In 2001 or so, I was building an HTML based email client. We used a hidden iframe to execute JS to load data from the server and manipulate the DOM with the result. It was not quite as neat and elegant as this—the browser support was not there. However, the basic mechanism was the same.
It warms my heart to see the basic mechanism in such a compact package, without libraries upon libraries and machinations to make things work. This is probably perfect for 90% or so of the use cases where React or similar FE frameworks are used at the moment.
We later used to joke that we used Ajax before Ajax was a thing.
I think I posted this before, but I had an SPA using Spring Webflow running in DOM nodes (no iframes required), with animation, event handling, CSRF, all back around 2006 or so. The calling HTML was also pure, save for a jQuery include at the top and my app include. The backend used JSP. No back-button issues, no client or server state (I used a DB instead of some of the options Webflow gives you); it was a dream. Completely lost on the company I worked for at the time. I was up and running with a new user flow in half a day or so. From the perspective of the business team, static blocks would suddenly "do stuff" in about half a day.
This is the problem with technology. When it works well and really solves the problem, it is invisible. No one gets promoted for invisible.
Omg! Same story for me! I was working at a billing company, and we used an iframe that we'd reload, with JS inside it that would run and change the DOM. Around the same time as well!
Understanding or not, certain decisions, like overriding the semantic meaning of a hash in a URL, don't seem like working with the platform. A better version would put the target in a `data-` attribute.
But the hash doesn't have a semantic meaning when it is in an href or action attribute, because it is stripped out before the request is sent to the server. The hash only has a meaning when it's in the URL bar. That's kinda the point of this hack.
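Concretely, with an htmz-style link (the IDs are made up):

    <a href="/posts/3#comments" target="htmz">Load comments</a>
    <!-- the wire request is just "GET /posts/3"; the "#comments" part never
         leaves the client, where the iframe's onload reuses it as the CSS
         selector for the swap target -->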
For further size reduction: you don't need the `this.` in the inline event listener (inline handlers are scoped to their element), so it can be `contentWindow.location.hash` and `contentDocument.body.childNodes` instead of `this.contentWindow.location.hash` and `this.contentDocument.body.childNodes`:

    setTimeout(()=>document.querySelector(contentWindow.location.hash)?.replaceWith(...contentDocument.body.childNodes))

This will shave another 10 bytes off the snippet :D
Although location.hash defaults to the empty string '' when there is no hash, and querySelector('') throws a SyntaxError, so we still need the fallback selector that selects nothing.
Given that this uses `target`, doesn't it mean that unlike htmx you can't easily make this gracefully degrade when JS isn't enabled?
And, yes, I know, saying "when JS isn't enabled" in 2024 is a bit like saying "when the user is on Mars and has a 10 minute RTT" but forgive me for being an idealist.
Yeah, it breaks without JS. You could add the iframe with JS, so that without JS the target would default to a new tab. But the server would still be designed to return HTML fragments, and I never found a way for a server to check whether the originating request is for an iframe or a new tab. So it's not quite graceful degradation.
> I never found a way for a server to check if the originating request is for an iframe or a new tab.
There is no such technique. One way to distinguish is to pick a URL convention and modify the iframe URL (before the hash). For example, add ?iframe=true to the URL, and then have the server check for that. Perhaps more usefully, you could include information about the parent URL, e.g. url += `?parent=${document.referrer}`. Or something.
You could intercept the clicks with JS and add a special header, like htmx does, to return fragments, otherwise fall back to full documents.
Edit: rather than a header, dynamically adding a query parameter to the URL of the element that was clicked would probably fit better with htmz's approach.
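A sketch of that interception (the parameter name is my own convention, not htmz's):

    // progressively tag htmz-targeted links so the server can tell fragment
    // requests apart from direct or new-tab navigations
    document.addEventListener('click', (event) => {
      const link = event.target.closest('a[target=htmz]');
      if (!link) return;
      const url = new URL(link.href);
      url.searchParams.set('fragment', 'true'); // the server checks this
      link.href = url.toString(); // the URL API keeps the #hash intact
    });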
Not even just without JS. If you middle-click to open a link in a new tab, you just get the bare content that was expected to be swapped in. I think that abusing links like this is a far bigger sin than adding a custom attribute.
This is why htmx sends requests by default with an HX-Request header so that the server can distinguish between them and serve different content if need be.
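On an Express-style backend (the route and template names are invented), that check is one line, since htmx sends "HX-Request: true":

    app.get('/contacts', (req, res) => {
      const fromHtmx = req.get('HX-Request') === 'true';
      // fragment for htmx swaps, full document for direct navigation
      res.render(fromHtmx ? 'contacts-rows' : 'contacts-page');
    });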
Could you describe your ideals for why websites should gracefully degrade without JS enabled? It’s not an unpopular view on HN, but from my perspective as a web developer, JS is a part of the browser just like HTML, and there’s no reason for a website to work if you’ve disabled a part of the browser.
I suspect “doesn’t have JavaScript” is being used as a proxy for a lot of other ideals that I do understand, like “should work on as many devices as possible” but that’s a rough correlation and doesn’t make the use of JS inherently bad.
So I've been in numerous situations where having JavaScript enabled was either undesirable or impossible. Granted, it's my own fault for using strange platforms like a Nokia N900 or whatever, with strange web browsers, but it's still nice when interactive websites continue to work even in contexts where JavaScript isn't a thing. I always thought of JavaScript as something you use to "upgrade" the web experience, not to build it, although obviously there are some things you simply can't do without JavaScript and for which there exists literally no alternative. There are also situations where JavaScript is a liability. See, for example, the Tor browser.
In particular, my ideal is that all functionality which can work without JavaScript should work without JavaScript. So, for example, I am not expecting someone to implement a drag-and-drop calendar without JS, but there's no reason why editing a calendar item should fundamentally require JS.
That being said, I know this is an idealist position; most companies that develop web applications simply don't care about these niche use cases where JS isn't an option, and won't design their web applications to accommodate them, in order to save on development costs and time. But while I am not really a web developer, whenever I do deal with the web, I usually take a plain-HTML/CSS-first approach and add JavaScript later.
I browse the web without JS. It is a fast, easy way to load websites. Some sites with heavy app-interaction features need JS, and that is fine. It is the other sites, the ones that use JS to figure out whether their users have read more than 2 articles, that are the problem.
Graceful degradation is a required development skill. Sites need to allow their pages to work, at least in limited fashion, without JS. JS should only be a layer added for interactivity, animation, and app construction. Otherwise, workarounds are great.
Is there a way to make this degrade gracefully? YES!! Instead of using hash names, use '?id=example', and let the script in the frame figure out the real destination of the output; otherwise, the page will load the full site. Also use a script to add the "target" attribute to links, as sketched below.
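A sketch of that scheme (all names illustrative; it assumes a hidden <iframe name="frame1"> and a server that always returns full pages):

    // 1) in the parent page: retarget links only when JS actually runs,
    //    so without JS they remain ordinary full-page links
    for (const link of document.querySelectorAll('a[href*="id="]')) {
      link.target = 'frame1';
    }

    // 2) inside the iframe, after each load: find the element named by
    //    ?id=... in the full page the server returned, swap it into the parent
    const id = new URLSearchParams(location.search).get('id');
    const fresh = id && document.getElementById(id);
    if (fresh) parent.document.getElementById(id)?.replaceWith(fresh);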
Because there's a case for a very useful Web without running a Turing-complete language on the visitor's end.
If you just want to consume text, images, audio, and video, follow links, and fill in forms (and that's quite a lot, and pretty awesome already), you shouldn't need JavaScript.
Chances are I’m on your website for information, mostly text content. Which really doesn’t require JavaScript.
So then, most JavaScript on the web is enabling revenue generation rather than improving the user experience. So yeah, disabling JS is a proxy for, “don’t waste my time.”
But I agree that it’s not inherently bad, but just mostly bad (for the user.)
A reason people might want to have JavaScript disabled is the immense tracking potential that JavaScript carries, which can't easily be safeguarded against.
The people who do disable JavaScript completely are admittedly few and far between, but are, I would assume, more common among the Hacker News crowd.
If you had told anyone in the year 2000 that it would become standard practice to blindly execute programs sent to you by anonymous people you don't know, you'd have gotten a lecture on why that's stupid.
But in 2024 it's standard accepted practice. And that standard has made it so browser developers have to literally prevent the user themselves from having control over their browser because it's too dangerous to do otherwise.
The problem with the entire commercial web application ethos, despite it being a perfect fit for for-profit situations, is that it forces the rest of the web stack to gimp itself and centralize itself, CA TLS only, etc, just to keep the auto-code executing people secure. The one horribly insecure user behavior (auto executing random code) takes over from all other use cases and pushes them out.
So we end up with very impressive browsers that are basically OSes but no longer function as browsers. And that's a problem. Design your own sites so that they progressively enhance when JS is available. When work requires you to make bad JS sites, do so, but only in exchange for money.
Supporting as many devices as possible, breaking RESTful APIs, etc.
A JS engine presupposes many, many things (too many) about a client: implicit assumptions like "this device has enough power and bandwidth to process my <insert length> JavaScript, perform appropriate security checks on it, handle it fast enough to service other requests in a timely manner, and also not let it take over the user's entire computer".
Accessibility means you should presume the fewest things possible. There's no sound, no visuals, no powerful CPU (maybe there isn't even a CPU); the device is the only working device, from 20 years ago, performing some critical function (e.g. government poverty assistance); there are only a couple of minutes of power left; the internet is not available, or not quickly; etc.
You should never assume you have JS, period, and features should gracefully degrade in the absence of a JS engine.
It's a lot harder to create bitcoin miners or other malicious things without JS, so I keep scripts disabled by default and only enable them where it makes sense.
Reimplementing browser functionality in JS also often breaks expectations, as things don't work quite the same across browsers. It also means browser extensions are less likely to be able to deal with the content.
Essentially, it's like wanting your documents in PDF format even though executables are also part of the PC platform and someone could just ship you a custom executable that renders their document instead. To me JS is just as absurd.
It breaks without JS but many JS blocker extensions can be configured to always allow JS from the host serving the page. For example NoScript on my phone has the "Temporarily set top-level sites to TRUSTED" option.
With only 181 bytes it could even be included in the page. It's much less than the sum of the meta tags on many sites.
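For reference, the whole thing inlined looks roughly like this (reconstructed from the snippets quoted upthread, with the ||null fallback standing in for the "select none" selector discussed above; the published htmz source may differ in details):

    <iframe hidden name=htmz onload="setTimeout(()=>document.querySelector(contentWindow.location.hash||null)?.replaceWith(...contentDocument.body.childNodes))"></iframe>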
I had a dream yesterday, that scientists managed to create a new kind of EMP bomb. This bomb was unusual in that, by varying the level of EM discharge in the payload (dreamy-sciency explanation), it could target all hardware created after a certain point in time.
I had access to their facility (dreams being as dreams often are), and managed to set it for 1992, and right when I was about to press the button, I woke up.
It RUINED my day.
GitHub itself used pjax [1] heavily, and I liked those interactions far more than the newer React ones; the HTML was much more semantic, for one, with middle-click always being respected.

[1]: https://github.com/defunkt/jquery-pjax
I don't expect I would ever use it, but I think it's excellent.
It's a fun project overall, though.
https://addons.mozilla.org/en-US/firefox/addon/disable-javas...