Readit News
obpe · 2 years ago
It's kinda funny to me that many of the "pros" of this approach are the exact reasons so many abandoned MPAs in the first place.

For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.

Also, there was a push to move the shitty code from the server to the client to free up server resources and prevent your servers from ruining the experience for everyone.

We moved away from MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.

But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.

PaulHoule · 2 years ago
I remember that all the web shops in my town that did Ruby on Rails sites efficiently felt they had to switch to Angular around the same time, and they never regained their footing in the Angular age, although it seems they can finally get things sorta-kinda done with React.

Client-side validation is used as an excuse for React, but we were doing client-side validation in 1999 with plain ordinary Javascript. If the real problem was “don't write the validation code twice”, surely the answer would have been some kind of DSL that code-generated or interpreted the validation rules for the back end and front end, not the fantastically complex Rube Goldberg machine of modern Javascript: wait wait wait wait and wait some more for the build machine, then users wait wait wait wait wait for React and 60,000 files' worth of library code to load, then wait wait wait wait even more for completely inscrutable reasons later on (e.g. it's amazing how long you have to wait for Windows to delete the files in your node_modules directory).
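
A sketch of what such a DSL could look like — a hypothetical declarative rule set interpreted by both the client and the server (not any particular library):

```javascript
// Hypothetical validation DSL: one declarative rule set,
// interpreted unchanged on both the client and the server.
const userRules = {
  username: { required: true, minLength: 3, maxLength: 32 },
  email:    { required: true, pattern: /^[^@\s]+@[^@\s]+\.[^@\s]+$/ },
};

// A single interpreter shared by both sides.
function validate(rules, input) {
  const errors = {};
  for (const [field, rule] of Object.entries(rules)) {
    const value = String(input[field] ?? "");
    if (rule.required && value === "") errors[field] = "required";
    else if (rule.minLength && value.length < rule.minLength) errors[field] = "too short";
    else if (rule.maxLength && value.length > rule.maxLength) errors[field] = "too long";
    else if (rule.pattern && !rule.pattern.test(value)) errors[field] = "invalid format";
  }
  return errors;
}
```

The client would run `validate` on keystrokes for instant feedback; the server would run the same rules on submit, plus the checks only it can do (uniqueness, auth).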

berkes · 2 years ago
Even worse: Client-side validation and server-side validation (and database integrity validation) are all their own domains! I call all of these "domain logic" or domain validation just to be sure.

Yes, they overlap. Sure, you'll need some repetition and maybe, indeed, some DSL or tooling to share some of the overlapping ones across the boundaries.

But no! They are not the same. A "this email is already in use" is server-side (it depends on the case). A "this doesn't look like an email address, did you mean gmail.com instead of gamil.com" is client-side, and a "unique-key-constraint: contactemail already used" is even further down.

My point is, that the more you sit down (with customers! domain experts!) and talk or think all this through, the less it's a technical problem that has to be solved with DSLs, SPAs, MPAs or "same language for backend and UI". And the more you (I) realize it really often hardly matters.

You quite probably don't even need that email-uniqueness validation at all. In any layer. If you just care to speak to the business.
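
That gamil.com check is a purely client-side nicety, for instance; a minimal sketch (the typo table here is made up for illustration):

```javascript
// Client-side-only UX: suggest a correction for common email-domain typos.
// Nothing here is security-relevant, so the server never needs this code.
const DOMAIN_TYPOS = {
  "gamil.com": "gmail.com",
  "gmial.com": "gmail.com",
  "hotmial.com": "hotmail.com",
};

function suggestEmail(email) {
  const at = email.lastIndexOf("@");
  if (at < 0) return null;
  const fixed = DOMAIN_TYPOS[email.slice(at + 1).toLowerCase()];
  return fixed ? `${email.slice(0, at + 1)}${fixed}` : null; // null: no suggestion
}
```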

v0idzer0 · 2 years ago
It really wasn't about client-side validation or UX at all. You can have great UX with an MPA or SPA. Although I do think it's slightly easier in an SPA if you have a complex client like a customizable dashboard.

Ultimately it’s about splitting your app into a server and client with a clear API boundary. Decoupling the client and server means they can be separate teams with clearly defined roles and responsibilities. This may be worse for small teams but is significantly better for large teams (like Facebook and Google, who started these trends).

One example is your iOS app can hit the same API as your web app, since your server is no longer tightly coupled to html views. You can version your backend and upgrade your clients on their own timelines.

ratorx · 2 years ago
Or even skip the DSL and use JS for both client and server, just independently. Validation functions can/should be simple, pure JS that can be imported from both.
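
For example (file and function names hypothetical), a pure module with no DOM or server dependencies, so either side can import it:

```javascript
// validation.js — pure functions only, so both the browser bundle
// and the Node server can import this file unchanged.
function usernameError(name) {
  if (name.length < 3) return "too short";
  if (name.length > 32) return "too long";
  if (!/^[a-z0-9_]+$/i.test(name)) return "letters, digits and _ only";
  return null; // valid
}

// Client: show usernameError(input.value) next to the field as the user types.
// Server: reject the request when usernameError(req.body.username) !== null.
```
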
hdjjhhvvhga · 2 years ago
> felt they had to switch to Angular about the same time and they never regained their footing in the Angular age

And in this case what actually happened is exactly what we had expected would happen: tons of badly-written Angular apps that need to be maintained for the foreseeable future, because at this point nobody wants to rewrite them, so they become Frankensteins nobody wants to deal with.

moritzwarhier · 2 years ago
> then wait wait wait wait even more for completely inscrutable reasons later on. (e.g. amazing how long you have to wait for Windows to delete the files in your node_modules directory)

As far as I know, Windows Explorer has been extremely slow for this kind of operation for ages. It's not even explainable by requiring a file list before starting the operation; I have no idea what it is about Windows Explorer, it's just broken for such use cases.

Just recently, I had to look up how to write a robocopy script because simply copying a 60GB folder with many files from a local network drive was unbelievably slow (not to mention resuming failed operations). The purpose was exactly what I wrote: copying a folder in Windows Explorer.

What does this have to do with React or JavaScript?

nsonha · 2 years ago
Can you argue a bit more genuinely and not pick on such a minor point as validation? I think the parent mentioned other points. How about the logical shift to let the client do client things, and the server do server things? A server concatting HTML strings for billions of users over and over again seems pretty stupid.
beezlewax · 2 years ago
But you can download another node package from npm to delete those other npm packages: npkill. For whatever reason, this is, as they say in the JavaScript world, "blazingly fast".
cutler · 2 years ago
Wasn't Ember the idiomatic choice before React? I don't remember Angular being that popular with Rails devs generally.
RHSeeger · 2 years ago
> We moved away for MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.

Plus we now get the benefit of people trying to "replace" built-in browser functionality with custom code, either

The SPA broke it... Back button broken and a buggy custom implementation is there instead? Check.

or

They're changing things because they're already so far from default browser behavior, why not? ... Scrolling broken or janky because the developer decided it would be cool to replace it? Check.

There is a time and place for SPA (mail is a great example). But using them in places where the page reload would load in completely new content for most of the page anyways? That's paying a large cost for no practical benefit; and your users are paying some of that cost.

yellowapple · 2 years ago
> There is a time and place for SPA (mail is a great example). But using them in places where the page reload would load in completely new content for most of the page anyways? That's paying a large cost for no practical benefit; and your users are paying some of that cost.

Yep. It's bonkers to me that a page consisting mostly of text (say, a Twitter feed or a news article) takes even so much as a second (let alone multiple!) to load on any PC/tablet/smartphone manufactured within the last decade. That latency is squarely the fault of heavyweight SPA-enabling frameworks and their encouragement of replacing the browser's features with custom JS-driven versions.

On the other hand, having to navigate a needlessly-elongated history due to every little action producing a page load (and a new entry in my browser's history, meaning one more thing to click "Back" to skip over) is no less frustrating. Neither is wanting to reload a page only for the browser to throw up scary warnings about resending information simply because that page happened to result from some POST'd form submission.

Everything I've seen of HTMX makes it seem to be a nice middle-ground between full-MPA v. full-SPA: each "screen" is its own page (like an MPA), but said page is rich enough to avoid full-blown reloads (with all the history-mangling that entails) for every little action within that page (like an SPA). That it's able to gracefully downgrade back to an ordinary MPA should the backend support it and the client require it is icing on the cake.

I'm pretty averse to frontend development, especially when it involves anything beyond HTML and CSS, but HTMX makes it very tempting to shift that stance from absolute to conditional.

com2kid · 2 years ago
> The SPA broke it... Back button broken and a buggy custom implementation is there instead? Check.

MPAs break back buttons all the damn time, I'd say more often than SPAs do.

Remember the bad old days when websites would have giant text "DO NOT USE YOUR BROWSER BACK BUTTON"? That is because the server had lots of session state on it, and hitting the browser back button would make the browser and server be out of sync.

Or the old online purchase flows where going back to change the order details would completely break the world and you'd have to re-enter all your shipping info. SPAs solve that problem very well.

Let's think about it a different way.

If you are making a phone app, would you EVER design it so that the app downloads UI screens on demand as the user explores the app? That'd be insane.

robertoandred · 2 years ago
Neither broken back buttons nor messy scrolling are unique to SPAs. You're just talking about bad websites.
codeflo · 2 years ago
I mean, there's nothing about an SPA that forces you to break the back button, to the contrary, it's possible to have a very good navigation experience and working bookmarks. But it takes some thinking to get it right.
jfvinueza · 2 years ago
Mail is not a good example. Why would you want to read a collection of documents through a Single Page interface? Gmail was a fantastic improvement over Hotmail and Yahoo, and it provided UX innovations we still haven't caught up with, yes, but MPAs are naturally more suited for reading and composing documents. Overriding a perfectly clear HTML structure with javascript should be reserved for web experiences that are not documents: that is, videogames, editors, etc. (Google *Maps* is a good example). The quality of the product usually depends more on how it was implemented than on the underlying technology, but as I see it: if it's a Document, if the solution has a clear paper-like analogue, HTML is usually the best way to code it, structure it, express it. Let a web page be a web page and let the user browse through it with a web browser. If it's not, well, alright, let's import OpenGL.
RHSeeger · 2 years ago
There's been a fair amount of discussion on this thread, which left me wanting to clarify my comments...

It is entirely possible to have an MPA that makes calls to the back end to retrieve more data, especially for things like a static (cached) page with some dynamic content on it. My problem is when people convert an entire site to a Single Page (SPA). When I click to go from the "home page" to a "subsection page", it makes sense to load the entire page. When I click to "see more results" for the list of items on a page, it seems reasonable to load them onto the page.

Side note: If I scroll down the page a few times and suddenly there's 8 items in the back queue, you're doing it wrong. That drives me bonkers.

detaro · 2 years ago
my favorite example is dev.to. A (web-)developer-centric site, open source nowadays. In a similar discussion years ago it was praised as a well-done SPA. Every time the topic comes back up I spend 5 minutes clicking around, and every time I find some critical breakage: a page broken during a transition, a page not being the one the URL bar says it is, ... because having a blogging site just be pages navigated by the browser was too easy.
fridgemaster · 2 years ago
I fail to see how HTMX could be the "future". It could have been something useful in the 2000s, back when browsers had trouble processing the many MBs of JS of an SPA. Nowadays SPAs run just fine, the average network bandwidth of a user is full-HD-video tier, and even mobile microprocessors can crunch JS decently fast. There is no use case for HTMX. Fragmented state floating around in requests is also a big, big problem.

The return of the "backend frontender" is also not happening. The bar is now much higher in terms of UX and design, and for that you really need frontend specialists. Gone are the days when the backend guys could craft a few HTML templates and call it a day, knowing the design wouldn't change much, and then go back to DB work.

duxup · 2 years ago
I often work on an old ColdFusion application.

It's amusing that for a long time the response was "oh man that sounds terrible".

Now it is "oh hey that's server side rendered ... is it a new framework?".

The cycle continues. I end up writing all sorts of things, and there are times when I'm working on one and think "this would be better as Y" and then on Y "oh man, this should be Z". There are days where I just opt for old ColdFusion... it is faster for some things.

Really though there's so many advantages to different approaches, the important thing is to do the thing thoughtfully.

giraffe_lady · 2 years ago
I also switch back and forth between two large projects written in different decades and it definitely gives an interesting perspective on this. Basically every time I'm in php I go "oh yeah I see why we do react now" and every time I'm in react I go "oh right I see why php still exists."
pengaru · 2 years ago
> there are times when I'm working on one and think "this would be better as Y" and then on Y "oh man this should be Z".

How much of that is just a garden variety "grass is always greener on the other side" effect?

> the important thing is to do the thing thoughtfully.

And finish! Total losses are still total losses no matter how thoughtfully done.

ksec · 2 years ago
I often wonder if someday we could see WebObject or ColdFusion being open sourced.
no_wizard · 2 years ago
ActionScript is basically ES6 too isn't it?
zelphirkalt · 2 years ago
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.

Node does not absolve you of this. Any important verification still needs to be done on the server side, since JS on the client side cannot be trusted not to be manipulated. JS on the client side was of course possible before NodeJS; NodeJS did not add anything regarding where one must verify inputs. Relying on things being checked in the frontend/client side is just writing insecure websites/apps.

> We moved away for MPAs because they were bloated, slow and difficult to work with. SPAs have definitely become what they sought to replace.

I would claim they became even more so than the thing they replaced. Basically any progress in bandwidth or resources is eaten by more bloat.

obpe · 2 years ago
>Node does not absolve from this. Any important verification still needs to be done on the server side, since any JS on the client side cannot be trusted to not be manipulated. JS on the client side was of course possible before NodeJS. NodeJS did not add anything there regarding where one must verify inputs. Relying on things being checked in the frontend/client-side is just writing insecure websites/apps.

Yeah, that was my point. With Node you can write JS to validate on both the client and server. In the article, they suggest you can just do a server request whenever you need to validate user input.

>Basically most of any progress in bandwidth or ressources is eaten by more bloat.

In my experience, the bloat comes from Analytics and binary data (image/video) not functional code for the SPA. Unfortunately, the business keeps claiming it's "important" to them to have analytics... I don't see it but they pay my salary.

ladberg · 2 years ago
I feel like you misunderstood the OP, they are claiming that Node allows you to reuse the same code to do validation on both the client and the server. By definition that means they are also doing server-side validation, and they are not relying on it being checked on the frontend.
pphysch · 2 years ago
Client side validation is for UX.

Server side validation is for security, correctness, etc.

They are different features that require different code. Blending the two is asking for bugs and vulnerabilities and unnecessary toil.

The real reason that SPAs arose is user analytics.

emodendroket · 2 years ago
I don't understand why that should be the case. There are a lot of checks that end up needing to be repeated twice with no change in logic (e.g., username length needs to be validated on both ends).
blowski · 2 years ago
> The real reason that SPAs arose is user analytics.

Can you go into that a bit? I don't really understand what you mean.

tabtab · 2 years ago
But if you don't blend the two, then you have a DRY violation. Someone should only have to say a field (column) is required in one and only one place, for example. The framework should take care of the details of making sure both the client and the server check.

I myself would like to see a data-dictionary-driven app framework. Code annotations on "class models" are hard to read and too static.

gpapilion · 2 years ago
I actually tend to think of it as adding feature degradation and handling microservice issues. It always seemed better, and more graceful, to have the client manage that.
adrr · 2 years ago
I don't understand how an SPA is different from a vanilla web app in terms of user analytics. A beacon is a beacon, whether it's an img tag with a 1x1 transparent gif or an ajax call.

Also, validation is usually built on both client and server for the same things. Like if you have password complexity validation: it's both in the UI and on the server, otherwise it will be a very terrible UX.

obpe · 2 years ago
I have never heard this before. Can you elaborate on the differences? What do you validate on the client side that you don't on the server and vice versa?
pier25 · 2 years ago
I agree but one important point to consider is the dev effort of making a proper SPA which is not a very common occurrence.

"The best SPA is better than the best MPA. The average SPA is worse than the average MPA."

https://nolanlawson.com/2022/06/27/spas-theory-versus-practi...

0cf8612b2e1e · 2 years ago
Can we even weigh that statement? The average SPA is significantly worse than the average MPA. There is so much browser functionality that needs to be replicated in an SPA that few teams have the resources or talent to do a decent job.
seti0Cha · 2 years ago
The nice thing about htmx is it gives a middle ground between the two. Build with the simplicity of an MPA while getting a lot of the nice user experience of an SPA. Sure, you don't get all the power of having a full data model on the client side, but you really don't need that for most use cases.
lenkite · 2 years ago
Extend that statement with "Only True Web Gods can create the Best SPA".
simplotek · 2 years ago
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.

What? No.

The whole point of Node was a) being able to leverage javascript's concurrency model to write async code in a trivial way, and b) the promise that developers would not be forced to onboard to entirely different tech stacks on frontend, backend, and even tooling.

There was no promise to write code once, anywhere. The promise was to write JavaScript anywhere.

callahad · 2 years ago
That's the reasoned take, and yet I have strong and distinct memories of Node being sold on the basis of shared code as early as 2011. Much of the interest (and investment) in Meteor was fueled by its promise of "isomorphic JavaScript."
amiga-workbench · 2 years ago
>For instance, a major selling point of Node was running JS on both the client and server so you can write the code once

I mean, I'm using Laravel Livewire quite heavily for forms, modals and search. So effectively I've eliminated the need for writing much front-end code. Everything that matters is handled on the server. This means the little Javascript I'm writing is relegated to frilly carousels and other trivial guff.

lucasyvas · 2 years ago
You're on the money with this assessment. It's all bandwagon hopping without any consideration for reality.

Also, all these things the author complains about are realities of native apps, which still exist in massive numbers especially on mobile! I appreciate that some folks only need to care about the web, but declaring an architectural pattern as superior - in what appears to be a total vacuum - is how we all collectively arrive at shitty architecture choices time and time again.

Unfortunately, you have to understand all the patterns and choose when each one is optimal. It's all trade-offs - HTMX is compelling, but basing your entire architectural mindset around a library/pattern tailored to one very specific type of client is frankly stupid.

megalord · 2 years ago
> to one very specific type of client is frankly stupid

However, I see clients that need just basic web functionality (e.g. CRUD operations, building something basic) as more prevalent than those that need instant in-app reactivity, animations, and so on (React and the SPA ecosystem).

Nowadays it's exactly the opposite: every web developer assumes an SPA as the default option, even for these simple CRUD examples.

marcosdumay · 2 years ago
> But that isn't because of the technology

Technically, the technology supports doing either of them right. In practice, doing good MPAs requires offloading as much as you can onto the mature and well-developed platforms that handle them, while doing good SPAs requires overriding the behavior of your immature and not-thoroughly-designed platforms at nearly every point and handling it right yourself.

Technically, it's just a difference in platform maturity, and those things tend to correct themselves given some time.

In practice, almost no SPA has worked even minimally well in more than a decade.

jonahx · 2 years ago
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.

While I am a fan of MPAs and htmx, and personally find the dev experience simpler, I cannot argue with this.

The high-order bit is always the dev's skill at managing complexity. We want so badly for this to be a technology problem, but it's fundamentally not. Which isn't to say that specific tech can't matter at all -- only that its effect is secondary to the human using the tech.

wrenky · 2 years ago
> , it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again

It brings a tear of joy to my eye honestly. The circle of life continues, and people always forget people are bad at programming (myself included).

danielvaughn · 2 years ago
100%. Saying that [technology x] will remove complexity is like saying that you've designed a house that can't get messy. All houses can be messy, all houses can be clean. It depends on the inhabitants.
elliottinvent · 2 years ago
True, but well-designed houses that have natural places for the things a well-functioning home needs are far easier to keep clean and tidy.
randomNumber7 · 2 years ago
Yes, but some technologies make it easier (or harder) to keep everything clean.

Like in my opinion you can write clean code in C, but since you don't even have a string type, it shepherds you into doing nasty stuff with char*... etc.

chasd00 · 2 years ago
I remember the hype about JavaScript on the server (Node) being that front-end devs didn't have to know/learn a different language to write backend code. Not so much writing code once, but not having to write JavaScript for the client side and then switch to something else for the server side.
erikerikson · 2 years ago
I remember it being both and then some...

[edit: both comprising shared code between client and server, as well as, reduced barrier to server-side contribution, and then some including but not limited to the value of the concurrency model, expansive (albeit noisy) library availability, ...]

com2kid · 2 years ago
> Also, there was a push to move the shitty code from the server to the client to free up server resources and prevent your servers from ruining the experience for everyone.

People forget how bad MPAs were, and how expensive/complicated they were to run.

Front-end frameworks like Svelte let you write nearly pure HTML and JS, and then the backend just supplies data.

Having the backend write HTML seems bonkers to me, instead of writing HTML on the client and debugging it, you get to write code that writes code that you then get to debug. Lovely!

Even in more complex frameworks like React, you have tools like JSX that map pretty directly to HTML, and in my experience a lot of the hard-to-debug problems come up when the framework tries to get smart and doesn't just stupidly pop out HTML.

roguas · 2 years ago
We decided for fun to do a small project in htmx (we had to pick something, and one person opted for it strongly). Yeah, I was cringing and still am. I fully support the frontend/backend split status quo.

For stuff that is uncomplicated I much prefer Svelte, as it still keeps the wall between frontend and backend but lets you do a lot of "yolo frontend" that is short-lived and gets fixed. I run a small startup on the side: Svelte FE + Clojure BE. It works great, as I have a different tolerance for crap in the frontend (if I can fix something with style="", I do, and I don't care). I often hotfix a lot of stuff in the front where I can, just deploy, and return later to find a better solution that involves some changes in the backend.

I can't imagine that for moving a button I would have to do a deployment dance for the whole app, which in my case has 3 components (where one is distributed and requires strict backwards compat).

chubot · 2 years ago
Well at least the shitty MPAs will run on other people's servers, rather than shitty SPAs running on my phone and iPad

FWIW I turned off JavaScript on my iPad a couple years ago ... what a relief!

I have nothing against JS, but the sites just became unusably slow

foul · 2 years ago
The demonstration in the OP is dumb, or targeted at React-ists. You can, with HTMX, do the classic AJAX submit with offline validation.

In the last years, at every layer of web development, what I saw was that a big smelly pile of problems with bad websites and webapps, be it MPA or SPA, was not a matter of bad developers on the product, but more a problem of bad, sometimes plain evil, developers of systems sold to other developers to build their products upon. Boilerplate for apps, themes, and ready-made app templates are largely garbage, bloat, and prone to supply chain attacks of every sort.

hombre_fatal · 2 years ago
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.

(I'm not actually arguing with you, just thinking out loud)

This is often repeated, but I don't think it's even close to the primary reason.

The primary reason you build JS web clients is for the same reason you build any client: the client owns the whole client app state and experience.

It's only a fluke of the web that "MPA" even means anything. While it obviously has its benefits, we take for granted how weird it is for a server to send UI over the wire. I don't see why it would be the default to build things that way except habit. It makes more sense to look at MPA as a certain flavor of optimization and trade-offs, imo, which is why defaulting to MPA over SPA never made sense to me now that SPA client tooling has come such a long way.

For example, SPA gives you the ability to write your JS web client the same way you build any other client instead of this weird thing where a server sends an initial UI state over the wire and then you add JS to "hydrate" it, and then ensuring the server and client UIs are synchronized.

Htmx has downsides similar to MPAs, since you need to be sure that every server endpoint sends an HTML fragment that syncs up with the rest of the client UI's assumptions. Something as simple as changing a div's class name might incur HTML changes across many HTML-sending API endpoints.
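
A sketch of that coupling, assuming an Express-style server (the route and class names here are made up): every endpoint that touches the list has to share one fragment renderer, because otherwise a markup change must be repeated per endpoint.

```javascript
// One fragment renderer shared by every endpoint that returns this widget.
// Renaming the "todo-item" class means re-testing each of those endpoints.
function renderTodoItem(todo) {
  return `<li class="todo-item" id="todo-${todo.id}">` +
         `${todo.done ? "[x]" : "[ ]"} ${todo.title}</li>`;
}

// Hypothetical htmx-facing routes, both emitting the same fragment shape:
// app.post("/todos", (req, res) => res.send(renderTodoItem(created)));
// app.patch("/todos/:id", (req, res) => res.send(renderTodoItem(updated)));
```

(In a real app you'd also HTML-escape todo.title; omitted to keep the sketch short.)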

Anyways, client development is hard. Turns out nothing was a panacea and it's all just trade-offs.

MetaWhirledPeas · 2 years ago
> all the devs writing shitty MPAs are now writing shitty SPAs

This pretty much sums it up. There is no right technology for the wrong developer.

It's not about what can get the job done, it's about the ergonomics. Which approach encourages good habits? Which approach causes the least amount of pain? Which approach makes sense for your application? It requires a brain, and all the stuff that makes up a good developer. You'll never get good output from a brainless developer.

croes · 2 years ago
>For instance, a major selling point of Node was running JS on both the client and server so you can write the code once.

You wrote it once before, too. With NodeJS you have JavaScript on both sides; that's the selling point. You still have server and client code, and you can write an MPA with NodeJS.

Capricorn2481 · 2 years ago
> For instance, a major selling point of Node was running JS on both the client and server so you can write the code once. It's a pretty shitty client experience if you have to do a network request for each and every validation of user input.

These are two different things and I don't see how they're related. You don't need code sharing to do client side navigation. And you should always be validating on the backend anyway. Nothing is stopping an MPA from validating on the client, whether you can do code sharing or not.

Spivak · 2 years ago
> prevent your servers from ruining the experience for everyone.

This never panned out because people are too afraid to store meaningful state on the client. And you really can't, because of (reasonable) user expectations. Unlike with a Word document, people expect to be able to open word.com and have all their stuff, and have n simultaneous clients open that don't step on one another.

So to actually do anything you need a network request, but now it's disposable-stateful: the client kinda holds state, but you can't really trust it and have to constantly refresh.

guggle · 2 years ago
> a major selling point of Node was running JS on both the client and server so you can write the code once

Yes... but some people like me just don't like JS, so for us that was actually a rebuttal.

kitsunesoba · 2 years ago
> But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again.

I think the root cause of this is lack of will/desire to spend time on the finer details, either on the part of management who wants it out the door the second it's technically functional or on the part of devs who completely lose interest the second that there's no "fun" work left.

Aeolun · 2 years ago
> SPAs have definitely become what they sought to replace.

Not sure about that. SPAs load 4MB of code once, then only data.

Now look at a major news front page, which loads 10MB for every article.

bcrosby95 · 2 years ago
A pro can be a con, and vice versa. The reason why you move to a SPA might be the reason why you move away from it. The reason why you use sqlite early on might be the reason you move away from it later.

A black & white view of development and technology is easy but not quite correct. Technology decisions aren't "one size fits all".

onion2k · 2 years ago
But that isn't because of the technology, it's because all the devs writing shitty MPAs are now writing shitty SPAs. If this becomes popular, they will start writing shitty MPAs again. Nothing about this technology will stop that.

This is only sort of true. The problem can be mitigated to a large extent by frameworks; as the framework introduces more and more 'magic' the work that the developer has to do decreases, which in turn reduces the surface area of things that they can get wrong. A perfect framework would give the developer all the resources they need to build an app but wouldn't expose anything that they can screw up. I don't think that can exist, but it is definitely possible to reduce places where devs can go astray to a minimum.

And, obviously, that can be done on both the server and the client.

I strongly suspect that as serverside frameworks (including things that sit in the middle like Next) improve we will see people return to focusing on the wire transfer time as an area to optimize for, which will lead apps back to being more frontend than backend again. Web dev will probably oscillate back and forth forever. It's quite interesting how things change like that.

tomca32 · 2 years ago
Unfortunately, developers often write code in a framework they don't know well so they end up fighting the framework instead of using the niceties it provides. The end result being that the surface area of things that can go wrong actually increases.
CSSer · 2 years ago
That oscillation probably wouldn't happen if it were possible to be more humble about the scope of the solution and connection to commercial incentives. It's gotten to the point where a rite of passage for becoming a senior developer is waking up to the commercialization and misdirection.

You can see the cracks in Next.js. Vercel, Netlify et. al, are interested in capitalizing on the murkiness (the middle, as you put it) in this space. They promise static performance but then push you into server(less) compute so they can bill for it. This has a real toll on the average developer. In order for a feature to be a progressive enhancement, it must be optional. This is orthogonal to what is required for a PaaS to build a moat.

All many people need is a pure, incrementally deployed SSG with a robust CMS. That could exist as a separate commodity, and at some points in the history of this JAMStack/Headless/Decoupled saga it has come close (excluding very expensive solutions). It's most likely that we need web standards for this, even if it means ultimately being driven by commercial interests.

halfcat · 2 years ago
> a major selling point of Node was running JS on both the client and server so you can write the code once

But we don’t have JS devs.

We have a team of Python/PHP/Elixir/Ruby/whatever devs and are incredibly productive with our productivity stacks of Django/Laravel/Phoenix/Rails/whatever.

mixmastamyk · 2 years ago
> have to do a network request for each and every validation of user input.

HTML5 solved that to a first approximation client-side. Often later you'll need to reconcile with the database and security, so that will necessarily happen there. I don't see that being a big trade-off today.
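For reference, a minimal sketch of what that first approximation looks like with plain HTML5 validation attributes (field names and the endpoint are invented for the example):

```html
<!-- Built-in client-side validation: the browser blocks submission
     and shows messages, with no JavaScript required. -->
<form method="post" action="/signup">
  <input type="email" name="email" required>
  <input type="text" name="username" required minlength="3"
         pattern="[a-z0-9_]+"
         title="Lowercase letters, digits, and underscores only">
  <button type="submit">Sign up</button>
</form>
```

The server still re-validates on submit, as the comment notes.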

SoftTalker · 2 years ago
Well by definition the "average" team is not capable of writing a "great" app. So it doesn't matter so much what the technology stack is -- most of what is produced is pretty shitty regardless.
mattgreenrocks · 2 years ago
This is the real problem, and why I'd argue we've made little real progress in tooling despite huge investment in it.

The web still requires too much code and concepts to be an enjoyable dev experience, much less one that you can hold in your head. Web frameworks don't really fix this, they just pile leaky abstractions on that require users to know the abstractions as well as the things they're supposed to abstract.

It seems like it is difficult to truly move webdev forward because you have to sell to people who have already bought into the inessential complexity of the web fully. The second you try to take part of that away from them, they get incensed and it triggers loss aversion.

sublinear · 2 years ago
> all the devs writing shitty MPAs are now writing shitty SPAs

drain the swamp man

foobarbecue · 2 years ago
welcome to City Web Design, can a take a order
brushfoot · 2 years ago
I use tech like HTMX because, as a team of one, I have no other choice.

I tried using Angular in 2019, and it nearly sank me. The dependency graph was so convoluted that updates were basically impossible. Having a separate API meant that I had to write everything twice. My productivity plummeted.

After that experience, I realized that what works for a front-end team may not work for me, and I went back to MPAs with JavaScript sprinkled in.

This year, I've looked at Node again now that frameworks like Next offer a middle ground with server-side rendering, but I'm still put off by the dependency graphs and tooling, which seems to be in a constant state of flux. It seems to offer great benefits for front-end teams that have the time to deal with it, but that's not me.

All this to say pick the right tool for the job. For me, and for teams going fuller stack as shops tighten their belts, that's tech like HTMX, sprinkled JavaScript, and sometimes lightweight frameworks like Alpine.

scoofy · 2 years ago
I use htmx on my current project, and it's like a dream. I'm happy to sacrifice a bit of bandwidth to be able to do all the heavy lifting in Python. On top of that, it makes testing much much easier since it turns everything into GET and POST requests.

I'd add a couple of features if I were working on it (making CSS changes and multiple requests to multiple targets standard), but as it stands, it's a pleasure to work in.
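For readers unfamiliar with htmx, a minimal sketch of the pattern being described (endpoint path and IDs are invented): each interaction is an ordinary GET or POST that returns an HTML fragment, which is why it can be exercised with plain HTTP tests:

```html
<!-- Clicking issues GET /todos/list; the HTML fragment the server
     returns is swapped into #todo-list. -->
<button hx-get="/todos/list" hx-target="#todo-list" hx-swap="innerHTML">
  Refresh
</button>
<div id="todo-list"></div>
```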

elliottinvent · 2 years ago
Hop on the Discord, a very active and collaborative community: https://htmx.org/discord
niux · 2 years ago
What framework do you use for Python?
hirako2000 · 2 years ago
I have no love for unnecessarily bloated dependency graphs, but we can't have our cake and eat it too.

Next.js, for example, comes packed with anything and everything one might need to build an app, sitting on the promise of hyperproductivity with "simplicity". Plus, it's made of a set of single-responsibility modules, kind of necessary to build a solve-every-need framework.

And it does that.

A bit like Angular, set to solve everything front-side, with modules not entirely tightly coupled, but coupled enough to form the full solution.

And it did that.

Then we have outliers like React, which stayed away from trying to solve too many things. But the developers have spoken, and soon enough it became packed in with other frameworks. Gatsby etc. And community "plug-ins" to do that thing that dev think should be part of the framework.

And they did that, solved most problems from authentication to animation, free and open source sir, so that developers can write 12 lines of code and ship 3 features per day in some non innovative way, but it works, deployed in the next 36 seconds, making the manager happy as he was wondering how to justify over 100k in compensation going to a young adult who dressed cool and seemed to type fast.

Oh no! dependency hell. I have to keep things maintained, I have to actually upgrade now, LTS expired, security audits on my back, got to even change my code that worked perfectly well and deal with "errors", I can't ship 3 features by the end of today.

We need a new framework!

BiteCode_dev · 2 years ago
Django comes with a lot: auth, caching, csrf protection, an orm, the admin, form workflow, templating, migrations, i18n, and yet doesn't come with thousands of deps.
marcosdumay · 2 years ago
Back in the late 90's and early 00's, armed with the experience of C, C++, Bash, and Perl, everybody knew very clearly that "batteries included" is the correct way to create development tools.

I don't know where the current fashion of minimalism comes from. It doesn't bring simplicity.

justeleblanc · 2 years ago
So Next.js did everything right, but is built upon React that does too much. Okay?
ademup · 2 years ago
Your story sounds similar to mine, and your choice to use HTMX has me motivated to check it out. The sum total of my software supports 5 families' lifestyles entirely on LAMP MPAs with no frameworks at all. Thanks for posting.
jasfi · 2 years ago
I'm using React, and I feel like I can manage as a team of one. But React has a huge community, which means lots of libraries for just about anything you need.

I previously used HTMX for another project of mine, and it worked fine too. I did, however, feel limited compared to React because of what's available.

Deleted Comment

toyg · 2 years ago
Until those libraries rot, or some dependency breaks the mountain of hacks...
willio58 · 2 years ago
Angular is falling off hard in the frontend frameworks race. And I totally agree about how the boilerplate and other things about Angular feels bad to work with. Other frameworks are far easier to build with, to the point where a 1-person team can easily handle them. React is being challenged but still has the biggest community, it's a much better place to start than Angular when evaluating frameworks like this.

All that being said, I'm glad HTMX worked out for you!

fridgemaster · 2 years ago
Angular 2 works fine out of the box, and already provides a good architecture that noobs struggle to come up with in "freestyle" solutions like React. Angular's bi-directional binding is way superior and simpler to use vs React's mono-directional binding: you can just use bound variables, no need to do complicated setState or use abominations like Redux. Vue also has bi-directional binding. Essentially there are many alternatives that are superior to React, which is where it is mostly because of fame and popularity.
halfcat · 2 years ago
> React is being challenged but still has the biggest community

jQuery and PHP have entered the chat

ChikkaChiChi · 2 years ago
I've felt the same way and it's good to hear I'm not alone. I feel like log4j should have been enough of a jolt to push back on dependency hell enough that devs would start writing directly against codebases they can trace and understand. Maybe this is just a byproduct of larger teams not having to do their own DevOps.
MisterSandman · 2 years ago
Angular is notoriously bad for single developers. React is much better, and things like Remix and Gatsby are even better.
_heimdall · 2 years ago
I really can't recommend Gatsby to anyone at this point. The sale to Netlify was the final nail in the coffin of Gatsby; the entire business was sold off only for the perceived value of Valhalla.
nickisnoble · 2 years ago
Svelte is even betterer
Bellend · 2 years ago
I'm a single developer and its fine. (5 years in).
stanmancan · 2 years ago
Have you taken a look at Elixir/Phoenix? I've recently made the switch and I find it incredibly productive as a solo developer.
fridgemaster · 2 years ago
Just pick a lightweight web framework, and freeze the dependencies. I don't see the problem.
recursivedoubts · 2 years ago
i am the creator of htmx, this is a great article that touches on a lot of the advantages of the hypermedia approach (two big ones: simplicity & it eliminates the two-codebase problem, which puts pressure on teams to adopt js on the backend even if it isn't the best server side option)

hypermedia isn't ideal for everything[1], but it is an interesting & useful technology and libraries like htmx make it much more relevant for modern development

we have a free book on practical hypermedia (a review of concepts, old web 1.0 style apps, modernized htmx-based apps, and mobile hypermedia based on hyperview[2]) available here:

https://hypermedia.systems

[1] - https://htmx.org/essays/when-to-use-hypermedia/

[2] - https://hyperview.org/

tkgally · 2 years ago
I didn’t know what HTMX was and couldn’t figure it out from the comments here, so I went to htmx.org. This is what I saw at the top of the landing page:

> introduction

> htmx gives you access to AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext

> htmx is small (~14k min.gz’d), dependency-free, extendable, IE11 compatible & has reduced code base sizes by 67% when compared with react

This tells me what htmx does and what some of its properties are, but it doesn’t tell me what htmx is! You might want to borrow some text from your Documentation page and put something like the following at the top of your homepage:

“htmx is a dependency-free, browser-oriented javascript library that allows you to access modern browser features directly from HTML.”

fridgemaster · 2 years ago
>simplicity

Can be achieved in MPAs and SPAs alike. I'd also argue that having state floating around in HTTP requests is harder to reason about than having it contained in a single piece in the browser or in a server session. Granted this is not a problem of HTMX, but of hypermedia. There is a reason why HATEOAS is almost never observed in REST setups.

> two-codebase problem

This is a non-problem. In every part of a system, you want to use the right tool for the job. Web technologies are better for building UIs, if only by the sheer amount of libraries and templates that already exist. The same splitting happens on the server side: you would have a DB server, a web service, maybe a load balancer. You naturally have many parts in a system, each one being specialized in one thing, and you would pick the technologies that make the most sense for every one of them. I'd also argue that backend developers would have a hard time dealing with the never ending CSS re-styling and constant UI change requests of today. This is not 2004, where the backend guys could craft a quick html template in a few hours and go back to work in the DB unmolested. The design and UX bar is way higher now, and specialists are naturally required.

_heimdall · 2 years ago
> There is a reason why HATEOAS is almost never observed in REST setups.

I saw the HTMX creator floating around the thread, so hopefully he can confirm, but my understanding is HATEOAS is a specific constraint of a REpresentational State Transfer API. JSON is often used for the API; HTMX uses HTML instead, but it is indeed still a REST API transferring state across the wire.

My shift key really doesn't appreciate all these abbreviations

runlaszlorun · 2 years ago
Just started using HTMX on a new project and have been a big fan. I’d go so far as to say that it’s the best practical case for the theory of hypermedia in general. Like others have mentioned, this is the sort of thing that prob _should_ be in the HTML spec but, given what I’ve personally seen about the standards process, I have little expectation of seeing that. Thx again!
rmbyrro · 2 years ago
How would this be in HTML standard if it requires JS to work?
cogman10 · 2 years ago
It's not clear to me, but how and where is state managed?

In the OP's article, it looks like the only thing going over the wire is UUIDs. How does the server know "this UUID refers to this element"? Does this require a sticky session between the browser and the backend? Are you pushing the state into a database or something? What does the multi-server backend end up looking like?

recursivedoubts · 2 years ago
pdonis · 2 years ago
The article under discussion here appears to be saying that HTMX can work without Javascript enabled. But HTMX itself is a Javascript library, correct? So how can it work without Javascript enabled?
recursivedoubts · 2 years ago
the article says it is possible to build web applications that use htmx for a smoother experience if javascript is enabled, but that properly falls back to vanilla HTML if js is not enabled

this is called progressive enhancement[1], and yes, htmx can be used in this manner although it requires some effort by the developer

unpoly, another hypermedia-oriented front end library, is more seamless in this regard and worth looking at

[1] - https://developer.mozilla.org/en-US/docs/Glossary/Progressiv...

elliottinvent · 2 years ago
I think the point is that by using HTMX your site can degrade gracefully for non-JS users.

A site for a project of mine [1] is built with HTMX and operates more or less the same for JS and no-JS users.

I’m aiming to add some bells and whistles for JS users, but the version you see there is more or less the experience non-JS users get too:

1. https://www.compactdata.org

mixmastamyk · 2 years ago
Adding the form element allows it to post to the server without javascript, just like olden times. Since the htmx header is not included, the backend was instructed to return a full page instead of a fragment.
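A sketch of that server-side branch (plain Python; the function and markup are invented for illustration, not taken from the article): htmx sends an `HX-Request: true` header with its requests, so the backend can return just a fragment for htmx and a full page for no-JS form posts:

```python
def render_todos(headers, todos):
    """Return just the list fragment for htmx requests,
    or a complete page for plain (no-JS) form submissions."""
    items = "".join(f"<li>{t}</li>" for t in todos)
    fragment = f'<ul id="todo-list">{items}</ul>'
    if headers.get("HX-Request") == "true":
        # htmx made the request: it will swap this fragment into the page
        return fragment
    # No htmx header: a normal form submission, so send the whole page
    return f"<html><body>{fragment}</body></html>"
```

In a real app this check would live in middleware or a template-selection helper rather than in every handler.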
the_gastropod · 2 years ago
H̶T̶M̶X̶ e̶m̶b̶r̶a̶c̶e̶s̶ t̶h̶e̶ i̶d̶e̶a̶ o̶f̶ p̶r̶o̶g̶r̶e̶s̶s̶i̶v̶e̶ e̶n̶h̶a̶n̶c̶e̶m̶e̶n̶t̶ [̶1̶]. Y̶o̶u̶ s̶t̶a̶r̶t̶ w̶i̶t̶h̶ a̶ f̶u̶n̶c̶t̶i̶o̶n̶a̶l̶, j̶a̶v̶a̶s̶c̶r̶i̶p̶t̶-̶f̶r̶e̶e̶, w̶e̶b̶ a̶p̶p̶l̶i̶c̶a̶t̶i̶o̶n̶. H̶T̶M̶X̶ l̶a̶y̶e̶r̶s̶ o̶n̶ t̶o̶p̶ o̶f̶ t̶h̶i̶s̶ t̶o̶ ̶m̶a̶k̶e̶ t̶h̶e̶ e̶x̶p̶e̶r̶i̶e̶n̶c̶e̶ b̶e̶t̶t̶e̶r̶, b̶u̶t̶ i̶t̶'s̶ e̶f̶f̶e̶c̶t̶i̶v̶e̶l̶y̶ o̶p̶t̶i̶o̶n̶a̶l̶.

[1] https://developer.mozilla.org/en-US/docs/Glossary/Progressiv...

Whoa... I was very slow apparently

149203 · 2 years ago
Every company I've been a part of has redesigned their front end at least once.

These redesigns would be a lot more difficult if we had to edit both the HTML on the client and the HTML that the server returns.

Also, HTMX is best styled with semantic classes, which is a problem for companies using Tailwind and utility classes in their HTML. With class-heavy HTML it's nearly impossible to redesign in two different places, and performance suffers from returning larger chunks of HTML.

Despite all that, I want HTMX to be the standard way companies develop for the web. But these 2 problems need to be addressed first, I feel, before companies (like mine) take the leap.

booleandilemma · 2 years ago
I use htmx on my personal site and I love it so much. Thank you!
account-5 · 2 years ago
Complete novice here; what are the advantages of hyperview over something like flutter?

I looked at a bunch of frameworks before settling on Dart/Flutter for my own cross-platform projects. I did look at htmx, but since I didn't really want to create a web app I moved on. But I like the idea of a true REST style of app.

recursivedoubts · 2 years ago
hyperview uses the hypermedia approach, which means the client and server are decoupled via the uniform interface

so you can, for example, deploy a new version of your mobile app without updating the client, a big advantage over needing users to update their mobile apps

benatkin · 2 years ago
It doesn't feel like hypermedia to me. It just feels like a vue-like language that is an internal DSL for HTML instead of an external DSL for HTML like svelte and handlebars.

Hypermedia advances would be microformats and RDF and the like. http://microformats.org/wiki/faqs-for-rdf

recursivedoubts · 2 years ago
it absolutely is hypermedia

we generalize HTML's hypermedia controls in the following way:

- any HTML element can become a hypermedia control

- any event can drive a hypermedia interaction

- any element can be the target of a hypermedia interaction (transclusion, a concept in hypermedia not implemented by HTML)

all server interactions are done in terms of hypermedia, just like w/links and forms

it also makes PUT, PATCH and DELETE available, which allows HTML to take advantage of the full range of HTTP actions

htmx is a completion of HTML as a hypermedia, this is its design goal
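As a sketch of those generalizations in markup (URLs and IDs invented for the example):

```html
<!-- any element + any event: a div that fetches a preview on mouseenter -->
<div hx-get="/preview/42" hx-trigger="mouseenter" hx-target="#preview">
  Hover me
</div>
<div id="preview"></div>

<!-- full HTTP verbs: a button issuing DELETE, replacing its closest row -->
<button hx-delete="/contacts/42" hx-target="closest tr" hx-swap="outerHTML">
  Delete
</button>
```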

jonahx · 2 years ago
Out of curiosity, have you used hyperview? Do you consider it production ready?
redonkulus · 2 years ago
We've been using similar architecture at Yahoo for many years now. We tried to go all in on a React framework that worked on the server and client, but the client was extremely slow to bootstrap due to downloading/parsing lots of React components, then React needing to rehydrate all the data and re-render the client. Not to mention rendering an entire React app on the server is a huge bottleneck for performance (can't wait for Server Components / Suspense which are supposed to make this better ... aside: we had to make this architecture ourselves to split up one giant React render tree into multiple separate ones that we can then rehydrate and attach to on the client)

We've moved back to an MPA structure with decorated markup to add interactivity like scroll views, fetching data, tabs and other common UX use cases. If you view the source on yahoo.com and look for "wafer," you can see some examples of how this works. It helps to avoid bundle size bloat from having to download and compile tons of JS for functionality to work.

For a more complex, data-driven site, I still think the SPA architecture or "islands" approach is ideal instead of MPA. For our largely static site, going full MPA with a simple client-side library based on HTML decorations has worked really well for us.

vosper · 2 years ago
> We've been using similar architecture at Yahoo for many years now.

At all of Yahoo? I imagined such a big company would have a variety of front-end frameworks and patterns.

redonkulus · 2 years ago
Nope, not all. Yahoo homepage, News, Entertainment, Weather all use this architecture. Yahoo Mail uses a React/Redux architecture on the client. Other Yahoo properties with more complex client-side UX requirements are using things like Svelte or React. It's not a one size fits all architecture at Yahoo; we let teams determine the right tools for the job.
thomond · 2 years ago
I had no idea Yahoo
pier25 · 2 years ago
> simple client-side library based on HTML decorations has worked really well for us

What library are you using?

redonkulus · 2 years ago
We developed an internal library, but there are similar libraries in open source (although I can't remember their names).
aidenn0 · 2 years ago
> Managing state on both the client and server

This is a necessity as long as latencies between the client and server are large enough to be perceptible to a human (i.e. almost always in a non-LAN environment).

[edit]

I also just noticed:

> ...these applications will be unusable & slow for those on older hardware or in locations with slow and unreliable internet connections.

The part about "slow and unreliable internet connections" is not specific to SPAs. If anything, a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.

[edit2]

> If you wish to use something other than JavaScript or TypeScript, you must traverse the treacherous road of transpilation.

This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...

--

I like the idea of HTMX, but the first half of the article is a silly argument against SPAs. Was the author "cheating" in the second half by transpiling clojure to the JVM? Have they tested their TODO example on old hardware with an unreliable internet connection?

lolinder · 2 years ago
> This is silly; I almost exclusively use compiled languages, so compilation is happening no matter what; targeting JS (or WASM) isn't that different from targeting a byte-code interpreter or hardware...

I agree with everything else you said, but having followed the development of Kotlin/JS and WASM closely I have to disagree with this statement.

JavaScript is a very bad compilation target for any language that wasn't designed with JavaScript's semantics in mind. It can be made to work, but the result is enormous bundle sizes (even by JS standards), difficult sourcemaps, and terrible performance.

WASM has the potential to be great, but to get useful results it's not just a matter of changing the compilation target, there's a lot of work that has to be done to make the experience worthwhile. Rust's wasm_bindgen is a good example: a ton of work has gone into smooth JS interop and DOM manipulation, and all of that has to be done for each language you want to port.

Also, GC'd languages still have a pretty hard time with WASM.

8organicbits · 2 years ago
> a thick client provides opportunities to improve the experience for locations with slow and unreliable internet connections.

The word "slow" here is unclear. Thick clients work poorly on low bandwidth connections, as the first load takes too long to download the JS bundle. JS bundles can be crazy big and may get updated regularly. A user may give up waiting. Thin clients may load faster on low bandwidth connections as they can use less javascript (including zero javascript for sites that support progressive enhancement, my favorite as a NoScript user). Both thin and thick clients can use fairly minimal data transfer for follow-up actions. An HTMX patch can be pretty small, although I agree the equivalent JSON would be smaller.

If "slow" means high latency, then you're right, a thick client can let the user interact with local state and the latency is only a concern when state is being synchronized (possibly with a spinner, or in the background while the user does other things).

Unreliable internet is unclear to me. If the download of the JS bundle fails, then the thick client never loads. A long download time may increase the likelihood of that happening. Once both are loaded, the thick client wins as the user can work with local state. Both need to sync state sometimes. The thin client probably needs the user to initiate retry (a poor experience) and the thick client could support retry in the background (although many don't support this).

ivan_gammel · 2 years ago
Fully agree with this comment. Also, client and server state are different: on the client you need only session state relevant to user journey, on server you keep only persistent state and use REST level 3 for the rest.
michaelchisari · 2 years ago
Everybody's arguing about whether Htmx can do this or that, or how it handles complex use case x, but Htmx can do 90% of what people need in an extremely simple and straight-forward way. That means it (or at least its approach) won't disappear.

A highly complex stock-trading application should absolutely not be using Htmx.

But a configuration page? A blog? Any basic app that doesn't require real-time updates? Htmx makes much more sense for those than React. And those simple needs are a much bigger part of the internet than the Hacker News crowd realizes or wants to admit.

If I could make one argument against SPA's it's not that they don't have their use, they obviously do, it's that we're using them for too much and too often. At some point we decided everything had to be an SPA and it was only a matter of time before people sobered up and realized things went too far.

ktosobcy · 2 years ago
This!

It's like with static websites - we went from static to blogs rendered in php and then back to jekyll...

silver-arrow · 2 years ago
Exactly! Well said
mtlynch · 2 years ago
I really want to switch over to htmx, as I've moved away from SPA frameworks, and I've been much happier. SPAs have so much abstraction, and modern, vanilla JavaScript is pretty decent to work with.

The thing that keeps holding me back from htmx is that it breaks Content Security Policy (CSP), which means you lose an effective protection against XSS.[0] When I last asked the maintainer about this, the response was that this was unlikely to ever change.[1]

Alpine.js, a similar project to htmx, claims to have a CSP-compatible version,[2] but it's not actually available in any official builds.

[0] https://htmx.org/docs/#security

[1] https://news.ycombinator.com/item?id=32158352

[2] https://alpinejs.dev/advanced/csp

[3] https://github.com/alpinejs/alpine/issues/237

BeefySwain · 2 years ago
I keep seeing people talk about this, can someone create a minimum example of what this exploit would look like?
robertoandred · 2 years ago
If you don't like abstraction, why would you use something as abstracted and non-standard as htmx?
mtlynch · 2 years ago
It's a tradeoff, and either extreme has problems.

Too much abstraction (especially leaky abstraction the way web frameworks are) makes it difficult to reason about your application.

But if you optimize for absolute minimal abstraction, then you can get stuck with code that's very repetitive where it's hard to pick apart the business logic from all the boilerplate.

recursivedoubts · 2 years ago
htmx can work w/ a CSP, sans a few features (hx-on, event filters)
mtlynch · 2 years ago
My understanding based on the docs[0] is that htmx works with CSP, but it also drastically weakens its protection, as attackers who successfully inject JS into htmx attributes gain code execution that CSP would have normally prevented.

Am I misunderstanding? If I can use htmx without sacrificing the benefits of CSP, I'd really love to use htmx.

[0] https://htmx.org/docs/#security

jeremyjh · 2 years ago
Alpine is a lightweight client side framework, not really at all equivalent to htmx.
mtlynch · 2 years ago
I'm not sure what you mean. htmx and alpine.js are both client-side frameworks. To me, they seem to have similar goals and similar functionality.

What do you see as the difference?

dfabulich · 2 years ago
People were making this prediction ten years ago. It was wrong then, and it's wrong now.

This article makes its case about Htmx, but points out that its argument applies equally to Hotwired (formerly Turbolinks). Both Htmx and Hotwired/Turbolinks use custom HTML attributes with just a little bit of client-side JS to allow client-side requests to replace fragments of a page with HTML generated on the server side.

But Turbolinks is more than ten years old. React was born and rose to popularity during the age of Turbolinks. Turbolinks has already lost the war against React.

The biggest problem with Turbolinks/Htmx is that there's no good story for what happens when one component in a tree needs to update another component in the tree. (Especially if it's a "second cousin" component, where your parent component's parent component has subcomponents you want to update.)

EDIT: I know about multi-swap. https://htmx.org/extensions/multi-swap/ It's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client. If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.

Furthermore, in Turbolinks/Htmx, it's impossible to implement "optimistic UI," where the user creates a TODO item on the client side and posts the data back to the server in the background. This means that the user always has to wait for a server round trip to create a TODO item, hurting the user experience. It's unacceptable on mobile web in particular.

When predicting the future, I always look to the State of JS survey https://2022.stateofjs.com/en-US/libraries/front-end-framewo... which asks participants which frameworks they've heard of, which ones they want to learn, which ones they're using, and, of the framework(s) they're using, whether they would use it again. This breaks down into Awareness, Usage, Interest, and Retention.

React is looking great on Usage, and still pretty good on Retention. Solid and Svelte are the upstarts, with low usage but very high interest and retention. Htmx doesn't even hit the charts.

The near future is React. The further future might be Svelte or Solid. The future is not Htmx.

jgoodhcg · 2 years ago
I've spent almost my entire career working on react based SPAs and react native mobile apps. I've just started playing around with HTMX.

> no good story for what happens when one component in a tree needs to update another component in the tree

HTMX has a decent answer to this. Any component can target replacement for any other component. So if the state of everything on the page changes, re-render the whole page, even if the element the user clicked is a deeply nested button.
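For example (a hypothetical sketch; the ids and endpoint are invented), a form anywhere in the tree can target a list that lives somewhere else entirely:

```html
<!-- The list being updated, somewhere on the page -->
<ul id="todo-list">
  <!-- server-rendered <li> items -->
</ul>

<!-- A form nested arbitrarily deep elsewhere: hx-target takes any
     CSS selector, so the response fragment lands in the list above -->
<form hx-post="/todos" hx-target="#todo-list" hx-swap="beforeend">
  <input name="title" placeholder="New TODO">
  <button type="submit">Add</button>
</form>
```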

> it's impossible to implement "optimistic UI," ... hurting the user experience

Do we actually need optimistic UI? Some apps need to work in offline mode sure, like offline maps or audiobooks or something. The HTMX author agrees, this is not the solution for that. Most of the stuff I have worked on though ... is useless without an internet connection.

In the case of "useless without an internet connection," do we really need optimistic UI? The actual experience of htmx is incredibly fast. There is no overhead of all the SPA stuff: no virtual DOM, hardly any JS. It's basically the speed of the network. In my limited practice I've actually felt the need to add delays because the update happens _too fast_.

I'm still evaluating htmx but not for any of the reasons you've stated. My biggest concern is ... do I want my api to talk in html?

dfabulich · 2 years ago
> Do we actually need optimistic UI? Some apps need to work in offline mode sure, like offline maps or audiobooks or something. The HTMX author agrees, this is not the solution for that. Most of the stuff I have worked on though ... is useless without an internet connection.

> It's basically the speed of the network.

Does your stuff work on mobile web? Mobile web requests can easily take seconds, and on a dodgy connection, a single small request can often take 10+ seconds.

The difference between optimistic UI and non-optimistic UI on mobile web is the difference between an app that takes seconds to respond, on every click, and one that responds instantly to user gestures.

recursivedoubts · 2 years ago
> do I want my api to talk in html

https://htmx.org/essays/splitting-your-apis/

koromak · 2 years ago
My app would crash and burn without optimistic UI. For simple CRUD applications, sure, but most products these days aren't simple CRUD apps anymore.
OliverM · 2 years ago
I've not used Htmx, but a cursory browse of their docs gives https://htmx.org/extensions/multi-swap/ which seems to solve exactly this problem. And thinking about it, what makes it as difficult as you say? If you have a JS library on the client that you control, you can definitely send payloads that the library interprets to replace multiple locations as needed. And if the client has JS turned off, the fallback to full-page responses solves the problem by default.

Of course, I've not used Turbolinks, so I don't know what issues applied there.

Edit: I'm not saying htmx is the future either. I'd love to see how they handle offline-first (if at all) or intermittent network connectivity. Currently most SPAs are bad at that too...

dfabulich · 2 years ago
I've edited my post to clarify.

Multi-swap is possible, but it's not good, because the onus is on the developer to compute which components to swap, on the server side, but the state you need is usually on the client.

If you need multi-swap, you'll find it orders of magnitude easier to switch to a framework where the UI is a pure function of client-side state, like React or Svelte.

yawaramin · 2 years ago
> there's no good story for what happens when one component in a tree needs to update another component in the tree.

Huh, no one told me this before, so I've been very easily doing it with htmx's 'out of band swap' feature. If only I'd known before that it was impossible! ;-)
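For the curious, here's roughly what that looks like (a sketch with invented ids): the server's response contains the normal fragment for the target, plus extra fragments flagged `hx-swap-oob`, which htmx matches by id and swaps wherever they live in the DOM.

```html
<!-- Main fragment: swapped into the request's hx-target as usual -->
<li>Buy milk</li>

<!-- Out-of-band fragments: updated anywhere on the page, by id,
     regardless of where the triggering element sits in the tree -->
<span id="todo-count" hx-swap-oob="true">5 items</span>
<div id="sidebar-summary" hx-swap-oob="true">Updated summary here</div>
```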

deltarholamda · 2 years ago
I guess it depends on what your definition of "the future" is.

If it's teams of 10X devs working around the world to make the next great Google-scale app, then yeah, maybe React or something like it is the future.

If it's a bunch of individual devs making small things that can be tied together over the old-school Internet, then something like HTMX moves that vision forward, out of a 90-00s page-link, page-link, form-submit flow.

Of course, the future will be a bit of both. For many of my various project ideas, something like React is serious overkill. That's not even taking into account the steep learning curve and the seemingly never-ending treadmill of keeping current.

geenat · 2 years ago
> it's impossible to implement "optimistic UI," where the user creates a TODO item on the client side and posts the data back to the server in the background.

There are pretty common patterns for this: just use a sprinkle of client-side JS (one of: hx-on, Alpine, jQuery, hyperscript, vanilla JS, etc.), then trigger an event for htmx to do its thing after a while, or use the debounce feature if it's only a few seconds. Lots of options, actually.
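A minimal sketch of the vanilla-JS version of the pattern (all names here are invented for illustration): apply the change to local state first, keep enough information to roll back if the background request fails.

```javascript
// Hypothetical optimistic-add helper: returns the new list to render
// immediately, plus a rollback function for the failure path.
function optimisticAdd(items, draft) {
  // Mark the draft as pending so the UI can style it (e.g. greyed out)
  const pending = { ...draft, pending: true };
  return {
    next: [...items, pending], // render this right away
    rollback: () => items,     // restore the old list on failure
  };
}

// Usage: render `next` instantly, then sync in the background.
const { next, rollback } = optimisticAdd(
  [{ title: "buy milk" }],
  { title: "walk dog" }
);
// fetch("/todos", { method: "POST", body: JSON.stringify(next.at(-1)) })
//   .catch(() => render(rollback())); // render() is hypothetical
```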

React would have to eventually contact the server as well if we're talking about an equivalent app.

jbergens · 2 years ago
Of course there are some challenges and some use cases where Htmx is not the best solution but I think it can scale pretty far.

You can split a large app into pages, and then each page only has to care about its own parts (sub-components). If you want a component to be used on multiple pages, you just create it with whatever server technology you use and include it; the other components on the page can easily target it. You may have a problem if you change a shared component in such a way that targeting stops working, but sharing the targeting code can make this easier.

wibblewobble124 · 2 years ago
we’re using htmx at work, migrating away from react. the technique we’re using is just rendering the whole page, e.g. we have a page where one side of the screen is a big form and the other side is a view on the same data but with a different UI, updating one updates the other. we’re using the morphdom swapping mode so only the things that changed are updated in-place. as a colleague commented after implementing this page, it was pretty much like react as far as “pure function of state.”
listenallyall · 2 years ago
Intentionally or not, this doesn't read like a cogent argument against the merits of HTMX (and isn't one, since it's factually incorrect). It reads like someone trying to convince themselves that their professional skill set isn't starting to lose relevance.

From the February 31, 1998 Hacker News archives: "According to state of the web survey, Yahoo and Altavista are looking great on usage, Hotbot and AskJeeves are the upstarts. Google doesn't even hit the charts."

antoniuschan99 · 2 years ago
But isn't React going this route as well? There was some discussion a week or so back about the React team moving in this direction.

Also, it seems so cyclical. Isn't HTMX/Hotwire similar to Java JSPs, which were how things were done before SPAs got popular?

qgin · 2 years ago
It's interesting that this paradigm is especially popular on Hacker News. I see it pop up here pretty regularly and not many other places.
antoniuschan99 · 2 years ago
lol HN itself seems like it should be the poster child of htmx :P. It's literally just text.
BeefySwain · 2 years ago
The people using HTMX have never heard of stateofjs.com (though they are painfully aware of the state of js!)