The example URL here, though, is still not (helpfully) bookmarkable because the contents of page 2 will change as new items are added. To get truly bookmarkable list URLs, the best approach I've seen is ‘page starting from item X’, where X is an effectively-unique ID for the item (e.g. a primary key, or a timestamp to avoid exposing IDs).
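Roughly, the idea looks like this (illustrative names only, nothing framework-specific): the URL carries a boundary item rather than a page number, so the page stays anchored to that item even as new things are added above it.

    interface Item { id: number; title: string }

    // "Page starting from item X", done exclusively here: the URL carries a
    // boundary id, e.g. /items?after=1200, and the page is everything older than it.
    function pageAfter(items: Item[], afterId: number | null, pageSize = 30): Item[] {
      const newestFirst = [...items].sort((a, b) => b.id - a.id);
      const rest = afterId === null ? newestFirst : newestFirst.filter(i => i.id < afterId);
      return rest.slice(0, pageSize);
    }

    // The next page's bookmarkable URL is derived from the last item shown:
    // const next = `/items?after=${page[page.length - 1].id}`;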
Yeah, solving this edge case properly can add a lot of complexity (your solution has the same problem, no? Deletes would mess it up, as would updates, technically). I've seen people use long-lived "idempotency tokens" that point to an event log for this, but it's a bit nuts. It's definitely worth considering not solving it at all, which might be a more intuitive UX anyway (e.g. for leaderboards).
He’s being downvoted because suggesting cursor pagination in an example describing sorting by price (descending) is plainly wrong. While neither is bookmarkable, cursor pagination is much worse.
The UX went from “show me _almost_ the most expensive items” to “show me everything less expensive than the last item on the page I was on previously — which may be stocked out, more expensive, or heavily discounted today”. The latter isn’t something you’d bookmark.
If you're bookmarking a directory, a list of things (e.g. the HN frontpage), you expect the content to change when opening the bookmark.
You bookmark a link to the directory so you don't forget the directory's entry URL.
The use case the author is talking about is a different one: You are configuring a complex item in a shop, and want to bookmark the URL so you can save it, recall it later, share this configuration with someone, or compare it with a different URL.
In this case, you also would expect little details to change (pricing, descriptions, photos) but the structure of the state should stay the same.
It's very frustrating when you share a link to a product detail page, only to discover that all your filters and configurations have been lost.
The data in a bookmark may change, but it should preserve some property of interest — otherwise why bookmark it?
Page 1 (a.k.a. the top few results with no pagination) has the property of being the selected top of HN, which is an interesting property in its own right, and what we're bookmarking. Page 2 doesn't have that property.
At some point I hope it becomes obvious that well-engineered SSR webapps on a modern internet connection are indistinguishable from a purely client side experience. We used this exact same technology over dialup modems and it worked well enough to get us to this point.
Being able to click a button and experience 0ms navigation is not something any customer has ever brought to my attention. It also doesn't help much in the more meaningful domains of business since you can't cheat god (information theory). If the data is so frequently out of sync that every interaction results in JSON payloads being exchanged, then why not just render the whole thing on the server in one go? This is where I can easily throw the latency arguments back in the complexity merchant's face - you've simply swept the synchronization problem under a rug to be dealt with later.
Yes, a well-engineered SSR webapp could be indistinguishable from an SPA. However, it is much harder to build a well-engineered SSR app with the tools we have. I haven't seen anyone solve errors with form submissions and the back button well at the framework level. Post-Redirect-Get was awful. Trying to solve back buttons and wizards. Trying to solve modals. Is a modal a separate page with the rest in the back? What does closing a modal mean? What does a sidebar mean? How about closing it? Pretty soon, you're in half-an-SPA already.
And since you don't want a 2000 character URL, you're either storing half of the session on the server or having to build an abstraction with local storage. And since our frameworks didn't evolve to handle that, what is the purpose?
The key insight into the SPA is that you are writing a coherent client experience. No SSR framework figured out how to do this because they thought about pages rather than experiences.
Let me be clear: I am speaking about web applications. If you're providing information and only have a small number of customer interactions, an SSR is superior. CNN should not be an SPA.
> At some point I hope it becomes obvious that well-engineered SSR webapps on a modern internet connection are indistinguishable from a purely client side experience.
I dunno; other than the fact that there are some webapps that really are better done mostly client-side with routine JSON hydration (webmail, for example, or maps), my recent experimentation with making the backend serve only static files (html, css, etc) and dynamic data (JSON) turned out a lot better than a backend that generates HTML pages using templates.
Especially when I want to add MCP capabilities to a system, it becomes almost trivial in my framework, because the backend endpoints that serve dynamic data serve all the dynamic data as JSON. The backend really is nicer to work with than one that generates HTML.
I'm sure in a lot of cases, it's the f/end frameworks that leave a bad taste in your mouth, and truth be told, I don't really have an answer for that other than looking into creating a framework for front-end to replace the spaghetti-pattern that most front-ends have.
I'm not even sure if it is possible to have non-spaghetti logic in the front-end anymore - surely a framework that did that would have been adopted en-masse by now?
> Being able to click a button and experience 0ms navigation is not something any customer has ever brought to my attention
With modern CSS transitions, you can mostly fake this anyway. It's not like javascript apps actually achieve 0ms in practice - their main advantage is that they don't (always) cause layout/content flashes as things update
I actually started building my own PHP-style language based on C#, called CHP, for fun.
It runs atop whatever the current dotnet hosting service is (Kestrel?). It takes everything inside the "<? ?>" code blocks and inlines it into one big Main method, exposing a handful of shared public convenience methods (mostly around database access and easy cookie-based authentication), as well as the request and response objects.
Each request is JITed, then the assembly is cached in memory for future requests to the same path, and it will recompile sources that are newer than the cached assembly.
There is no routing other than dropping the .chp extension if you pass "-ne" into the arguments launching the server.
It's not very far along, and is completely pointless other than for the sake of building my own web language thingy for the first time since 2003.
Have you looked into the string interpolation & verbatim operators as a templating alternative? These can be combined to create complex, nested strings:
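Something along these lines, for instance -- a quick sketch with made-up markup:

    using System;
    using System.Linq;

    var items = new[] { "First", "Second", "Third" };

    // $@"..." = interpolated verbatim string: multi-line, "" for a literal quote,
    // and {} holes for nesting smaller templates into bigger ones.
    string Row(string title) => $@"<li class=""item"">{title}</li>";

    var page = $@"<ul>
      {string.Join("\n      ", items.Select(Row))}
    </ul>";

    Console.WriteLine(page);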
As a long time PHP developer, it never fails to amuse (amaze?) me the lengths people go to in order to get the things the browser will give you for free.
The most "special" code that I regularly come across is when a developer takes a JPG in blob storage -- already a public HTTPS URL -- then serves that in a "Web API" that converts it to base-64 encoded bytes inside JSON, sends it to client JavaScript, decodes it, and feeds it to an image in code.
Invariably, it's done with full buffering of the blob bytes in memory on both server and client, no streaming.
Bonus points are awarded for the use of TypeScript, compression (of already compressed JPGs, of course), and extensive unit and integration tests to try and iron out the bugs.
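For contrast, the thing that whole pipeline replaces is roughly a one-liner (hypothetical URL, assuming the blob really is publicly readable as described):

    // Let the browser fetch, cache, stream and decode the JPG itself.
    const img = document.createElement("img");
    img.src = "https://storage.example.com/products/123.jpg"; // hypothetical public blob URL
    img.alt = "Product photo";
    img.loading = "lazy";
    document.body.append(img);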
The browser gives you a full-blown programming language with a rich API, but it seems a lot of people avoid that in favor of smushing together a static view on the server side with little more than string interpolation.
I’ve been building web apps since the ’90s, and I never understood the appeal of PHP. It was always a terrible language, and there were usually better alternatives available.
Given you can run Doom on your fridge these days, it should be absolutely no surprise that you can already run PHP both in the browser and in Node [0].

[0] https://github.com/asmblah/uniter
It has been wild to realize I've now seen one full technology cycle of thin client to thick client to thin client again. Maybe PHP this time around will be able to be more robust with the lessons learned.
The JS world leaves me more and more perplexed. There's a similar rant about forms, but why is this so hard? A huge amount of dev time is spent on being able to execute asynchronous functions to the backend seamlessly, yet pretty much every major framework just has you rawdog the URL string and deal with the URLSearchParams object yourself.
Tanstack Router[1] provides first-class support not only for parsing params but also for giving you a typed URL helper; this should be the goal for the big meta-frameworks, but even tools like SvelteKit that advertise themselves on simplicity and web standards have next to zero support.
I've seen even non-JS frameworks with, like, fifteen lines of documentation for half-baked search-param support.
The industry would probably be better off if even a tenth of the effort that goes into doing literally anything to avoid learning the platform were spent making this (and Post-Redirect-Get for forms) the path of least resistance for the 90% of the time that search params are perfectly adequate.
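The kind of thing I mean, hand-rolled on top of URLSearchParams (a hypothetical shape, not any particular library's API):

    interface ListParams {
      status: "active" | "archived";
      sortField: "price" | "name";
      sortDir: "asc" | "desc";
      page: number;
    }

    const defaults: ListParams = { status: "active", sortField: "price", sortDir: "desc", page: 1 };

    function parseListParams(search: string): ListParams {
      const sp = new URLSearchParams(search);
      return {
        status: sp.get("status") === "archived" ? "archived" : defaults.status,
        sortField: sp.get("sortField") === "name" ? "name" : defaults.sortField,
        sortDir: sp.get("sortDir") === "asc" ? "asc" : defaults.sortDir,
        page: Math.max(1, Number(sp.get("page")) || defaults.page),
      };
    }

    function listUrl(params: Partial<ListParams>): string {
      const merged = { ...defaults, ...params };
      return "/?" + new URLSearchParams(
        Object.entries(merged).map(([k, v]) => [k, String(v)])
      ).toString();
    }

    // parseListParams(location.search)  -> typed object with defaults filled in
    // listUrl({ page: 2 })              -> "/?status=active&sortField=price&sortDir=desc&page=2"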
I don't use HTMX but I do love how it and its community are pushing the rediscovery of how much simpler things can be.

[1] https://tanstack.com/router/latest/docs/framework/react/guid...
Nuqs[0] does a very good job at parsing and managing search params. It's a complex issue that involves serialization and deserialization, as well as throttling URL updates. It's a wonderful library. I agree, though, that it would be nice to see more native framework support for this.
Forms are also hard because they involve many different data-types, client-side state, (client?) and server validation, crossing the network boundary, contextual UI, and so on. These are not simple issues, no matter how much the average developer would love them to be. It's time we accept the problem domain as complex.
I will say that React Server Components are a huge step towards giving power back to the URL, while also allowing developers to access the full power of both the client and the server, but the community at large has deemed the mental model too complex. Notably, they enable you to build nuanced forms that work with or without JavaScript enabled, and they handle crossing the boundary rather gracefully. After working with RSCs for several years now, I can't imagine going back. I've written several blog posts about them[1][2] and feel the community should invest more time into understanding their ideas.
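To make that concrete, here is roughly what it looks like when the URL's search params are the only input to a server-rendered view -- a Next.js-flavoured sketch where fetchProducts stands in for your data layer, and note that newer versions hand you searchParams as a Promise to await:

    // app/products/page.tsx -- an async server component; no client state involved.
    declare function fetchProducts(q: { status: string; page: number }): Promise<{ id: string; name: string }[]>;

    export default async function ProductsPage({
      searchParams,
    }: {
      searchParams: { status?: string; page?: string };
    }) {
      const status = searchParams.status ?? "active";
      const page = Number(searchParams.page ?? "1");
      const products = await fetchProducts({ status, page });
      return (
        <ul>
          {products.map((p) => (
            <li key={p.id}>{p.name}</li>
          ))}
        </ul>
      );
    }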
I have a post in my drafts about how taking advantage of URL params properly (with or without RSCs) gives our UIs object permanence, and how we as web developers should be relying on them more and using them to reflect "client-side" state. Not always, but more often. It's a hard post to finish, though, because communicating and crystallizing these ideas is difficult. One day I'll get it out.

[0] https://nuqs.47ng.com
[1] https://saewitz.com/server-components-give-you-optionality
[2] https://saewitz.com/the-mental-model-of-server-components
Don’t get me wrong, I never meant it was easy to solve, just that things could be better if search parameters hadn't somehow become this niche legacy thing with minimal appetite to fix.
Thanks for the point on RSC, probably the first argument I’ve heard that helps me contextualise why this extreme paradigm shift and tooling complexity is being pushed as the default.
> Tanstack Router[1] provides first-class support not only for parsing params but also for giving you a typed URL helper; this should be the goal for the big meta-frameworks
Let's not pretend that the Tanstack solution would be good. For example, what if my form changes and a new field is added, but someone is still running the old HTML/JS and sends their form from the old code? Does Tanstack have any support to 1) detect that situation, 2) analyze / monitor / log it (for easy debugging), 3) automatically resolve it (if possible), and 4) allow custom handling where automatic resolution isn't possible?
It doesn't look like it from the documentation.
Sorry, frustration is causing me to rant here, but it's a classic frontend-world thing and it causes so much frustration. In the backend world, many (maybe even most) libraries/frameworks/protocols have built-in support for that. See GraphQL with its default values and deprecation at least, or Avro and Protobuf with their support for versions, schema history and even automatic migration.
When will I not have to deal with that by hand in my frontend-code anymore?
The same thing should happen that happens with Rails/Django and friends: nothing. Most frameworks only parse URL params; they don't check whether the params are valid given your app logic.
That's your job. Frankly, anything more would be overkill. Why should my URL param manager handle new or removed form fields?
So until about 2013? 2014? URL-driven state was just the way everything worked.
One of the major complaints about `cgi-bin` was that you had to manually add parameters back to the URL to manage state (and at that time there were a good number of cgi-bin applications that just didn't bother -- which, unsurprisingly, is how SPAs worked at first, until "URL routing" took over).
But all of this is literally just reinventing the wheel that's been there since the web began. The entire purpose of the web was to be able to link to a specific resource, action, or state without having to do anything other than share a URL.
What's wild is there are whole generations of programmers that started programming after the SPA world debuted and are now re-learning things that "were just the way things were" before 2013.
tbh I always found it interesting that CGI was dropped as a well supported technology from languages like Python. It was incredibly simple to implement and reason about (provided you actually understand HTTP, maybe that's the issue), and scaled well beyond what most internal enterprise apps I was working on at the time needed.
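For anyone who never touched it, a CGI "app" really is just this -- the server drops the request into environment variables and stdin, and you print headers, a blank line, and the body to stdout (sketched with Node here purely to keep one language; back then it was usually Perl, Python or C):

    #!/usr/bin/env node
    // hello.cgi -- the web server sets QUERY_STRING (and friends) before exec'ing us.
    const params = new URLSearchParams(process.env.QUERY_STRING ?? "");
    const name = params.get("name") ?? "world";

    process.stdout.write("Content-Type: text/html\r\n\r\n");
    process.stdout.write(`<h1>Hello, ${name}!</h1>\n`);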
to be absolutely nerve-wracking. Not hard to do but it's just batshit crazy and breaks the whole idea of how web crawlers are supposed to work. On the other hand, we had trouble with people (who we know want to crawl us specifically) crawling a site where you visit
http://example.com/item/448828
and it loads an SPA which in turn fetches well-structured JSON documents. The crawler hits it with no cache, so it downloads megabytes of HTML, JavaScript, images and who knows what else -- when, if they want to deal with the content in a structured way and put it in a database, it's already in the exact format they want. But I guess it's easier to stand up a Rube Goldberg machine and write parsers than to look at our site in the developer tools, figure out how it works in five minutes, and just load those JSON documents into a document database and be querying right out of the gate.
> treating URL parameters as your single source of truth... a URL like /?status=active&sortField=price&sortDir=desc&page=2 tells you everything about the current view
Hard disagree that there can be a single source of truth. There are (at least) 3 levels of state for parameter control, and I don't like when libraries think they can gloss over the differences or remove this nuance from the developer:
- The "in-progress" state of the UI widgets that someone is editing (from radio buttons to characters typed in a search box)
- The "committed" state that indicates the snapshot of those parameters that is actively desired to be loaded from the server; this may be debounced, or triggered by a Search button
- The "loaded" state that indicates what was most recently loaded from the server, and which (most likely) drives the data visualized in the non-parameter-controlling parts of the UI
What if someone types in a search bar but then hits "next page" - do we forget what they typed? What happens if you've just committed changes to your parameters, but data subsequently loaded from a prior commit? Do changes fire in sequence? Should they cancel prior requests or ignore their results? What happens if someone clicks a back button while requests are inflight, or while someone's typed uncommitted values into a pre-committed search bar? How do you visualize the loaded parameters as distinct from the in-progress parameters? What if some queries take orders of magnitude longer than others, and you want to provide guidance about this?
All of those questions and more will vary between applications. One size does not fit all.
If this comment resonates with you, choose and advocate for tooling that gives you the expressivity you feel in your gut that you'll need. Especially in a world of LLMs, terse syntax and implicit state management may not be worth losing that expressivity.
> All of those questions and more will vary between applications. One size does not fit all.
All of those come from the fundamental "requirement" set out earlier to have no in-page state, but still require the webpage to behave as though it did.
If you remove this requirement, then it will be like how it was back in the 2000s era of web pages! And the URL does indeed contain the single source of truth -- there are no in-flight requests that are not also full page reloads.
Yes, the simple solution is obviously not perfect in edge cases. It's a tradeoff between simplicity and edge-case perfection.
In my opinion the higher-priority task is to optimize the query in the backend so that it can refresh quickly. If loading is quick enough, that edge case will be less likely to happen.
I had a similar strategy when building early web apps with jQuery and ExtJS (but using the URL hash before the History API was available). Just read from location.hash during page load and write to it when the form state changes.
For more complex state (like table layout), I used to save it as a JSON object, then compress and base64 encode it, and stick it in the URL hash. Gave my users a bookmarklet that would create a shortened URL (like https://example.com/url/1afe9) from my server if they needed to share it.
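With today's built-ins the same trick looks roughly like this (CompressionStream/DecompressionStream are modern APIs, not what was available back then):

    // Serialize -> deflate -> base64 -> URL hash, and back again.
    async function stateToHash(state: unknown): Promise<string> {
      const bytes = new TextEncoder().encode(JSON.stringify(state));
      const stream = new Blob([bytes]).stream().pipeThrough(new CompressionStream("deflate"));
      const compressed = new Uint8Array(await new Response(stream).arrayBuffer());
      return btoa(String.fromCharCode(...compressed));
    }

    async function hashToState<T>(hash: string): Promise<T> {
      const compressed = Uint8Array.from(atob(hash), (c) => c.charCodeAt(0));
      const stream = new Blob([compressed]).stream().pipeThrough(new DecompressionStream("deflate"));
      return JSON.parse(await new Response(stream).text());
    }

    // location.hash = await stateToHash({ sort: "price", dir: "desc", page: 2 });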
Why is the content changing between refreshes not "(helpfully) bookmarkable"?
The HN front page (ie. "page 1") does that but it's a very useful bookmark.
He probably wants to freeze the state of the page. Maybe he should consider saving it via Ctrl+S.
"it's too slow" is a thing a lot of customers have mentioned to me over the years.
Have you heard of these things called smartphones? I hear they're getting quite popular.
This is how I've been building my .NET web apps for the last ~3 years. @+$ = PHP in C# as far as I'm concerned.
It’s a chance to start all over yet again! Come on, we’re all up for that; we do it every few months!
https://blog.platformatic.dev/laravel-nodejs-php-in-watt-run...
Next.js is kind of bearable, as it uses the same approach; going back to the roots of web development, it is almost like doing JSPs all over again.
https://www.lexo.ch/blog/2025/01/highlight-text-on-page-and-...
In any case, yeah, what was suggested in the submission is nothing esoteric, but I guess everything can be new to someone.