What upsets and concerns me the most is when I see poorly developed SPAs on really important sites. For example, government service application websites. If reddit or nytimes has a bloated, intermittently failing SPA site, that's an annoyance. When it's a form to apply for unemployment, ACA health care, DMV, or other critical services, it's a critical failure. Especially since these services are most often used by exactly the population most impacted by bloated SPAs (they tend to have slow or unreliable internet and slow computers; maybe a cheap Android phone is all they have).
Such sites should be using minimal or no JS. These aren't meant to be pretty interactive sites; they need to be solid, bulletproof sites so people can get critical services. And I haven't even mentioned how SPA sites often lack any accessibility features (which are so much easier to implement if you stick to standard HTML+CSS and no/minimal JS).
Yeah, what's weird is that there's an entire generation of developers who think of SPAs as the default.
They think that server side is slower because you have to send down more data, or you have to wait for the server to generate HTML.
Quite the contrary, it's slower to send down 1 MB or 10 MB of JavaScript to render a page, than to simply send down a 100 KB HTML page. Even if you need some JS also, browsers know how to render concurrently with downloads, as long as you provide enough HTML.
Rendering HTML on a server side Intel/AMD/whatever CPU is way faster than rendering it on a mobile device (and probably more efficient too).
Even if it weren't faster and more efficient, it would save battery power on the client.
And there is a ton of latency on the client side these days, ignoring network issues. There are ways of using the DOM that are expensive, and a lot of apps and frameworks seem to tickle those pathological cases. These 20-year-old browser codebases don't seem to be great workloads for even modern Android or iPhone devices.
---
edit: To be fair, I think what's driving this is that many sites have mobile apps and web apps now, and mobile apps are prioritized because they have more permissions on the device. (This is obvious when you look at what happened to Reddit, etc.)
It's indeed a more consistent architecture to do state management all on the client. Doing a mix of state on the server and state on the client is a recipe for confusion -- now you have to synchronize it.
Still there are plenty of apps that are website-only, like the government sites people are talking about. Those people appear to be copying the slow architecture of the dual mobile+web clients and getting a result that's worse.
Thinking about it a bit more, Reddit is a really good example because it launched and got popular before the iPhone or the App Store existed (2006 or so). It was a fast and minimal site back then, similar to what Google used to be.
Either the existence of the mobile app, or the drive to get people to install it, ruined the website. It's slower and has poorer usability than it did 5 years ago, let alone 15 years ago.
Twitter is another company that existed before the App Store and the SPA trend. I noticed they recently turned off the ability to see the 280 chars you care about without having JavaScript on :-(
It's trivial to have a no-JS fallback, and they had it, but turned it off.
I've always taken "server side is slower" to mean "slower to host" in the context of web pages. Sure, it may blow chunks to parse 10 MB of JS on the client side just to click a button and leave, but that's 10 MB of static JS served via a dirt-cheap anycast-style CDN. It doesn't matter if you have 1 client or 100,000 clients; you could host it on a single server's worth of CDN which never has to worry about any client state (and for the tiny percentage of client state you do need to manage, it might be hosted on a much smaller solution that handles just that logic).
I'm more of the classic take that the "problem" is that performance has continued to grow, meaning we get more stuff made faster, but the tradeoff is that it runs at the same speed. Client side or server side, there is no reason the app needs 10 MB of JS logic to do its job; it was just quicker and easier to deploy by having it use 10 MB of JS logic. For some things, like required government services, this is a real problem, but for most things this is just reality - how fast a piece of software is isn't the only benchmark software is made against, often not even in the top 3 things it's checked against.
Take React, Vue, Angular, or similar away from most current front end developers and there is panic. When I say panic I mean full insanity panic, like abandoning the profession or completely going postal.
——
A simple checklist to provide superior front end applications:
* Don’t use “this”. You (general hypothetical you) probably don’t realize how easily you can live without it, only because you have never tried. Doing so will dramatically shrink and untangle your spaghetti code.
* Don’t use addEventListener for assigning events. That method was added around the time of ES5 to minimize disruption, so that marketers could more easily add a bunch of advertising, spyware, and metric nonsense everywhere without asking permission from developers. That method complicates code management, is a potential source of memory leaks, and performs more slowly.
* Don’t use querySelectors. They are a crutch popularized by the sizzle utility of jQuery. These are super epic slow and limit the creative expression of your developers because there is so much they can’t do compared to other means of accessing the DOM.
I now add ESLint rules to my code to automate enforcement of that tiny checklist.
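For illustration, a minimal sketch of what rules like that could look like, using ESLint's built-in no-restricted-syntax rule (the selectors and messages here are my own illustration, not the commenter's actual config):

    // .eslintrc.js -- illustrative only, not the actual checklist config
    module.exports = {
      rules: {
        'no-restricted-syntax': [
          'error',
          {
            selector: 'ThisExpression',
            message: 'Avoid `this`; pass state around explicitly.',
          },
          {
            selector: "CallExpression[callee.property.name='addEventListener']",
            message: 'Assign handlers directly (e.g. node.onclick = handler).',
          },
          {
            selector: "CallExpression[callee.property.name=/^querySelector(All)?$/]",
            message: 'Prefer getElementById / child traversal over querySelector.',
          },
        ],
      },
    };

Run as part of the normal lint step, this turns the checklist into an automatic build failure rather than a code-review argument.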
I believe Svelte has the best outlook for developers wanting to build with JS, as it doesn't load an entire framework upfront (Angular/React/etc). In regard to server-side languages being slow: as you pointed out, it's really not the case; in my experience the culprit behind slow applications is the database design.
> They think that server side is slower because you have to send down more data, or you have to wait for the server to generate HTML.
> Quite the contrary, it's slower to send down 1 MB or 10 MB of JavaScript to render a page, than to simply send down a 100 KB HTML page. Even if you need some JS also, browsers know how to render concurrently with downloads, as long as you provide enough HTML.
The argument I remember from years ago used "slower" as a bad simplification. What it actually meant was, doing the rendering server-side wasted CPU the server could be using to respond to another request. Instead, just send the data and distribute some of this processing to all your users by way of client-side rendering.
Also, back then bundling was a lot rarer than it is now, so the large libraries that made up most of those 10 MB JavaScript files would be separate files the browser could keep cached.
I am going to :+1: the UK census site - built on code developed by the gov.uk digital service, it has apparently been bulletproof at taking 20 million plus individual households through a moderately complex survey.
Not quite bulletproof. I found a couple of bugs while using it.
At the end, just before submission, it showed the completed sections, and you could look at each section to see a tidy summary of the answers provided. Except for section 1, where doing that jumped to the last of the section's questions instead.
I wanted to see one of the answers I had given in section 1 to check it before committing, and due to the missing summary page, tried stepping backwards and forwards through each question. All were shown, except that the question I wanted to check (and had answered) was skipped.
Inspired to try things, I added a non-existent person to the household, then removed the non-existent person.
After that, when I stepped through all questions in section 1 it included the question and answer I'd been looking for, allowing me to confirm it was correct before submission.
gov.uk is such a glaring exception to government sites around the world that I can only imagine there was some massive screw-up where the developers were allowed to go off and build a fast, accessible and responsive site on their own, without the requisite ten layers of committees, meetings and expensive external consultants. I hope there is a government inquiry to ensure this doesn't happen again and that taxpayers' money is properly squandered on massive IT failures as per standard practice.
Yeah, I was impressed enough to fill out the feedback form at the end. It is probably the first time I've used a feedback form for compliments in my life.
Indeed - I used this site the other day. My reaction at the time was "This is the nicest web application I've used in decades." Web design that good is so rare now, it really stands out.
Why do these things even need to be an SPA? What function does that serve when the standardized and infinitely more compatible form-with-a-bit-of-javascript approach works just as well if not better?
I work on one such project and it absolutely drives me nuts -- it's a rails app, but the customer front-end (which is literally just a form to fill out) is a React SPA. There is nothing there that couldn't be done with Turbolinks and some light JS for validations/popups.
Because that's the mainstream, that's what's easy to procure, hire for, negotiate with vendors, that's the default, the preferred, the future proof, the supported.
And the tech doesn't really matter. I hate React with a passion, because Angular is so much more sane - in my experience. But it's fine. It's mature, it can be made to perform completely well.
The tech doesn't really matter. The people don't matter either. Even the costs don't matter as much as people think. What matters is political will and procurement culture - the systems and structures. These will influence (and bring) all the others into line.
How can a government contractor reasonably justify a price tag that's on par with fancy bloated non-government sites, for a no-nonsense form + JS approach?
This is a very simplistic characterization of what's happening.
First of all, for all the broken websites there are also a lot of websites that are not broken at all. It's also very easy to make a broken website that is completely server-side rendered, and that actually happens often enough.
Second, SPAs decouple frontend and backend in a very strict way, which can bring enormous organizational benefits. Time-to-market is greatly improved, etc.
This whole "frontend vs backend" dialogue is basically white noise that completely misses the point. Use SPA or not, whatever, in the end it's just a tool to get the job done. Both are prone to errors when handled improperly.
A website that got it completely right is the Dutch corona dashboard called "Coronadashboard" created by the Dutch government: https://coronadashboard.rijksoverheid.nl. It's blazingly fast, extremely well-designed, looks great and the code is of exceptional quality. Also it's open-source, have a look at the code: https://github.com/minvws/nl-covid19-data-dashboard/.
The dashboard is completely written in Javascript. I truly believe a website of such high quality would not be possible without frameworks such as React or Next.js (or whatever other framework and their respective tooling has to offer).
Closing note: let's try to learn more from the websites that got it right than the ones that have failed. It's so easy to be critical, it's much harder to give some praise.
> A website that got it completely right is the Dutch corona dashboard called "Coronadashboard" created by the Dutch government
Not sure what everyone else here is using but you're right, at least for me the website is running buttery smooth in both Firefox and Chrome and the code is of exceptional quality.
> I truly believe a website of such high quality would not be possible without frameworks such as React or Next.js (or whatever other framework and their respective tooling has to offer).
I agree. I wrote my first lines of HTML & CSS almost 20 years ago and back then JavaScript dev was a nightmare. People wouldn't even have been able to create an interactive website like the Coronadashboard. (Of course we're not talking about static websites here – these were already relatively easy back then.) Nowadays, JavaScript dev admittedly still is a nightmare, but there are at least tools like TypeScript, Angular, React and so on that make things a bit less painful and allow experienced web developers to create exceptional frontends. I say "experienced" because the frameworks still come with some pitfalls and bad practices are still very common. (I can't believe how many tutorials about using forms with React still recommend updating the state and re-rendering the entire form upon every.single.keystroke.)
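For what it's worth, a small sketch (my own illustration, not from the comment) of the alternative those tutorials skip: an uncontrolled input that is read once on submit, so React doesn't re-render the whole form on every keystroke. The component and prop names are made up:

    // Illustrative only -- names are hypothetical.
    import { useRef } from 'react';

    function ProfileForm({ onSave }) {
      const nameRef = useRef(null); // uncontrolled: no re-render per keystroke

      function handleSubmit(event) {
        event.preventDefault();
        onSave({ name: nameRef.current.value }); // read the value once, on submit
      }

      return (
        <form onSubmit={handleSubmit}>
          <input ref={nameRef} defaultValue="" placeholder="Display name" />
          <button type="submit">Save</button>
        </form>
      );
    }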
If you are referring to web app time-to-market, I mostly see otherwise.
Here is an example: create a form to edit user settings. In Rails/Django it is pretty straightforward. But if you go React, then you have an API and a component in React, and you have to think about routes, the security of the API, and validations on both the FE and the BE.
There are probably advantages to SPAs, but time to market is not one of them in this case.
If you are referring to creating a mobile app and a webapp then maybe React with React Native is indeed faster. With this I agree. But I personally prefer to wait for Hotwire to launch their mobile framework and use that.
Why do you think it would not be possible to build a website of such quality without frameworks?
On a separate note, I think the dashboard is alright but I wouldn't call it excellent. It's a bit slow and some things, for example mousing over the map, are glitchy.
It’s because frontend devs are taught to use react, so suddenly every website unnecessarily becomes a react app. If every tutorial and class out there teaches people to use hammers (react) then people start using hammers to screw in a lightbulb or perform surgery.
I really believe it's a case of resume driven development. I'm sure there are counter-examples, but the whole move to SPAs, from my vantage point, has been driven by tech people and not leaders. Self-inflicted.
Why SPAs? They're good when you develop capabilities API-first. When we build features, we follow an approach of making the feature possible (customer support can exercise the feature with Postman or cURL), then making it friendly with a UI.
There will be some tweaks and changes to the API to support the UI, but it's rarely drastic, and it ensures that every single capability we build out can be exercised by some other kind of program somewhere.
If you're building components of a larger system (which we do), SPAs and web components atop back-end APIs make sense. If you're building a one-off fill-out form kind of application... No those don't make sense. You don't even need JavaScript for those, if you degrade into just HTML + CSS for users that have shut off JS.
what concerns me more than that is how government sites require google and the like (captcha and analytics at the very least), giving them the opportunity to hoover up all of our sensitive data in one convenient place.
I was surprised recently when I had to fill in a massive form-based site for UK NHS mental health survey stuff. The site appeared as a flat-background, old-style, early-2000s sort of thing with bits of Comic Sans in it. I nearly died when I first saw it, expecting a shit show.
But it turned out to be responsive and fast. It worked perfectly from end to end and had little to no JavaScript. It was by far the best thing I’ve used for years. There were over 100 page transitions in total. It wasn’t an SPA but a classic web site with little or no intelligence. Seemed to be backed by python.
At least regarding the US federal government, it was a known issue that (a) the government wasn't willing to spend enough to hire quality software developers and (b) the government bureaucracy lacked a process to determine whether web software was "good." Specifically, no effective methods of software validation based on modern best practices (the processes they used were descended from the validation processes for material acquisition, which look extremely different from software engineering).
I'm not sure if the problems have been fixed, but they were both recognized and a process was put in place to address them after the healthcare.gov debacle.
> If reddit or nytimes has a bloated, intermittently failing SPA site, that's an annoyance. When it's a form to apply for unemployment, ACA health care, DMV, or other critical services, it's a critical failure.
Unfortunately the incentives are backwards. Typically governments have to choose the lowest bid. Those private companies you mentioned can make more complex tradeoffs.
Sometimes, of course (cough Florida unemployment and covid tracking sites), failure in performance is by design.
The New York Times used to be a mess on mobile where swiping back was a 5 second wait until the SPA synced. It is much better today, so either they ditched the SPA or have now fully simulated a browser’s back and forward behavior.
I went to redesign my blog recently. I was intending to use the Go template engine, but I'm not a frontend person so I wanted to use a frontend framework. I quickly discovered that most of even the purest of CSS frameworks did not have instructions for what I wanted to do. They all used Next.js, Webpack, etc, and were designed to be used in Vue or React. I love component frameworks for their simplicity, but everything that powers them is complex.
> Such sites should be using minimal or no JS. These aren't meant to be pretty interactive sites, they need to be solid bulletproof sites so people can get critical services
Government services were slow and unreliable before computers. The problems aren’t technological.
When I click on a link, or click to submit something, and I realize that it's NOT an SPA, my immediate thought is "this is going to be a nightmare."
When I click a button, and it has to make a request to load the next set of HTML, fully replace the page contents, and probably submit a POST request that will have issues restoring the state of my form if I go back, etc, I feel like I'm on a DMV site from the 90s. Navigating something that is meant to operate as a cohesive application by instead using a series of markup displays is only ever going to be hacky at best. I love using SPAs, because they're actually applications, rather than snapshotted frames of an application running on a remote server.
I exclusively write SPAs these days and I think they are vastly superior for many reasons. The problem is that you can build a shitty product with good tools.
One of my duties is performance improvement, so I’m very familiar with problematic architectures. You can have an instantly loading informational SPA... I tend to use Gatsby for that. For more interactive sites, I prefer vanilla React with lightweight libs, code splitting and sensible caching rules.
I do agree that poor design, reluctance to refactor and lib/tracking heavy apps are very problematic. Isn’t that something that’s always been a problem in Webdev?
I'd like to add a bit more nuance. Modern frontend development provides many opportunities for failure. These failures often make their way into production. I, personally, get great results with modern FE development. My users are happy. I am happy. It's all very successful. All the defenders of modern FE development will likely chime in with the same sentiment.

I also get great results with C, which arguably provides even more catastrophic opportunities for failure. I wouldn't say "C is a failure", but I absolutely hate working in C, and I'm glad we've made improvements to systems programming.

I think the point I'm moving toward is that we absolutely should strive to make better tools for delivering web content, but the reasons for failure are far more complex than the tooling itself. So, let's not frame things in black and white and call something that's been used with large amounts of success a failure. That's not very productive and is just flame bait. The real question is, "What's next?", how are we going to improve the current situation?
I love modern frontend development. I can build apps that scale easily to hundreds of thousands of users. They are fast where they need to be fast, and building components means complexity lives only where it's needed. Static parts are rendered statically, dynamic parts are rendered dynamically. I can write all code for the entire stack in Javascript. The entire workflow is streamlined in a simple way (webpack really isn't that hard to grok). There are templates for every project imaginable. I don't need an entire VPS for my web-apps, just a simple static hosting service.

My apps run on every system and any browser back to IE9 without any issues whatsoever. No complex build tools or linking process or debugging weird architectures needed. Compatibility issues are solved automatically in any modern build pipeline by adding vendor prefixes. New code is transpiled to old code for older browsers. Debugging modern front-end code is a breeze, there are an infinite amount of tools both for styling and the code itself. My modern frontend app takes seconds to build; QT apps I built in the past needed up to twenty minutes, just to check a simple change. No need for that with hot reloading.

My users are happy too: They don't have to download an app which might not work on their specific system. Linux, Windows and Mac users alike can use it on their computers, laptops, tablets and phones without any issue. They can use the default environment a browser provides for an even better user experience (zooming, Screenreaders, upscaling fonts, copying images and text, preferences for dark mode, theming their browsers, saving credentials and auto filling them, sharing stuff, etc.).

Integrating stuff from other companies has never been easier: Paypal, Stripe, Apple Products all provide a well tested and documented library for every framework out there. There are open source packages for anything, it's a FOSS dream. Building prototypes for anything is insanely fast, thanks to modern debugging tools in Chrome and FireFox.
> build apps that scale easily to hundreds of thousands of users
I don't understand this.
Frontend dev is about writing portable software to run on as many runtimes as there are users. There is literally nothing to prevent any frontend software, good or bad, from "scaling", because scaling in terms of users is nonsensical for frontend dev (unless you consider browser compatibility to be scaling, to which statement I'm orthogonal).
Note: meanwhile, back-end dev is about writing a program that accepts as many users as possible, which makes front and back yin-yang together nicely. But maybe I'm overstating things here :shrug:
> Partner meetings are illuminating. We get a strong sense for how bad site performance is going to be based on the percentage of engineering leads, PMs, and decision makers carrying high-end phones which they primarily use in urban areas.
This entire read seems to be that things are much better for YOU, the front-end developer.
And yes, that makes a difference, you can deliver more features, faster, more reliably.
But, if those sites just bog down and take double-digit seconds to load, even when properly deployed on scaleable delivery architectures, with fiber-optic speeds, and CAD/gamer-level power machines, they are junk.
And I've increasingly seen exactly this junk over the last few years. Even (and often especially) the major sites are worse than ever. For example, on the above setup, I've reverted Gmail to the HTML-only version to get some semblance of performance.
Sure, some of this could be related to Firefox's developments to isolate tabs and not reuse code across local containers and sessions, but expecting to get away with shipping steaming piles of cruft because you expect most of it to be pre-cached is no excuse.
Your site might have the look and features of a Ferrari, but if it has the weight of a loaded dump truck, it will still suck. If you are not testing and requiring good performance on a rural DSL line and mid-level laptop (or similar example of constrained performance), you are doing it wrong.
It's still hit or miss every once in a while, depending on the use case. For example, debugging service workers is still a bit of a nightmare. And I've also had some issues debugging a Chrome extension written with Vue CLI + the browser extension plugin.
I guess you are a talented developer working in the right environment. But the vast majority of sites out there (not controlled by some tech behemoth) are usually not optimized, only work in Chrome, are usable only by abled people, and a long etc etc.
This. People see one shitty implementation of SPA and put every new tech stack in the same bucket. We live in a golden age of tooling and the front end is right at the front (pun intended).
Whenever this topic comes up, I only hear praise from devs. Devs only care about what they can do; they never think from the user's side. That's why we have shitty JS-infested sites: people who think everyone has the latest gadgets, sits in air-conditioned rooms, and has unlimited fibre internet just like them. Devs live in a bubble. It has failed users.
There are many reasons why having SPA and rendering your site on the client is a bad idea.
First, you are basically breaking the concept of the web, a collection of documents, not a collection of code that must be executed to get a document. That has many bad effects.
Browsing is slower: you have to download the code of the whole application and wait for it to execute and make further API calls to the server before the page is usable. That can take a couple of seconds, or even more on slower connections. With the old pages rendered server side, not only did you not have this effect, but the browser could also start to render the page even if it was not fully received (since HTML can be parsed as a stream). Not everyone has a fast connection available at all times, and it's frustrating when you have only a 2G network and you cannot do basically anything on the modern internet.
It's less secure, since you are forcing the user to execute some code on their machine just to look at an article in a blog. And JavaScript engines in browsers are one of the most common sources of exploits, given their complexity. Also, JavaScript can access information that can fingerprint your browser to track you, without any particular permissions. Ideally, in a sane world most websites wouldn't require JavaScript, and the browser would show you a popup asking whether to allow the website to execute code (just like they ask for access to the camera).
It breaks any tool that is not a browser. In the old days you could download entire sites with wget onto your hard drive to consult them offline; with "modern" SPAs that's impossible. That of course implies it breaks things like the Wayback Machine, and thus history is not preserved. And search engines penalize sites that are not clean static HTML.
It's also less accessible, since most SPAs don't respect the semantics of HTML: everything is a div in a div in a div. And of course you still need a browser, while with HTML documents you could process them with any software (why does a blind person need to render the page in a browser, when a screen reader could simply have parsed the HTML of the page without rendering it on screen?). It breaks navigation in the browser: the back button no longer works as you expect, reloading a page can have strange effects, and so on. I can't reliably look at the address bar to know what page I'm on.
Finally, it's less robust. SPAs are a single point of failure: if anything goes wrong, the whole site stops working, while a bug on a page of a classical server-side rendered website breaks only that particular page. Also, error handling is absent from most SPAs; for example, what happens if an HTTP request fails? Who knows. Look at submitting a form: most of the time there is no feedback. In a classical server-side rendered application, if I submit a form I either get a response from the server, or the browser informs me that the request failed and I can submit the form again.
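As a rough illustration of the missing feedback described above (the function and element names here are hypothetical, not from any particular site), the fix is usually only a few lines around the request:

    // Sketch only -- saveSettings and statusEl are made-up names.
    async function saveSettings(formData, statusEl) {
      statusEl.textContent = 'Saving...';
      try {
        const res = await fetch('/api/settings', { method: 'POST', body: formData });
        if (!res.ok) throw new Error(`Server responded ${res.status}`);
        statusEl.textContent = 'Saved.';
      } catch (err) {
        // Without a branch like this, a failed request in an SPA gives the
        // user no feedback at all -- the point the comment is making.
        statusEl.textContent = 'Could not save, please try again.';
      }
    }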
Interesting take that I agree with more than I disagree with - however, I think
" there are an infinite amount of tools both for styling and the code itself."
is actually one of the symptoms of the issue at large.
I understand some sites depend on more complex APIs and such - certainly fbook uses the complex stuff that these frameworks do..
However, many people are using wordpress and similar tools simply to have a few basic bits of static info from a few pages display decently on various devices. I too am guilty of such a thing on more than one occasion.
For these cases I am moving towards converting the designs to static html and using a php-based script from coffeecup to handle the contact form part, which in the past I've lazily handled by just adding a two-click plugin from wordpress.
I feel that the CSS standards group really dropped the ball by not having better responsive menus 'built in' - that's a part of the problem that could be solved in the future. Now that grids are built in, the menus that bootstrap does auto-magically for most devices are the missing piece that keeps many sites from being just html/css.
I'd love to go back to netobjects fusion for design, but it has not kept up with the responsive needs of the web. I tried coffeecups site designer in beta and it wasn't for me. I've built dozens of sites using notepad++ and occasionally load up sublimetext for find/replace - but still feel that more visual / wysiwyg type stuff is greatly desired by much of the world.
Wordpress is going more in that design direction as the gutenberg blocks start to work with full-site design and not just page content. And I still keep meaning to take the time to try out the pinegrow site builder - as that might be the replacement for netobjects that I've longed for.
But it's not just me - there are plenty of people who could / would make a site and find things too complex today. About 7 years ago I found someone in the top 3 search results for a home service and inquired about their designer / seo. The guy who was there doing the work told me he made the page in microsoft publisher.
While I'm not advocating for the bloat that Msoft's frontpage pushed into the world, and I know the time of clear-pixel alignment is a distant memory, even though we have an infinite amount of tools it still seems the front end world is more complex than it needs to be.
It is better in some ways and worse in others. I hope CSS gets better menu options so there can be fewer pieces needed for a decent puzzle. I like non-js sites just fine, and building with fewer tools is okay too.
I agree entirely. The worst sites are imo the ones not using modern frontend technologies.
I think people are forgetting how bad it used to be. Loads of jquery spaghetti code everywhere rerendering the page 1000 times with a nested for loop.
Also, web applications have become so much more complex than they were (but still work!). Things like Figma would have been unthinkable even a few years ago. And - even though it's running in a browser - Figma feels far more responsive than Illustrator or Fireworks (RIP), plus it has a load of collaboration tools which those desktop apps don't have.
>Modern frontend development provides many opportunities for failure
I think this is the important thing here. Everything feels less stable, and more prone to breaking, on the modern web. You write some simple HTML, style it with CSS, and write vanilla JS for the parts that need it, and everything feels solid. You start a new project with a framework, and it seems like there is this whole area of your project that is essentially a black box, ready to break (or misbehave) any time.
At some point, there is an irreducible amount of complexity. At that point, adding things like over-wrought frameworks to “make things easier” for the developer ends up pushing the complexity around, like an air bubble trapped under a screen protector.
> You start a new project with a framework, and it seems like there is this whole area of your project that is essentially a black box, ready to break (or misbehave) any time.
Any framework you're not familiar with will feel like this. This isn't something unique to frontend frameworks.
This is a really rose tinted view of the world. Writing JavaScript works ok now the language and browser support has matured. 10 years ago it was a complete nightmare.
Regardless without a framework it ends up with you reinventing core framework features anyway. Yes, a framework driven app is more complex if the page/app is extremely simple. But if it is of any complexity a framework driven app is going to be far easier to maintain and reason about in 99% of situations.
This is partly true, but I would also like to state that things have much improved. I can vividly recall the 2000s, when practically every website would look and behave differently in different browsers. I recall a lot of debugging of client state using poorly scripted PHP websites. Tooling is so much more advanced, and things "breaking" have moved from the user to the developer. In the jQuery days (or even before that) it was very hard to find out if everything was working as it should. Early feedback is a good thing. Understanding a framework is part of understanding what you are shipping to the customer or user. Sure, it might seem more complex, but at least complexity isn't pushed down to the browsers and users.
> You write some simple HTML, style it with CSS, and write vanilla JS for the parts that need it, and everything feels solid.
Bootstrap + jQuery on the front end, Flask or similar with CherryPy on the backend. SQLite or Postgres if you need it. That will handle 99.9% of websites, it will be cheap and easy to develop and host, and deliver a great experience to the end user.
I think a lot of it had to do more with business requirements than art or science. The business folks stroll in and make these generic one-click deploy type things to get sites up and running quickly, at the expense of being vetted by good artists and scientists.
A good analogy I think is any person using MS publisher to create a book layout. Sure it can be done quickly because the software does the thinking; but should it? Good books have skilled typographers designing the proportions of the pages relative to text block, the fonts, et cetera. The end result is a book that one scarcely notices but feels very pleasant to hold and experience.
We need websites that are subservient to their content; sites we scarcely notice but thoroughly enjoy.
EDIT: one last point: I think part of the problem is also that browsers have an identity issue. E.G. they started as a static publishing platform, but now they are a full blown operating system with web assembly etc. As we know, anything that does too many things is inherently complex, and hence “bloated.”
Personally I think most of the "failures" out there are caused by organisational issues not technology issues. Namely, marketing departments given all the power over what the website does and what's bundled with it (trackers, a bunch of heavy/intrusive marketing tools), and optimisation that values increasing revenue above increasing actual usability. (Sometimes these align; often they don't).
You can make a SPA that's a joy to use and loads and runs extremely fast. Hardware has never been faster; browsers and application frameworks have never been as good as this (I'm talking about the web stack). It's really nothing to do with the tools or technologies; it's that marketing insists it needs mixpanel, GTM, optimizely, hotjar, FB pixel, smooch, and god knows whatever else in order to do its job effectively.
What's the business case for less JS? Recently the company I worked for started a project where it seemed pretty clear to me that it would be possible to develop it with very little JS running on the frontend. But I couldn't convince anyone of it, because no one saw enough benefit from doing it that way.
I think modern FE won out primarily because it provides Decorator OO, which has been recommended for GUI development (including by the GoF) for ages, and which also matches perfectly with a tag-based declarative language such as HTML. As such, ditching the "HTML templates" paradigm was a no-brainer for OO devs to start with. For many, "templates", their performance, their restricted mini-language and all the boilerplate that comes with them, just had to die.
But then, with Angular/React/etc, you now not only have 1 project on your hands but 2 different projects that must be compatible: the backend and the frontend. These 2 different projects ought to be in 2 different languages, unless you are developing the backend in JS too - maybe there's even an ORM that can generate migrations for NodeJS these days! But that wasn't the case last time I checked. You also lose the ability to have server-side rendering without the additional effort of deploying a rendering server, with all that comes with it.
People ditched jQuery saying that vanilla JS was just fine, but that turned out not to be quite the case, so there are still releases of jQuery. Meanwhile NodeJS was released, then npm, and then Angular/React/etc, which in my opinion were created with two goals in mind: 0. having OO components for GUI dev, and 1. offering IoC to overcome the difficulty of dealing with custom component lifecycles as in jQuery, which leaves you to monitor the DOM and instantiate/destroy each plugin yourself. Idk if there are other reasons, but it seems to me that apart from that, DHTML is still pretty much the same: you're adding some logic to change tags and attributes anyway.
Today we have a chance to break these silos again with the Web Components and ESM browser features (W3C standards), because they elegantly solve the problems that React/etc seemed to be designed for and do not impose a frontend framework: you can just load the script in your page and start using <custom-tag your-option=...>... The browser will manage the component lifecycle efficiently, and devs can use it in their templates without having to load a framework, so everybody wins.
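To illustrate (my own minimal sketch, not from the linked project; the tag name and attribute are made up): a dependency-free custom element that the browser wires up and tears down itself:

    // Minimal illustrative web component.
    class GreetingTag extends HTMLElement {
      static get observedAttributes() { return ['name']; }
      connectedCallback() { this.render(); }          // called by the browser when inserted
      attributeChangedCallback() { this.render(); }   // called when the attribute changes
      render() {
        this.textContent = `Hello, ${this.getAttribute('name') || 'world'}!`;
      }
    }
    customElements.define('greeting-tag', GreetingTag);
    // Usable in any page, with or without a framework:
    // <greeting-tag name="HN"></greeting-tag>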
This is also a chance to simplify our code and lighten it by throwing away dependencies. Of course, if you want to do full client-side navigation with an API backend you still can, but you can make generic components that you will also be able to reuse in other projects that do not use a particular framework. You need no tool to make a web component; this is an example of a jQuery plugin that was ported to a web component with no dependencies, with browser tests in Python: https://yourlabs.io/oss/autocomplete-light/-/blob/master/aut...
Webpack does a lot, but it's still slow for development; we're seeing the light with Snowpack and esbuild, which allow you to keep webpack for production only (i.e. to generate a bundle in a Dockerfile) and benefit from actually instant reloads thanks to ESM.
So if you go for web components and Snowpack, you get an extremely lightweight toolkit, which I love, that will work for every page that's not an overly complicated web app. But then I thought: I actually don't have that much frontend code, and it would be nice to have it alongside the backend code, so we went for a Python->JS transpiler to develop web components in pure Python, which also replaces templates. It was surprisingly fast to implement: https://yourlabs.io/oss/ryzom#javascript
Whether this improves the situation or not depends on your POV (heck, I'd understand if you even hold it against me), but frontend development tooling is still evolving for sure, and I can see how the browsers (except Safari) are making efforts to simplify DHTML development, because:
“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.” – C.A.R. Hoare, The 1980 ACM Turing Award Lecture
If you adopt microfrontends, you can ship your app as N smaller bundles instead of one monolithic bundle. The Webpack performance in this setup brings me joy. It's not 10ms to bundle, but a 2-3 second cold boot in dev mode is completely acceptable, with rebuilds nearly instantaneous.
This setup is described in more detail in single-spa's documentation.
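Roughly, the registration looks like this (a sketch based on single-spa's documented registerApplication API; the app names and paths are placeholders):

    import { registerApplication, start } from 'single-spa';

    // Each microfrontend ships as its own, independently built bundle.
    registerApplication({
      name: 'dashboard',
      app: () => import('./dashboard/main.js'), // loaded only when active
      activeWhen: ['/dashboard'],
    });

    registerApplication({
      name: 'settings',
      app: () => import('./settings/main.js'),
      activeWhen: ['/settings'],
    });

    start();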
Are they? I'm skeptical we can know that for any given website; specifically, accessibility is a huge pain point for so many sites, and it would seem difficult to know how many people are turned away from a site because it isn't set up to serve them.
Regardless of my tweet, which is a personal opinion based on my feelings, background and experiences, the discussions in the replies are interesting from the POV of understanding what different people in the programming community think. Some will say that critiques come only from old people, in the style of "when I was young we had only zeroes! You now have ones and zeroes". Others say that the web is slow because of ads and trackers, and that modern frameworks would otherwise be fast. There are those who believe that the problem comes from big companies imposing new standards in order to turn the web into the new Java applets (a full-fledged application platform), and so forth.
I read it and feel it's the same as how we perceive modern music. Everyone says music "used to be better" but that's just survival bias. Just like music, there was a LOT of trash web development back in the day as well. Sites built with tables in dreamweaver best viewed on netscape navigator at 800x600 with animated gifs bogging the download speed existed long ago.
What we have today are the same problems with a new wrapper, created by a not dissimilar group of people from the past.
>I read it and feel it's the same as how we perceive modern music. Everyone says music "used to be better" but that's just survival bias.
Well, if the top 100 tracks from, say, 1961 to 2021 progressively get less musically diverse, with simpler chords, fewer harmonies, less timbral variety, lesser melodies, more repetition, less dynamic range, less genre variety, more infantile lyrics (something that has been studied and measured several times), etc, then it's not some "survival bias".
I have a different opinion. When we moved from the old way to semantic HTML5, still generated server side but with the layout defined by the finally mature CSS and the dynamic parts handled by JavaScript and RPCs, everybody agreed it was better. Such a web was faster, more compatible, simpler to parse, and so forth. I think that, had frontend development progressed in the same direction of improvement in the 20 years that followed, everybody would agree again now.
And indeed everybody agrees that JavaScript itself is now better, as are the modern web APIs. If the rest is so controversial, there must be a reason.
We had functional applications for forms and the like, in the 90s, running on machines with 8MB or 16MB of memory. They weren't as pretty, but they had simple development paradigms, VB and Delphi, easy to get started.
HTML has been the wrong tool for creating UIs (as opposed to marked-up text) for such a long time. Things are getting better from multiple angles, but it's still very uneven.
OTOH, i think you can actually measure the quality and accessibility of tooling by how abundant the long tail of mediocre/low-quality output is.
in music’s case, one of the replies to this comment describes a decline in musical complexity/sophistication, which i’d personally attribute to the democratization of the tools (which are also much more powerful, allowing kids w computers to do what took whole teams and studios full of equipment before).
so i think only seeing high quality UIs in the wild is more of a mixed bag than is intuitive to us — a world absent of shitty soundcloud rap is a world with worse music tooling.
I definitely agree with the music survivor bias thing. This is also very noticeable in the “computer generated imagery in movies looks artificial” meme: you just didn’t notice all the extremely effective and convincing CGI.
But I’m curious, what are the examples of great websites “back in the day” that have stood the test of time and would be considered good web development today?
> Everyone says music "used to be better" but that's just survival bias.
Not "just" survival bias.
I think it's quite reasonable to take a position that there was more innovation and creativity in pop music in 1950-2000.
As the genre has matured, popular/commercially successful music has depended more and more on fewer and fewer producers.
Indeed, there are fewer commercially successful artists: the US Billboard Hot 100 Top 20 this week contains 3 tracks by Justin Bieber, 2 by the Weeknd, and 2 by Drake. That would have been unheard of.
If ECMAScript, HTML, and the DOM didn't exist and you were asked to create a specification for applications where the client UI is remote, possibly very resource constrained with a connection to the back end that may be slow and only mostly reliable, what would you invent? Is there a better model already out there that isn't used because Javascript + HTML has sucked all of the oxygen out of the room?
Even Flex and ActionScript were better in many ways than the DOM.
My big problem with web application (in a true application sense - not things that are just hypertext and shouldn't be applications at all!) development is that the DOM+CSS model is not made for rich UI experiences. Basic paradigms from desktop applications like spreadsheet cell selection, draggable cards (think Visio/UML), modals, and MDI / multi-document interfaces are non-standard and brutally challenging to construct in a reasonable way using the DOM.
What I'd invent would pretty much be Silverlight without the Microsoft, honestly - a typed UI framework built on a widget and widget-binding model which would allow a smooth mixture between OS-level widgets (and the accessibility affordances they provide) and hand-controlled rendering/drawing, with a stripped-down runtime enabling resource constrained clients to execute client-side code which would hide / paper over the resource constrained backend connection.
Anyway, I also think this is orthogonal to the argument in this thread, because I think that most of the conversation and the sentiment of the original tweet is to call out applications that SHOULD be hypertext, not applications. For applications that need to be applications, I think things have gotten better, not worse, although they're still pretty bad.
The reason it's so hard to come up with a feasible alternative at this point is that we have done ~15 years of browser wars with perhaps an average of 3k "core" contributors (browser company employees etc) since "we" moved into this direction... pretty much by chance.
My understanding from working at Opera at the time (2004 and onwards) is that the "senior" (experienced) people were busy implementing stuff in the browser engines and various GUI platforms. We hired very young (often like 17-18) and very smart people who had experience actually writing HTML/CSS/JS to work on developing web standards. They naturally typically had very little commercial software development experience.
After a while it kinda became a competition - which browser company's web standards people would be leading in terms of ideas/innovations. How many web APIs could browser company A do, vs browser company B. That's when the complexity really started accelerating. Then Safari and Chrome happened.
I wish we had spent more time working with these web standards people (we had so much experience building GUIs, for instance). They were really friendly and approachable, but we were all so busy with actually building the browsers, during these browser war times. It feels like a missed opportunity, in retrospect.
Java Web Start was a thing, and had some nice aspects to it. JVM startup time has been a problem, and the UI toolkits maybe had some issues? And of course the early JVMs were notoriously full of security holes, which is something browsers somehow managed to avoid.
But nevertheless I think Java/JAWS still got the basic stuff right: run applications directly from the network (with auto-updates), have a security-controlled sandbox, be fully cross-platform and portable, have useful stuff built in, and it even had the PWA-like thing where you could have desktop shortcuts to JAWS applications.
Regarding startup time, I do wonder what the startup time of your typical Electron app would be on late-90s/early-00s PC hardware. Somehow I'm imagining the JVM might not be that bad in comparison...
I recently lost my internet provider, so while I wait for a new one I'm tethering from my phone. The Verizon unlimited data plan actually throttles me to modem speed, about 56k, so it's actually not unlimited. Anyway, it's been interesting to judge various websites by how fast they load at that speed. Hacker News comes right up. Twitter is acceptable. Facebook is terrible and often does not even load at all. The same for any site that uses React. I'm not sure why Facebook uses such a bloated system when they are trying to expand their user base into much of the world that does not have high speed internet.
I travel a lot and have gotten quite used to browsing on airplane WiFi, so a similar low-bandwidth experience (at times).
I'll add a huge culprit to the list: Medium. They have their own "clever" code to progressively load images and I find it absurdly frustrating (because in most scenarios, the images just don't ever load), so I end up with lots of design articles with blurry blobs of color instead of images.
There are so many ways to natively progressively load images that I'm not sure why they've chosen the most user-hostile one. You see blurry blobs of color in no particular order, no indication of which ones are loading, no way to force a particular image, etc. I find myself frustrated often and I end up abandoning most of the stories (or avoiding Medium altogether).
Isn’t progressive loading actually built into the JPEG standard? Like, you get it for free if you encode it for progressive decode. Yet another “let’s use JavaScript” waste of time. Developers gotta develop tho.
I remember tracking down corrupt entries in our DB. It was mostly one user introducing the inconsistencies. Turns out he would double-click on every button, and the browser would happily send-abort-send two requests every time. Sometimes these would race on the server.
We implemented disable-on-submit after that, and the inconsistencies went away. Other people would click again when the response didn't come fast enough, but that was rare to lead to corruption. Probably when their connection was lagging, they would click multiple times in frustration. But that one guy provoked enough destruction to make us notice and fix it for everybody!
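Something like the following is usually all it takes (a sketch with made-up element IDs, not the actual fix from that codebase):

    // Disable-on-submit: the first click disables the button, so a
    // double-click can no longer fire two overlapping requests.
    document.getElementById('payment-form').addEventListener('submit', (event) => {
      const button = event.target.querySelector('button[type="submit"]');
      button.disabled = true;
      button.textContent = 'Submitting...';
      // The normal form POST continues; if it fails, reloading the page
      // brings the button back.
    });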
I believe Twitter uses React? Although it’s possible that Twitter successfully served you Twitter-Lite while FB may have mistakenly sent the full version.
Funny, I just went through a similar experience and had to tether off a Verizon connection for almost two weeks.
I certainly felt the pain of a slow connection too and felt frustrated at how badly this affected the experience on so many sites.
Here’s an idea: web developers should test their sites on the slowest connection speed still commonly available (ie 3G) and make sure the experience is still acceptable. I know that webpagetest [1] allows you to do this and the results are illuminating.
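If you want to bake that check into a script instead of doing it by hand, something along these lines works (a sketch using Puppeteer's network-throttling support; the URL is a placeholder):

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      // Built-in presets include 'Slow 3G' and 'Fast 3G'.
      await page.emulateNetworkConditions(puppeteer.networkConditions['Slow 3G']);

      const start = Date.now();
      await page.goto('https://example.com', { waitUntil: 'load' });
      console.log(`Loaded in ${Date.now() - start} ms on simulated Slow 3G`);

      await browser.close();
    })();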
I’m living overseas and this happens with my phone from the US. It’s unbelievable how many apps time out. Why an app has implemented a timeout instead of relying on the network stack is beyond me.
If not that, maybe try PdaNet. It is $10-15 for a license and it can mask that you're tethering through a tunnel from desktop->phone, so everything gets counted as mobile data instead of hotspot data.
It appears to be an app for a phone? I have no throttling problem on the phone in case that wasn't clear, but I don't care to look at the small screen and want to use my desktop.
I'm not counting images and videos because all sites have that problem, but things like clicking on the notifications icon and then waiting for it to load, and it never does, so I have to give up and go back to my phone.
This topic seems to come up on Hacker News every few months or so. Not saying the post is necessarily wrong there, but it's certainly something people here love discussing nonetheless.
As for why front end development may be a bit of a mess here? Well, it's really a problem that doesn't have just one cause.
On the one hand, business pressures likely have a huge impact here. Companies love analytics, tracking, ads, etc, all of which contribute greatly to the issue of slow loading, broken sites. If you include everything and the kitchen sink, then things will clash or time out or break.
There's also an issue with businesses focusing on speed and work getting done quickly rather than well. If you're given unrealistic deadlines, and the feature set is changed midway through development because some manager/sales team thought of a shiny new idea that needs implementing... well, it seems like optimisation, usability, etc often end up on the cutting room floor.
And yes there's a bit of envy from developers towards Facebook/Google/Apple/whatever there too, and a need to 'test out the shiny new toys' so they can potentially get hired there in future. I suspect CV padding is definitely why SPAs are way more common than they used to be, and why simple static or server rendered sites seem nearly nonexistent now.
I am not an expert chef. I can cook pretty well, and my guests enjoy my food. But it doesn’t compare to what a professional chef can do.
I am not an expert abdominal surgeon. If we are in Antarctica and your appendix becomes inflamed, I will try to save you by cutting it out. But you will probably die.
I am not an expert front-end developer. I make sites for myself, for others, and have even been paid for it a few times. But I know I am basically an amateur. However, my sites are far, far better than almost all sites in the wild, created by teams of full-time, professional front-end developers. They work identically in all browsers, are fast, easy to navigate, and people say they look good. They validate, too, except for a few nonstandard htmx attributes.
There is something very strange in the WWW, when a self-taught amateur like me can make a site that is better than the New York Times’.
Your sites are better at being tools ordinary human beings can use to their own benefit. They are (probably) terrible at generating money via ad revenue and tricking people into buying subscriptions that are then nearly impossible to cancel.
Making a site to share info and have fun is easy. Nearly anyone can do it. Making a site to actively exploit people in the most intense way possible while still being legal(ish) takes highly-trained experts.
Exactly. But I am using the measure of quality that I think is relevant. We call a chef good when we like the food, not when he finds a way to enrich a fast food chain.
There is something very apt in your first line: A lot of sites suffer from too many cooks in the kitchen.
If every department proves its worth by claiming space on the front page, customer experience will be the last concern. That's before someone notices how giving content away is not a monetization strategy. Then add the siren song of 'only one more script' for marketing data, even if nobody has a clue about what to do with that data.
It's like the Million Dollar Homepage in a sense. Every stakeholder in the business wants their stuff in there somehow. Except we're gravitating toward the Million Kilobyte Homepage.
not saying that you’re wrong, but do you have the same constraints as the “crappy” websites?
there is definitely an explosion of tooling, frameworks, etc, but past some obvious things, IMHO most websites are crippled by business people (who will likely not be using the website) making all sorts of technical decisions about what must go in and how it's supposed to work (ads and tracking crap is one of the things that jumps out).
No, I don’t have all the same constraints. For example, I would walk away if any of the organizations that I work with insisted on tracking visitors. I stopped taking paying customers years ago for this and similar reasons. But if a site is bad for the reader, it’s bad. The developer has failed, even if the customer got what it wanted.
The measure of what is better in all these companies is determined internally and generally by the business. So it ultimately serves the business and only serves the user indirectly, if at all. Obviously you optimize for different things: performance, reliability, simplicity perhaps. And the business has more stakeholders who want different things, more metrics, more integrations with their third-party tools, etc. That's not to say your measure of better is wrong. It's probably not and I probably agree with you! But it's coming from a different place I think.
I appreciate your comment. But isn’t my conception of quality the only one that matters? If a doctor saves money for her employer by skipping some expensive test, and my health outcome is worse, her employer may be happy, but we don’t say that she is a good doctor.
It's kind of like saying that you as an amateur chef can out-cook the line cook at a cafeteria who prepares a thousand meals because your best-prepared meal is better than their offering. That actually may be true for that specific case. And sure, your site works great when a couple of people look at it. But what happens when you direct the entirety of the NYT's traffic to it to see how it does?
No, not at all. It is more like saying that I make better food than almost all restaurants in town. Which of course is not true, and that’s the point. Because I do make better websites than almost any I come across in the wild.
Your question about handling traffic is orthogonal to the topic of design. The answer is that any of my sites would do better, given the same server architecture, because I deliberately limit the amount that needs to be transferred for any particular page.
I’m an amateur builder but the work I’ve done on my house is better than most of the work I’ve seen done by various trades on my friends’/family’s places.
Quality you get from strangers correlates with how easily the average customer can judge the work (food is easy to judge). Sometimes things are important enough to be regulated (healthcare) otherwise most markets are for lemons.
Easy: take the front page and remove every headline and introductory paragraph that, for some reason that boggles my mind, is repeated, sometimes more than once, sometimes more than twice, on various areas of the enormous page. Now you have a page with the same information that is lighter and easier to find things on. And less stupid.
Another example: sometimes there is a stock ticker near the top of the front page, and the number of digits it displays changes as the ticks go by. But they did it wrong, and when this happens the entire layout jumps. I learned how not to make this kind of mistake near the beginning of my self-education in amateur web design.
Yeah it's strange. For example, Facebook has billions of dollars, thousands of developers, and they themselves created the front end framework that they are using - yet Facebook's front end is slow and glitchy.
People trot this one out every now and again and what they mean is “front end used to be the easy part of the stack”. If you lived in the bad old days of jQuery spaghetti and inline PHP templates you’d realize what a bad take this is. Yes it used to be that anyone could jump into the front end. And the front end _sucked_ because _anyone could jump into the front end_ and so they did. There was no organization. No architecture. Nothing. Just a bunch of files that got harder and harder to maintain as the app increased in functionality and scope.
Modern FE is a discipline every bit as complex as anything that we face in the backend. It requires that we actually apply an architecture. Design our code so that we can respond to changes in our business requirements etc. Serious engineering rigor in other words. Hell I could make the argument that in some ways backend dev is much more straightforward.
I think it’s fair to say that we often put too much business logic into the front end. Fat clients are not something I agree with. But to say things used to be better is just flat out incorrect.
> I think it’s fair to say that we often put too much business logic into the front end.
As a back-end dev, this reminds me of the time I used our own product and saw a read-only field on the UI. It was some interesting bit of data that only existed in the database (we didn’t expose it in the API) and I brought up how cool it was that we were doing that now. The front end dev said, “oh, it’s not from the database, we make several API calls to get the bits we need and then calculate it the same way we do on the backend”
I facepalmed. Like, just ask to expose that data, it’s a single line of code! Instead we made 7 API calls... :sigh:
Man I’d love it if our backend teams were that willing to make changes for us. So often I stumble across bizarre, complicated business logic in our various client code bases and ask the devs “why is this here? surely this is better put on the backend/already exists on the backend” and am greeted with “we agree, but when we asked the backend teams to make a change on their end they told us it would be done in the next year or two”.
> I facepalmed. Like, just ask to expose that data, it’s a single line of code! Instead we made 7 API calls... :sigh:
The reality here is that this person who worked in your company, along with everyone else who saw that code and deployed it, all thought it was easier to do what they did than ask you to expose that data with one line of code.
Yeah I generally think the UI should only manage stuff like that cosmetically. Stuff like form validation etc. The real work should happen on the backend and not care about UI stuff. So it just throws if the user tries to get cute. IMO that is a nice separation of concerns and keeps the client thin and presentational in nature.
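A rough sketch of that split, for the record: the client may validate for friendliness, but the server re-checks and simply rejects. This is an Express-style handler; the route, the saveSettings helper, and the auth middleware providing req.user are all my own illustrative assumptions, not anything from this thread.

    const express = require("express");
    const app = express();
    app.use(express.json()); // parse JSON bodies

    // Authoritative validation lives here, independent of whatever the UI shows.
    app.post("/settings", async (req, res) => {
      const { email } = req.body || {};
      if (typeof email !== "string" || !email.includes("@")) {
        return res.status(422).json({ error: "Invalid email" }); // the user "got cute"
      }
      await saveSettings(req.user.id, { email }); // hypothetical persistence helper
      res.json({ ok: true });
    });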
HN seems to be pretty split on this issue. I concur with the linked tweet. Using the web beyond very simple pages like HN feels like wading through garbage.
Everything is slow, laggy, loads ridiculous amounts of stuff (I live in a country where traffic is very expensive at the moment, so this becomes more noticeable). Things are also flaky and need reloading whenever they get into broken states. The only way I use the web now is with uMatrix blocking everything, and then whitelisting stuff that pages want to run piecemeal. It's still terrible.
The larger a company is, the less usable its web stuff is also. Google Chat for example is unusable in very active rooms.
I hope that frameworks like Hotwire make server-side rendering "cool" again and that we can get out of this tar pit.
The reason I’m interested in Hotwire is because the Javascript frameworks are confusing. Every new Javascript framework I’ve looked into has been impossible to figure out. There are simply too many moving parts for me to be able to figure out where to start.
It may be a bad example, but it’s the latest I’ve attempted to use. It took me maybe an hour to get an Angular project started. It somehow managed to install 1000+ dependencies (a few of which seem deprecated) and I have no idea what I’m supposed to do next.
You get the feeling that frontend development is a stack of tools three levels deep and you’re not expected to understand how or why. It feels unstable. At this point I just avoid anything that requires npm.
That being said, I do see very nice projects built using these tools.
It’s happening for a very particular reason imho. Frameworks like React create infinite ways one can structure and compose components. What once used to be a <Select> can now become
<DropDown>
And or
<EnhancedDropdown disableEnhance={isEnhanced}>
And or
<MultiSelectWithAutocompleteAndSearchBar onlySingleSelectionAllowed={true}>
Then you can compose all of those together into:
<ThisComponentMakesSenseToOnlyMeDropDown show={isDropdown && !Carousel && showCarousel} totallyDifferentData={totallyDifferentData} enhanceWith={<EnhancedDropdown />} replaceWith={<CarouselWithNoDropDown />}>
^ That’s where all the complexity is coming from. I will not even attempt to demonstrate my point by adding context and global stores into this.
Small bit of bitterness:
Then you make a nice little storybook component, and a jest snapshot to show that this is a nice ‘testable’ component, you know, like really dot your I’s and cross your T’s.
Back to my point:
This is powerful in the purest sense, as it’s super flexible, but also powerful in a way that can create insane amounts of complexity with different mindsets contributing. Not everyone sees a regular <DropDown>, some see all kinds of things.
I think a lot of the fragility (perhaps the better word is instability) comes from this power (chaotic, out of control power). I hope web components at the very least give us a standard list of UI components (the browser doesn’t even have a default modal yet, still rolling with alert(), and if you leave it to the wider community to make it, we will end up with <AbstractModal>).
My honest take is that the frontend development stack focuses almost entirely on the developer experience (mostly oriented towards shiny things), and the user experience is only a secondary effect of that.
That the developer experience also doesn't work is just an effect of the real-world, where people will end up using stuff outside of the small designated bucket of things that somebody attempts to keep compatible with each other.
I built a side project recently using Django/Hotwire. There's some JS, sure, but it's used where appropriate (media API stuff basically). Lighthouse gives 100% accessibility and best practices scores and performance comes in at over 90% on a good day, with performance issues mostly fixed with some database indexing and caching here and there (it runs on a single shoestring Digital Ocean droplet, so it's never going to be super fast or scalable without a bigger budget, but for the small traffic it gets it's fine). I feel I can reason about how it all works in my head, and fix bugs and add features quite easily. It was fun to build, and I was able to focus on interesting problems.
At the back of my mind is the feeling that somehow I'm doing it all wrong, and it should use a proper JS frontend framework like React or Vue that communicates with the backend with a proper REST API or better yet, GraphQL. I realize it's probably not the kind of project I should use to show off on my resume and that many will just consider it old school. At the same time though it does feel that maybe the industry took a wrong turn when it went all-in on SPAs.
To be honest, I think it's browser-specific. I use Brave, and everything is snappy. Occasionally, I use someone else's computer w/ stock Chrome or stock Safari, and it's a total shitshow. Decent ad / tracker blocking makes a massive difference. I don't think it's frameworks as much as it's all of the other analytics and bloat.
Your comment made me legit chuckle. Rather than "waiting" after each click, we get an instant 'page load', half of which can't be interacted with (looking at you, Amazon.com), and then a sea of loading spinners -- all under the guise of 'not waiting'. I'm not sure which of those I prefer, tbh.
I'm partial to good ol' fashioned SSR sites these days, as I do most of my casual couch browsing on an old chromebook running Ubuntu. The number of heavy-weight SPAs that I just can't run on the hardware anymore seemingly climbs by the day. :(
There is - by definition - more overhead in client-side rendering. Think of it this way: The server needs to first serialise data, the client needs to deserialise it, then it needs to construct the DOM to update.
The server can just do the first step and things like Hotwire can add the dynamic bits you need - all overhead and extra processing is now gone.
I can list a lot of websites that don't render client-side that are fast, but very few that do which are.
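A rough sketch of the overhead being described, with made-up endpoint names (not anything from this thread):

    // Client-side rendering: three steps land on the user's device.
    async function renderClientSide(listEl) {
      const res = await fetch("/api/articles");   // server serialises JSON
      const articles = await res.json();          // client deserialises it
      listEl.innerHTML = articles                 // client builds the DOM
        .map(a => `<li>${a.title}</li>`)
        .join("");
    }

    // Server-rendered fragment (the kind of thing Hotwire automates): the server
    // already produced HTML, so the client only swaps it into place.
    async function renderServerFragment(listEl) {
      const res = await fetch("/articles/fragment");
      listEl.innerHTML = await res.text();
    }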
is it painful for you to click on HN?
I find this website rapid fast.
Also because you don't have long running JS processes the RAM consumption is low and everything feels very quick, unlike most SPA garbage
I'm more of the classic take that the "problem" is performance has continued to grow, meaning we get more stuff made faster but the tradeoff is it runs at the same speed. Client side or server side, there is no reason the app needs 10 MB of JS logic to do its job; it just made it quicker and easier to deploy to have it use 10 MB of JS logic. For some things, like required government services, this is a real problem, but for most things this is just reality: how fast a piece of software is isn't the only benchmark software is made against, often not even in the top 3 things it's checked against.
——
A simple checklist to provide superior front end applications:
* Don’t use this (the JavaScript this keyword). You (general hypothetical you) probably don’t realize how easily you can live without it only because you have never tried. Doing so will dramatically shrink and untangle your spaghetti code.
* Don’t use addEventListener for assigning events. That method was added around the time of ES5 to minimize disruption so that marketers could add a bunch of advertising, spyware, and metric nonsense everywhere more easily without asking permission from developers. That method complicates code management, is potentially a source of memory leaks, and performs more slowly.
* Don’t use querySelectors. They are a crutch popularized by the sizzle utility of jQuery. These are super epic slow and limit the creative expression of your developers because there is so much they can’t do compared to other means of accessing the DOM.
I now add ESLint rules to my code to automate enforcement of that tiny checklist.
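For what it's worth, rules along those lines can be expressed with stock ESLint; a sketch of what such a config could look like (the messages and scope are mine, not the commenter's actual rules):

    // .eslintrc.js -- a sketch of rules approximating the checklist above
    module.exports = {
      rules: {
        // flag every use of `this`
        "no-restricted-syntax": [
          "error",
          { selector: "ThisExpression", message: "Avoid `this`." },
        ],
        // flag addEventListener and the querySelector family on any object
        "no-restricted-properties": [
          "error",
          { property: "addEventListener", message: "Assign handlers directly, e.g. el.onclick = fn." },
          { property: "querySelector", message: "Prefer getElementById or retained element references." },
          { property: "querySelectorAll", message: "Prefer getElementById or retained element references." },
        ],
      },
    };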
> Quite the contrary, it's slower to send down 1 MB or 10 MB of JavaScript to render a page, than to simply send down a 100 KB HTML page. Even if you need some JS also, browsers know how to render concurrently with downloads, as long as you provide enough HTML.
The argument I remember from years ago used "slower" as a bad simplification. What it actually meant was, doing the rendering server-side wasted CPU the server could be using to respond to another request. Instead, just send the data and distribute some of this processing to all your users by way of client-side rendering.
Also, back then bundling was a lot rarer than it is now, so the large libraries that made up most of those 10 MB JavaScript files would be separate files the browser can keep cached.
sometimes it can be done right.
And it uses a framework :-)
E.g. it says ‘what is your date of birth?’ And then on the next page it says ‘You are X years old. Is that correct?’
At the end, just before submission it showed the completed sections, and you could look at each section to see a tidy summary of the answers provided. Except for section 1, where doing that jumped to the last of the section's questions instead.
I wanted to see one of the answers I had given in section 1 to check before committing, and due to the missing summary page, tried stepping backwards and forwards through each question. All were shown, except the question I wanted to check (and had answered) was skipped.
Inspired to try things, I added a non-existent person to the household, then removed the non-existent person.
After that, when I stepped through all questions in section 1 it included the question and answer I'd been looking for, allowing me to confirm it was correct before submission.
I work on one such project and it absolutely drives me nuts -- it's a rails app, but the customer front-end (which is literally just a form to fill out) is a React SPA. There is nothing there that couldn't be done with Turbolinks and some light JS for validations/popups.
And the tech doesn't really matter. I hate React with a passion, because Angular is so much more sane - in my experience. But it's fine. It's mature, it can be made to perform completely well.
The tech doesn't really matter. The people don't matter either. Even the costs don't matter as much as people think. What matters is political will and procurement culture, so systems and structures. These will influence (and bring) all the others in line.
First of all, for all the broken websites there are also a lot of websites that are not broken at all. It's also very easy to make a broken website using a completely server-side rendered website, and that actually happens often enough.
Second, SPAs decouple frontend and backend in a very strict way, which can bring enormous organizational benefits. Time-to-market is greatly improved, etc.
This whole "frontend vs backend" dialogue is basically white noise that completely misses the point. Use SPA or not, whatever, in the end it's just a tool to get the job done. Both are prone to errors when handled improperly.
A website that got it completely right is the Dutch corona dashboard called "Coronadashboard" created by the Dutch government: https://coronadashboard.rijksoverheid.nl. It's blazingly fast, extremely well-designed, looks great and the code is of exceptional quality. Also it's open-source, have a look at the code: https://github.com/minvws/nl-covid19-data-dashboard/.
The dashboard is completely written in Javascript. I truly believe a website of such high quality would not be possible without frameworks such as React or Next.js (or whatever other frameworks and their respective tooling have to offer).
Closing note: let's try to learn more from the websites that got it right than the ones that have failed. It's so easy to be critical, it's much harder to give some praise.
Not sure what everyone else here is using but you're right, at least for me the website is running buttery smooth in both Firefox and Chrome and the code is of exceptional quality.
> I truly believe a website of such high quality would not be possible without frameworks such as React or Next.js (or whatever other frameworks and their respective tooling have to offer).
I agree. I wrote my first lines of HTML & CSS almost 20 years ago and back then JavaScript dev was a nightmare. People wouldn't even have been able to create an interactive website like the Coronadashboard. (Of course we're not talking about static websites here – these were already relatively easy back then.) Nowadays, JavaScript dev admittedly still is a nightmare but there are at least tools like TypeScript, Angular, React and so on that make things a bit less painful and allow experienced web developers to create exceptional frontends. I'm putting "experienced" here because the frameworks still come with some pitfalls and bad practices are still very common. (I can't believe how many tutorials about using forms with React still recommend updating the state and re-rendering the entire form upon every.single.keystroke.)
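To illustrate that keystroke point with a minimal sketch (component names are mine, not from any particular tutorial): a controlled input calls setState and re-renders on every keypress, while an uncontrolled input lets the browser own the value and only reads it on submit.

    import React from "react";

    // Controlled: every keystroke triggers a state update and a re-render.
    function ControlledEmail() {
      const [email, setEmail] = React.useState("");
      return <input value={email} onChange={e => setEmail(e.target.value)} />;
    }

    // Uncontrolled: the DOM keeps the value; read it once, on submit.
    function UncontrolledEmail({ onSubmit }) {
      const ref = React.useRef(null);
      return (
        <form onSubmit={e => { e.preventDefault(); onSubmit(ref.current.value); }}>
          <input ref={ref} defaultValue="" name="email" />
          <button type="submit">Save</button>
        </form>
      );
    }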
Can you please clarify this?
If you are referring to a web app time to market I mostly see otherwise.
Here is an example: create a form to edit user settings. In Rails/Django it is pretty straightforward. But if you go React then you have an API and a component in React, and have to think about routes, API security, and validations on both FE and BE.
There are advantages for SPA probably but time to market is not one of them in this case.
If you are referring to creating a mobile app and a webapp then maybe React with React Native is indeed faster. With this I agree. But I personally prefer to wait for Hotwire to launch their mobile framework and use that.
I've experienced and have witnessed quite the opposite with SPAs
On a separate note, I think the dashboard is alright but I wouldn't call it excellent. It's a bit slow and some things, for example mousing over the map, are glitchy.
There will be some tweaks and changes to the API to support the UI, but it's rarely drastic, and it ensures that every single capability we build out can be exercised by some other kind of program somewhere.
If you're building components of a larger system (which we do), SPAs and web components atop back-end APIs make sense. If you're building a one-off fill-out form kind of application... No those don't make sense. You don't even need JavaScript for those, if you degrade into just HTML + CSS for users that have shut off JS.
I was surprised recently as I had to fill in a massive form-based site for UK NHS mental health survey stuff. The site appeared as a flat background, old style early-2000s sort of thing with bits of comic sans in it. I nearly died when I first saw it, expecting a shit show.
But it turned out to be responsive and fast. It worked perfectly from end to end and had little to no JavaScript. It was by far the best thing I’ve used for years. There were over 100 page transitions in total. It wasn’t an SPA but a classic web site with little or no intelligence. Seemed to be backed by python.
I want this back.
That's surprising - even external stuff usually has to follow the NHS's design system.
https://service-manual.nhs.uk/design-system
Right. It probably IS an old site and they've had years to iron-out bugs.
At the same time, the real question is whether lots of governments really need to reinvent everything. That is maybe the original sin.
I'm not sure if the problems have been fixed, but they were both recognized and a process was put in place to address them after the healthcare.gov debacle.
Unfortunately the incentives are backwards. Typically governments have to choose the lowest bid. Those private companies you mentioned can make more complex tradeoffs.
Sometimes, of course (cough Florida unemployment and covid tracking sites), failure in performance is by design.
Government services were slow and unreliable before computers. The problems aren’t technological.
When I click on a link, or click to submit something, and I realize that it's NOT an SPA, my immediate thought is "this is going to be a nightmare."
When I click a button, and it has to make a request to load the next set of html, fully replace the page contents, and probably submitted a post request that will have issues restoring the state of my form if I go back, etc, I feel like I'm on a DMV site from the 90s. Navigating something that is meant to operate as a cohesive application by instead using a series of markup displays is only ever going to be hacky at best. I love using SPAs, because they're actually applications, rather than snapshotted frames of an application running on a remote server.
One of my duties is performance improvement, so I’m very familiar with problematic architectures. You can have an instantly loading informational SPA... I tend to use Gatsby for that. For more interactive sites, I prefer vanilla React with lightweight libs, code splitting and sensible caching rules.
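For reference, the route-level code splitting mentioned here can be as small as the following sketch (component and file names are illustrative):

    import React, { Suspense, lazy } from "react";

    // The heavy chart bundle is only fetched when this route actually renders it.
    const HeavyChart = lazy(() => import("./HeavyChart"));

    export default function ReportPage() {
      return (
        <Suspense fallback={<p>Loading chart…</p>}>
          <HeavyChart />
        </Suspense>
      );
    }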
I do agree that poor design, reluctance to refactor and lib/tracking heavy apps are very problematic. Isn’t that something that’s always been a problem in Webdev?
It's much better than people make it out to be.
I don't understand this.
Frontend dev is about writing portable software to run on as many runtimes as there are users. There is literally nothing to prevent any frontend software, good or bad, from "scaling", because scaling in terms of users is nonsensical for frontend dev (unless you consider browser compatibility as scaling, to which statement I'm orthogonal).
Note: meanwhile, back end dev is about writing a program that accepts as many users as possible, which make front and back nicely yin-yang together. But maybe I'm overstating things here :shrug:
https://infrequently.org/2021/03/the-performance-inequality-...
More context in older post:
https://infrequently.org/2017/10/can-you-afford-it-real-worl...
> Partner meetings are illuminating. We get a strong sense for how bad site performance is going to be based on the percentage of engineering leads, PMs, and decision makers carrying high-end phones which they primarily use in urban areas.
And yes, that makes a difference: you can deliver more features, faster, more reliably.
But, if those sites just bog down and take double-digit seconds to load, even when properly deployed on scalable delivery architectures, with fiber-optic speeds, and CAD/gamer-level power machines, they are junk.
And I've increasingly seen exactly this junk over the last few years. Even (and often especially) the major sites are worse than ever. For example, on the above setup, I've reverted Gmail to the HTML-only version to get some semblance of performance.
Sure, some of this could be related to Firefox' developments to isolate tabs and not reuse code across local containers and sessions, but expecting to get away with shipping steaming piles of cruft because you expect most of it will be pre-cached is no excuse.
Your site might have the look and features of a Ferrari, but if it has the weight of a loaded dump truck, it will still suck. If you are not testing and requiring good performance on a rural DSL line and mid-level laptop (or similar example of constrained performance), you are doing it wrong.
> Debugging modern front-end code is a breeze
Is still hit or miss every once in a while depending on the use case. For example debugging service workers is still a bit of a nightmare. And I've also had some issues debugging a Chrome extension written with Vue CLI + the browser extension plugin.
First, you are basically breaking the concept of the web, a collection of documents, not a collection of code that must be executed to get a document. That has many bad effects.
Browsing is slower: you have to download the code of the whole application and wait for it to execute and make further API calls to the server before the page is usable. That can take a couple of seconds, or even more on slower connections. With the old pages rendered server side, not only did you not have this effect, but the browser could also start to render the page even if it was not fully received (since HTML can be parsed as it streams). Not everyone has a fast connection available at all times, and it's frustrating when you have only a 2G network available and you cannot do basically anything on the modern internet.
It's less secure, since you are forcing the user to execute some code on their machine just to look at an article in a blog. And JavaScript engines in browsers are one of the most common sources of exploits, given their complexity. Also, JavaScript can access information that can fingerprint your browser to track you, without particular permissions. Ideally, in a sane world most websites wouldn't require JavaScript, and the browser would show you a popup to allow the website to execute code (just like they ask for access to the camera).
It breaks any other tools that are not a browser. In the old days you could download entire sites with wget to your hard drive to consult them offline; with "modern" SPAs that is impossible. Of course that implies it breaks things like the Wayback Machine, and thus history is not preserved. And search engines penalize sites that are not clean static HTML.
It's also less accessible, since most SPAs don't respect the semantics of HTML; everything is a div in a div in a div. And of course you still need a browser, while with HTML documents you could process them with any software (why does a blind person need to render the page in a browser, when a screen reader could have simply parsed the HTML of the page without rendering it on screen?). It breaks navigation in the browser: the back button no longer works as you expect, reloading a page could have strange effects, and so on. I can't reliably look at the address bar to know the page I'm on.
Finally, it's less robust. SPAs are a single point of failure: if anything goes wrong the whole site stops working, while a bug on a page of a classical server-side rendered website breaks only that particular page. Also, error handling is not present in most SPAs. For example, what happens if an HTTP request fails? Who knows. Look at submitting a form: most of the time there is no feedback. On a classical server-side rendered application, if I submit a form I either get a response from the server, or the browser informs me that the request failed and I can submit the form again.
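A sketch of the missing feedback path described above (the endpoint and the showMessage helper are hypothetical, not anything from this thread):

    async function submitComment(formEl) {
      try {
        const res = await fetch("/api/comments", { method: "POST", body: new FormData(formEl) });
        if (!res.ok) throw new Error(`Server responded ${res.status}`);
        showMessage("Saved.");
      } catch (err) {
        // Without this branch the user gets the silent failure described above.
        showMessage(`Could not save, please try again (${err.message}).`);
      }
    }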
It's actually one of the symptoms of the issue at large, I think.
I understand some sites depend on more complex APIs and such - certainly fbook uses the complex stuff that these frameworks do.
However many people are using wordpress and similar tools simply to have a few basic static bits of info from a few pages display decently on various devices. I too am guilty of such a thing on more than one occasion.
For these cases I am moving towards converting the designs to static html and using a php based script from coffeecup to handle the contact form part that in the past I've lazily handled with a two click plugin from wordpress.
I feel that the CSS standards group really dropped the ball by not having better responsive menus 'built in'; that's a part of the problem that could be solved in the future. Now that grids are built in, the menus that bootstrap does auto-magically for most devices are the missing piece that keeps many sites from being just html/css.
I'd love to go back to netobjects fusion for design, but it has not kept up with the responsive needs of the web. I tried coffeecup's site designer in beta and it wasn't for me. I've built dozens of sites using notepad++ and occasionally load up sublimetext for find/replace - but still feel that more visual / wysiwyg type stuff is greatly desired by much of the world.
Wordpress is going more in that design direction as the gutenberg blocks start to work with full site design and not just page content. And I still keep meaning to take the time to try out the pinegrow site builder - as that might be the replacement for netobjects that I've longed for.
But it's not just me - there are plenty of people who could / would make a site and find things too complex today. About 7 years ago I found someone in the top 3 search results for a home service and inquired about their designer / seo. The guy who was there doing the work told me he made the page in microsoft publisher.
While I'm not advocating for the bloat that Msoft's frontpage pushed into the world, and I know the time of clearpixel alignment is a distant memory, even though we have an infinite amount of tools it still seems the front end world is more complex than it needs to be.
It is better in some ways and worse in others. I hope CSS gets better menu options so there can be fewer pieces needed for a decent puzzle. I like non-js sites just fine, and building with fewer tools is okay too.
I think people are forgetting how bad it used to be. Loads of jquery spaghetti code everywhere rerendering the page 1000 times with a nested for loop.
Also, web applications have become so much more complex than they were (but still work!). Things like Figma would be unthinkable even a few years ago. And - even though it's running in a browser - Figma feels far more responsive than Illustrator or Fireworks (RIP), plus it has a load of collaboration tools which these desktop apps don't have.
I think this is the important thing here. Everything feels less stable, and more prone to breaking, on the modern web. You write some simple HTML, style it with CSS, and write vanilla JS for the parts that need it, and everything feels solid. You start a new project with a framework, and it seems like there is this whole area of your project that is essentially a black box, ready to break (or misbehave) any time.
Any framework you're not familiar with will feel like this. This isn't something unique to frontend frameworks.
Regardless, without a framework you end up reinventing core framework features anyway. Yes, a framework-driven app is more complex if the page/app is extremely simple. But if it is of any complexity a framework-driven app is going to be far easier to maintain and reason about in 99% of situations.
Bootstrap + jQuery on the front end, Flask or similar with CherryPy on the backend. SQLite or Postgres if you need it. That will handle 99.9% of websites, it will be cheap and easy to develop and host, and deliver a great experience to the end user.
Now one component will be: server-side HTML, client-side HTML, CSS and JS.
A good analogy I think is any person using MS publisher to create a book layout. Sure it can be done quickly because the software does the thinking; but should it? Good books have skilled typographers designing the proportions of the pages relative to text block, the fonts, et cetera. The end result is a book that one scarcely notices but feels very pleasant to hold and experience.
We need websites that are subservient to their content; sites we scarcely notice but thoroughly enjoy.
EDIT: one last point: I think part of the problem is also that browsers have an identity issue. E.g. they started as a static publishing platform, but now they are a full blown operating system with web assembly etc. As we know, anything that does too many things is inherently complex, and hence “bloated.”
You can make a SPA that's a joy to use and loads and runs extremely fast. Hardware has never been faster; browsers and application frameworks have never been as good as this (I'm talking about the web stack). It's really nothing to do with the tools or technologies; it's that marketing insists it needs mixpanel, GTM, optimizely, hotjar, FB pixel, smooch, and god knows whatever else in order to do its job effectively.
But then, with Angular/React/etc, you now not only have 1 project on your hands but 2 different projects that must be compatible: the backend and the frontend. These 2 different projects ought to be in 2 different languages, unless you are developing the backend in JS too - maybe there's even an ORM that can generate migrations these days for NodeJS! But that wasn't the case last time I checked. You also lose the ability to have server side rendering without the additional effort of deploying a rendering server with all that comes with it.
People ditched jQuery saying that vanilla JS was just fine, but it turned out not to be quite the case so there are still releases of jQuery. At the same time NodeJS was released, then npm, and then Angular/React/etc, which in my opinion were created with two goals in mind: 0. having OO components for GUI dev and 1. offering IoC to overcome the difficulty of dealing with custom component lifecycles like in jQuery, which leaves you to monitor the DOM and instantiate / destroy each plugin by yourself. Idk if there are other reasons, but it seems to me that apart from that, DHTML is still pretty much the same: you're adding some logic to change tags and attributes, anyway.
Today we have a chance to break these silos again with the Web Components and ESM browser features and W3C standards, because they elegantly solve the problems that React/etc seemed to be designed for and do not impose a frontend framework. You can just load the script in your page and start using <custom-tag your-option=...>... The browser will manage the component lifecycle efficiently, and devs can use it in their templates without having to load a framework, so everybody wins.
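To make that concrete, here is a minimal dependency-free custom element (the tag name is made up for illustration):

    // Define once, then use <hello-tag name="HN"></hello-tag> in any page, with or without a framework.
    class HelloTag extends HTMLElement {
      static get observedAttributes() { return ["name"]; }
      connectedCallback() { this.render(); }
      attributeChangedCallback() { this.render(); }
      render() {
        this.textContent = `Hello, ${this.getAttribute("name") || "world"}!`;
      }
    }
    customElements.define("hello-tag", HelloTag);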
This is also a chance to simplify our code and lighten it by throwing away dependencies. Of course if you want to do full client side navigation with an API backend you still can, but you can make generic components that you will also be able to reuse in other projects that do not use a particular framework. You need no tool to make a web component; this is an example of a jQuery plugin that was ported to a web component which has no dependency, with browser tests in Python: https://yourlabs.io/oss/autocomplete-light/-/blob/master/aut...
Webpack does a lot, and is still slow for development, but we're seeing the light with Snowpack and esbuild, allowing you to have webpack in production only (i.e. to generate a bundle in the Dockerfile) and benefit from actually instant reload thanks to ESM.
So if you go for web components and Snowpack, you get an extremely lightweight toolkit, which I love, that will work for every page that's not an overly complicated web app. But then I thought: I actually don't have so much frontend code, and it would be nice to have it along with the backend code, so we went for a python->js transpiler to develop web components in pure python which also replaces templates. It was surprisingly fast to implement: https://yourlabs.io/oss/ryzom#javascript
Whether this improves the situation or not depends on your POV, and heck, I'd understand if you even hold it against me, but the frontend development tooling is still evolving for sure, and I can see how the browsers are making efforts (except Safari) to simplify DHTML development, because:
“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies.” – C.A.R. Hoare, The 1980 ACM Turing Award Lecture
This setup is described in more detail in single-spa's documentation.
Are they? I'm skeptical we can know that for any given website; specifically, accessibility is a huge pain point for so many sites, and it would seem difficult to know how many people are turned away from a site because it isn't set up to serve them.
What we have today are the same problems with a new wrapper, created by a not dissimilar group of people from the past.
Well, if the top 100 tracks from, say, 1961 to 2021 progressively get less musically diverse, with simpler chords, fewer harmonies, less timbral variety, lesser melodies, more repetition, less dynamics, less genre variety, more infantile lyrics, etc (something that has been studied and measured several times, e.g.:), then it's not some "survival bias".
https://www.researchgate.net/publication/346997561_Why_are_s...
https://newatlas.com/pop-music-trends/23535/
https://www.nature.com/articles/srep00521
https://www.res.org.uk/resources-page/economics-of-music-cha...
https://pudding.cool/2018/05/similarity/
And indeed everybody agrees that JavaScript itself is now better, as are the modern web APIs. If the rest is so controversial there must be a reason.
HTML has been the wrong tool for creating UIs (as opposed to marked up text) for such a long time. Things are getting better from multiple angles, but it's still very uneven.
in music’s case, one of the replies to this comment describes decline in musical complexity/sophistication, which i’d personally attribute to democratization of the tools (which are also much more powerful, allowing kids w computers to do what took whole teams and studios full of equipment before).
so i think only seeing high quality UIs in the wild is more of a mixed bag than is intuitive to us — a world absent of shitty soundcloud rap is a world with worse music tooling.
But I’m curious, what are the examples of great websites “back in the day” that have stood the test of time and would be considered good web development today?
Not "just" survival bias.
I think it's quite reasonable to take a position that there was more innovation and creativity in pop music in 1950-2000.
As the genre has matured, popular/commercially successful music has depended more and more on fewer and fewer producers.
Indeed, there are fewer commercially successful artists: the US Billboard Hot 100 Top 20 this week contains 3 tracks by Justin Bieber, 2 by the Weeknd, and 2 by Drake. That would have been unheard of.
My big problem with web application (in a true application sense - not things that are just hypertext and shouldn't be applications at all!) development is that the DOM+CSS model is not made for rich UI experiences. Basic paradigms from desktop applications like spreadsheet cell selection, draggable cards (think Visio/UML), modals, and MDI / multi-document interfaces are non-standard and brutally challenging to construct in a reasonable way using the DOM.
What I'd invent would pretty much be Silverlight without the Microsoft, honestly - a typed UI framework built on a widget and widget-binding model which would allow a smooth mixture between OS-level widgets (and the accessibility affordances they provide) and hand-controlled rendering/drawing, with a stripped-down runtime enabling resource constrained clients to execute client-side code which would hide / paper over the resource constrained backend connection.
Anyway, I also think this is orthogonal to the argument in this thread, because I think that most of the conversation and the sentiment of the original tweet is to call out applications that SHOULD be hypertext, not applications. For applications that need to be applications, I think things have gotten better, not worse, although they're still pretty bad.
My understanding from working at Opera at the time (2004 and onwards) is that the "senior" (experienced) people were busy implementing stuff in the browser engines and various GUI platforms. We hired very young (often like 17-18) and very smart people who had experience actually writing HTML/CSS/JS to work on developing web standards. They naturally typically had very little commercial software development experience.
After a while it kinda became a competition - which browser company's web standards people would be leading in terms of ideas/innovations. How many web APIs could browser company A do, vs browser company B. That's when the complexity really started accelerating. Then Safari and Chrome happened.
I wish we had spent more time working with these web standards people (we had so much experience building GUIs, for instance). They were really friendly and approachable, but we were all so busy with actually building the browsers, during these browser war times. It feels like a missed opportunity, in retrospect.
But nevertheless I think Java/Java Web Start got basic stuff still right: run applications directly from the network (with auto-updates), have a security-controlled sandbox, be fully cross-platform and portable, and have useful stuff built in, and it even had the PWA-like thing that you could have desktop shortcuts to Java Web Start applications.
Regarding startup time, I do wonder what the startup time of your typical Electron app would be on late 90s-early 00s PC hardware. Somehow I'm imagining the JVM might not be that bad in comparison...
The main downside: they only work on Windows. MS once had Silverlight for Mac but it wasn't particularly good.
This is pretty much objective truth in my mind. I might agree with people who hate the web if I always had to browse with ads enabled.
Everything else, including the aforementioned sites when ads are blocked, is pretty decent in my experience.
I'll add a huge culprit to the list: Medium. They have their own "clever" code to progressively load images and I find it absurdly frustrating (because in most scenarios, the images just don't ever load), so I end up with lots of design articles with blurry blobs of color instead of images.
There are so many ways to natively progressively load images that I'm not sure why they've chosen the most user-hostile one. You see blurry blobs of color in no particular order, no indication of which ones are loading, no way to force a particular image, etc. I find myself frustrated often and I end up abandoning most of the stories (or avoiding Medium altogether).
A coworker and myself had the worst internet speeds in the company, but he recently got FTTH.
I went to replicate a bug, by clicking on a button quickly and excessively, and was able to add 5 duplicate entries into the DB.
The frontend dev could not replicate it until I suggested using the Chrome dev tools to simulate a slower connection.
I have Frontier DSL.
We implemented disable-on-submit after that, and the inconsistencies went away. Other people would click again when the response didn't come fast enough, but that was rare to lead to corruption. Probably when their connection was lagging, they would click multiple times in frustration. But that one guy provoked enough destruction to make us notice and fix it for everybody!
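The client half of that fix is tiny; a sketch along these lines (the form selector is made up, and the server should still deduplicate as a backstop):

    // Disable the button once the submit fires so rapid re-clicks can't re-post.
    document.querySelector("form#payment").addEventListener("submit", (event) => {
      const button = event.target.querySelector("button[type=submit]");
      button.disabled = true;
      button.textContent = "Submitting…";
    });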
I certainly felt the pain of a slow connection too and felt frustrated at how badly this affected the experience on so many sites.
Here's an idea: web developers should test their sites on the slowest connection speed still commonly available (i.e. 3G) and make sure the experience is still acceptable. I know that webpagetest [1] allows you to do this and the results are illuminating.
[1]: https://www.webpagetest.org/
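You can also do this locally against Chrome; a hedged sketch via Puppeteer and the DevTools protocol, where the figures only roughly approximate a slow 3G profile and the URL is a placeholder:

    const puppeteer = require("puppeteer");

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      const client = await page.target().createCDPSession();
      await client.send("Network.enable");
      await client.send("Network.emulateNetworkConditions", {
        offline: false,
        latency: 400,                          // added round-trip latency in ms
        downloadThroughput: (400 * 1024) / 8,  // ~400 kbit/s, expressed in bytes/s
        uploadThroughput: (400 * 1024) / 8,
      });
      await page.goto("https://example.com", { waitUntil: "load" });
      console.log("Loaded under throttling");
      await browser.close();
    })();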
https://engineering.fb.com/2015/10/27/networking-traffic/bui...
With all the images, videos on social media etc. it seems rather small in terms of bandwidth.
As for why front end development may be a bit of a mess here? Well, it's really a problem that doesn't have just one cause.
On the one hand, business pressures likely have a huge impact here. Companies love analytics, tracking, ads, etc, all of which contribute greatly to the issue of slow loading, broken sites. If you include everything and the kitchen sink, then things will clash or time out or break.
> However, my sites are far, far better than almost all sites in the wild, created by teams of full-time, professional front-end developers.
Here you go. You make sites for yourself. My own stuff is fucking fast as well.
It gets slow when you have to ward off a thousand idiotic requests and implement maybe 10 of those to shut people up.
While in my own world I’d leave it as it is.
Shitty product is primarily the result of shitty culture. It’s just that FE is visible and atrocious DB calls are not.
Would love to see something that does everything the NYTimes does that is clearly and obviously "better."