Code splitting can be a nice optimization but it can also be a lot of effort for little gain as it is in our case. It is an optimization for the few times a user hits our app with a stale or empty cache. And we have no mobile users; this is an enterprise analytics app.
We do not grow organically by people stumbling on our app and thinking "wow that was fast". We go through months of enterprise sales process to ink a deal, then onboard maybe 20 key users at the company.
To put the effort into code splitting would be purely an exercise in keeping up with the new hotness. That's not to say we don't keep a close eye on the package size, just that it's not much of an optimization for a regular user's experience in our case.
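For anyone who hasn't looked at it, the kind of splitting we decided against is roughly route-level lazy loading - a sketch, assuming a bundler that understands dynamic import(), with the module names made up:

    // each route's code is only fetched when the user first navigates to it
    const routes = {
      '/dashboard': () => import('./pages/dashboard.js'), // hypothetical modules
      '/reports': () => import('./pages/reports.js'),
    };

    async function navigate(path) {
      const page = await routes[path](); // extra network hit only on a cold cache
      page.render(document.getElementById('app')); // assumes each page module exports render()
    }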
Also serving all assets from the same domain saved us some time in domain resolution.
The last part, absolutely yes. CDNs are obsolete, although they'll jump through all sorts of hoops to convince you that's not the case.
As long as your service uses HTTP/2, it's far more efficient from a DNS, multiplexing, and TCP/TLS handshake standpoint to serve from your own domain. And it's better for security most of the time, since hardly anyone uses CSP and integrity hashes for their third-party scripts.
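If you do keep third-party scripts around, the hash part is little work - a rough sketch in Node, with the file name made up:

    // compute a Subresource Integrity hash for a vendor file, then paste the
    // result into the corresponding <script integrity="..."> tag
    const crypto = require('crypto');
    const fs = require('fs');

    const body = fs.readFileSync('./vendor/chart.min.js'); // hypothetical file
    const hash = crypto.createHash('sha384').update(body).digest('base64');
    console.log(`integrity="sha384-${hash}" crossorigin="anonymous"`);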
The original sell of CDNs was that everybody would have the same libraries cached. With the massive proliferation of JS libraries, you would need a 100 GB cache for that to be remotely true.
A couple of years back I moved a C# app to .NET Core for the HTTP/2 support. I tried removing the ~4 external CDN dependencies just to see what happened. Load speed improved around 30%, because there were no additional DNS lookups and the TCP window ramp-up was worked around by multiplexing.
An aside: try not to use multiple subdomains. They trigger extra DNS lookups and don't play as well with CORS. It's easy to accidentally trigger CORS preflights and a bunch of meaningless round trips by using different subdomains.
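Concretely, something like this is all it takes - the domains and header are illustrative:

    // app served from www.example.com, API on a sibling subdomain: different
    // origin, and the custom header makes it a "non-simple" request, so the
    // browser adds an OPTIONS preflight round trip before the real request
    fetch('https://api.example.com/v1/report', {
      headers: { 'X-Request-Id': 'abc123' },
    });

    // served from the page's own origin, the same call needs no preflight
    fetch('/v1/report', { headers: { 'X-Request-Id': 'abc123' } });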
I think you are conflating general Content Delivery Networks with "shared JavaScript library repositories that happen to use a CDN". While the "it's a shared repo of common JS files, so it's already cached" idea never really worked (you are right about the added cost of DNS + TCP/TLS), general CDNs absolutely provide performance benefits by delivering your static (and optionally dynamic) content from edge nodes that are geographically much closer to the visitor. Usually these CDNs front the entire site origin, so you don't have the extra DNS overhead of separate subdomains the way shared JS repos do.
(I work with many IR Top 100 retailers, and I’ve helped to build the dashboards comparing edge vs origin. It’s valuable even for sites where the majority of the visitors are in the US, and especially so if you have a substantial international audience)
Looking at our metrics, we have over a 95% cache hit ratio using CloudFront. We use it to serve large JS libraries (think Excel, table plugins, HTML editors) and it works great. It keeps the pressure off our origin app server, and we version each release of the vendor code, so it rarely changes. It also helps that our users share the same office space, so it's always cached for the 20+ users there. The claim that "CDNs are obsolete" is not a simple true/false statement; it's more complicated than that.
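The origin side of that is nothing fancy - roughly this, assuming an Express app behind CloudFront, with the paths made up:

    // vendor bundles carry the version in the file name, so they can be
    // cached "forever" at the edge and in the browser
    const express = require('express');
    const app = express();

    app.use('/vendor', express.static('vendor', {
      maxAge: '1y',
      immutable: true, // e.g. /vendor/exceljs-4.3.0.min.js never changes once published
    }));

    app.listen(3000);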
HTTP/2 is not always faster; it depends on whether your connection has any loss on it (wifi, spotty 4G).
> Code splitting can be a nice optimization but it can also be a lot of effort for little gain as it is in our case.
We came to the same conclusion when we investigated, but because of the structure of our site: we had just a handful of layout primitives reused across the site, so page splitting had no benefit because every page used the same code anyway.
Still an optimisation to look into if you can benefit from it!
Ugh... does anybody else have the feeling that no matter how fast JavaScript gets, average web app performance will not change at all? Kind of like the risk compensation principle - the safer your gear, the more risks you take on - we're in the same spot with web/Electron apps.
JavaScript VMs got so much faster than they were 10 years ago, and yet all the websites are much worse. Memory and CPU hogs.
This is not something improving the vm can fix. There just is no competition so that customers could send a feedback signal saying that performance is unacceptable.
As VMs get faster, people will stuff more JS in pages, just because it's doable.
Some things were not even possible and were "unlocked" by browser optimizations.
For example: writing games or complex animations with JS and the DOM was nearly impossible (that gap was filled with flash). As browsers got faster, the need for flash just went away.
Also, the more APIs the browsers ship, the more is possible to do with JS, and so pages ship more JS to use these new features. [1]
For example: after browser notifications became broadly available, almost _all_ pages now ship a snippet of JS code to annoy the user with notifications. I'm sure such fads blow up the average size of bundled JS across all web pages.
[1] Look at this! https://developer.mozilla.org/en-US/docs/Web/API
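You know the snippet - it's roughly this, with the message obviously made up:

    // ask for notification permission as soon as the page loads
    if ('Notification' in window && Notification.permission === 'default') {
      Notification.requestPermission().then((result) => {
        if (result === 'granted') {
          new Notification('Thanks for visiting! Enable alerts?'); // hypothetical message
        }
      });
    }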
Yep. Unless a company actively engages clients to get product feedback, it's unlikely that the end users even have the clout to complain about app performance until a vendor improves it.
There are times when I perceive the community trying to convince itself that web devs can have their cake and eat it too. That it's not just possible but easier to build performant, accessible and maintainable apps in React/Angular/Vue, when that's just not a universal truth. Sure, those tools may make certain aspects of development easier through the abstractions they leverage, but those also come with a cost (e.g. breaking API changes seem to be in vogue these days).
Ultimately, this is part and parcel of the JS community. Aspects of the ecosystem feel so fragile (NPM, framework/lib churn, etc.) that it's easy to be cynical about web app performance.
Something I've wondered is whether consumers have forgotten what a fast, responsive UI feels like, or whether, because so many people used bargain desktops/laptops, they never knew. Thus slow, visually cumbersome software is the norm for them...
Years ago, when the debate was natively compiled apps vs Java/C# apps, hardware was still slow enough that you could reasonably justify using C++ or C or something instead. Then hardware got "fast enough" that, generally, that didn't matter.
We are fast approaching a time when hardware will not be able to save us, so I believe we'll eventually see devs have to slow down and optimize better.
I'm not sure if those were peak UI performance. It wouldn't surprise me.
I'm guessing at least a generation of people haven't actually experienced reasonably responsive computing environments.
> Ugh... does anybody else have the feeling that no matter how fast JavaScript gets, average web app performance will not change at all? Kind of like the risk compensation principle - the safer your gear, the more risks you take on - we're in the same spot with web/Electron apps.
> This is not something improving the vm can fix. There just is no competition so that customers could send a feedback signal saying that performance is unacceptable.
I think that's part of the reason Google made AMP: it doesn't allow arbitrary JavaScript because it's not practical to get all websites and all developers to optimise their JavaScript usage. Likewise, the amount of CSS allowed is strictly limited.
Whenever I test website optimisation (basic websites, no fancy-schmancy stuff, not web apps), the slowest and least accessible parts are advertising, tracking, and social widgets.
Google tells me the code to add to my page (for G+, advertising, analytics), then things like Google Lighthouse tell me not to do X, Y, Z - all of which are done by the code Google told me to use.
Minimise request size, leverage browser caching, defer parsing... they don't even minify? At least they enable gzip; Amazon ads don't even do that.
Load show_ads.js asynchronously... well, why not tell me to do that up front rather than after the fact in PageSpeed?
Eg: https://gtmetrix.com/reports/alicious.com/RyJ575Jd
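And "load it asynchronously" boils down to a few lines they could have shipped that way in the first place - a sketch, with the script path illustrative:

    // inject the ad script without blocking HTML parsing
    const s = document.createElement('script');
    s.src = '/path/to/show_ads.js'; // illustrative path
    s.async = true;
    document.head.appendChild(s);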
What do you mean there's no competition? This feels like such a weird statement when there's competition in just about every market segment on the web. If you mean there's no competition where the differentiating feature between competitors is performance then well sure, that's a lot rarer.
For user-facing applications performance only really matters in human time, and is pretty far down on the list of how people choose software over things like features, price, ease of use, etc.. Until performance becomes a big problem because the app/site becomes unusable it's basically no problem.
Hackers might not like it, but we're such a weird market to sell to. Not necessarily that performance is a weird preference, but that by and large hackers are perfectly fine with woefully inefficient feature-packed apps for their "primary" apps like their IDE, but then want super lightweight skeleton-featured experiences for everything secondary.
well, even the most feature-bloated IDE loads faster than GMail on my computer (before anyone asks, I'm on gigabit fiber)
Performance is crucially important for a lot of applications. It profoundly affects the usability of a site if performance is too slow, and many studies over the years suggest direct losses on the bottom line as a result.
It may be a controversial opinion, but I think part of the problem today is that young web developers look to people who work at high-profile places like Facebook and Google as role models and for examples of best practice. And yet, a lot of Google's and Facebook's own web properties are... less than exemplary... in terms of performance, usability, design, and other factors that are important in most situations, and have been getting steadily worse rather than better over time. I humbly submit that the reasons for the runaway success of these giants have really very little to do with the quality of their sites any more, and that up to a point they can get away with things because of their dominant positions and lock-in effects that simply wouldn't be acceptable for most of the rest of us.
> For user-facing applications performance only really matters in human time, and is pretty far down on the list of how people choose software over things like features, price, ease of use, etc..
There was a study that showed a correlation between revenue drop and the number of milliseconds a page takes to load. It's not only hackers who respond to performance.
As for competition - due to network effects there are very few competitors to things like Facebook, Amazon and other big tech.
If these were federated services communicating around some gigantic independent social graph, that would enable more competition in terms of UI performance, features, etc. Not going to happen anytime soon, though.
I feel like this is similar to the induced demand argument: "If you provide more supply then people will just (mis)use it and you won't have excess anymore so it's not really worth it"
And the implicit response is that people using available resources for whatever they want is fundamentally a good thing.
In the context of computers, though, I want to decide what to spend the RAM on, it’s not great when random third parties decide to take advantage of my RAM just because it’s there.
If you look at why the bundles are so big, the frameworks are so large etc., you’ll realise it all comes down to fighting browser deficiencies:
- no declarative APIs for DOM updates
- no good APIs to do batch updates (see the sketch after this list)
- no native DOM diffing (so decisions on what to re-render have to be done in userland)
- no DOM lifecycle methods and hooks (you want to animate something before it’s removed from DOM, good luck)
- no built-in message queueing
- no built-in push and pull database/key-value store with sensible APIs (something akin to Datascript)
- no built-in observables and/or streaming
- no standard library to speak of
- no complex multi-layered layout, which matters for things like animations (so that animating a div doesn't screw up the whole layout for the entire app/page)
etc. etc. etc.
As a result every single page/app has to carry around the whole world just to be barely useable.
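To make the batch-update point concrete, here's the kind of thing you end up hand-rolling today (a sketch; `items` and the selector are made up):

    // build the new nodes off-DOM so the browser does a single insertion
    // and a single layout pass instead of one per item
    const list = document.querySelector('#results'); // hypothetical element
    const fragment = document.createDocumentFragment();
    for (const item of items) { // `items` assumed to exist
      const li = document.createElement('li');
      li.textContent = item.label;
      fragment.appendChild(li);
    }
    list.appendChild(fragment);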
> As a result every single page/app has to carry around the whole world just to be barely useable.
So pages before this javascript bloat became commonly accepted were unusable? Maybe at some point web developers have to accept that the web was simply not built for the things they're trying to do with it, and that comes with a cost. Maybe they should evaluate whether they _really_ need animations on everything, whether everything _has_ to be a SPA made in [current popular framework]. I'm not even saying these things are bad, they absolutely do have value, but that value has a tradeoff, and usually that tradeoff is placed on your customers.
> Maybe at some point web developers have to accept that the web was simply not built for the things they're trying to do with it, and that comes with a cost.
So what's your proposal? To have one browser for "web documents" and another just for those things built by web developers that you dismiss with straw men, yet still acknowledge as having value?
If they have value, but they incur a cost, shouldn't we be looking at why we have that cost and how to drive it down? That's exactly what the post you're replying to is trying to convey.
Yes, not everything has to be animated or an SPA, because some things are good enough as just simple "web documents". But others gain in usefulness, usability and cognition by being augmented (e.g. data visualisation, business apps, games, et al.). Not all web content is created for the same purpose. We still end up with the same crippled DOM/JavaScript combo. That should be the focus of the conversation.
The parent’s point is that, if all these things were browser JS runtime features, there’d be no tradeoff to be made. They’d be “free.”
If every JS page in the world today includes the same line of code, isn’t it obviously the fault of browser makers for not making that line of code part of the JS prelude and thereby making it “free”?
We really do need animations on everything, SPAs, etc. Having a less pretty looking site makes us appear much less trustworthy to non-technical customers, who don't care that their browser was not originally created for our online storefront. I imagine that's the same for pretty much every other e-commerce site.
> So pages before this javascript bloat became commonly accepted were unusable?
They weren't unusable. They, too, were barely useable in any scenario outside of a static HTML page with static images. There's a reason why jQuery was (and probably still is) the most popular Javascript library. You don't have to go too far to see what people were doing before "js bloat", just look at ExtJS [1]. They would have loved to have the "js bloat" 10 years ago.
[1] https://www.sencha.com/products/extjs/
https://github.com/WICG/display-locking
https://github.com/tc39/proposal-javascript-standard-library... (https://developers.google.com/web/updates/2019/03/kv-storage)
https://github.com/tc39/proposal-observable
...etc
How long have they been debating the Observable proposal? 4-5 years?
I agree that the browser has many deficiencies for the modern front end developer, but OTOH there is no real need to make huge bundles.
As an example, look at Svelte. It compiles down to super efficient imperative code without sacrificing the dev experience. A hello world weighs something like 5 kB gzipped, I believe.
All of the other frameworks seem so convoluted, and to me it looks like people are reinventing the wheel left and right.
Actually, the minimal Hello world example in Svelte is 2.7 kB even before gzip. It's really awesome and I already used it in one production app, the results were pretty amazing performance wise.
Besides what snek mentioned, the whole CSS Houdini effort is about providing better, finer-grained tools to do/control partial updates.
That directly of course doesn't help with bundle sizes, but it means bundles don't have to carry so much logic and optimizations because they can just use the already performant APIs.
Note that you don’t really have to “animate something before it’s removed from the DOM”.
A “disappearing” animation is an embellishment and a purely visual effect; therefore, it only depends on what the original object looked like, not the entire original object itself.
It is a lot simpler to reason about the state of your system if you delete items immediately and create proxies to handle special effects for deletion.
For example, to have an element “animate away”: create a new purely-visual element at an absolute location that looks like the original object, delete the original object immediately, and then let the proxy go away whenever it is done animating. This completely frees you from having to worry about the true lifetime of the original object.
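A rough sketch of what I mean, with the names and timings made up:

    function animateAway(el) {
      const rect = el.getBoundingClientRect();
      const proxy = el.cloneNode(true); // purely visual copy
      Object.assign(proxy.style, {
        position: 'fixed',
        top: rect.top + 'px',
        left: rect.left + 'px',
        width: rect.width + 'px',
        height: rect.height + 'px',
        margin: '0',
        pointerEvents: 'none',
        transition: 'opacity 300ms, transform 300ms',
      });
      document.body.appendChild(proxy);
      el.remove(); // the real element is gone immediately
      proxy.getBoundingClientRect(); // force layout so the transition has a start state
      proxy.style.opacity = '0';
      proxy.style.transform = 'scale(0.9)';
      proxy.addEventListener('transitionend', () => proxy.remove(), { once: true });
    }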
> For example, to have an element “animate away”: create a new purely-visual element at an absolute location that looks like the original object, delete the original object immediately, and then let the proxy go away whenever it is done animating. This completely frees you from having to worry about the true lifetime of the original object.
The OP's point is that doing this is far more DOM-intensive and leads to worse performance. It isn't a good solution.
> For example, to have an element “animate away”: create a new purely-visual element at an absolute location that looks like the original object, delete the original object immediately, and then let the proxy go away whenever it is done animating. This completely frees you from having to worry about the true lifetime of the original object.
Yeah, and to properly do that you need that very same JS bloat to:
- somehow observe a DOM element being destroyed (there are no lifecycle methods on DOM objects; see the sketch at the end of this comment)
- somehow figure out if it's the object being destroyed, or its parent, or grandparent, or...
- somehow quickly create a visual proxy for the object being destroyed and quickly substitute it in the DOM (avoiding repaints, reflows and jank), with all the correct things for it: size, position, scroll position, all internal representation (let's say we're animating a login form folding in on itself)
- somehow animate that proxy object
- and then remove that proxy object.
It's a lot simpler
By the way, if I remember correctly, just getting the position of an object causes a full-page reflow. [1]
[1] https://gist.github.com/paulirish/5d52fb081b3570c81e3a
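And the sketch for that first bullet - `watched` is a hypothetical element, and note the observer only fires _after_ the node is already gone:

    const watched = document.querySelector('#login-form'); // hypothetical element
    const observer = new MutationObserver((mutations) => {
      for (const m of mutations) {
        for (const node of m.removedNodes) {
          // was it our element, or an ancestor that took it down with it?
          if (node === watched || node.contains(watched)) {
            console.log('too late to animate: it is already out of the DOM');
          }
        }
      }
    });
    observer.observe(document.body, { childList: true, subtree: true });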
It's not "wrong" in the sense of being intrinsically unsafe, but IMHO it's a bad habit which can tempt to insecurity in the end.
When working with external files, a programmer almost has to work with the framework, and once (s)he does that, the framework - well, a modern framework - will take care of the security details and do it right.
When working inline, it's too easy to add a script tag manually, and from there it's a bit too easy for someone on the team to miss something (write a script without the hash/nonce and not notice the warning) or talk him/herself into lowering security ("importing this 3rd party js is too hard, let's just use a nonce and forget about hashes", "this policy is too constraining, it's just a SMALL script, no risk here").
When working in a team, it's much better to have a hard and fast rule which forces everyone to work right. There's really no reason to use inline when using external files works really well now - and is apparently better for responsiveness too.
Note however that there are interactions between using a nonce and caching which require caution (since nonce is supposed to be used only once but caching can work against that), so proper protection here has a cost in complexity and/or speed.
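For the nonce point, the usual setup is roughly this (Express assumed), and you can see why a cached copy of the page defeats it - the baked-in nonce gets reused:

    const crypto = require('crypto');
    const express = require('express');
    const app = express();

    app.use((req, res, next) => {
      // fresh nonce per response; the template echoes it into <script nonce="...">
      const nonce = crypto.randomBytes(16).toString('base64');
      res.locals.cspNonce = nonce;
      res.setHeader('Content-Security-Policy', `script-src 'self' 'nonce-${nonce}'`);
      next();
    });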
It's interesting that they benchmark Facebook and Reddit, two sites I find significantly more laggy than, say, Google. Are there choices in how we build sites that are more important than how V8 can optimize things? (I think I know the answer, but even so I'm interested in what other people think.)
Are you comparing the Facebook news feed to the Google home page? Is that a fair comparison, considering the news feed does a lot more than loading static assets? Unless you were comparing the News Feed to say, logged in Gmail.
- It's very easy to find JS programmers. There's a _lot_ of them. But programmers proficient with pure functional programming languages are harder to come by.
- JS is natively supported by browsers and it is pretty much guaranteed that the code you write is going to work for ever. Elm on the other side, who knows? It might lose steam and go into support mode, or drop support, or maybe they introduce breaking changes in a future version. JS is a much safer bet.
- Writing Elm does not guarantee a good polished product. You can write bad software in good languages and vice-versa.
That being said, it's only a risk. Maybe your team is more comfortable with Elm and gets more productive. Maybe the language design makes it easier to write code with fewer defects. Maybe it actually gives you an edge.
It is hard to say if Elm brings value to your business. It most likely depends on what kind of business it is (a web agency that cranks out 3 business-card websites a day? Or one developing a complex app?)
In any case, managers get very nervous about languages that are not mainstream. And they do have good reasons.
Kolme's comment gives very good reasons; as someone who picked a non-standard frontend language, I'll add my 2 cents. I'm working on a project that went from JavaScript to ReasonML when we felt the lack of a really strong type system was making the complexity of our app (which is unusually complicated) unmanageable.
When I was evaluating Elm it was my favorite choice out of all the various strongly typed flavors of javascript, clean syntax, good abstractions, and very strong runtime guarantees. The thing that made us reject it, and will keep me from recommending it to others, was the change made, I believe in version 0.15 which, as I understand it, restricted FFIs so that only core maintainers could write them. I know several companies have frozen their version of Elm due to the change.
Particularly when working with external packages written in JS, not having an escape hatch is a huge vulnerability. We've already hit one instance since using Reason where if we could not drop down into pure JS we would have had to either rewrite one of our main dependencies or at least hard-fork it and re-write substantial portions of the application, a multi-month (maybe multi-year) slowdown that would have killed the project.
At this point I would not recommend Elm to anyone who is not willing to rewrite any pure javascript module they might want to use (this may fit the bill at some very large corporations).
Those languages are also subjectively much harder to learn. I liked Elm and really tried to learn it on the side, but even after a week or so of spending time with it, I had a hard time wrapping my head around how to do simple DOM updates.
Compared to that, React took me a day to get comfortable with. There is an argument to be made though, that behind that day there's potentially all those years of working with imperative languages which just isn't there for Elm.
In my situation we specifically avoided Elm due to the instability of their documentation. v0.18 docs were cleared off of their servers and replaced by v0.19, with unannounced breaking changes.
The (B?)DFL pushes the idea that, even though it hasn't hit a 1.x version, it's production ready and stable; in our experience that isn't the case :(
How does "use Elm" fixes the problems mentioned in the article? Elm is compiled to JS after all and has to be downloaded/parsed, which is what the article is talking about regarding cost.
After playing around with Elixir and Phoenix, Elm is the last point of the trident I've been meaning to touch. Now if only I could find somewhere near me that uses any of those...
haha, programmers sure do love to virtue-signal with regard to JS. We get it, you're a sophisticated programmer who wouldn't be caught dead writing JS if you had a choice. It's a clever quip but not really contributing anything to the discussion of the topic at hand.
https://news.ycombinator.com/newsguidelines.html
> it seems to suggest optimizing your site for V8 a bit too much.
+1. After seeing some opinions that Chrome is becoming the new IE over the past few weeks, I've switched to Firefox to do my small part in trying to prevent another browser monopoly.
Also, the "LONG TASKS MONOPOLIZE THE MAIN THREAD. BREAK 'EM UP!" section header felt a bit on the nose.
also amusing seeing it come from Google.
Which parts of the article aren't applicable to Firefox? It seems like most of these idioms should give advantages to SpiderMonkey (if it's still called that) and V8.
Tip no. 1: Don't put any code on the page that doesn't need to be there, then any minute differences in the exact implementation likely won't add up to much.