This article says that using `transform` is faster than using `left` and `top` because `transform` is handled on the GPU, while `left` and `top` are not. This is a myth. I tried the demo page in the Firefox profiler; neither the optimized nor the unoptimized version missed frames. I tried it in the Chrome profiler; the unoptimized version missed frames, but the time was clearly labelled by the profiler as GPU time, not reflow. Neither browser did reflows (or all reflows were fast enough not to have any profiler samples associated with them).
The reality is that browsers contain large piles of heuristic optimizations which defy simple explanation. You simply have to profile and experiment, every time, separately.
Yep. At one point it probably was the case that this always caused reflows, but browsers have had so much investment in optimizations that they probably recognize those layout changes don't require reflows and skip them.
Thanks for the input! Indeed, the reflows were incredibly fast because I was using position: absolute, which meant the squares were not affecting anything else in the DOM, just their own position (so it's a cheap operation). I will add a note to the article about that... I am also improving the 'bad' example so the shuffle button triggers reflow in a significant way.
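For anyone unsure what "triggering reflow in a significant way" looks like, here is a generic sketch (not the article's actual demo code) of layout thrashing: interleaving geometry writes with layout reads forces the browser to recompute layout on every iteration. The `.box` selector is illustrative.

```js
// Hypothetical illustration of layout thrashing, not the article's demo code.
// Assumes these elements are position: absolute, as in the demo described above.
const boxes = document.querySelectorAll('.box');

boxes.forEach((box, i) => {
  // Write: change a geometry-affecting property.
  box.style.left = `${i * 10}px`;
  // Read: querying layout right after a write forces a synchronous reflow.
  console.log(box.offsetTop);
});
```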
1) I thought of giving an easier-to-read example. I just moved the example to React, so the snippets actually match exactly what's going on in the background.
2) It is true! Though using shadows in the optimized code doesn't slow it down. I added more toggles to test the same effects on both the transform and top/left implementations.
3) I think it's still interesting to start from the theory and then observe that, in practice, things really are different. In fact, thanks for all the feedback; it made me go back and do more investigation.
If you don't mind you can give the article a second look now :)
This is both informative and kind of amazing that anything works at all. Talk to anyone who did graphics at the turn of the century and you'll hear about "racing the beam", where you had only 16.67 ms before vertical retrace took you back to (0,0). It's why double and triple buffering were invented, and how "animation time" gets skewed by "frame time": if you wanted to keep your animations from jumping or jittering, you really needed to know how many milliseconds would pass between the last frame and the one you're rendering, so that all your assets were rendered where they should be at that time.
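A minimal sketch of the "animation time vs. frame time" point: advance the animation by the measured time since the previous frame rather than assuming a fixed 16.67 ms, so a slow frame doesn't make things jump. Here `box` is an assumed element reference.

```js
// Advance the animation by real elapsed time, not by a fixed per-frame step.
const SPEED = 120; // pixels per second (illustrative)
let x = 0;
let last = performance.now();

function frame(now) {
  const dt = (now - last) / 1000; // seconds since the previous frame
  last = now;
  x += SPEED * dt;                // position depends on measured elapsed time
  box.style.transform = `translateX(${x}px)`; // `box` is an element you already have
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```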
There is a lot of fun programming to be had in that space.
We still race the beam, only on separate command chains. Across threads. With 100x the vertices. It’s still fun. Light, shadows, volumes, SDRs, HDRIs, PBR, so much has been thoroughly researched and standardized. We even have realistic clouds with volume.
This is why I got into programming to begin with. Fun first, visual second, technical challenges third, money fourth, company last.
How do we even see anything on a browser?
How do pixels turn into shapes, color, and movement?
Every time we scroll, hover, or trigger an animation, the browser goes through a whole routine. It calculates styles, figures out layout, paints pixels, and puts everything together on screen. All of that happens in just a few milliseconds.
It’s kind of wild how much is happening behind what feels instant. And the way we write code can make that process either smooth and fluid or heavy and janky.
I wrote an article that walks through this step by step, with a small demo showing exactly how these browser processes work and how a few CSS choices can make a big difference.
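Not the article's code, but a minimal sketch of the kind of CSS choice in question: moving an element by animating `top`/`left` (which dirties layout) versus animating `transform` (which compositors can often apply without relayout). Whether the difference matters for a given page is exactly what gets debated elsewhere in this thread, so profile it; `el` is an assumed element reference.

```js
// Two ways to move the same element by 100px; `el` is an element you already have.

// Layout-affecting: changing `left` invalidates layout.
el.style.position = 'absolute';
el.style.left = '100px';

// Compositor-friendly: `transform` can usually be applied without relayout.
el.style.transform = 'translateX(100px)';
```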
A lot of these don’t happen on scroll and hover under normal circumstances. For example, smooth scrolling on touchscreens is implemented by only re-running the compositor on each frame and using the existing GPU-resident bitmap of the text being scrolled. That’s why non-passive onscroll callbacks make scrolling suck, especially on mobile.
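A minimal sketch of the passive-listener point: the `scroll` event itself is not cancelable, so the handlers that actually block compositor-driven scrolling are touch and wheel listeners; marking them passive tells the browser it never has to wait for JS before scrolling. `trackScroll` is a hypothetical handler.

```js
// Passive listener: the browser may start scrolling immediately, because this
// handler has declared it will never call preventDefault().
window.addEventListener('touchmove', (e) => {
  trackScroll(e); // hypothetical handler, do whatever bookkeeping you need
}, { passive: true });
```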
What's not stated is that we used to re-render the text at the bottom each time you scrolled up, and we could do it pretty fast (not quite in 16.67 milliseconds, but we could have if computers had been today's speed), and in the meantime we seem to have forgotten how to do that. Although we also have more pixels now, which probably changes things.
I think that many projects use the wrong architecture, where it's possible for business code to block animations.
IMO all the "user" code must run in a dedicated thread, completely decoupled from the rendering loop. This code can publish changes to a scene tree, performing modifications, starting animations, and so on, but these changes are ultimately asynchronous. You want to delete an element from a webpage, but it won't be deleted at that JS line; it'll be deleted at the next frame, or maybe the one after that, if the rendering thread is a bit busy right now.
Animations must stay fluid and the UI must react to user input instantly. FPS must not drop.
The browser does it wrong. The Android GUI API does it wrong. World of Warcraft addons do it wrong.
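A rough sketch of the decoupling described above, assuming a Web Worker as the "user code" thread whose published changes are applied on the next animation frame. All names here (`logic.js`, `applyToSceneTree`, the message shape) are illustrative, not an existing API.

```js
// main.js: the rendering side drains queued scene changes once per frame.
const worker = new Worker('logic.js'); // 'logic.js' holds the "user" code
const pending = [];

worker.onmessage = (e) => pending.push(e.data); // changes arrive asynchronously

function frame() {
  for (const change of pending.splice(0)) {
    applyToSceneTree(change); // hypothetical: mutate the DOM / scene tree here
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);

// logic.js: "user" code publishes a change; the element actually disappears
// on whichever later frame the rendering side gets to it.
// postMessage({ op: 'remove', id: 'item-42' });
```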
Which begs the question - if all of these projects got it "wrong", what's the chance that the "right" thing isn't right at all?
All animation is inherently discrete. No matter how many threads you have, there always has to be a final rendering thread, the thing that actually prepares the calls to the rendering backend. It always has to have frames, and for every frame at timestamp T it wants the world state at timestamp T. So the things that work on the world state have to prepare it as it is at T, not earlier, not later. You cannot completely decouple it.
In one of the game projects I worked on, the physics thread and the game thread actually were pretty decoupled: what the game thread did was extrapolate the world state from the information provided by physics, because it knew not only the positions of the physics objects but also their velocities. Can we make every web developer set velocities on their HTML elements explicitly? Probably not.
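A sketch of the extrapolation trick described above: the render side advances the last state published by the physics/logic side by its velocity and the time elapsed since it was published. The state shape and `sprite` are illustrative.

```js
// Last state published by the physics/logic thread (illustrative shape).
let lastState = { x: 0, vx: 150, t: performance.now() }; // vx in px/s

function render(now) {
  // Extrapolate: assume constant velocity since the last published update.
  const dt = (now - lastState.t) / 1000;
  const x = lastState.x + lastState.vx * dt;
  sprite.style.transform = `translateX(${x}px)`; // `sprite` is an assumed element
  requestAnimationFrame(render);
}
requestAnimationFrame(render);
```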
This has been tried, even in the browser. It just leads to buttons that do nothing when clicked, scrolling to unpopulated areas, infinite loading animations and other such artifacts.
It does not help: you get smooth animations, but you feel disconnected from the program and trust it less. The UI code just needs to not take much time and to offload background stuff to another thread, but not the UI logic itself. It also naturally synchronizes events.
And sometimes it's better to briefly block the UI thread, as the alternatives lead to a worse user experience.
That's basically what React Native does/did and it's generally good but turns into a nightmare when you need to synchronize interactions between the two threads. 16ms is a long time - if your UI manipulations eat up most of that time then there's something wrong. Entire video games can run basically on one thread within that time and they do way more.
Multithreading in the browser kinda sucks, though: it's too slow to share significant data between workers (threads), and if you try it with SharedArrayBuffer you eat the serialisation costs.
Not really something anyone can change at this point, given that the entire web API presumes an execution model where everything logically happens on the main thread (and code can and does expect to observe those state changes synchronously).
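For reference, a minimal sketch of the shared-memory option mentioned above: a SharedArrayBuffer posted to a worker is not copied (both sides see the same bytes), whereas ordinary postMessage payloads are structured-cloned; the page does have to be cross-origin isolated for SharedArrayBuffer to be available. The file name `worker.js` is illustrative.

```js
// main.js: allocate shared memory and hand it to a worker; the buffer itself
// is not copied, both sides operate on the same underlying bytes.
const sab = new SharedArrayBuffer(1024);
const worker = new Worker('worker.js');
worker.postMessage(sab);

const shared = new Int32Array(sab);
// Sometime later, read what the worker wrote, with no copy or clone:
// console.log(Atomics.load(shared, 0));

// worker.js:
// onmessage = (e) => {
//   const view = new Int32Array(e.data);
//   Atomics.store(view, 0, 42); // visible to the main thread without copying
// };
```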
1) The code that is running is not what's presented; it executes (non-transpiled) vanilla JS.* Why not just show that?
2) Removing the box shadow makes the two much closer in performance.
3) The page could just be one sentence: "Reflowing the layout of a page is slower than moving a single item." The GPU is unrelated.
---
*Code that actually is running:
```js
```
The word you are looking for is "baloney". They are pronounced differently.
Random GUI apps aren’t incentivized enough, so garbage leaks through. I die a bit every time a random GUI app stutters while drawing 2D boxes.
Scrolling? Animations? LOL.