I think browser developers are optimizing for the wrong thing. Specifically: they optimize for execution speed, when they would be better off optimizing for minimal memory usage instead. Let me explain why this matters more.
Let's say I am visiting a properly made website and it takes 10% of the CPU to render. Even if browser devs make their browser twice as fast, that only saves 5% of CPU time - and that would be completely unnoticeable. You might ask: what about modern websites, built with D*t compiled to WebAssembly, GPU acceleration, reactive frameworks, and material design, capable of loading a multi-core CPU at 100%? I don't use such sites, so I don't care.
Now let's look at memory usage. Optimizing for speed usually increases memory consumption, and that increases the chance of invoking swapping. If the system starts swapping, it becomes orders of magnitude slower. No speed optimizations will matter in that case.
Therefore, if you are targeting a wide audience, and not only Mac users, you should be optimizing for memory usage. If the browser could use half the memory at the cost of twice the CPU time, that would be perfect. Just think how many laptops with 2 or 4 GB of RAM would become usable again.
> You might ask: what about modern websites [...] I don't use such sites, so I don't care.
Does this just mean that browser developers are optimizing for the right thing, just not something that benefits you? Tons of people use these sites.
> If the system starts swapping, it becomes orders of magnitude slower.
Not really. Browsers try to push stuff they probably won't need into swap. Swapping doesn't become a problem until you're almost out of memory too, at which point you might get thrashing. But there's a wide range where CPU optimizations make sense. And such a large fraction of people have SSDs now that even swap access can be pretty fast.
How do you know that isn't just measuring that most people keep to 5 tabs because otherwise everything grinds to a halt? That's about how many tabs my wife's old Chromebook can handle before she has to start closing old tabs.
We all know what the "solution" here is: have multiple browser vendors, each optimising for something else. Imagine a lightweight browser that ran a bit slower but could run on a potato with 512MB of RAM. Or one that optimises for viewing static documents (you know - websites) and so uses barely any CPU when idle, but might not support all the crazy JS features.
Of course, having turned browsers into virtual machines, there isn't much specialisation that can be done without breaking things. Might it be time to create a subset of features that sites could limit themselves to, allowing browsers to use a simpler and faster render pipeline? You know, like what we thought AMP was going to be before it turned out to have Google's monopolistic shit smeared all over it.
> Of course, having turned browsers into virtual machines
I really wish browsers would take that one extra step and use their intimate knowledge of which memory allocations belong to which sites that are actually active, and implement paging to disk themselves, instead of gobbling up RAM like it's an infinite resource and expecting the general-purpose OS to figure it all out.
The laptops with 2 or 4 GB of RAM (e.g. Chromebooks) are also likely to have Celeron or MediaTek CPUs, which might struggle even more than the RAM does. Also, unless you have multiple tabs open, 4 GB of RAM might not be that bad.
And given a choice between being able to open several tabs, all of which are barely functional, and being stuck on a single website at a time which runs smoothly, I'd definitely choose the latter (of course the tradeoff is probably not that straightforward; then again, it's not completely obvious to me that optimizing for speed would necessarily result in higher memory consumption).
IMO even if they are optimizing for Macs, I'm not sure this approach would make sense (assuming the tradeoff between CPU performance and RAM usage exists), since Macs are much more likely to have less memory and better CPUs than PC laptops (e.g. you can probably easily get a Windows laptop with 32GB for $1000 or less).
If you improve CPU performance, you can compress memory with the leftover cycles. This is what I do on my RPi: it lets me run quite a few memory-hungry processes at the cost of some CPU (which is fine, because those processes are mostly idle).
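If you're curious what that trade looks like, here's a toy, hedged sketch in Node.js. The parent is presumably using something like zram/zswap at the OS level, not application code; the 50 MiB buffer and its fill pattern here are invented purely for illustration.

```js
// Toy sketch of the CPU-for-RAM trade: hold rarely-used data compressed,
// and pay CPU on the rare access instead of holding it raw in memory.
const zlib = require("node:zlib");

const coldData = Buffer.alloc(50 * 1024 * 1024, "x"); // 50 MiB of idle data (artificially compressible)
const squeezed = zlib.deflateSync(coldData);
console.log(`kept in RAM: ${squeezed.length} bytes instead of ${coldData.length}`);

// On the rare access, spend cycles to restore it:
const restored = zlib.inflateSync(squeezed);
console.log(`restored ${restored.length} bytes`);
```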
There are some truly memory-bound problems, but I believe the parent comment is correct on average. A lot of common speed issues can be helped by: adding a cache, adding an extra lookup index, memoising calculated values, adding a new denormalised data projection, etc. I think "usually" was a fair description.
I agree 100% -- and, as a front-end developer for more than half the time I've been writing code for web applications since 1995, I think my opinion should matter. I've seen the trends and can relate to exactly this point. Well-stated and, frankly, late to the public eye (but, that's my fault since I should've written something similar years ago).
Man, that math did not make any sense at all. A program can't use 10% of the CPU. It either uses the CPU or it doesn't. If a page renders with "5% CPU" measured over some interval, that means it rendered twice as quickly, which is a substantial improvement.
They optimised mostly for startup time, not execution speed.
Java, for example, uses runtime VM information before it starts compiling classes to machine code. That means it's faster in the long run, but requires a 'warm-up time'. Obviously a bit better suited to the server side.
... and some of us disable swap to prolong laptop-motherboard-soldered-SSD lifetime, and Windows 10 regularly bluescreens when memory is exhausted and swap is disabled.
Actually, memory isn't that cheap, at least not in a third-world country. Since browsers are used by everyone, memory optimization is necessary.
Here's a conundrum. Fabrice Bellard's QuickJS engine takes 3 minutes to run the test262 ECMAScript conformance suite. d8 takes 37 minutes to run test262 according to https://medium.com/compilers/testing-the-v8-javascript-engin..., and it crashes for me in Chrome: https://v8.github.io/test262/website/default.html Has anyone else observed this performance disparity? Could it really be possible that a JavaScript engine written by one guy is 10x faster for everyday code than the flagship product of a flagship company? Because if that's the case, it'd make Fabrice Bellard the Han Solo of programmers.
I wrote a small lisp in Rust. I try to not do obviously stupid things, but it's just a basic parser + AST interpreter; its execution speed is slow compared to even a basic bytecode interpreter, and it's absurdly slow compared to a JIT. Yet it runs hello world in a fraction of a millisecond, whereas V8 needs tens of milliseconds.
In executing one computationally intensive program, V8 would be many orders of magnitude faster than my program. But in a test which largely consists of running tens of thousands of small, uninteresting test case programs, my silly interpreter would outperform V8 by orders of magnitude.
Essentially, performance is complicated, and improving throughput often has costs in other areas.
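A rough way to see this yourself, as a hedged sketch (it assumes Node.js is installed and on your PATH; the exact numbers are machine-dependent):

```js
// Time a trivial program's full process lifecycle (spawn + VM init + parse + run)
// against the same work done in an already-running process.
const { execSync } = require("node:child_process");

let t0 = performance.now();
execSync(`node -e "console.log('hello')"`);
console.log(`fresh process: ${(performance.now() - t0).toFixed(1)} ms`); // typically tens of ms

t0 = performance.now();
console.log("hello");
console.log(`in-process: ${(performance.now() - t0).toFixed(3)} ms`); // typically well under 1 ms
```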
test262 is a conformance suite, and as such it mostly runs small test snippets once to verify behaviour; what counts there is setup, parsing and first-run execution.
While V8 has added a fast first-stage interpreter, there are probably a ton of other overheads when starting a V8 context, as well as "inefficiencies" related to preparing code for later JIT optimization.
For more CPU-intensive JS code, where the JIT kicks in, the table shifts radically: compare QuickJS with V8 (JIT-less) and V8 (JIT) on benchmarks, particularly Raytrace, Crypto and NavierStokes, which should be pushing raw computation performance. https://bellard.org/quickjs/bench.html
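You can get a feel for that split with a quick, hedged microbenchmark (runs in a modern browser console or Node; the snippet body and iteration counts are arbitrary choices for illustration):

```js
// Conformance-style workload: compile a fresh snippet and run it once.
// The source is varied per iteration so the engine can't reuse a cached compile.
let t0 = performance.now();
for (let i = 0; i < 10000; i++) {
  new Function(`let s = ${i}; for (let j = 0; j < 1000; j++) s += j; return s;`)();
}
console.log(`10k compile-and-run-once: ${(performance.now() - t0).toFixed(0)} ms`);

// Throughput-style workload: one function, called many times, so the JIT warms up.
const hot = new Function("let s = 0; for (let j = 0; j < 1000; j++) s += j; return s;");
t0 = performance.now();
for (let i = 0; i < 10000; i++) hot();
console.log(`10k calls to one warm function: ${(performance.now() - t0).toFixed(0)} ms`);
```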
Are you sure they're running the same set of tests? There are over 10,000 tests in the full set that webpage runs, but there are subsets, and Fabrice might be running one of the subsets.
FWIW test262 falls over partway through in Firefox and I have to kill the tab, though it doesn't crash. There are a bunch of test failures as well for things that are probably not implemented by anyone (I'm curious how many of the tests QuickJS actually passes).
My guess for any performance gap would be that the browser runner probably sets up an entirely separate execution context (iframe?) to run each test cleanly so they don't interfere with each other.
Look at DOM performance. Chrome has a ceiling of about 45M ops/s, whereas FF's max speed depends on your RAM and bus speed, reaching beyond 4-5B ops/s. In both, though, querySelectors perform at about the same speed, as slow as 25,000 ops/s.
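That querySelector cost is easy to reproduce; here's a hedged browser-console sketch (the element id, loop counts, and the childElementCount read are made up for the demo):

```js
// querySelector re-runs selector matching on every call; a cached node
// reference (or getElementById) skips that per-call work entirely.
const el = document.createElement("div");
el.id = "bench-target"; // hypothetical id, just for this demo
document.body.appendChild(el);

let t0 = performance.now();
for (let i = 0; i < 100000; i++) document.querySelector("#bench-target");
console.log(`querySelector x100k: ${(performance.now() - t0).toFixed(1)} ms`);

const cached = document.getElementById("bench-target");
let n = 0;
t0 = performance.now();
for (let i = 0; i < 100000; i++) n += cached.childElementCount; // plain property reads
console.log(`cached-node reads x100k: ${(performance.now() - t0).toFixed(1)} ms (${n})`);
```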
I have written an OS GUI that executes in the browser. It loads, including full state restoration, in about 120ms. I was recently interviewing with a search engine company, one of the big ones, where I could demonstrate that a JavaScript tool can execute file system search much faster than the OS and produce better results. They seemed really impressed.
Despite all of this my biggest learning about performance is that mentioning performance during job interviews shows that you are incompatible with other JavaScript developers and will not be hired.
> Despite all of this my biggest learning about performance is that mentioning performance during job interviews shows that you are incompatible with other JavaScript developers and will not be hired.
As someone who has done plenty of JS, cares about performance, and has handled hiring for JS positions in the past, I can tell you that this is generally not true. Caring about performance is not a reason to not get hired.
But it is possible to be "technically superior" in every conceivable way and still not be a good hire. Why? Because the candidate might be missing vital soft skills, or might not be very good at describing their thoughts, something that can slow down an entire team.
"Learning the wrong lesson" when things go wrong would also be something I'd consider high up for reasons to reject a candidate.
> "I was recently interviewing with a search engine company, one of the big ones, where I could demonstrate that JavaScript tool can execute file system search much faster than the OS and produce better results. They seemed really impressed."
How is this possible? The OS should be using direct syscalls; any additional code you write should be pure overhead in theory, right?
I’d be curious to try Chrome again, but for a long time it’s felt bloated and slow. I’ve been very happy with Safari, and particularly love the 2FA integration.
My biggest gripe is the lack of shared bookmarks and passwords between browsers. There are 3rd-party extensions and whatnot to do some of this (e.g. 1Password), but nothing beats the UX of true browser integration. I wish there were a single standard with pluggable backends so I had no switching costs. Quite frankly, I’m surprised Firefox doesn’t just use the Mac keychain and share bookmarks with Safari in order to gain market share.
Chrome starts with different options for different users to try out new features. They state which features they ran the benchmarks with in the fine print below the article:
"Data source for Mac statistics: Speedometer 2.0 comparing Chrome 99.0.4812.0 --enable-features=CanvasOopRasterization --use-cmd-decoder=passthrough "
I get ~304 (all extensions disabled) but that's on an M1 Max. Other differences might be the amount of background apps/tasks - I usually keep mine pretty slim.
I have a friend who says: "when you invent more efficient lightbulbs, people do not consume less energy, they just get more light."
Every time we made a milestone performance improvement in our infrastructure, the gains got eaten. E.g. search used to take a few seconds and we reduced it to a few milliseconds; one year later our colleagues were firing machine-gun-like queries, search was back to taking 1 second, and it's just a matter of time until it's back to a few seconds.
One thing that helped a lot was hard limits. E.g. Internet Explorer 9 having a hard cap on CSS size was literally the only thing that forced people not to ship megabytes of CSS.
I wish Chrome did something similar, like "you can't have more than 500KB of JS code evaluated per page" or "no more than 200KB of CSS". It would do miracles in just one year, and I am willing to bet that we would have the same features we would have had without the limit.
EDIT: I did not mean to undervalue Chrome's 49% improvement in one year, which is just extraordinary work!
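If you want to see how close a given site already is to those hypothetical caps, here's a hedged sketch using the Resource Timing API in the browser console. Caveats: it misses inline scripts/styles, and transferSize reads 0 for cache hits and for cross-origin responses without Timing-Allow-Origin, so treat the totals as a lower bound.

```js
// Tally bytes shipped over the wire for external JS (<script>) and CSS (<link>).
const totals = { script: 0, link: 0 };
for (const r of performance.getEntriesByType("resource")) {
  if (r.initiatorType in totals) totals[r.initiatorType] += r.transferSize || 0;
}
console.log(`JS:  ${(totals.script / 1024).toFixed(0)} KiB`);
console.log(`CSS: ${(totals.link / 1024).toFixed(0)} KiB`);
```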
> I use Safari because of Chrome's memory bloat, Safari's text message MFA auto-fill features, and Safari's cross-platform (iOS/macOS) password manager.
Not OP, but I have used Safari exclusively since the first version and recently switched to Firefox, and the password manager not being able to use iCloud Keychain is really a major bummer.
For such a long time, Firefox didn't take being a native macOS citizen seriously: scrolling wasn't natively implemented, form fields felt "off", and in general it felt like a Windows app in a Cocoa window.
Luckily those days are long gone, and Firefox now feels much more like a native app, save for a couple of dialog boxes here and there. But having used iCloud Keychain for such a long time, and with no option to import my 500+ passwords, switching browsers has really been a pain in the ass.
In the case of the MotionMark benchmark (more graphics/rendering focused), I have Safari beating Chrome's score by more than double (2703 vs 1152) on my M1 Max machine. Now I don't think Safari is 2x as fast as Chrome per se, but it does explain how these Speedometer tests can be so "close" yet Safari still feels faster.
I'm a little confused as to what the actual milestone is here. "We got 13% faster vs our last build" is just... well... every day. I thought there was going to be some specific metric that they beat. It's good that browsers get faster at specific benchmarks, but what we basically always see is that the web gets more complex while browsers get faster (this is just an extension of the effect where software gets more complex as hardware gets faster, and therefore the software you're using at any given time basically stays the same speed).
However, and I think this is important to bring up, browsers are basically the same speed, and the reason to use one over the other largely comes down to ergonomics and larger concerns. On the "larger concerns" side, Chrome is a failure. Chrome exists so the advertising company Google can track you and sell you targeted ads. It can do this in reasonable ways and it can do this in unreasonable ways, with attacks on privacy like FLoC. Chrome is doing exactly what Internet Explorer was doing for Microsoft in the 2000s, and I think it's appropriate to call out that fact.
It’s more subtle and monumental than that. Before this, JS was already executing at the same speed as Java except in arithmetic. At this rate of improvement, JS will eventually execute much faster than Java in all areas except arithmetic, where it will achieve near parity.
I am not seeing any other programming language continuously improve its execution speed this much.
> You might ask: what about modern websites [...] I don't use such sites, so I don't care.
They might be optimizing the wrong thing for you, but the majority do use "modern sites".
As much as I want them to optimise for memory usage, taking less CPU time for rendering is extremely important for battery life.
Not to mention, a faster site is noticeable. And that is what sells. Chrome was faster than everything else when it launched (maybe apart from Opera).
Lol, no? If your benchmark is memory bound, reducing memory usage is probably the simplest way to make it faster.
Plus, most laptop manufacturers rip you off for every RAM spec bump. $400 - $500 to go from 16GB to 32GB?! They can f--k right off!
Not everyone is making six-figure SV salaries to not flinch at these prices. That's why I love the Framework laptop.
The former lasts over 2 weeks in S3 sleep, the latter not even a week.
For example, in this microbenchmark Chrome is 10x slower than both FF and Safari at one method:
https://jsbenchit.org/?src=cfcb916dd03df45952183e6484a14344
Here's another, where in one case Firefox is 54x faster than Chrome:
https://jsbenchit.org/?src=beb26575ad78caa99a2a8c45ce2b780f
What do you mean? How do you search the file system without calling into the OS?
- Chrome v99: 204
- Safari: 266
How come I fall so far short of the post's advertised fastest-of-any-browser 300?
Edit: Running in incognito got me a 251, so some of the slowdown must be from extensions.
Edit 2: Seems like 1password and uBlock Origin decrease the score by around 30 each, I got a 276 with both disabled.
"Data source for Mac statistics: Speedometer 2.0 comparing Chrome 99.0.4812.0 --enable-features=CanvasOopRasterization --use-cmd-decoder=passthrough "
- Chrome v98: 290
- Chrome v99: 316
- Safari: 278
All were with incognito/private browsing to remove the effect of extensions.
- Chrome v99: 168
- Firefox v97: 132
- Safari v15: 139
Of course I forgot to benchmark Chrome _before_ I updated it. :(
- Chrome 98: 157
- Chrome 99: 157
- Firefox 97: 104
- Edge 99: 146
Oh well.
The median page weight is around 2MB, up from 1.5MB 5 years ago.
I use Safari because of Chrome's memory bloat, Safari's text message MFA auto-fill features, and Safari's cross-platform (iOS/macOS) password manager.
What do you think of Firefox? :)
EDIT: Firefox gets a 1336 on MotionMark.