I’ve built a 4K UHD video editor that fits on a floppy disk (1.4 MB), with 30 OpenGL effects, 10 languages, and audio plugins, and it is insanely responsive (https://raw.githubusercontent.com/smallstepforman/Medo/main/...). I programmed it over 2 years, working 4-8 hours a week.
But it doesn't have a spell checker for 30 languages, it doesn't have a publish-to-social-media button, it doesn't have a video tutorial, it doesn't have a collaborative network share mode, etc.
These features add bloat and latency. That is the root cause of software slowness today. Apps from 20 years ago are still usable today, and are blazingly fast.
It's also a native app running on a lean OS, not something written in JS running on what can be thought of as a VM (Electron.)
Another cause of software slowness is all the layers of abstraction. The author uses the Amiga as an example in the article. I also had an A600, and it was quite snappy because it did a lot less.
A video tutorial by itself doesn't add any bloat or latency. It can't. (Assuming the tutorial is hosted on e.g. YouTube or is just a video file that you deliver with the program.)
But you are right that you need to focus on latency, if you want to keep latency down. If you add all the other features, it's hard to keep that focus.
His app appears to be written for Haiku, which is a BeOS clone. It might not come with the right codecs or embedded WebView you'd need to run YouTube. And even if it did, booting a whole browser to display a video will invariably add latency.
Easy: it asks you for the details of your accounts on first access, and stores them in its configuration. When launched the next time, it tries to log you in automatically. It also waits for the login to complete before drawing the button, so that it can grey it out if the login fails. Then it continuously checks the network and changes the button to reflect changed conditions: connection lost, new messages/posts...
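The startup cost of that design is easy to demonstrate. Here is a minimal sketch (all names are made up and the timings are simulated) of blocking on login before drawing the button, versus drawing it immediately and updating its state in the background:

```python
import threading
import time

def slow_login():
    """Stand-in for a network round trip to the social media service."""
    time.sleep(0.2)  # simulate 200 ms of network latency
    return True

def start_blocking():
    """Blocking approach: the button cannot be drawn until login finishes."""
    t0 = time.monotonic()
    logged_in = slow_login()
    button_state = "enabled" if logged_in else "greyed out"
    return time.monotonic() - t0, button_state

def start_optimistic():
    """Optimistic approach: draw the button immediately, update it later."""
    t0 = time.monotonic()
    button_state = {"value": "enabled (pending)"}

    def finish():
        ok = slow_login()
        button_state["value"] = "enabled" if ok else "greyed out"

    threading.Thread(target=finish, daemon=True).start()
    return time.monotonic() - t0, button_state  # UI is up right away

blocking_delay, _ = start_blocking()
optimistic_delay, _ = start_optimistic()
print(f"blocking startup: {blocking_delay * 1000:.0f} ms, "
      f"optimistic startup: {optimistic_delay * 1000:.0f} ms")
```

The point is only that where the login wait sits in the startup path, not the login itself, is what the user perceives.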
I can't speak to the OP's situation, but I remember a case in mobile apps where, if you wanted to let people log in with Facebook, you needed to depend on the Facebook SDK, which necessarily blocked the cold start of the app while phoning home (for analytics purposes, naturally).
I think this became a discussion item because Facebook's servers briefly went down and a bunch of apps wouldn't open.
If I'm remembering it wrong, consider this a hypothetical scenario :)
> That is the root cause of software slowness today.
No. The root cause is that there's a certain speed above which the average user doesn't perceive much improvement, so instead of optimizing the software to be as fast as possible, it's better to stick to that speed, add all the features we need, and minimize expenditures.
Think about it this way: do you need your video games to run at 1000 FPS, or is 120 enough and you'd rather have more graphics effects or a lower price?
I'd rather have 1000FPS. 1000FPS is enough for motion quality that's good enough to be mistaken for real motion, and unlike realistic image quality, all it needs is one skilled programmer. Chasing realistic image quality is so expensive that you need to target the biggest market to stand any chance of making a profit. It inevitably leads to modern game design, where everything is homogenized and dumbed down for the masses, and full of anti-consumer practices like micro-transactions and games as a service.
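For context, the per-frame time budgets behind those numbers are simple arithmetic:

```python
# Per-frame time budget at various refresh rates: 1000 ms divided by FPS.
for fps in (30, 60, 120, 1000):
    budget_ms = 1000 / fps
    print(f"{fps:>4} FPS -> {budget_ms:.2f} ms per frame")
```

At 120 FPS the whole pipeline has roughly 8.3 ms per frame; at 1000 FPS it has 1 ms, which is why hitting it takes disciplined engineering rather than more GPU features.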
"Apps from 20 years ago are still usable today, and are blazingly fast."
To be clear, the hardware is faster today.
I am a daily user of "apps from 20 years ago".
I have no desire to let today's software developers negate the gains I should be receiving from the new hardware that I purchase. It is as if they think the hardware belongs to them and they can use the resources as they please. The resources belong to me, not to software developers.
It is like an expanding budget allocation that has no valid justification. In the year 199x/200x, the computer owner allocated program X, which does job Y, a certain amount of RAM, storage and CPU resources. Today, the computer owner has more RAM, storage and CPU available. Why should the computer owner allocate more resources to an "app" that performs Y today than they did in 199x/200x? What if they do not want a spell checker for 30 languages, a publish-to-social-media button, a video tutorial and a collaborative network share mode? The computer owner is given neither a valid justification for the expansion nor the choice to say, "Upon careful consideration of your proposal for a 500% increase to the RAM, storage and CPU budget for app X to perform job Y, I have decided I will allocate the same amount of RAM, storage and CPU. The budget will not be increased. Thank you." As it happens, that decision results in faster, more efficient completion of Y.
A big difference is that we do very little on our computers now that is actually local to the machine. A significant percentage of the “stupid slow” actions involve waiting for a server that is usually optimized for throughput not latency if it is optimized at all. A surprising number of server workloads also have high latency disk access even though almost all laptops are running SSDs. I think we could vastly improve the performance of local software if we used the network primarily for syncing data, but operated on it locally.
Even if all network activity was cached, modern software has such layered complexity vs older software that I wonder how much things would truly speed up.
It's difficult for me to gauge the overall effect of network reduction (say, a 1% speedup across 50 concurrent and 50 inter-dependent network calls) versus complexity reduction (a 1% speedup across 5 concurrent + 5 inter-dependent calls).
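A toy model (my numbers, purely illustrative) of why the dependency structure tends to dominate: inter-dependent calls pay their latencies in sequence, while concurrent calls largely overlap, so a batch costs roughly one call:

```python
def total_latency(call_ms, n_chained, n_concurrent):
    """Inter-dependent calls run one after another and add up;
    concurrent calls overlap, so the whole batch costs about one call."""
    sequential = n_chained * call_ms
    parallel = call_ms if n_concurrent else 0
    return sequential + parallel

# Assume a 20 ms round trip per call (made-up figure).
deep = total_latency(call_ms=20, n_chained=50, n_concurrent=50)
shallow = total_latency(call_ms=20, n_chained=5, n_concurrent=5)
print(f"50 chained + 50 concurrent: {deep} ms")    # 1020 ms
print(f"5 chained + 5 concurrent:   {shallow} ms")  # 120 ms
```

Making each call 1% faster barely moves either number; shortening the chains is what actually changes the user-visible latency.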
I don't know what software you're using, but the vast majority of software I use consists of SPAs (whether they need to be or not) that need an internet connection to even load.
Some software, maybe. There's an awful lot of server-side logic in something like Slack, even for presentation-layer concerns that could be purely local.
I think they mean things like asynchronous updates of backend data that occasionally sync out of band. Most apps I’ve seen will make requests and not act “optimistically” on them, instead waiting for the round trip to complete before continuing.
I think the reason we don’t see this much is because you have to model your entire app on this (with multiple layers of data state and primitive types that represent sync-able states) and most apps aren’t written with that much forethought.
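A minimal sketch of that pattern, with made-up names: write to a local store immediately so the UI never waits, and let a background worker push the change to the server out of band:

```python
import queue
import threading
import time

local_store = {}            # what the UI reads; always up to date
sync_queue = queue.Queue()  # edits waiting to be pushed upstream
synced = []                 # stand-in for the server's copy

def apply_edit(key, value):
    """Optimistic: the UI sees the change instantly, sync happens later."""
    local_store[key] = value
    sync_queue.put((key, value))

def sync_worker():
    """Background thread draining the queue over the 'network'."""
    while True:
        item = sync_queue.get()
        time.sleep(0.05)  # simulate a network round trip
        synced.append(item)
        sync_queue.task_done()

threading.Thread(target=sync_worker, daemon=True).start()

apply_edit("title", "Q3 report")
assert local_store["title"] == "Q3 report"  # no waiting on the network
sync_queue.join()                           # background sync catches up
print(synced)
```

As the comment says, retrofitting this is hard: every piece of state needs a local representation plus a notion of "pending" versus "confirmed", which is why it has to be designed in from the start.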
My first distinct experience of WTF slowness was when Windows needed to verify Start Menu shortcuts to URLs or local network addresses.
One machine having a bad day could wreck the start menu for everyone on the local business LAN.
Nowadays, I still largely blame network. Seems as though everything is either verifying something over the internet before showing you the UI and/or posting telemetry on your action after you click something.
I think a lot of it is simply that systems (with the exception of games) are designed in traditional ways, which are not focused on responsiveness and frame rate for the end user.
There’s little reason most applications used by most people couldn’t do the same 100fps that almost all video games can achieve. It’s just not a development priority, and being slow and laggy doesn’t negatively affect software revenues that much.
It would be interesting to see what happens if a company like Apple took a hardline approach to application latencies the way they did with iOS system framerates and VR/AR passthrough latency. They have obviously already done this with the stack that supports the Apple Pencil, which benefits all iOS devices to some extent. It’s astounding that a $300 iPad can scroll around a map about an order of magnitude better than a brand new $50k Tesla’s console.
Most development teams just don’t really care about latency and perceived snappiness. Google does because they learned early on that even gains of a dozen milliseconds have direct and measurable effects on their ad revenue.
> It would be interesting to see what happens if a company like Apple took a hardline approach to application latencies
To a great extent they already do. Native Mac/iOS apps very rarely exhibit these problems (and many companies producing native Mac apps even advertise this advantage).
I don't think there's much they can do about the Electron apps - they held the line a long time on iOS before letting those run in the first place.
These days I explicitly go out of my way to find and run native programs. Sure they usually require a purchase but (1) devs need to eat too (2) the software is usually higher quality and (3) it’s ultimately cheaper than buying more cool hardware.
If that fails I take whatever remaining Electron options, which is just Discord these days, and create a desktop web app via Safari or Orion. WebKit is much lighter on the system.
Now when I fire up the company machine, which is even more spec’d out, it can feel stupid slow at times.
> It would be interesting to see what happens if a company like Apple took a hardline approach to application latencies the way they did with iOS system framerates...
Any more info on this? Android vs iPhone, and Android TV vs Apple TV suggests Apple is doing something very right on these small devices.
If someone had told me 25 years ago that we would write complex desktop applications like Teams in JavaScript, I would have been worried about their sanity.
25 years ago was 1999. The pioneer of writing desktop-like apps in JavaScript was the Microsoft Outlook team, who created the XMLHttpRequest API so they could build a fast web version of Outlook. That was in 2000. The real pioneer of this approach, though, was Oddpost, which created the first truly desktop-like webmail app using XMLHttpRequest in 2002. It only ran in IE5.
Two years after that, Joel Spolsky, a former PM on the Excel team, wrote his famous essay "How Microsoft Lost the API War" (https://www.joelonsoftware.com/2004/06/13/how-microsoft-lost...), which predicted everything would eventually be rewritten as web apps. Gmail had launched two months earlier.
So, even back then the trend was clear. The Windows team systematically dropped the ball / refused to believe the ball existed, and was outcompeted by much smaller teams writing browsers.
> The Windows team systematically dropped the ball / refused to believe the ball existed
Oh, they had a very good go at killing the web or trying to tie it to their existing APIs: the IE6/ActiveX era.
I'm not really sure what they could have done, though. Other than modernize away from WinForms properly; they have repeatedly dropped the ball on UI frameworks and failed to modernize all their own stuff.
Conversely, Apple appear to have "won" despite (because?) completely ditching all backwards compatibility for their APIs on multiple occasions.
25 years ago, Microsoft was already writing complex desktop apps using JavaScript. That was a major feature of Windows 98: you could have an "HTML UI". (Of course, they made it easy to call into COM for the busy stuff.) I've never used Teams, but maybe it just sucks for reasons having nothing to do with UI programming?
And you would've been right. I'm still worried about their sanity to this day. By the way, it's not just devs: most regular users I deal with on a daily basis hate it (be it Teams or whatever); they just don't know why it's such crap. And they don't have to.
I dunno how relevant that is: 1999 JS performance is closer to 2024 JS performance than 2024 JS performance is to 1999 C++.
IOW, just because 2024 JS might be faster by a factor of 2, doesn't mean it's faster at all[1] than the equivalent program written in native code.
[1] Maybe it is - I haven't checked benchmarks because programming language benchmarks have almost no correlation to reality when you use the language in a program, because the program is not doing 25k calls to the same function in a while loop.
To me, the problem isn't the language, but the use of the web layout renderer. Because it's the one everyone's familiar with, but also the most expensive computationally, and very vulnerable to having to re-layout all sorts of things.
It's not far off. We got some syntax niceties and 5 slightly different and conflicting ways to concatenate .js files from different directories a.k.a packages.
Apart from standardizing some things (doesn't hugely matter for electron since it's the only target) and getting async - what has really changed?
One of the basic causes for slowness is how the underlying language implementation does procedure calls.
Decades ago, I looked at how Microsoft handled procedure calls and returns of data. I found the same problem elsewhere.
The problem was related to the copying of data. A single API call could involve (at the time) up to hundreds of copy actions of the same value, one for each subsequent internal call. The actual processing of that data was minimal in the scheme of things.
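That effect is easy to reproduce in any language. A contrived sketch (not Microsoft's actual code paths) where each internal layer defensively copies its payload before delegating downward:

```python
copies = 0

def layered_call(data, depth, copy_each_layer):
    """Simulate an API call that passes through `depth` internal layers,
    optionally making a defensive copy of the payload at each one."""
    global copies
    if depth == 0:
        return len(data)            # the real work is trivial
    if copy_each_layer:
        copies += 1
        data = bytearray(data)      # full copy of the payload
    return layered_call(data, depth - 1, copy_each_layer)

payload = bytearray(1_000_000)      # a 1 MB payload
layered_call(payload, depth=100, copy_each_layer=True)
print(f"{copies} full copies of a 1 MB payload for one trivial operation")
```

Roughly 100 MB of memory traffic to answer a question that needed none; passing the same payload by reference (`copy_each_layer=False`) does zero copies and identical work.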
This has been a problem amongst many others over the decades and will continue to be so for the foreseeable future.
Things that we did as a matter of course when resources were restricted have been dispensed with once those resource limits were exceeded. The lessons have subsequently been forgotten.
DBus is just as bad (its serialization scheme is bespoke and bloated), and people insist on putting its insanely inefficient overhead into everything so they can claim they "isolated" apps (as long as you ignore the gaping holes torn for and by dbus).
Stupid slowness keeps creeping in. About four months ago, "bash" on Linux started having visibly slow character echo. Some update to Ubuntu 20.04 LTS added something. No idea what. Spell check? LLM-based command completion? Phoning home?
I once had a fancy bash prompt that displayed git status. Turns out that git status hits the network to check the state of the remote repo, and if you are having DNS issues, that's a 30 second timeout every time the prompt is displayed.
In other words, it may not be the OS at fault here. Make sure you've double-checked all your prompt customisations and autocompletes
I don’t think it is true that git status does any remote repo access. That doesn’t match how git or git status works: git status shows the difference between the working copy and the current commit, all of which is local information. From the man page:
Displays paths that have differences between the index file and the current HEAD commit, paths that have differences between the working tree and the index file, and paths in the working tree that are not tracked by Git (and are not ignored by gitignore[5]).
Different story if the FS is a network FS of course.
Perhaps, rather than saying that older operating systems felt faster, and then getting into the weeds debating that (or quoting cold, hard numbers) we should change our terminology: I’d say that older operating systems felt ‘snappier’, which I think is quite different from ‘faster’.
(Side note: I would do unspeakable things for a macOS 9 GUI on a modern, lean OS, plus some cloud integration. That’s all I want, until I think of something else).
I feel insane reading that older systems felt faster/snappier. Especially before SSDs, everything was super slow. You'd wait a minute to boot the system, load up a game, or do anything else substantial. Then of course there were all the app crashes, hang-ups, blue screens, kernel panics and other mandatory restarts that are basically gone now on every major OS, which for some reason we're not counting.
It's super stable and fast now.
Old machines felt faster because the delays better matched our intuitive mental model of what should be difficult. Displaying a single letter when you type should be instant because that's conceptually a very simple task. Switching between open applications should be instant because you can already see them both loaded. Opening a new application can be slow, because that's equivalent to getting up and fetching a new tool from your tool box. If you wanted instant access to it you would have left it on your workbench (left it running).
I mean, C is not my go-to language by far, but handling a truckload of memory operations efficiently seems like an ideal case for it.
Aside from niche tools like AI, that's exactly how software works already.
That's mostly only on their frontpage. Android isn't necessarily all that snappy, for example. Nor is GMail (app nor website).
Also, Teams isn't written in 1999 JavaScript.
https://man.openbsd.org/ssh_config#ObscureKeystrokeTiming
https://news.ycombinator.com/item?id=37307708
(I hope it is, there's something ironic in John Nagle complaining about network-induced latency in interactive sessions.)
I'd wager some clunky autocomplete addon for bash is causing it.