Fully agree with the premise. Most youngish users have never even experienced true performance. Our software stacks are shit and rotten to the core, it's an embarrassment. We have the smartest people in the world enabling chip design approaching the atomic level, and then there's us software "engineers" pissing it away with an inefficiency factor of 10 million %.
We are very, very bad at what we do, yet somehow get richly rewarded for it.
We've even invented a new performance problem: intermittent performance. Performance isn't just poor, it's also extremely variable due to distributed computing, lambda, whatever. So users can't even learn the performance pattern.
Where chip designers move heaven and earth to move compute and data as closely together as is physically possible, leave it to us geniuses to tear them apart as far as we can. Also, leave it to us to completely ignore parallel computing so that your 16 cores are doing fuck all.
You may now comment on why our practices are fully justified.
The reason is that the money people don't want to pay for it. We absolutely have the coding talent to make efficient, maintainable code, what we don't have are the available payroll hours.
10x every project timeline and it's fixed, simple as.
Granted, the big downside is, you have to keep your talent motivated and on-task 10x as long; that's like turning a quarter horse into a plough horse, it's not likely to happen quickly, if at all. You'd really need to start over with the kids who are in high school now writing calculator apps in Python, by making them re-write them in C and grading them on how few lines they use.
ie, it's a pipe dream, and will continue to be until we run out of hardware capability, which has been "soon" for the last 30 years, so don't hold your breath.
> 10x every project timeline and it's fixed, simple as.
I'm skeptical. I've seen too many developers write poorly performing code purely out of indifference. If you gave them more time they'd have a lot more Reddit karma but I'd still be finding N+1 problems in every other code review.
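For anyone who hasn't hit one: the N+1 pattern is one query for the parent rows, then one more query per row, when a single JOIN would do. A minimal sketch with Python's sqlite3 (the schema and names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO posts VALUES (1, 1, 'First'), (2, 1, 'Second'), (3, 2, 'Third');
""")

# N+1: one query for the authors, then one query per author.
for author_id, name in conn.execute("SELECT id, name FROM authors").fetchall():
    titles = conn.execute(
        "SELECT title FROM posts WHERE author_id = ?", (author_id,)
    ).fetchall()  # N extra round trips; brutal once the DB is over a network

# The fix: one JOIN, one round trip.
rows = conn.execute(
    "SELECT a.name, p.title FROM authors a JOIN posts p ON p.author_id = a.id"
).fetchall()
```

With an ORM the N+1 usually hides behind lazy-loaded relations, which is part of why it survives code review so easily.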
Labour productivity relative to wages has doubled since the 1970s, and the trend seems to be continuing. It was about 150% in 2000, so we can use Excel as the benchmark.
This means that an accountant today can wait for Excel to load for 2 whole hours of their 8 hour shift, and still be as productive as an accountant from 20 years ago!
Isn't that amazing! Technology is so cool, and our metrics for defining economic success are incredible.
Yeah, agreed ... but[1], in purely financial terms, we are leaving money on the table, right?
Just because an accountant can burn two hours a day waiting for the computer, doesn't mean that we should burn those two hours.
I think that most of the tech stack is there technology-wise, just not aesthetic wise.
If you are happy with how UI widgets looked and behaved on Windows 7, you can have sub-millisecond startup times now for many apps. Trouble is, people would rather wait and look at pretty things than have an uninterrupted workflow.
[1] You knew there was gonna be a "but", right? Why else would I respond?
People give push back when I tell them they should drop PHP for Go or Python for Rust.
It doesn't matter that it would be better for everyone and the planet. It's a prisoner's dilemma. I only get rewarded and promoted for shipping stuff and meeting deadlines, even if the products are slow.
Thank God for open source. Programmers produce amazing libraries, frameworks, languages and systems when business demands and salary is out of the picture.
Meanwhile some people have been using C# and Java for decades which perform better than Go on many benchmarks and their software is still slow as shit.
I spent years designing and implementing a new data management system where speed was its main advantage. I painstakingly wrote performant code that relied on efficient algorithms that could manage large amounts of data (structured or unstructured). I tried to take advantage of all the hardware capabilities (caching, multiple threads, etc.).
I mistakenly thought that people might flock to it once I was able to demonstrate that it was significantly faster than other systems. I couldn't have been more wrong. Even database queries that were several times faster than other systems got a big yawn by people who saw it demonstrated. No one seemed interested in why it was so fast.
But people will jump on the latest 'fashionable' application, platform, or framework often no matter how slow and inefficient it is.
The wonderful thing is that the hardware performance is there if you just dare to toss the stack. One of my long-standing projects is to build a bare-metal (at least as far as userspace is concerned) programming language called Virgil. It can generate really tiny binaries and relies on hardly any other software (not even C!). As LLVM and V8 and even Go get piggier and piggier, it feels more magical every day.
The most significant feature of Virgil that is not in other mainstream languages is the concept of initialization time. To avoid the need for a large runtime system that dynamically manages heap memory and performs garbage collection, Virgil does not allow applications to allocate memory from the heap at runtime. Instead, the Virgil compiler allows the application to run initialization routines at compilation time, while the program is being compiled. These initialization routines pre-allocate and initialize all data structures that the program will need at runtime.
Isn't LLVM being a piggy good? As in only pay a hefty computational time price once upfront to stuff as many optimizations as you can in your binary, and make it fast forever after.
I'm sure you could do this in a browser if there was actually the desire. A browser may be many levels of abstraction up from a native app, but it is still fairly easy to make apps that respond instantly (except maybe at startup). It's amazing how much extra work current apps are doing to slow things down.
> Most youngish users have never even experienced true performance
Most of them have never experienced the level of instability and insecurity of past computing either. Those improvements aren't free. In the past, particularly with Windows (since this is what the videos recorded), it would be normal for my computer to freeze or crash every other hour. That happens much less today.
I have absolutely no idea what you are talking about "true performance" - maybe you are from an alternate reality? I don't think of the 2000s as "true performance" but rather crashes and bugs and incredibly slow loading times for tiny amounts of data.
I can grep through a gigabyte text file in seconds. I couldn't even do that at all back in 2000.
It's one of the things that are infuriating for technical folks but meh for everybody else.
In the days of programs taking forever to load or web pages downloading images slowly we knew what technical limitations were there.
Now we know how much sluggishness comes from cruft, downright hostility (tracking), sloppy developer work, etc. We know this is a human problem, and that angers us.
But for non-technical people it hasn't changed too much. Computing wasn't super pleasant 25 years ago and it's not now. Instead of waiting half a minute for Word to load they wait 4-5 seconds for Spotify to load. They were never interested in Notepad or Cmd or Paint. It doesn't bother them that now they open slower than in 1999.
It's very surprising to me how much non-tech people are willing to put up with. They'd patiently wait for their phone to freeze for a solid 5 seconds because they installed an antivirus on it, and then proceed to try to use it at 2 FPS.
The quality bar is so low now, it went underground. It's so easy to compete with the status quo. You just pretend that all the "progress" that happened in software development technologies over the last 15 years didn't happen. And, most importantly, you treat JS as a macro language for a glorified word processor and don't try to build actual applications with this stack.
Many non-technical users assume the blame for slowness and clunkiness. So many times I've heard self-blame when helping out family. "I shouldn't have so many pictures on my desktop" (there are 8 pictures). "I just can't figure out these phones" when the phone is dark-patterning them or has functionality hidden six menu items deep behind a swipe/long press. "I never should have let it update, it's never been the same" about a forced update.
I'm even more amazed at my IT colleagues (so "tech" people) putting up with Windows doing god knows what constantly. We have the exact same machines, yet mine somehow basically never has its fan on while running Linux and compiling Rust, whereas theirs is often audible while running Windows with a couple of Chrome tabs and Outlook open.
They also don't bat an eye when lazy devs recommend we should reboot the servers daily. They put up with laggy VNC when they have a perfectly good remote desktop solution set up.
I guess, if it sorta kinda works, people just get used to it. The fans blowing like a jet engine. Apps taking forever to load. Windows getting stuck for no discernible reason.
Latency is one of the things my iPhone-owning friends complain the most about when they try another phone (usually a Samsung loaded to the brim with bloatware).
I agree though I’d say non-technical people are naive and so don’t know why the experience is not ideal or fun or smooth. I suspect if asked the right questions, non-technical people would also complain.
Computers have always been just useful enough for as long as I’ve used them (since the 80s). We’ve _always_ put up with a lot of nonsense and pain because the alternative is worse.
The thing is, users do care, they just don't understand how to articulate it. The Doherty threshold is real, and it's baked into our physiology as humans.
So it does bother them, it's not 'meh', it's just the status quo. Every once in awhile you run across an application or website that's fast, and it's jarring how much better it feels to use. That's something worth striving for.
The strange thing for me is how much time we spent in the early 2000s discussing website responsiveness and quick loading times as ways to improve user engagement and productivity. Although I can't provide any statistics that I'm intimately familiar with, I recall reading numerous case studies where improvements in responsiveness resulted in significant productivity gains for end users. If I recall correctly, this wasn't just about dial-up connections and multi-second page loads. This belief was still prevalent even when discussing sub-second responsiveness.
Perhaps the direction of the case studies started to shift, and we stopped hearing about it. However, it seems to me more like we pushed hard to reach a certain level of speed in our computer usage, then became complacent, and have been regressing ever since.
I don't have any data on it but I think people gradually came to accept the slower loading times as a reasonable cost, in return they got true multimedia and the fat client experience (first with flash, then jquery and so on) which was impossible in hypermedia.
Currently node unifies (or seems to unify) the FE and BE into something that looks pretty worrying (or rather alien) to someone who grew up with LAMP stack and CGI for dynamic content.
I've literally started using Word and Excel less often because of just how long it takes them to start up. Like, I just want to write a quick document or edit my CV, why do I have to sit here and wait for 10-20 seconds??
Electron apps don't have to be shit; VS Code demonstrates this. But the VS Code people are also insanely performance-focused; your average app developer does not care.
There is no mention in this article of the effect of modern security measures on desktop app latency. I'm thinking about things like verifying the signatures of binaries before launch (sometimes this requires network roundtrips, as on macOS). Also, if there are active security scans running in the background, this will eventually affect latency, even if you have lots of efficiency cores that can do this in the background (at some point there will be IO contention).
Another quibble I have with the article is the statement that the visual effects on macOS were smoothly animated from the start. This is not so. Compositing performance on Mac OS X was pretty lousy at first and only got significantly better when Quartz Extreme was released with 10.2 (Jaguar). Even then, Windows was much more responsive than Mac OS X. You could argue that graphics performance on macOS didn't get really good until the release of Metal.
Nowadays, I agree, Windows performance is not great, and macOS is quite snappy, at least on Apple Silicon. I too hope that these performance improvements aren't lost with new bloat.
I don't think all this should be relevant - once it was launched at least one time, the OS should be able to tell that the binary hasn't changed, and thus signature verification is not necessary.
The comparison is using OS X 10.6 - I used to daily drive it and it was pretty snappy on my machine - which is corroborated by this guy's Twitter video capture.
As for Windows performance - Notepad/File Explorer might be slower than strictly necessary, but one of the advantages of Windows' backwards compatibility, is that I keep using the same stuff - Total Commander and Notepad++, that I used from the dawn of time, and those things haven't gotten slow (or haven't changed lol).
Assuming the binary on the disk hasn't changed opens you up to time-of-check-to-time-of-use attacks, where an "evil disk" is substituted that changes the binary after the first load. (Or, more likely, some sort of rootkit.)
Unlikely, yes, but that sort of thing comes up in security reviews.
Even on an M2 Mac, Spotlight takes a second or two to appear after you hit the shortcut. Apple Notes takes an absurd 8 seconds to start. Apple Music and Spotify also take seconds to start. Skype takes 10 seconds.
I'm very happy with my M2 Mac. It's a giant leap forward in performance and battery life. But Electron is a giant leap backward of similar magnitude.
To fix the Spotlight delay, you have to disable Siri in the system settings.
If Siri is enabled, the OS waits until you release the space bar to determine if you performed a long or a short press, which causes Spotlight to be delayed.
I just tried all the examples you gave (on my M1 Max) except Skype, which I don't have installed here; they all load in 1 second or faster. Maybe it's something else on your Mac?
I'm using the Air M2 16GB, switched from Notion to Notes and Reminders, one of the reasons is that these apps are way faster. It also uses under 100MB of ram.
Note: The main reason is that Notion is fantastic but too much for me. I'm not that organized...
The comparison between the win32 Notepad and the UWP version is telling, though, on the same hardware, and with the same security constraints. Similar between the old (Windows 7) calculator and the newer one.
I'm glad you mentioned this in the comments - I was wondering if they were going to touch how applications are sandboxed and everything. I would imagine that is a large part of current 'sluggishness'.
If those mythical safety features actually make an impact then shouldn't they slow down everything, including a hello world program? Yet the performance gap between well-optimized and sluggish software only grows.
Almost certainly not? Sandboxing and anything non-visual can happen at a ridiculously fast pace.
I'd suspect a lot of this is offloading so much of the graphical setup to not the application. Feels like we effectively turned window management into system calls?
I run a machine with lots of RAM and a hefty CPU and a monster GPU and an SSD for my Linux install and...a pair of HDD's with ZFS managing them as a mirror.
Wat. [1]
I also have Windows on a separate drive...that is also an HDD.
Double wat.
My Linux is snappy, but I also run the most minimal of things: I run the Awesome Window Manager with no animations. I run OpenRC with minimal services. I run most of my stuff in the terminal, and I'm getting upset at how slow Neovim is getting.
But my own stuff, I run off of ZFS on hard drives, and I'll do it in VM's with constrained resources.
Why?
While my own desktop has been optimized, I want my software to be snappy everywhere, even on lesser machines, even on phones. Even on Windows with an HDD.
This is my promise: my software will be far faster than what companies produce.
There is a Raymond Chen post (I'll come back to add the link if I find it) that explains how the developers working on Windows 95 were only allowed to use machines with the minimum specs required by the OS. This was to ensure that the OS ran well on those specs.
And, IMHO, that's the way it should be: I think it's insane(?) to give developers top-of-the-line hardware, because such hardware is not representative of the user population... and that's part of why I stick to older hardware for longer than others would say is reasonable.
> I think it's insane(?) to give developers top-of-the-line hardware, because such hardware is not representative of the user population
But a developer's needs are radically different from the user's needs. In typical web dev, the developer's machine is playing the role of the client machine, web server, and database server, all in one. On top of that you'll probably be running multiple editors/IDEs and other dev tools, which are pretty much always processor and memory hungry. Even for desktop development the dev needs to be able to run compilers and debuggers that the user doesn't care a thing about. If you truly care about the low end user experience then you need to do acceptance testing against your minimum supported specs. It's pretty crazy to intentionally hamstring the productivity of some of your most expensive employees.
Merad's sibling post is exactly why my machine is beefy instead of weak: I still need that power.
Of course, I run Gentoo, so...
And I run my Gentoo updates off of a ramdisk, to save my SSD and for that much more speed.
But if I intentionally run my code in hamstrung VM's, it achieves the effect you are advocating, which is a good thing, by the way. I do agree with you.
Oh what ritualistic nonsense. Those developer machines will also have to compile the code and run the application in debug mode, which will make the app slower than on a regular user’s machine if we assume the same hardware.
Fast software that nobody uses helps nobody. Slow software that everyone uses is, well, slow. So I'm curious: how many people use your software? I get that this topic is like nerd rage catnip but if people actually want to help users, then they need to meet users where they're at. And if "marketing" is what's needed, then maybe it is. Software is generally built for humans after all.
Don't get me wrong, All of my servers at home run Void Linux and use runit. Pretty much anything that runs on them is snappy and they run on 10 year old hardware but still sing because I use software written in Go or native languages. But remembering the particulars about runit services and symlinks is something I forget every 3 months between deploying new services. Trying to troubleshoot the logger is also a fun one where I remember then forget a few months later. Using systemd, this all just comes for free. Maybe I should write all of this down but I'm doing this for fun aren't I?
The reason users don't care that much about slow software is because they use software primarily to get things done.
So my new projects are not out yet, but I will market them heavily once they are.
One of them is an init system specifically designed to be easier to use than runit and s6, while still being sane, unlike systemd. Yes, I'm going to focus on ease of use because, as you said, that matters a lot.
However, I do have a project out in public now. It ships with the base system in FreeBSD and also ships with macOS. It is a command-line tool that lots of people may use in bash scripts.
Is that widespread enough?
Also, one reason people adopted mine over the GNU alternative is speed.
> My Linux is snappy, but I also run the most minimal of things: I run the Awesome Window Manager with no animations.
Same: a very lean Linux install with the Awesome WM, now running on a 7700X (NVMe PCIe 4.0 x4 SSD in my case). I also use one of the stock keyboards with the lowest latency (and I bumped its polling rate because why not).
"A vibrant screen that responded instantly when you tapped replaced cramped keyboards."
I tentatively assume this results from a partial edit. Something to do with "replaced your dumbphone and tapped on cramped keyboards" or just maybe the slightly non sequitur "replaced erroneous characters on cramped keyboards"?
Parse errors aside,
"Design Tools - Users are consistently frustrated when Sketch or Figma are slow. Designers have high APM (actions per minute) and a small slowdown can occur 5-10 times per minute.
"
Haven't used those, but with some design tools, it felt like 5-10 times per second, especially when trying to get an idea on screen quickly!
Regarding when it's okay to be slow:
"When a human has to keep up with a machine (e.g. we slow down video game framerates, otherwise the game would run at 60x speed and overwhelm you).
"
I…wouldn't slow the framerate, unless you mean the game's speed is locked to the refresh rate (as some ancient graphical games are), so allowing frames to be generated faster causes the game world to change faster. Or unless an unrestrained framerate results in too much heat or power consumption.
I think there’s an important additional factor, which is how dynamic so much UI is these days. So much is looked up at runtime, rather than being determined at compile time or at least at some static time. That means you can plug a second monitor into your laptop and everything will “just work”. But there is no reason it should take a long time to start system settings (an example from the article) as the set of settings widgets doesn’t change much — for many people, never — and so can be cached either the first time you start it or precached by a background process. Likewise a number of display-specific decisions can be made, at least for the laptop’s screen or phone’s screen, and frozen.
The number of monitors was also checked at runtime in the '90s. The only difference is that back then it was checked only at startup, and now there is some asynchronous function that checks it all the time... Which is to say, now it should be faster.
On any of the complex Linux DEs, the set of settings widgets was always set at startup, since those complex DEs existed. On Windows that varies from one interface to another (there are many), but at least for Win10, things have gotten much more static on the new interface. (I dunno about 11.)
Anyway, the amount of data you could read in the time a Windows app takes to start is staggering. The applications in the article have many orders of magnitude less (relevant) data to deal with.
The "plug in a monitor" case is an example where computers are hilariously (and likely unnecessarily) slow to do something that should be simple.
Say I have two monitors plugged in and running. When I plug in a third one, here's what happens:
1. Monitor 2 goes blank.
2. Monitor 1 flashes off then back on.
3. Monitor 2 comes back on.
4. Both monitors go off.
5. All three monitors finally come back on.
Putting on my developer hat, I kind of know what's going on here. The devices are frantically talking to the device drivers, transmitting their capabilities, the OS is frantically reading config files to understand where to display the virtual desktops, everyone is frantically handshaking with everyone else. It's a terrible design and should not be excused. Putting on my end-user hat, what the fuck is this shit? I just plugged a monitor in. I'm not asking the computer to perform wizardry.
> Linux is probably the system that suffers the least from these issues as it still feels pretty snappy on modest hardware. […]. That said, this is only an illusion. As soon as you start installing any modern app that wasn’t developed exclusively for Linux… the slow app start times and generally poor performance show up.
This is not an illusion. Cross-platform programs suck, so everyone avoids them, right? Electron apps and whatnot are universally mocked. You would only use one for an online service like Spotify or something. The normal use case is downloading some nice native code from your repo.
> Cross-platform programs suck, so everyone avoids them, right?
The thing is, cross-platform tooling sucks. A plain CLI program is already bad enough to get running across platforms - even among Unix/POSIX compliant-ish platforms, which is why autoconf, cmake and a host of other tools exist... but GUI programs? That's orders of magnitude worse to get right. And games manage to blow past that as every platform has their completely own graphics stack.
Electron and friends abstract all that suckiness away from developers and allow management to hire cheap fresh JS coding bootcamp graduates instead of highly capable and paid Qt/Gtk/SDL/whatever consultants.
No, they don't always suck. As an example, Qt is cross-platform, fast, and complete.
But Electron is god-awfully slow, and Eclipse can be too. The difference is that Qt apps are compiled and generally written in C/C++, whereas Electron and Eclipse are interpreted/JIT'ed and use GC under the hood. As a consequence, they run at 1/2 the speed, use many times more RAM, and to make matters worse, Electron is single-threaded.
The problem isn't cross-platform. It's the conveniences most programmers lean on to make themselves productive - like GC, or async so they can avoid the dangers of true concurrency, or an interpreter so they don't have to recompile for every platform out there. They do work - we are more productive in turning out code - but they come at a cost.
Browsers are ridiculously fast. People just add more and more until they hit the wall where they need to start optimizing. For someone trying to get things done, that’s probably the right method. But it means that ultimately what determines how fast your program runs is not how fast your computer runs, but how much delay you’re okay with.
It's not just cross-platform programs that suck. Applications moving to gtk4 from gtk3 add an extra 100-200ms of startup latency every time because of OpenGL initialization and shader compilation (there is currently no cache), varying a lot depending on your CPU. Applications that used to open instantly now have a noticeable pause, even if it's still short compared to the worst Windows applications. It's been reported multiple times for well over a year now and no improvements have been made.
Not really sure about the cross-platform caveat... `dropbox-lnx` eats away 265MB of RES for its file-syncing job; I'm pretty sure what Dropbox was doing on my old laptop was very heavily 265MBly important. Also, `keepassxc` takes away 180MB of RES when you open it.
The first Linux I used, Mandriva (KDE), ran smoothly on my Intel Pentium 4 PC with 256MB of RAM. I could even do some web browsing (Flash gaming) and music playing without it feeling slow. That's an entire OS, running on 256MB of RAM.
On MDV, KDE and PIV, same here with an Athlon 2000 and 256 MB of RAM with Knoppix and later FreesBIE with XFCE, which ran much faster and snappier than even today's i7's with Plasma 5.
And if it's for an online service, why even have it as a separate app? Just run it in your browser. You'll get the exact same thing but in the one instance of Chromium that you already run anyway. You also get the added benefit that your extensions work with it.
These kinds of realizations have made me look into permacomputing [1], suckless [2] and related fields of research and development in the past few months.
We should be getting so much more from our hardware! Let's not settle for software that makes us feel bad.
Mind you, the software we had 20 years ago was fully featured, smaller and faster than what we have today (orders of magnitude faster when adjusted for the increase in hardware speed).
Suckless, on the other hand, is impractical esthetic minimalism that removes "bloat" by removing the program. I'd rather run real software than an art project.
If you want more from your hardware, the answer is neither the usual bloatware, nor Suckless crippleware.
It's funny, the older I get the less I care about this stuff. I like to use technology to, well, live my life in a more effective, effort-free manner. I have lots of friends who aren't other nerdy techies. In high school I refused to use "proprietary, inefficient WYSIWYG garbage that disrespected the user" like Word and typed up all of my essays in LaTeX instead. Now I get accounting spreadsheets for vacations going on my smartphone using Google Sheets. I still love writing code but my code has become more oriented around features and experiences for myself rather than code for the sake of code.
I love exploring low-latency code but rather than trying to create small, sharp tools the suckless way, I like to create low latency experiences end-to-end. Thinking about rendering latency, GUI concurrency, interrupt handling, etc. Suckless tools prioritize the functionality and the simplicity of code over the actual experience of using the tool. One of my favorite things to do is create offline-first views (which store things in browser Local Storage) of sites like HN that paper over issues with network latency or constrained bandwidth leading to retries.
I find suckless and permacomputing to be the siren song of a type of programmer, the type of programmer who shows up to give a presentation and then has to spend 10 minutes getting their lean Linux distro to render a window onto an external screen at the correct DPI, or even to connect to the wifi using some wpa_supplicant incantations.
It's the cycle and not just in software. Younglings are idealists and try a thousand things, some of which happen to change and improve the ever-turning wheel of software, while elders turn the wheel and ensure the system, practices and knowledge continue and get transferred.
Fully agree. When I was younger I used to care more about privacy, and would use nothing but FOSS apps and OSes no matter how much it inconvenienced me.
There's a significant opportunity cost for zealotry.
WTF happened to all the THOUSANDS AND THOUSANDS of machines I deployed to datacenters over the decades? Where are the F5 load balancers I spent $40,000 on per box in 1999?
I know that when we did Lucas' presidio migration, tens of million$ of SGI boxes went to the ripper. That sucks.
edit:
All these machines could be used to house a 'slow internet' or 'limited internet'.
Imagine when we graduate our wealth gap to include the information gap - where the poor only have access to an internet indexed to September 2021 - oh wait...
But really - that is WHAT AI will bring: an information gap: only the wealthy companies will have real-time access to information on the internet - and all the poor will have outdated information that will already have been mined for its value.
think of how HFT network cards with insane packet buffers, and private fiber lines gave hedgies the microsecond advantages on trading stocks...
That's basically what AI will power - the hyper-accelerated exploitation of information on the web via AI - but the common man will be relegated to obsolete AI, while the @sama people of the world build off-planet bunkers.
Occasionally I end up with a truckload of gear from things like that. The circumstances that saved it from shredding are usually something like Founding Engineer X couldn't stand to see all that nice workstation stuff go in the trash so he kept it in his garage for 25 years and now his kids are selling it.
I use Suckless terminal myself, but if I'm not mistaken it's actually not the fastest terminal out there, despite its simplicity[^1]. My understanding is that many LOCs and complex logic routines are dedicated to hardware/platform-specific optimizations and compatibility, but admittedly this type of engineering is well beyond my familiarity.
Also, OpenBSD's philosophy is very similar to Suckless. One of the more notable projects that come to mind is the `doas` replacement for `sudo`.
[^1]: This is based on Dan Luu's testing (https://danluu.com/term-latency/). I don't know when this testing was done but I assume a few years ago because I remember finding it before.
I'm somewhat offended on behalf of OpenBSD! doas is a good program written for good reasons (cf. https://flak.tedunangst.com/post/doas). Suckless sudo would ensure it was as painful as possible to use, so suckless fans feel like cool sysadmin hackers setting it up. (Just compile in the permitted uids lol!)
Yeah, totally possible to get excellent results with older hardware, and really stellar results with very new hardware, if you're running stuff that's not essentially made to be slow.
I basically only upgrade workstations due to web browsers needs, and occasionally because a really big KiCAD project brings a system to a crawl. At this point even automated test suite runtimes are more improved by fixing things that stop test parallelization from working efficiently vs. bigger hardware.
I am constantly dumbfounded by the fact that the biggest consumer of memory on my machine is a FN web browser! (That said, I *do* have basically like 30 tabs open at any given time...)
(It would be cool if dormant tabs could just hold the URL and release all their memory after being dormant for N amount of time.)
But heck, even when I am running a high-end game on my machine, the memory consumption is less than when I have a bunch of tabs open displaying mostly text ...
My impression of Suckless is that it’s “Unix philosophy” software where you edit the code and recompile instead of using dynamic configuration like all those config files. And while there are way too many ad hoc app-specific config systems out there, I don’t see how Suckless makes a huge difference for simplifying things.
As noted by another thread, the Notepad example is surprisingly telling.
My initial gut reaction was to blame the modern drawing primitives. I know that a lot of the old occlusion-based ideas were somewhat cumbersome for the application, but they also made a lot of sense as a way to scope down all of the work that an app had to do.
That said, seeing Notepad makes me think it is not the modern drawing primitives, but the modern application frameworks? Would be nice to see a trace of what all is happening in the first few seconds of starting these applications. My current imagination is that it is something akin to a full classpath scan of the system to find plugins that the application framework supported, but that all too many applications don't even use.
That is, writing an application used to start with a "main", and you did everything to set up the window and what you wanted to show. Nowadays, you are as likely to have your main offloaded to some logic that your framework provides, with you supplying a ton of callbacks/entrypoints for the framework to come back to.
> Granted, the big downside is, you have to keep your talent motivated and on-task 10x as long [...]
No, the big downside is that you have to tell your client it'll be 10x as expensive because you want better performance.
Labour productivity vs wages has doubled since the 1970s and the trend seems to be continuing. It was about 150% in 2000, so we can use Excel as the benchmark.
This means that an accountant today can wait for Excel to load for 2 whole hours of their 8 hour shift, and still be as productive as an accountant from 20 years ago!
Isn't that amazing! Technology is so cool, and our metrics for defining economic success are incredible.
Just because an accountant can burn two hours a day waiting for the computer, doesn't mean that we should burn those two hours.
I think that most of the tech stack is there technology-wise, just not aesthetic wise.
If you are happy with how UI widgets looked and behaved on Windows 7, you can have sub-millisecond startup times now for many apps. Trouble is, people would rather wait and look at pretty things than have an uninterrupted workflow.
[1] You knew there was gonna be a "but", right? Why else would I respond?
It doesn't matter that it would be better for everyone and the planet. It's a prisoner's dilemma. I only get rewarded and promoted for shipping stuff and meeting deadlines even if the products are slow.
Thank God for open source. Programmers produce amazing libraries, frameworks, languages and systems when business demands and salary is out of the picture.
I mistakenly thought that people might flock to it once I was able to demonstrate that it was significantly faster than other systems. I couldn't have been more wrong. Even database queries that were several times faster than other systems got a big yawn by people who saw it demonstrated. No one seemed interested in why it was so fast.
But people will jump on the latest 'fashionable' application, platform, or framework often no matter how slow and inefficient it is.
It may be big, but at the end of the day it will produce some fast binaries if fed the right thing.
Too many people are arguing that things like Slack and Linen are performing as well as could be expected due to the functionality they are providing.
https://9to5mac.com/2021/02/23/quill-chat-team-messaging-iph...
Most of them have never experienced the level of instability and insecurity of past computing either. Those improvements aren't free. In the past, particularly with Windows (since this is what the videos recorded), it would be normal for my computer to freeze or crash every other hour. It happens much less today.
I can grep through a gigabyte text file in seconds. I couldn't even do that at all back in 2000.
In the days of programs taking forever to load or web pages downloading images slowly we knew what technical limitations were there.
Now we know how much sluggishness comes from cruft, downright hostility (tracking), sloppy developer work, etc.. we know this is a human problem and that angers us.
But for non-technical people it hasn't changed too much. Computing wasn't super pleasant 25 years ago and it's not now. Instead of waiting half a minute for Word to load they wait 4-5 seconds for Spotify to load. They were never interested in Notepad or Cmd or Paint. It doesn't bother them that now they open slower than in 1999.
The quality bar is so low now, it went underground. It's so easy to compete with the status quo. You just pretend that all the "progress" that happened in software development technologies over the last 15 years didn't happen. And, most importantly, you treat JS as a macro language for a glorified word processor and don't try to build actual applications with this stack.
They also don't bat an eye when lazy devs recommend rebooting the servers daily. They put up with laggy VNC when they have a perfectly good remote desktop solution set up.
I guess, if it sorta kinda works, people just get used to it. The fans blowing like a jet engine. Apps taking forever to load. Windows getting stuck for no discernible reason.
Computers have always been just useful enough for as long as I’ve used them (since the 80s). We’ve _always_ put up with a lot of nonsense and pain because the alternative is worse.
So it does bother them, it's not 'meh', it's just the status quo. Every once in awhile you run across an application or website that's fast, and it's jarring how much better it feels to use. That's something worth striving for.
Perhaps the direction of the case studies started to shift, and we stopped hearing about it. However, it seems to me more like we pushed hard to reach a certain level of speed in our computer usage, then became complacent, and have been regressing ever since.
Currently node unifies (or seems to unify) the FE and BE into something that looks pretty worrying (or rather alien) to someone who grew up with LAMP stack and CGI for dynamic content.
Enter htmx to turn the tide again.
Electron apps don't have to be shit; VS Code demonstrates this. But the VS Code people are also insanely performance-focused; your average app developer does not care.
Another quibble I have with the article is the statement that the visual effects on macOS were smoothly animated from the start. This is not so. Compositing performance on Mac OS X was pretty lousy at first and only got significantly better when Quartz Extreme was released with 10.2 (Jaguar). Even then, Windows was much more responsive than Mac OS X. You could argue that graphics performance on macOS didn't get really good until the release of Metal.
Nowadays, I agree, Windows performance is not great, and macOS is quite snappy, at least on Apple Silicon. I too hope that these performance improvements aren't lost with new bloat.
The comparison is using OS X 10.6 - I used to daily drive it and it was pretty snappy on my machine - which is corroborated by this guy's Twitter video capture.
As for Windows performance - Notepad/File Explorer might be slower than strictly necessary, but one of the advantages of Windows' backwards compatibility is that I keep using the same stuff - Total Commander and Notepad++, that I used from the dawn of time, and those things haven't gotten slow (or haven't changed lol).
Unlikely, yes, but that sort of thing comes up in security reviews.
I'm very happy with my M2 Mac. It's a giant leap forward in performance and battery life. But Electron is a giant leap backward of similar magnitude.
If Siri is enabled, the OS waits until you release the space bar to determine if you performed a long or a short press, which causes Spotlight to be delayed.
Note: The main reason is that Notion is fantastic but too much for me. I'm not that organized...
I'd suspect a lot of this is from offloading so much of the graphical setup out of the application. It feels like we effectively turned window management into system calls.
I run a machine with lots of RAM and a hefty CPU and a monster GPU and an SSD for my Linux install and...a pair of HDD's with ZFS managing them as a mirror.
Wat. [1]
I also have Windows on a separate drive...that is also an HDD.
Double wat.
My Linux is snappy, but I also run the most minimal of things: I run the Awesome Window Manager with no animations. I run OpenRC with minimal services. I run most of my stuff in the terminal, and I'm getting upset at how slow Neovim is getting.
But my own stuff, I run off of ZFS on hard drives, and I'll do it in VM's with constrained resources.
Why?
While my own desktop has been optimized, I want my software to be snappy everywhere, even on lesser machines, even on phones. Even on Windows with an HDD.
This is my promise: my software will be far faster than what companies produce.
Because speed will be my killer feature. [2]
[1]: https://www.destroyallsoftware.com/talks/wat
[2]: https://bdickason.com/posts/speed-is-the-killer-feature/
And, IMHO, that's the way it should be: I think it's insane(?) to give developers top-of-the-line hardware, because such hardware is not representative of the user population... and that's part of why I stick to older hardware for longer than others would say is reasonable.
But a developer's needs are radically different from the user's needs. In typical web dev, the developer's machine is playing the role of the client machine, web server, and database server, all in one. On top of that you'll probably be running multiple editors/IDEs and other dev tools, which are pretty much always processor and memory hungry. Even for desktop development the dev needs to be able to run compilers and debuggers that the user doesn't care a thing about. If you truly care about the low end user experience then you need to do acceptance testing against your minimum supported specs. It's pretty crazy to intentionally hamstring the productivity of some of your most expensive employees.
Of course, I run Gentoo, so...
And I run my Gentoo updates off of a ramdisk, to save my SSD and for that much more speed.
But if I intentionally run my code in hamstrung VM's, it achieves the effect you are advocating, which is a good thing, by the way. I do agree with you.
Don't get me wrong, All of my servers at home run Void Linux and use runit. Pretty much anything that runs on them is snappy and they run on 10 year old hardware but still sing because I use software written in Go or native languages. But remembering the particulars about runit services and symlinks is something I forget every 3 months between deploying new services. Trying to troubleshoot the logger is also a fun one where I remember then forget a few months later. Using systemd, this all just comes for free. Maybe I should write all of this down but I'm doing this for fun aren't I?
The reason users don't care that much about slow software is because they use software primarily to get things done.
So my new projects are not out yet, but I will market them heavily once they are.
One of them is an init system specifically designed to be easier to use than runit and s6, while still being sane, unlike systemd. Yes, I'm going to focus on ease of use because, as you said, that matters a lot.
However, I do have a project out in the public now. It ships with the base system in FreeBSD and also ships with macOS. It is a command-line tool that lots of people may use in bash scripts.
Is that widespread enough?
Also, one reason people adopted mine over the GNU alternative is speed.
Same: a very lean Linux install with the Awesome WM, now running on a 7700X (NVMe PCIe 4.0 x4 SSD in my case). I also use one of the keyboards with the lowest stock latency (and I bumped its polling rate because why not).
This feels not just a bit but very snappy.
"A vibrant screen that responded instantly when you tapped replaced cramped keyboards."
I tentatively assume this results from a partial edit. Something to do with "replaced your dumbphone and tapped on cramped keyboards" or just maybe the slightly non sequitur "replaced erroneous characters on cramped keyboards"?
Parse errors aside,
"Design Tools - Users are consistently frustrated when Sketch or Figma are slow. Designers have high APM (actions per minute) and a small slowdown can occur 5-10 times per minute. "
Haven't used those, but with some design tools, it felt like 5-10 times per second, especially when trying to get an idea on screen quickly!
Regarding when it's okay to be slow:
"When a human has to keep up with a machine (e.g. we slow down video game framerates, otherwise the game would run at 60x speed and overwhelm you). "
I…wouldn't slow the framerate, unless you mean the game's speed is locked to the refresh rate (as some ancient graphical games are), so allowing frames to be generated faster causes the game world to change faster. Or unless an unrestrained framerate results in too much heat or power consumption.
I think there’s an important additional factor, which is how dynamic so much UI is these days. So much is looked up at runtime, rather than being determined at compile time or at least at some static time. That means you can plug a second monitor into your laptop and everything will “just work”. But there is no reason it should take a long time to start system settings (an example from the article) as the set of settings widgets doesn’t change much — for many people, never — and so can be cached either the first time you start it or precached by a background process. Likewise a number of display-specific decisions can be made, at least for the laptop’s screen or phone’s screen, and frozen.
Here’s some sobering perspective on this from 40 years ago: https://www.folklore.org/StoryView.py?story=Saving_Lives.txt
On any of the complex Linux DEs, the set of settings widgets has always been fixed at startup, for as long as those complex DEs have existed. On Windows that varies from one interface to another (there are many), but at least for Win10, things have gotten much more static on the new interface. (I dunno about 11.)
Anyway, the amount of data a Windows app can read about the user at startup is staggering. The applications in the article have many orders of magnitude less (relevant) data to deal with.
Say I have two monitors plugged in and running. When I plug in a third one, here's what happens:
1. Monitor 2 goes blank.
2. Monitor one flashes off then back on.
3. Monitor 2 comes back on.
4. Both monitors go off.
5. All three monitors finally come back on.
Putting on my developer hat, I kind of know what's going on here. The devices are frantically talking to the device drivers, transmitting their capabilities, the OS is frantically reading config files to understand where to display the virtual desktops, everyone is frantically handshaking with everyone else. It's a terrible design and should not be excused. Putting on my end-user hat, what the fuck is this shit? I just plugged a monitor in. I'm not asking the computer to perform wizardry.
This is not an illusion. Cross-platform programs suck, so everyone avoids them, right? Electron apps and whatnot are universally mocked. You would only use one for an online service like Spotify or something. The normal use case is downloading some nice native code from your repo.
The thing is, cross-platform tooling sucks. A plain CLI program is already bad enough to get running across platforms - even among Unix/POSIX-compliant-ish platforms, which is why autoconf, CMake and a host of other tools exist... but GUI programs? That's orders of magnitude worse to get right. And games manage to blow past even that, as every platform has its own completely different graphics stack.
Electron and friends abstract all that suckiness away from developers and allow management to hire cheap fresh JS coding bootcamp graduates instead of highly capable and paid Qt/Gtk/SDL/whatever consultants.
No, they don't always suck. As an example, Qt is cross-platform, fast and complete.
But Electron is god-awfully slow, and Eclipse can be too. The difference is Qt apps are compiled and generally written in C/C++, whereas Electron and Eclipse are interpreted/JIT'ed and use GC under the hood. As a consequence, they run at 1/2 the speed, use many times more RAM, and to make matters worse Electron is single-threaded.
The problem isn't cross-platform. It's the conveniences most programmers lean on to make themselves productive - like GC, or async so they can avoid the dangers of true concurrency, or an interpreter so they don't have to recompile for every platform out there. They do work - we are more productive in turning out code - but they come at a cost.
Who?
I see people constantly talking about those "consultants", yet who are they?
Just a dev with a single-person company?
Maybe Tauri will make things better in the long run, but so far I haven't heard of any adoption at all.
The first Linux that I used, Mandriva (KDE), ran smoothly on my Intel Pentium 4 PC with 256MB of RAM. I could even do some web browsing (Flash gaming) and music playing without it feeling slow. That's an entire OS, running on 256MB of RAM.
Same here with Mandriva, KDE and a P4 - and with an Athlon 2000 and 256 MB of RAM running Knoppix and later FreeSBIE with XFCE, which ran much faster and snappier than even today's i7s with Plasma 5.
We should be getting so much more from our hardware! Let's not settle for software that makes us feel bad.
[1] http://www.permacomputing.net/
[2] https://suckless.org/
Mind you, the software we had 20 years ago was fully featured, smaller and faster than what we have today (orders of magnitude faster when adjusted for the increase in hardware speed).
Suckless, on the other hand, is impractical aesthetic minimalism that removes "bloat" by removing the program. I'd rather run real software than an art project.
If you want more from your hardware, the answer is neither the usual bloatware, nor Suckless crippleware.
I love exploring low-latency code but rather than trying to create small, sharp tools the suckless way, I like to create low latency experiences end-to-end. Thinking about rendering latency, GUI concurrency, interrupt handling, etc. Suckless tools prioritize the functionality and the simplicity of code over the actual experience of using the tool. One of my favorite things to do is create offline-first views (which store things in browser Local Storage) of sites like HN that paper over issues with network latency or constrained bandwidth leading to retries.
No WPA incantations, the OS handles multiple connections seamlessly.
With that, plus magicpoint/sent and xrandr, you just don't care - it works.