I had an Atari ST in a closet, and decided to get rid of it a while back. I pulled it out to test it. The boot sequence, all the way to a desktop with a mouse that moves, takes less than one second. If you boot from a hard drive, maybe another second. For a while I just kept hitting the reset button, marveling at the speed at which the ST came up.
Most machines I work with these days take minutes to get rolling.
Okay, I know that systems are bigger and more complicated now; buses have to be probed and trained, RAM has to be checked, network stuff needs to happen, etc., etc., but minutes? This is just industry laziness, a distributed abdication of respect for users, a simple piling-on of paranoia and "one or two more seconds won't matter, will it?"
Story time.
A cow-orker of mine used to work at a certain very large credit card company. They were using IBM systems to do their processing, and downtime was very, very, very important to them. One thing that irked them was the boot time for the systems, again measured in minutes; the card company's engineers were pretty sure the delays were unnecessary, and asked IBM to remove them. Nope. "Okay, give us the source code to the OS and we'll do that work." Answer: "No!"
So the CC company talked to seven very large banks, the seven very large banks talked to IBM, and IBM humbly delivered the source code to the OS a few days later. The CC company ripped out a bunch of useless gorp in the boot path and got the reboot time down to a few tens of seconds.
When every second is worth money, you can get results.
Corporate Windows installations are unnecessarily slow because they run antivirus and all kinds of domain-membership stuff in the boot and logon path. A clean install with fast boot enabled and without that gunk takes seconds.
A headless Linux box can come up in seconds (UEFI fast boot + EFI stub), or in less than a second if you're in a VM and don't have to deal with firmware startup. Booting to a lightweight window manager would only add a few seconds on top.
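If you want to see where those seconds actually go on your own machine, systemd's timing tools break it down. A minimal sketch, assuming a systemd-based distro so that systemd-analyze is available:

```python
# Print systemd's firmware/loader/kernel/userspace boot-time split, plus the
# units that took longest to start. Assumes `systemd-analyze` is on the PATH.
import subprocess

def boot_breakdown() -> str:
    """Overall boot-time split as reported by `systemd-analyze time`."""
    out = subprocess.run(["systemd-analyze", "time"],
                         capture_output=True, text=True, check=True)
    return out.stdout

def slowest_units(n: int = 10) -> str:
    """Top of `systemd-analyze blame`: the n slowest-starting units."""
    out = subprocess.run(["systemd-analyze", "blame"],
                         capture_output=True, text=True, check=True)
    return "\n".join(out.stdout.splitlines()[:n])

if __name__ == "__main__":
    print(boot_breakdown())
    print(slowest_units())
```

This won't show firmware POST in any useful detail, but it makes it very obvious when the OS side of the boot is spending its time on network waits or update agents rather than anything essential.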
Headless, schmeadless: an RPi with the Slowest Disk In The Universe comes up to a functional GUI in under a minute. Slower boots (and, correspondingly, slower day-to-day work) on far beefier hardware come down to pure crud. Sure, you need a bazillion updaters, two for each program, and three antiviruses checking each other - in other words, "users have fast computers, SHIP IT!"
If your host machine is a real server (e.g. a PowerEdge), it'll do a self-test. This already takes tens of seconds. If you want fast boot times, you basically need to be the BIOS. The examples in the top post are things that either load data off a flash chip (like a BIOS does) or off a disk (which requires some bootstrapping).
I'm actually relatively confident that you could get it down to 8-9 seconds to the basic X environment with an NVMe SSD and maybe even less if you were to cache some application states.
> The boot sequence, all the way to a desktop with a mouse that moves, takes less than one second.
That's an exceptional case though - the GUI OS was hardwired in ROM. The Amiga was an otherwise comparable machine that had to load much of the OS from disk, and it did take its time to boot to desktop.
Even when the OS was loaded off of a hard drive, the boot time was still about 2-3 seconds (I did measure this).
Not really exceptional.
It took maybe a minute to load off floppy disk. That is STILL shorter than the POST time for every machine I work with these days, with the possible exception of the Raspberry Pi in the closet.
Several years ago I did some consulting for a large travel booking service that advertises on TV.
As strange as it might sound, there's a large artificial delay added between the time the service knows the answer to the customer's search and the time the answer is sent to the customer's web browser.
The reason for that delay is that without it, customers don't believe the service performed an exhaustive search and don't complete the transaction!
Also, it stops the client from hammering the server... but this is a very inelegant way to do it, and devs would rightfully complain about such crude measures, so it might be necessary to frame it differently ;)
The login screen is hardly the end of the boot-up process. I have had work laptops where I would log in and then go get some water, because it was going to take several minutes to finish loading.
> Most machines I work with these days take minutes to get rolling.
Minutes? That sounds exaggerated.
How powerful was your Atari ST compared to other machines at the time versus the machines you work with these days compared to other machines available?
Because I'm not even on a particularly new machine and from a powered off state I'm logged into Windows and starting applications within 5 seconds. And for example that's at 7680x2880 resolution, not 640x400.
POST on a Dell M640 is about three minutes. Other Dell systems are similar. POST on the workstations that I use are in the range of 1-2 minutes. This is before the OS gets control (once that happens, it's usually 15-20 seconds to a usable system).
The ST was a pretty decent performer at the time (arguably a little faster than the original Macintosh, for instance). Both the ST and the Macintosh took about the same amount of time to boot from floppy disk (though the OS was in ROM for the vast majority of STs that were built).
I've used 4.3BSD on a VAX 11/780 - and it's remarkable to me how similar the experience is; even vi startup times are close. It's weird. I guess some things only go so fast. Similarly, my OS X 10.4 (or 10.5 desktop) laptop boots only marginally slower than my OS X 10.14 laptop.
The latest AMD CPUs are particularly bad at this. I got a 3600, and for half a year now there have been known problems with extremely slow booting. The latest BIOS update made things a bit better, but it's still at completely unacceptable levels.
I just put a Ryzen 5 3400G in a B450 board and it hits the login screen from cold start in like 3 seconds (and no, there's not still a bunch of stuff churning in the background; it's all but done loading at that point).
An SSD can make a world of difference. Most of the time spent during boot-up goes to random reads (of about 4 KB) from storage, and SSDs are an order of magnitude faster there.
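To put a rough number on that access pattern yourself, here's a minimal sketch: random ~4 KiB reads from a file. It's not a proper benchmark (the OS page cache will flatter repeated runs), and the path below is just a placeholder for any large file on the drive you want to poke at:

```python
# Rough average latency of random 4 KiB reads. Illustrative only: a real
# benchmark would bypass or drop the page cache first.
import os
import random
import time

def avg_random_read_ms(path: str, block: int = 4096, samples: int = 500) -> float:
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(samples):
            # Seek to a random block-aligned-ish offset and read one block.
            os.lseek(fd, random.randrange(0, max(size - block, 1)), os.SEEK_SET)
            os.read(fd, block)
        return (time.perf_counter() - start) * 1000 / samples
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Placeholder path; point it at a multi-GB file on the disk under test.
    print(f"{avg_random_read_ms('/var/tmp/testfile.bin'):.3f} ms per 4 KiB read")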
Sorry, most of a box's boot time is the BIOS and other stuff. The servers I run at work take 3-4 minutes to boot, the last twenty seconds or so is the OS coming up. The consumer PCs I use have a similar ratio.
POST time is crazy bad. It's almost like the engineers working on it don't care.
Kind of related: does anyone else notice how long it takes to change channels on the TV these days? It used to be instantaneous when cable first came out and at some point it became this laggy experience where you'll press buttons on the remote and the channel takes forever to change. I hate it and it's one of the reasons I don't have cable any more.
I-frames are only sent, say, once or twice a second.
When a channel is switched, the TV has to wait for the next I-frame, since P-frames (and B-frames) only encode the difference to the previous I-frame (or to the previous and next I-frame in the case of B-frames).
If you are aware of a possibility for efficient video compression that avoids this problem, tell the HN audience; the really smart people who developed the video codecs apparently have not found a solution for this. ;-)
Otherwise, complain to your cable provider that they don't send I-frames more often to decrease the time to switch between channels (doing so would increase the necessary bandwidth).
It's actually worse than that - first you have to tune, then you have to wait for a PAT frame (which has an index of PMTs in it), then you have to wait for a PMT (which contains pointers to the audio, video and ECM streams), and then you have to wait for an ECM (the encrypted key for the stream). At that point you can decrypt the video and start looking for I-frames ....
(smart systems both cache a whole bunch of this stuff and revalidate their caches on the fly while they are tuning - first tunes after boot might be slower as these caches are filled)
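For a back-of-envelope feel of why this chain hurts: the expected wait for anything repeated every T seconds, if you tune in at a random moment, is roughly T/2. The intervals below are assumptions picked for illustration, not measurements from any real headend:

```python
# Back-of-envelope channel-change latency for the chain described above:
# tune/lock, then wait (in sequence) for PAT, PMT, ECM, and finally an I-frame.
def expected_wait(period_s: float) -> float:
    """Average wait for a periodic event when you arrive at a random time."""
    return period_s / 2.0

tune_and_lock = 0.25   # assumed: RF tune + QAM lock
pat_period    = 0.10   # assumed PAT repetition interval
pmt_period    = 0.10   # assumed PMT repetition interval
ecm_period    = 0.50   # assumed ECM repetition interval
iframe_period = 1.00   # I-frames "once or twice a second", per the parent

total = (tune_and_lock
         + expected_wait(pat_period)
         + expected_wait(pmt_period)
         + expected_wait(ecm_period)
         + expected_wait(iframe_period))
print(f"expected channel change: ~{total:.2f} s")   # ~1.10 s with these numbers

# A "smart" box that has PAT/PMT/ECM cached only pays for the tune and the I-frame wait.
cached = tune_and_lock + expected_wait(iframe_period)
print(f"with caches warm: ~{cached:.2f} s")          # ~0.75 s with these numbers
```

With numbers like these, the I-frame wait dominates once the caches are warm, which is why the rest of the thread keeps circling back to what to show while waiting for it.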
> When a channel is switched, the TV has to wait for the next I-frame, since P-frames (and B-frames) only encode the difference to the previous I-frame (or to the previous and next I-frame in the case of B-frames).
You can apply the encoded difference to a grey (or black!) screen, the way (IIRC) VLC does in such cases. This means that the user immediately gets a hint of what's happening onscreen, especially since the audio can start playing immediately (also, often the P/B-frames replace a large portion of the on-screen content as people move around, etc.). Surely it isn't any worse than analog TV "snow".
If it looks too weird for the average user, make it an "advanced" option - 'quick' channel changes or something.
I remember when a family member first got satellite - the tuner would change instantly, and the show would "pepper in" over a second or so until the next full frame. There's no technical reason the change can't be displayed immediately - it might not be pretty, but whether that's preferable is subjective.
Unless I'm very mistaken about modern digital transmissions, cable coming into the house is still based on a broadcast system, which means you're getting everything all the time. The frames are all there for every channel; they're just not being read. I don't know how much processing or memory it would take to read and store frames for the channels surrounding the one you're on, but I imagine it's possible.
Cache the most recent I-frame on the network and have the STB pull + display the cached I-frame till it catches up? This would enable fast channel scanning/flipping at the very least ...
No need to reinvent video encoding. At least my local provider seems to fix this by streaming all the channels as multicast continuously, and then having the TV box request a small bit of video over normal TCP to do the channel switch immediately, only later syncing to the multicast. That allows you to change channels quickly at any time and start watching from whatever the latest I-frame was.
I notice this happening when IGMP forwarding is broken in my router: channels will only play for a second or two after being switched to and then stop. Switch times are pretty good.
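A very rough sketch of that "request a burst over TCP, then sync to the multicast" channel change described a couple of comments up. The server name, ports, group address and the BURST request line are all invented for illustration; real IPTV deployments use their own (often RTSP/RTP-based) catch-up protocols:

```python
# Sketch: fetch a unicast catch-up burst for the new channel, then join its
# multicast group for the live stream. Addresses and protocol are hypothetical.
import socket
import struct

def fetch_catchup_burst(server: str, port: int, channel_id: int) -> bytes:
    """Ask a (hypothetical) catch-up server for everything since the last I-frame."""
    with socket.create_connection((server, port), timeout=2.0) as conn:
        conn.sendall(f"BURST {channel_id}\n".encode())
        chunks = []
        while chunk := conn.recv(65536):
            chunks.append(chunk)
    return b"".join(chunks)

def join_channel_multicast(group: str, port: int) -> socket.socket:
    """Join the channel's multicast group; returns a socket to read TS packets from."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Usage sketch: play the burst right away, then hand off to the live multicast.
# burst = fetch_catchup_burst("catchup.example.net", 8554, channel_id=42)
# live  = join_channel_multicast("239.1.1.42", 5000)
```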
Or, the TV UX designers could realize this and make the TV respond instantly by switching, e.g., to the channel logo plus the name of the current programme (available from the EPG) and then replacing it with video half a second later.
This would allow a rapid channel surfing, something I haven't been able to do on any recent TV.
Why not have the previous and next channels' frame data loaded in the background? This would enable quick switching, even if it costs a bit more in hardware resources.
No. It's not a codec problem. They can leave it on the last decoded frame and fade it nicely to, say, the average color for the 1 second without going to black, and you don't have to be a super decoder genius to implement something that's a little less aesthetically jarring.
My favorite "the world is terrible" curmudgeon observation is how awful hotel TVs (and hotel TV remotes) are. Every single hotel room TV I've had the displeasure of using in the last N years takes more than 20 seconds to turn on after multiple button-presses, starts out on a terrible in-house channel at high volume regardless of the last-watched channel+volume, takes multiple seconds to change channels, and has no real TV guide option to let you see what's on. This plus HDMI ports are regularly inaccessible and it's nearly impossible to use streaming sticks (roku) due to web-based wifi portals that only work half the time if you have an ad-blocker enabled.
Careful what you wish for - many years ago I worked for an interactive TV company that focused on the hospitality market. One large chain of hotels seriously asked if we could do a remote control with three additional large buttons at the top for "Beer", "Burger" and "Porn".
Turns out getting something like that manufactured in the quantities we were looking at is a nightmare - so it didn't happen.
Edit: Clearly it would have been easier to have one button for "Beer, burger and porn" - but that has only occurred to me now.
While in Japan, the TVs would often turn on the moment you entered the room. This was fine, as I would mute them or turn them off. At one hotel, I managed to bug one out such that volume didn't work anymore. No worries, I'll just turn it off. Except when I went to turn it off, it crashed the TV app and it automatically restarted. All the wires were integrated so they couldn't be tampered with and the unit had no real buttons. I thought I was going to have to sleep with the same four very excitable adverts on rotation blasting into my ears!
Mercifully, pressing the TV Source button triggered a different app that didn't crash when I pressed the off button, and in what must be the software engineering achievement of the decade, the off button turned off the screen.
In the hotels I stayed at in Europe it's usually a standard non-smart TV plus a custom board that connects to it over HDMI. Sometimes the whole thing is enclosed in some kind of plastic shroud that clips over the TV but nothing a bit of "percussive maintenance" can't fix. From there, the HDMI port is accessible.
However, in most cases, at least in mid-range rooms, the TV is barely bigger than my laptop so it just doesn't make sense to use it.
The usual problem I have is that I need to switch to DHCP based DNS to register my MAC address to the room number, then switch back so the hotel can't screw with my DNS lookups.
It might not be your ad-blocker or script-blocker; it might be your DNS settings.
I'm in a room right now that has an HDMI cable on a little plug in front of the TV. Unfortunately I never remember to bring my USB-C to HDMI adapter when I stay here.
When working on digital set-top boxes a decade or two ago, channel-change latency was one of the toughest problems. Eventually we (collectively, the whole industry) gave up and settled for 1 second as acceptable. Channel surfing quietly went away. When your I-frame only comes every so often, there's not a whole lot you can do.
Nowadays, that problem is solved by reducing actual live content to a strict minimum; everything else can be on-demand.
Maybe there's not a lot you could have done while keeping the hardware cheap. I can think of a few ways to improve the user experience of channel surfing without waiting for an i-frame every time.
The cheapest would be to just use a constant neutral-grey i-frame whenever the channel flips, and update that until a real i-frame comes along, while playing the channel audio immediately. Ugly video for a second, but high-action scenes fill in faster. I'd bet that most people could identify an already-watched movie or series before an i-frame comes in, at least 80% of the time.
More expensive would be to cache incoming i-frames of channels adjacent to the viewed channel, and use the cached image instead of the grey frame. Looks like a digital channel dropping update frames during a thunderstorm for a second.
Prohibitively expensive (back then) would be to use multiple tuners that tune in to channels adjacent to the viewed channel, and then swap the active video and audio when the channel up or channel down buttons are pressed. Drop the channels that just exited the surfing window, and tune in to the ones that just entered it. Surfing speed limited by number of tuners.
Televisions still don't do this, even after more than a decade of digital broadcast, and multiple-tuner, multiple-output DVR boxes.
These days they should be able to guess what 20 channels you might change to and start recording those in the background.
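The bookkeeping for the "cache the neighbouring channels" ideas in the posts above is pretty small. A sketch, with the demux/display hooks left out as placeholders since those depend entirely on the box's hardware:

```python
# Keep the most recent I-frame for a handful of channels near the one being
# watched, so a channel change can show a still image immediately.
from collections import OrderedDict
from typing import Optional

class IFrameCache:
    def __init__(self, max_channels: int = 8):
        self.max_channels = max_channels
        self._cache: "OrderedDict[int, bytes]" = OrderedDict()

    def store(self, channel: int, iframe: bytes) -> None:
        """Keep only the latest I-frame per channel, evicting the stalest channel."""
        self._cache[channel] = iframe
        self._cache.move_to_end(channel)
        while len(self._cache) > self.max_channels:
            self._cache.popitem(last=False)

    def last_iframe(self, channel: int) -> Optional[bytes]:
        return self._cache.get(channel)

# On a channel change: display the cached still (or a grey frame) immediately,
# start the audio, and swap in live video once the next real I-frame arrives.
```

The expensive part was never the data structure; it's having enough demodulator/demux capacity to actually receive the neighbouring channels in the background.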
I've always suspected the reason it's slow is because you press the remote button, the DVR sends that to the provider, the provider has to verify that you can do what you are asking it to do, then a response comes, then the change can start.
Can't there be a dedicated channel or connection that's always tuned, which just broadcasts I-frames from all channels, so that the box has the latest frame for every channel and can start playing instantly when switching?
I remember when this happened, when it went "digital" and I know why it takes a second to switch channels, but it ruined the experience for me. While, at the time, there were ways to watch video on the computer or from a DVD or VHS, I still liked to "channel surf" every once in a while. But that requires switching channels fast, for a random amount of time based upon what was on and how long it took your brain in that moment to absorb it. And sometimes you'd stop and go back. But with digital, most of the time the load time was longer than the time I'd be on that channel in the first place. It'd take minutes to go through 10 channels as opposed to seconds. Channel surfing was its own activity for me back then - and it was ruined.
Nowadays there's youtube - but it's hardly the same thing.
I don't know who can afford cable any more to be honest. The only people I know who pay for cable are in the 45+ age range and they only use it because they just leave it on 24x7 or to watch the weather.
This I do agree with. I haven't had cable for years and years, but when I'm at someone else's house I am baffled how insanely slow it is. That would drive me NUTS on a day-to-day basis.
To be more accurate, the speed with which electrons move through the wire is rather low (which does not matter, of course, because the signals are carried by electromagnetic waves, which propagate very quickly).
Modern cable systems are more akin to satellite broadcast systems than they are to the terrestrial broadcast systems of yore.
There's an order of magnitude more content on cable these days. When you tune a channel now, instead of simply tuning to a frequency and demodulating the signal, content is spread across many frequencies, each of which carries a QAM-modulated MPEG transport stream with many programs on it. It takes longer to tune QAM than baseband analog or even PSK, so the physical frequency change takes longer than it used to. Once the frequency is tuned, the decoder may need to check provisioning in order to decrypt whatever program you've requested, and that adds to the time it takes.
It's not only switching channels; even just turning the thing on is slow. Switching between inputs also takes a long time, a large part of which is spent in the menu. The only thing I ever do is switch between the AppleTV and the PS4 and adjust the volume (the PS4 is orders of magnitude louder than the AppleTV), yet some of that always feels super clunky if I have to use the TV remote.
It's patently absurd that a modern cable box chugs just when flipping through the channel guide. The provider has control over the software and the hardware so it should be butter smooth all the time.
Doing that would take extra work, and cost more money. Why should they bother, when the dinosaurs who still watch cable TV aren't going to pay them any more for this butter-smooth experience? The people who care about such things have all moved to on-demand streaming services.
Yeah, it sucks that most TV providers supply the lowest-specced box they can. For my parents it got bad when they bought a 4K television; the box "supports" that, but since some firmware update it starts lagging every 20 seconds: dropped frames and the usual video "smudge".
I hope they get a (FREE) update soon. Because this is just a broken promise.
And I'm not even talking about the Netflix "app" that's on there. Holy s#!t that's slow. Or the TV-guide. They now resort to teletext because that's much faster... I mean...
To call this rose-tinted glasses when considering how things worked in 1983 is a massive understatement.
A counterexample: in 1983, enter two search terms, one of them slightly misspelled or misremembered, hit f3: "no results", spend 10 minutes trying to find the needle in the haystack, give up and physically search for the thing yourself.
Enter two search terms slightly incorrectly now: most of the time it will know exactly what you want, may even autocorrect a typo locally, and you get your accurate search results in a second.
When things were faster 30+ years ago (and they absolutely were NOT the vast majority of the time, this example cherry picked one of the few instances that they were), it was because the use case was hyperspecific, hyperlocalized to a platform, and the fussy and often counterintuitive interfaces served as guard rails.
The article has absolutely valid points on ways UIs have been tuned in odd ways (often to make them usable, albeit suboptimally, for the very different inputs of touch and mouse), but the obsession about speed being worse now borders on quixotic. Software back then was, at absolute best, akin to a drag racer - if all you want to do is move 200 meters in one predetermined direction then sometimes it was fine. Want to go 300 meters, or go in a slightly different direction, or don't know how to drive a drag racer? Sorry, need to find a different car/course/detailed instruction manual.
I agree that a lot of the use cases back then were hyperspecific and the software were drag racers. However, much of what he talks about is productivity tools, and I believe those fit the same description. I occasionally work with many different screens using a TN3270, some of which look very different and have very different purposes, yet they have a common interface with similar keystrokes. This makes navigating even unfamiliar screens a breeze. What he talks about is the commonality of the keyboard language compared to that of GUIs, and I think that is an excellent point of his.
So to some degree that's preaching to the choir around here; I still use emacs extensively and adore it, but I'd also never wish it upon anyone who doesn't value work efficiency over low cognitive load and nice aesthetics. In my experience, at least, that's most people.
In light of that, I think it's less that we've "lost" keyboard-driven efficiency as much as knowingly sacrificed it in favor of spending UI/UX dev time on more generally desired/useful features. The nice thing about being the type of power user who wants more keyboard functionality is that you can often code/macro it yourself.
Part of that was the widespread use of low-resolution, high-refresh-rate, zero-latency (CRT) monitors, and the use of hardware interrupts instead of a serial bus for input.
> one of the things that makes me steaming mad is how the entire field of web apps ignores 100% of learned lessons from desktop apps
I can't agree with this enough. The whole of web development in general really grinds my gears these days. Stacking one half-baked technology on top of another, using at least 3 different languages to crap out a string of html that then gets rendered by a browser. Using some node module for every small task, leaving yourself with completely unauditable interdependent code that could be hijacked by a rogue developer at any moment. And to top it all off now we're using things like Electron to make "native" apps for phones and desktops.
It seems so ass-backwards. This current model is wasteful of computing resources and provides a generally terrible user experience. And it just seems to get worse as time passes. :/
I’ve gone back to making simple HTML pages for all sorts of projects. Even trying to avoid CSS when possible.
It’s funny, in a way, because the “problem” with straight HTML is that it was strictly hierarchical (and thus vertical), and so a lot of space was wasted on wide desktop displays. We used tables and, later, CSS to align elements horizontally.
Now on phones straight html ends up being a very friendly and fast user experience. A simple paragraph tag with a bunch of text and a little padding works great.
Related: I was bored last Sunday, so I decided to install Sublime Text. I'm normally a VS Code user, but VS Code is built on Electron and it felt a little sluggish for certain files, so I wanted to give something else a try.
I've been using Sublime all week and it feels like an engineering masterpiece. Everything is instantly responsive. It jumps between files without skipping a beat. My battery lasts longer. (I don't want to turn this into an editor debate, though. Just a personal example.)
If you would've asked me a month ago, I would've said that engineers cared too much about making things performant to the millisecond. Now, I would say that many of them don't care enough. I want every application to be this responsive.
I never realized how wasteful web tech was until I stopped using it. And I guess you could say the same for a lot of websites – we bloat everything with node_modules and unnecessary CSS and lose track of helping our users accomplish their goals.
If I remember correctly, this is like the third comment I have read on HN in years that shows Sublime Text is faster than VSCode.
I have been arguing about this since maybe 2016 or earlier. On HN it was an echo chamber of how much faster VSCode is compared to Atom ("I can't believe this is built on Electron", etc.). With every VSCode update I tried it, and while it was definitely faster than Atom, it was nowhere near as fast as Sublime. And every time this was brought up, the answer was that they felt no difference between VSCode and Sublime, or that VSCode was fast enough that it didn't matter.
The biggest problem of all problems is not seeing it as a problem.
I hope VSCode will continue to push the boundary of Electron apps. They had a WebGL renderer, if I remember correctly, that was way faster; not sure if there are any more things in the pipeline.
I used to choose Eclipse over IntelliJ for similar reasons years ago. When it came to refactoring and other integrated features IntelliJ won, but Eclipse could do partial compilation on save and the run of a unit test automatically within moments, something which on IntelliJ invoked a Maven build that often took minutes.
The speed of feedback - how fast you can go through the basic cycle of coding and testing that it works as expected - is the speed at which you can develop code, and the editor is a pretty critical part of that.
Interestingly enough I had kind of the opposite experience
Since I haven't done much with Java itself, the build times weren't as impactful on me.
What made a big change was how IntelliJ, despite being pure Swing GUI, was an order of magnitude lower in latency, from things as simple as booting the whole IDE to operating on, well, everything.
Then I switched from Oracle to IBM J9 and I probably experienced yet another order of magnitude speedup.
VS Code ~= VS. It's a text editor, not an IDE (unless you consider things like VIM to be IDEs because you can use plugins to do IDE-like stuff with them.)
More relevant to the article, I fully agree with the author's frustration at trying to do two parts of the same task in Google Maps; it's entirely infuriating.
> in 1998 if you were planning a trip you might have gotten out a paper road map and put marks on it for interesting locations along the way
In 1998, I used https://www.mapquest.com/ to plan a road trip a thousand miles from where I was living, and it was, at the time, an amazing experience, because I didn't need to find, order and have shipped to me a set of paper road maps.
In the 1970s, when I had a conversation with someone on the phone, the quality stayed the same throughout. We never 'lost signal'. It was an excellent technology that had existed for decades, and, in one particular way, was better than modern phones. But guess what? Both parties were tied to physical connections.
Google Maps is one product, and provides, for the time being, an excellent experience for the most common use cases.
> amber-screen library computer in 1998: type in two words and hit F3. search results appear instantly
So that's a nice, relatively static and local database lookup, cool.
I wrote 'green screen' apps in Cobol for a group of medical centers in the early and mid 90s. A lot of the immediate user interface was relatively quick, but most of the backend database lookups were extremely slow, simply because the amount of data was large, a lot of people were using it in parallel, and the data was constantly changing. Also: that user interface required quite a bit of training, including multi-modal function key overlays.
This article has a couple of narrow, good points, but is generally going in the wrong direction, either deliberately or because of ignorance.
My current Windows 10 machine boots faster than my monitor turns on (<5 seconds from power on to login screen).
I guess it just comes down to priorities. I'm sure if PC specs included "boot time (lower is better)," we'd see boot times drop quickly.
Who cares about boot time when you do it only once in a while, versus actually using the interface?
https://www.youtube.com/watch?v=A1b9kUP0WtI
My Ryzen boots within 8 seconds on a b450 Pro motherboard.
This is a bad comment/reply - upstream wasn't complaining about the codec but about the usability (of the devices).
Or just let me use an HDMI port.
I don't watch channels anymore, and I don't want to pay for your pay-per-view content.
Nowadays there are many layers of varying quality, and software reactivity is a function of so many things.
I don't know anyone who does that, but it is damn trivial.
(former middleware engineer)
That said, I'm not sure that counts for the latency by itself. Pretty sure it doesn't. :(
The answer: put an ad in there.
Want to give me fancy autocorrect? Fine. But first:
* Make a UI with instant feedback, which doesn't wait on your autocorrect
* Give me exact results instantly before your autocorrect kicks in
* Run your fancy slow stuff in the background if resources are available
* Update results when you get them... if I haven't already hit "enter" and moved on before that.
It's not that complicated. We've got the technology.
And also, there's still no fucking reason a USB keyboard/mouse should be more laggy than their counterparts back in the day.
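A minimal sketch of that ordering (exact results instantly, the fancy stuff in the background), assuming an in-memory list of titles and using difflib as a stand-in for whatever smarter autocorrect a real catalog search would use:

```python
# Cheap exact/substring matches are shown immediately; fuzzy matching runs on a
# background thread and refreshes the result list only when (and if) it finishes.
import difflib
import threading
from typing import Callable, List

def search(query: str, titles: List[str],
           on_update: Callable[[List[str]], None]) -> List[str]:
    q = query.lower()
    exact = [t for t in titles if q in t.lower()]   # instant, cheap pass
    on_update(exact)                                # show these right away

    def fuzzy() -> None:                            # slower pass, off the UI path
        close = difflib.get_close_matches(query, titles, n=10, cutoff=0.6)
        merged = exact + [t for t in close if t not in exact]
        on_update(merged)                           # refresh results when ready

    threading.Thread(target=fuzzy, daemon=True).start()
    return exact

# Usage sketch (render_result_list is whatever repaints the UI):
# search("kernigan", catalog, on_update=render_result_list)
```

The point isn't difflib; it's that the UI never blocks on the clever part.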
Either way I'm not sure it rises to the level of indignation shown here.
Check out this specific part of his postings: https://i.imgur.com/Roz80Nd.png
The main idea I got from his rant was that we have mostly lost the efficiencies that a keyboard can provide.
We could really do better on the latter category here in the 21st century.
Edit: duplicate submission: one directly on twitter, this one through the threaded reader. The other submission has >350 comments: https://news.ycombinator.com/item?id=21835417