dwheeler · 2 years ago
Hooray: with these changes, the tested setup finally manages a smaller median input latency (Console ~12 msec) than an Apple //e from 1983 (30 msec). It only took 41 years: https://www.extremetech.com/computing/261148-modern-computer... and https://danluu.com/input-lag/

But wait! Not so fast (smile). This benchmark uses the compositor "raw Mutter 46.0", not GNOME Shell. Raw Mutter is "a very bare-bones environment that is only really meant for testing."

In addition, this measurement is not end-to-end because it does not include keyboard latency. In this test, the "board sends a key press over USB (for example, Space)". Latencies just within a keyboard can reach 60 msec by themselves: https://danluu.com/keyboard-latency/

What are the true end-to-end numbers for the default configuration, which is the only configuration that really matters? I wish the article had measured that. I suspect the numbers would be significantly worse.

I do congratulate the GNOME team and the benchmarker here on their work. Great job! But there are important unanswered questions.

Of course, the Apple //e used hardware acceleration and didn't deal with Unicode, so there are many differences. Also note that the Apple //e design was based on the older Apple ][, designed in the 1970s.

Still, it would be nice to return to the human responsiveness of machines 41+ years old.

ryukoposting · 2 years ago
I'd argue that it's actually a good thing that the author ignored keyboard latency. We all have different keyboards plugged into different USB interfaces plugged into different computers running different versions of different operating systems. Throw hubs and KVMs into the mix, too.

If the latency of those components varies wildly over the course of the test, it would introduce noise that reduces our ability to analyze the exact topic of the article - VTE's latency improvements.

Even if the latency of those components were perfectly consistent over the course of the author's test, it wouldn't affect the results of the test in absolute terms, and the conclusion wouldn't change.

This exact situation is why differences in latency should never be expressed as percentages. There are several constants that can be normalized for a given sample set, but can't be normalized across an entire population of computer users. The author does a good job of avoiding that pitfall.
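A toy calculation makes the point concrete (all numbers here are hypothetical, not taken from the article):

```shell
# Hypothetical numbers: the same 10 ms software improvement, expressed as
# a percentage, looks wildly different depending on whether 50 ms of fixed
# keyboard+display latency is included in the baseline.
awk 'BEGIN {
  before = 20; after = 10; fixed = 50
  printf "software only: %.0f%% faster\n", (before - after) * 100 / before
  printf "end to end:    %.0f%% faster\n", (before - after) * 100 / (before + fixed)
}'
```

The absolute 10 ms improvement is identical in both rows; only the percentage changes.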

The Mutter thing is interesting. The author holds that constant, and GNOME sits on top of Mutter, so I think it's reasonable to assume we'd see the same absolute improvement in latency. GNOME may also introduce its own undesirable variance, just like keyboard latency. I'd be very curious to see if those guesses hold up.

KronisLV · 2 years ago
> We all have different keyboards plugged into different USB interfaces plugged into different computers running different versions of different operating systems.

I've actually been curious about getting a wireless keyboard recently, but wondered about how big of a latency impact there would be. Normally I use IDEs that definitely add a bit of sluggishness into the mix by themselves, but something compounded on top of that would probably make it way more annoying.

A quick search led me to this site: https://www.rtings.com/keyboard

It does appear that they have a whole methodology for testing keyboards: https://www.rtings.com/keyboard/tests/latency

For what it's worth, it seems that well-made keyboards don't add too much latency, even wireless ones (though the ones that use Bluetooth are noticeably worse): https://www.rtings.com/keyboard/1-3-1/graph/23182/single-key...

Just found that interesting, felt like sharing. Might actually go for some wireless keyboard as my next one, if I find a good form factor. Unless that particular keyboard does something really well and most others just have way worse wireless hardware in them.

gsich · 2 years ago
Hubs have no noticeable impact on latency.

https://www.youtube.com/watch?v=nbu3ySrRNVc explains it and has some statistics.

gigatexal · 2 years ago
Then use the Apple //e and forgo all the niceties of modern operating systems. Honestly, this take is a whole lot of words to shit on all the open source devs working hard to provide things to us for free, and I'm not having it.
refulgentis · 2 years ago
+100. It's my least favorite talking point because I'm old enough to have seen it linked 100 times, I find it very unlikely it was faster when measured by the same methods, and the article itself notes the funny math around CRTs.
hulitu · 2 years ago
Or even better: don't use Gnome. /s
half-kh-hacker · 2 years ago
The methodology of the linked keyboard latency article including the physical travel time of the key has always irked me a little
dotancohen · 2 years ago
Yet as a lover of Cherry Reds and Blues, in my opinion that time should most definitely be included. I am not a gamer, but I do notice the difference when I'm on a Red keyboard and when I'm on a Blue keyboard.
emn13 · 2 years ago
My initial gut reaction to this was - yeah, of course. But after reading https://danluu.com/keyboard-latency/ - I'm not so sure. Why exactly should physical travel time not matter? If a keyboard has a particularly late switch, that _does_ affect the effective latency, does it not?

I can sort of see the argument for post actuation latency in some specific cases, but as a general rule, I'm struggling to come up with a reason to exclude delays due to physical design.

klysm · 2 years ago
Yeah that is definitely something that shouldn’t be included
naikrovek · 2 years ago
Dealing with Unicode is not the challenge that people seem to believe it is. There are edge cases where things can get weird, but they are few and those problems are easily solved.

What really got my goat about this article is that prior to the latest tested version of Gnome, the repaint rate was a fixed 40Hz! Whose decision was that?

dwheeler · 2 years ago
Unicode is more challenging when you're talking about hardware acceleration. On an Apple //e, displaying a new character required writing 1 byte to a video region. The hardware used that byte to index into a ROM to determine what to display. Today's computers are faster, but they must also transmit more bytes to change the character display.

That said, I can imagine more clever uses of displays might produce significantly faster displays.

Twirrim · 2 years ago
> What really got my goat about this article is that prior to the latest tested version of Gnome, the repaint rate was a fixed 40Hz! Whose decision was that?

From a previous VTE weekly status message, leading up to the removal:

"I still expect more work to be done around frame scheduling so that we can remove the ~40fps cap that predates reliable access to vblank information." (from https://thisweek.gnome.org/posts/2023/10/twig-118/)

So as with a lot of technical debt, it was grounded in common sense at the time, and long past time it should have been removed. It just took someone to realise it was there and do the work to remove it.

nialv7 · 2 years ago
I find the claim in the keyboard latency article suspicious. If keyboards regularly had 60 ms key-press-to-USB latency, rhythm games would be literally unplayable. Yet I have never had this kind of problem with any of the keyboards I have owned.
jameshart · 2 years ago
Real physical acoustic pianos have a latency of about 30ms (the hammer has to be released and strike the string).

Musicians learn to lead the beat to account for the activation delay of their instrument - drummers start the downstroke before they want the drum to sound; guitarists fret the note before they strike a string… I don’t think keyboard latency would make rhythm games unplayable provided it’s consistent and the feedback is tangible enough for you to feel the delay in the interaction.

GuB-42 · 2 years ago
For rhythm games, you want to minimize jitter; latency doesn't matter much. Most games have some kind of compensation, so really the only thing high latency does is delay the visual feedback, which usually doesn't matter much because players are not focusing on the notes they just played. And even without compensation, it is possible to adjust as long as the latency is constant (it is not a good thing, though).

It matters more in fighting games, where reaction time is crucial because you don't know in advance what your opponent is doing. Fighting game players are usually picky about their controller electronics for that reason, the net code of online games also gets a lot of attention.

In a rhythm game, you usually have hundreds of milliseconds to prepare your moves, even when the execution is much faster. It is a form of pipelining.

bobmcnamara · 2 years ago
The 60ms keyboard is wireless.

Some of them batch multiple key presses over a slower connection interval, maybe 60 ms, and then the radio-to-USB converter sends them over together.

So you can still type fast, but you absolutely cannot play rhythm games.
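The batching effect can be sketched with made-up numbers (a toy model, not the firmware of any specific keyboard):

```shell
# Toy model: key presses are queued and only flushed at the next 60 ms
# connection-interval boundary, so per-key latency depends on where the
# press lands within the interval. All numbers are hypothetical.
awk 'BEGIN {
  interval = 60
  for (t = 0; t <= 100; t += 25) {
    flush = (int(t / interval) + 1) * interval   # next batch boundary
    printf "press at %3d ms -> delivered at %3d ms (waited %2d ms)\n",
           t, flush, flush - t
  }
}'
```

Average throughput is unaffected, which is why typing still feels fine, but the per-press jitter is what kills rhythm games.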

prmoustache · 2 years ago
Rhythm is about frequency, not latency. As long as latency is stable, anyone adapts to it.
jasonjmcghee · 2 years ago
I don't buy the 60ms latency either, but it's very easy to compensate for consistent latency when playing games, and most rhythm games choreograph what you should do in "just a moment" which is probably at least 10x more than 60ms
_aavaa_ · 2 years ago
Rhythm games will often have compensation for the delays of the input, audio, and visuals. Some do this without telling the users, others do it explicitly, e.g. Crypt of the Necrodancer.
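The usual compensation scheme can be sketched like this (numbers are invented, and this is the general idea rather than how Crypt of the Necrodancer actually implements it):

```shell
# Sketch of fixed-offset latency compensation: measure the end-to-end
# input latency once during calibration, then subtract it from raw key
# press timestamps before judging each hit. All timestamps hypothetical.
awk 'BEGIN {
  offset = 55                       # calibrated end-to-end latency, ms
  split("1000 1500 2000", beat)     # scheduled beat times, ms
  split("1052 1558 2049", press)    # raw key press timestamps, ms
  for (i = 1; i <= 3; i++)
    printf "beat %d: raw error %+d ms, compensated %+d ms\n",
           i, press[i] - beat[i], press[i] - beat[i] - offset
}'
```

A constant offset cancels out entirely; only the residual jitter around it affects the judged accuracy.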
NoahKAndrews · 2 years ago
The latency of the physical key going down is counted in that post, so it includes mechanical "latency" that will differ depending on how hard you press the keys and if you fully release the key.
Narishma · 2 years ago
Rhythm games are probably the easiest to play with high latency, as long as it's consistent.
Izkata · 2 years ago
> a smaller input median latency (Console ~12 msec) than an Apple //e from 1983 (30 msec).

> Still, it would be nice to return to the human resposiveness of machines 41+ years old.

A while ago someone posted a webpage where you could set an arbitrary latency for an input field, and while I don't know how accurate it was, I'm pretty sure I remember having to set it to 7 or 8ms for it to feel like xterm.

DiabloD3 · 2 years ago
BTW, remember, most people still have 60 Hz monitors. Min latency can only be 16.6 ms, and "fast as possible" is just going to vary between 16.6 and 33.3 ms.

The real improvement wouldn't be reducing latency as much as allowing VRR signaling from windowed apps, it'd make the latency far more consistent.

Qwertious · 2 years ago
> BTW, remember, most people still have 60 Hz monitors. Min latency can only be 16.6 ms, and "fast as possible" is just going to vary between 16.6 and 33.3 ms.

No, the minimum will be 0 ms, since if the signal arrives just before the monitor refreshes then it doesn't need to wait. This is why people disable VSync/FPS caps in video games: rendering at higher than 60 FPS means the latest frame is more up to date when the 60 Hz monitor refreshes.

The maximum monitor-induced latency would be 16.6ms. Which puts the average at 8.3ms. Again, not counting CPU/GPU latency.

33.3ms would be waiting about two frames, which makes no sense unless there's a rendering delay.

aidenn0 · 2 years ago
Best-case median latency @60 Hz is 8.3 ms (i.e. if zero time were consumed by input and render, latency would be equidistributed between 0 and 16.6 ms).
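A quick simulation bears this out (a sketch assuming zero input/render time and a frame presented at every vblank):

```shell
# Monte-Carlo sketch: input events arrive uniformly at random within a
# 60 Hz refresh period; the displayed latency is the wait until the next
# refresh. The mean should land near period/2, i.e. ~8.3 ms.
awk 'BEGIN {
  srand(1); period = 1000 / 60; n = 200000; total = 0
  for (i = 0; i < n; i++)
    total += period - rand() * period   # wait until the next vblank
  printf "mean wait: %.1f ms (uniform over 0..%.1f ms)\n", total / n, period
}'
```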
bobmcnamara · 2 years ago
Linux, for better or for worse, doesn't have universal support for VSYNC.

Without it, you can race the scanout and minimum latency can get well under 1ms.

This was required for me to get a video stream down below 1 frame of latency.

rjh29 · 2 years ago
Use xfce. It is much more responsive than gnome.
roomey · 2 years ago
The only problem is, if you use xfce for any length of time, it is horrible going back to any other DE.

Don't even attempt to use a modern windows install, you end up feeling that it is actually broken!

lenerdenator · 2 years ago
Xfce always felt like Linux to me. Like sure, there are other interfaces, but this is the Linux one.

I want an ARM laptop with expandable memory, user-replaceable battery, second SSD bay, and a well-supported GNU/Linux OS that has xfce as the UI - from the factory. That's the dream machine.

sodality2 · 2 years ago
Xfce was my favorite until I found i3/sway. Even more responsive, and less mouse required since everything is first and foremost a keyboard shortcut, from workspace switching to resizing and splitting windows.
Gormo · 2 years ago
XFCE's goal is also to implement a stable, efficient, and versatile DE that adheres to standard desktop conventions, rather than to import all of the limitations and encumbrances of mobile UIs onto the desktop in the mistaken belief that desktop computing is "dead".
tristan957 · 2 years ago
The XFCE Terminal uses VTE. This makes no sense.
pcwalton · 2 years ago
Well, the Apple II had a 280x192 display (53 kpixel), and my current resolution (which is LoDPI!) is 2560x1600 (4 Mpixel). When you phrase it as "in 2024 we can render 76x the pixels with the same speed" it actually sounds rather impressive :)
seba_dos1 · 2 years ago
> so there are many differences

Yeah, one of those is that modern stacks often deliberately introduce additional latency in order to tackle some problems that Apple IIe did not really have to care about. Bringing power consumption down, less jitter, better hardware utilization and not having visual tearing tend to be much more beneficial to the modern user than minimizing input-to-display latency, so display pipelines that minimize latency are usually only used in a tiny minority of special use-cases, such as specific genres of video games.

It's like complaining about audio buffer latency when compared to driving a 1-bit beeper.

addicted · 2 years ago
Such a disappointing but typical HN comment.

The changes described in the OP are unequivocally good. But the most promoted comment is obviously something snarky complaining about a tangential issue and not the actual thing the article’s about.

daghamm · 2 years ago
Why so negative?

This is obviously an improvement, who cares if it is not perfect by your standards?

iso8859-1 · 2 years ago
It's an interesting observation, and it's constructive, providing actual data and knowledge. I downvoted you for negativity because you're providing nothing interesting.

2OEH8eoCRo0 · 2 years ago
Input latency or drawing to the screen? What's the latency of the Apple machine running GNOME?
unwind · 2 years ago
Very nice!

I love both the underlying focus on performance by the VTE developers, and the intense hardware-driven measurement process in the article!

The use of a light sensor to measure latency reminded me of Ben Heck's clearly named "Xbox One Controller Monitor" product [1], which combines direct reading of game console controller button states with a light sensor to help game developers keep their latency down. It looks awesome, but it's also $900.

[1]: https://www.benheck.com/xbox1monitor/

blueflow · 2 years ago
The focus on performance is a recent thing. VTE was pretty slow to begin with.
amaranth · 2 years ago
The focus on _latency_ is a recent thing. VTE was previously focused on throughput and low CPU usage, which are conflicting goals.
bobmcnamara · 2 years ago
Fun fact: if you have vsync enabled, latency depends on where you position the sensor.
stinos · 2 years ago
This, and the linked article, show the photo sensor halfway down the monitor. Nothing wrong with that for comparing measurements, but on quite a lot (possibly the majority) of typical monitors, pixels are driven top to bottom, like on a CRT. At a 60 Hz refresh that means putting the sensor at the top of the screen gives measurements about 8 ms faster, and at the bottom about 8 ms slower. So if you're getting into the details (just like where to put the threshold on the photo sensor signal to decide when the pixel is on), that should probably be mentioned, because 8 ms is quite the deal when looking at the numbers in the article :)

Likewise, just saying 'monitor x' is 30 ms slower than 'monitor y' can be a bit of a stretch; it's more like 'I measured this to be xx ms slower on my setup with settings X and Y and Z'. I.e., you should also check whether the monitor is applying some funny 'enhancement' that adds latency with no perceivable effect but can be turned off, and whether, when switching monitors, your graphics card and/or its driver tried to be helpful and magically switched to some profile where it applies enhancements, corrections, scaling and whatnot that add latency. All usually without a word of warning from said devices, but these are just a couple of the things I've seen.
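A back-of-envelope version of the sensor-position effect (assuming a simple top-to-bottom scanout spread evenly over the 60 Hz refresh period):

```shell
# At 60 Hz the panel is scanned top to bottom over ~16.7 ms, so sensor
# height shifts the measured latency by up to ~8.3 ms either way from a
# mid-screen placement.
awk 'BEGIN {
  period = 1000 / 60
  for (pos = 0; pos <= 1; pos += 0.5)
    printf "sensor at %3.0f%% height: %+5.1f ms vs mid-screen\n",
           pos * 100, (pos - 0.5) * period
}'
```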

hinkley · 2 years ago
If I’m watching a screen scroll in real time I’m much more likely to be looking at the bottom third of the screen.
robotburrito · 2 years ago
It's crazy to me that we exist in a world where we can render hyper-realistic 3D scenes and play games that once felt impossible to render on consumer-grade hardware, AND a world where we are still trying to perfect putting text on a screen for a terminal LOL :)
yjftsjthsd-h · 2 years ago
Isn't some of it that we're optimizing more for graphics and there are tradeoffs between the two (so getting better at graphics tends to make the text worse)? Partially offset by terminals using GPU acceleration, but you're still paying for that pipeline.
jethro_tell · 2 years ago
How much of this comes down to the fact that a) it didn't matter too much previously, i.e. it works, and b) until recently there's been a lot of network latency to solve for in a lot of terminal use cases?
l33tman · 2 years ago
Not related to the speed, but is there any terminal for Linux that works like the macOS Terminal, in that you can shut it down and restart it, and it will bring back all the tabs and their command histories and scrollbacks for each tab? They do that by setting different bash history files for each tab, etc.

And I prefer GUI terminals for this use-case...

__jonas · 2 years ago
This is pretty tangential, but I just (1 hour ago) found out that iterm2 on Mac can integrate with tmux [1] if you run tmux with the argument "-CC", in a way that makes your tmux sessions map to the GUI windows / tabs in iterm2, including when using tmux on a remote machine via ssh.

I'm really excited about this, since I always forgot the hotkeys/commands for controlling tmux.

[1] https://iterm2.com/documentation-tmux-integration.html

felixding · 2 years ago
AFAIK, iTerm2 is still the only terminal that supports this tmux control mode.
severino · 2 years ago
> They do that by setting different bash history files for each tab etc.

I wonder what happens when you close all the tabs and then open a new one. Are all the tabs' histories merged back into a "general" history file when you close them, so you get access to their commands in new ones?

livrem · 2 years ago
I set HISTFILE=~/.bash_history_$TMUX_PANE (when that variable is set).

A bit kludgy, but it works well enough for me.
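Spelled out as a ~/.bashrc fragment (a sketch of the same idea; `histappend` and `history -a` are standard bash, the per-pane filename is just a convention):

```shell
# Give each tmux pane its own bash history file; fall back to the
# default ~/.bash_history when not running inside tmux.
if [ -n "$TMUX_PANE" ]; then
  HISTFILE=~/.bash_history_$TMUX_PANE
fi
shopt -s histappend          # append to the file on exit, don't overwrite
# Flush each command to the file as soon as it runs, so nothing is lost
# if the pane dies; preserve any existing PROMPT_COMMAND.
PROMPT_COMMAND="history -a${PROMPT_COMMAND:+;$PROMPT_COMMAND}"
```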

apienx · 2 years ago
I use Tmux. It's a terminal-agnostic multiplexer. Gives you persistence and automation superpowers.

https://github.com/tmux/tmux/wiki

pxc · 2 years ago
> And I prefer GUI terminals for this use-case...

This probably isn't to your liking, then, but perhaps it will be of use to someone else: https://github.com/tmux-plugins/tmux-resurrect

(You can, of course, use tmux with any GUI terminal emulator you like.)

fikama · 2 years ago
This. I was just looking for exactly the same thing. For now I use tmux with tmux-resurrect to persist state between reboots. It works okayish; I would say it's a good hack, but still a hack. It's sad there isn't really a solution for this problem, aside maybe from Warp. My little UX dream is to have such a workspace-saving solution integrated with the whole OS and the apps inside it. That would be cool.
yjftsjthsd-h · 2 years ago
You could do that with VMs. Maybe qubes?
prmoustache · 2 years ago
I find it preferable to set up bash (or whatever shell you are using) to append to the .bash_history file after every command. You don't always remember in which tab/pane/window you typed which command, and most people would use Ctrl+R or a fuzzy search tool anyway.

Also, many times you open a new tab/pane/window and would like to access the history of another one that is already busy running a job, so a common history is usually preferable.

YMMV of course.

Medox · 2 years ago
Tabby [1][2] has "Restore terminal tabs on app start" under Settings > Terminal > Startup.

[1] https://tabby.sh/

[2] https://github.com/eugeny/tabby

Asmod4n · 2 years ago
At the time macOS got that functionality, their MacBooks had removable batteries.

One fun fact: you could just remove the battery while your apps were running, and when booting up again every window you had open would just reopen, with everything you had typed saved to disk in the case of the iWork apps.

progmetaldev · 2 years ago
While I fully believe this could work, seems like it might need to be a temp solution. Ripping out the battery seems like a solution that might have more downsides than advantages.
miohtama · 2 years ago
Warp?

https://www.warp.dev/

Might not be 1:1 what you ask for, but it can do scrollbacks for sure and has very advanced command history features.

mplanchard · 2 years ago
I have heard some good things about this, but I refuse to use a closed source terminal.
johnchristopher · 2 years ago
I suppose you could "just" run tmux and have more or less the same results (opened tabs with history, but won't survive a reboot) ?
cqqxo4zV46cp · 2 years ago
A lot leaning on “more or less”.
prmoustache · 2 years ago
You can do the saving part automatically by setting a different history file for each instance of the shell, for example using a timestamp in your rc file, and forcing them to append after every command.

Then if you open a new shell and want history from a particular older shell you can do `history -r <filename>`

So it is a hybrid: automatically saved, manually recovered.

jesprenj · 2 years ago
VSCode/vscodium's IDE terminal does this.
progmetaldev · 2 years ago
Is that not related to Powershell's handling of history?
dacryn · 2 years ago
no it doesn't, at least not in my case. Any settings to do?

DanielVZ · 2 years ago
warp.dev is a good alternative
gaws · 2 years ago
> good alternative

- Closed source

- Requires an account to use

- Collects your data

No, thanks. There are plenty of superior options out there.

robert_foss · 2 years ago
guake
freedomben · 2 years ago
indeed, if you just need a quick terminal for the odd command here and there, guake is amazing
INTPenis · 2 years ago
I used Gnome for years, then switched to sway and alacritty 2 years ago and honestly I can't tell any difference. I guess this is just like high end audio equipment, my ears/eyes are not tuned to the difference.
rocqua · 2 years ago
Have you tried switching back? It is often more noticeable when latency increases than when it decreases.
berkes · 2 years ago
If that is the case, why should I switch to a terminal with lower latency?
INTPenis · 2 years ago
No and it doesn't matter enough to me to try.
aembleton · 2 years ago
I've been using Gnome for years and am currently on Gnome 46. I hadn't noticed any difference in the terminal latency from Gnome 45. Like you, I think I just don't notice these things.
flexagoon · 2 years ago
I'm on Gnome, and I moved from Gnome Console to Wezterm because of the latency. It wasn't noticeable when I used terminal as a small window, but most of the time my terminal was fullscreen, and the delay was unbearable.
VeninVidiaVicii · 2 years ago
Lately I’ve been noticing some very fast flickering when I type on Gnome terminal. Guess this coincides with the new terminal.
philsnow · 2 years ago
Not a fair comparison, probably, but I swore off of gnome-terminal ~20 years ago because it was using half of my cpu while compiling a kernel. Xterm used ~2%.
TacticalCoder · 2 years ago
To be fair, even up to this day and on a modern Linux setup (Ryzen 7000 series CPU), that's still how GNOME Terminal (before the patches in TFA are applied) feels compared to xterm.

Things may, at long last, get better now.

BaculumMeumEst · 2 years ago
The only thing I care about regarding latency/responsiveness is whether a terminal makes me want to vomit when I'm scrolling in vim.
mrob · 2 years ago
Maybe your screen and/or keyboard is adding enough latency that you'll never get good results no matter what software you use. The difference between bad latency and very bad latency isn't so obvious. Have you tried gaming hardware?
p9fus · 2 years ago
not OP but i personally run a 144hz monitor with a normal keyboard with a cable and i've never noticed this stuff either haha
medstrom · 2 years ago
I'm guessing you notice it more in some use-cases. Can anyone chime in with an example?
abhinavk · 2 years ago
Latency adds up. When I had to work with a not-so-fast monitor (60 Hz, mediocre processing and pixel-response lag), it became apparent to the point of annoyance. Using alacritty helped a bit. Our brains adapt quickly, but you also notice it when switching frequently between setups (monitors or terminal emulators).
PhilipRoman · 2 years ago
Finally a terminal benchmark that isn't just cat-ing a huge file. Would love to see the same test with a more diverse set of terminals, especially the native linux console.
aumerle · 2 years ago
If you want better benchmarks, look at the ones delivered by actual terminal developers, for example: https://sw.kovidgoyal.net/kitty/performance/#throughput

or https://github.com/alacritty/vtebench/tree/master

michaelt · 2 years ago
Those tend to say things like "This benchmark is not sufficient to get a general understanding of the performance of a terminal emulator. It lacks support for critical factors like frame rate or latency. The only factor this benchmark stresses is the speed at which a terminal reads from the PTY."

So although those tests may be more extensive, they're not "better" in every regard.

Of course it's perfectly reasonable to want a benchmark that people can run without needing to build any custom hardware.

codedokode · 2 years ago
Sorry for being off-topic, but what I dislike most about GNOME Terminal is that it opens a small window by default (like 1/4 of my screen size), and even if you resize it, it doesn't remember the size after restart. It turns out you need to go to settings and manually specify how many columns and rows you want.
bitvoid · 2 years ago
That behavior is pretty common with many terminals. Off the top of my head, I know the default macOS Terminal and Windows Terminal both behave like that where you need to change the default dimensions in the settings.

I personally prefer having the dimensions set to a default size and then resizing specific windows that require more space. But it should at least be an option to have it remember resizes.

lbhdc · 2 years ago
You can change that in the settings.

Hamburger menu > Preferences > The name of your profile (mine is just called "Unnamed").

Changing the "initial terminal size" will do what you want. I have mine set to 132x43.

adrianmonk · 2 years ago
I often have many terminals open of various sizes. It's not clear what size would be the correct one to remember.

Therefore, I don't want it to try. It's fine for software to try to be smart if there's high confidence it knows what you want. Otherwise, it's yet another case of "Look, I automatically messed things up for you. You're welcome."

kaanyalova · 2 years ago
Console, the newer gnome terminal remembers the window size.
bobmcnamara · 2 years ago
IIRC, this was a feature from CMD.EXE
lanstin · 2 years ago
It is just old-school terminal behavior. Even xterm, IIRC, is 25 lines by 80 columns. That's the natural screen size. See also the natural terminal colors: green letters on a black background.