1. I didn't expect people to have such a negative reaction to sideways text. It doesn't bother me personally, but it seems some people really can't work with it. I'll likely avoid it in everything else I do going forward.
2. I feel a big part of the problem here is that it's not obvious how to get it back once it's gone. I could certainly try making the text visible after the bar is gone.
Another problem is that on low resolution screens (or small browser windows) the boxes on the top left hide the text on the bars behind. I had to zoom out to 50% for it to be readable, which then put other bars behind the boxes.
The original source linked from this post [0] uses models that assume exponential growth of bandwidth over time (see the JavaScript at the bottom of the page). This is fun, but these figures are real things that can be measured, so I think it's very misleading for the site in this link to present them without explaining that they're basically made up.
The 1Gb network latency figure in this post is complete nonsense (I left another comment about this further down). Looking at the source data, it's clear this is because it isn't based on a 1Gb network at all, but on a "commodity NIC" whose bandwidth is extrapolated with this model, and the quoted figure actually corresponds to a ~200Gb network:
function getNICTransmissionDelay(payloadBytes) {
    // NIC bandwidth doubles every 2 years
    // [source: http://ampcamp.berkeley.edu/wp-content/uploads/2012/06/Ion-stoica-amp-camp-21012-warehouse-scale-computing-intro-final.pdf]
    // TODO: should really be a step function
    // 1Gb/s = 125MB/s = 125*10^6 B/s in 2003
    // 125*10^6 = a*b^x
    // b = 2^(1/2)
    // -> a = 125*10^6 / 2^(2003.5)
    var a = 125 * Math.pow(10,6) / Math.pow(2,shift(2003) * 0.5);
    var b = Math.pow(2, 1.0/2);
    var bw = a * Math.pow(b, shift(year));
    // B/s * s/ns = B/ns
    var ns = payloadBytes / (bw / Math.pow(10,9));
    return ns;
}
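For context, here is a minimal sketch of what that doubling model predicts for later years (assuming `shift(y)` is simply `y - 2003`, which matches the "1 Gb/s in 2003" anchor in the comments; the real page defines it elsewhere):

```javascript
// Sketch of the doubling model above: 1 Gb/s in 2003, doubling every
// 2 years. shift() is assumed here to be (year - 2003).
function modeledBandwidthGbps(year) {
  const baseGbps = 1;                               // 1 Gb/s anchor in 2003
  return baseGbps * Math.pow(2, (year - 2003) / 2); // doubles every 2 years
}

console.log(modeledBandwidthGbps(2003).toFixed(0)); // 1
console.log(modeledBandwidthGbps(2018).toFixed(0)); // 181 -- i.e. roughly a 200 Gb NIC, not 1 Gb
```

Which is how a page nominally about a "1 Gbps network" ends up quoting numbers for a ~200 Gb one.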
Because they're comparing two different things. The main memory reference is a latency; the 1K figure is a throughput measurement.
In other words they're not saying "if you send only 1K of data it will take this long". They're saying "if you send 1 GB, then the total time divided by 1 million is this much".
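To make that concrete: at an assumed 200 Gb/s line rate (roughly where the model above lands), the amortized wire time for 1 KB comes out close to the quoted figure, while an actual standalone 1 KB send would also pay fixed per-message costs on top.

```javascript
// Amortized wire time for 1 KB at an assumed 200 Gb/s line rate.
// This is a throughput-derived number, not the latency of a lone send.
const lineRateBytesPerSec = 200e9 / 8;               // 200 Gb/s = 25e9 B/s
const wireTimeNs = (1024 / lineRateBytesPerSec) * 1e9;
console.log(wireTimeNs.toFixed(2)); // 40.96 -- close to the quoted 44 ns
```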
I don't think I'd wait even 15 seconds. Maybe on average across all users because a lot of users have slower connections or devices so they're more patient. But I'd expect to have something in 3 or 4 seconds. Even that I consider slow. At probably 8 or 10 I'm gone.
I say this without hate: it's absolutely fascinating how bad this UX is. That said, I'm sure I've committed worse UX crimes in my career; when the curse of knowledge hits you, only your users can see the problems. But luckily samwho has the HN community, which is not shy about criticizing ;-).
I think it's really interesting and instructional to think about why the UX feels so bad. My ideas are:
- The page has one main job: presenting latency numbers to the viewer.
- This job is easy enough, and there are many ways to get it done, so people expect the main job to be done at least as well as with those other ways.
- I hypothesize that the page prioritizes other jobs before the main job. It tries to make finding the relationship between those numbers fun to detect.
* Users are foremost interested in the main job, but it is done poorly: you don't see all the latency numbers in one view (maybe after clicking a few times in the right places, but for such an easy task this is way too much work).
- It's very difficult to grasp the mental model of the UI just by using it. You click somewhere and things happen. Even now that I have used it for a few minutes, I have no idea what it does or is supposed to do. I found it very interesting how much it frustrated me that repeated clicks are not idempotent and make the UI "diverge". It somehow makes you feel lost and worried about breaking things.
- The user must read the help text. But users don't do this. At least I didn't until I was very frustrated. Then this help text changes. And changes again. I don't want to learn a new application only to read a simple list of numbers.
These are my main points, I think. To me, it was very interesting. Thanks for that, samwho, and kudos for sharing this publicly :-)
I'm in the middle of writing up a self-reflective post about this and I just wrote the following:
"Ultimately, the way I'm presenting the data is egregious and unnecessary. I can see why people are annoyed about it. The extra visuals and interactions get in the way of what's being shown, they don't enhance it. Tapping around feels fun to me, but it isn't helping people understand. This experiment prioritised form way more than it prioritised function."
We've come to some of the same conclusions, though you in more detail than me. The idea about clicks not being idempotent wasn't something I ever noticed, but now you've said it I can't not.
If you're willing, I'd love to connect with you 1:1 and talk a bit more about this. My contact details are on my homepage.
I mostly agree, and certainly a list of numbers or maybe a log plot would be better if the goal was communicating the raw data. Certainly the click-about jumpy interface is pretty janky. However there’s one thing I think this does better than a list of numbers would: Most people (me included) have a hard time getting an intuitive feel for things like just how much smaller 1ns is compared to 1ms or truly how much a billion dollars is. SI prefixes or a log scale can give the wrong /feeling/ even when they’re giving the right /information/.
Sometimes, the inconvenience of a linear scale is the point.
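To put a (purely hypothetical) number on that inconvenience, assuming a 96 dpi screen:

```javascript
// If the 0.5 ns L1 cache reference were drawn 1 pixel long, a 150 ms
// transatlantic round trip would need 300 million pixels of bar.
const pxPerNs = 1 / 0.5;        // 2 px per ns
const roundTripNs = 150e6;      // 150 ms
const px = roundTripNs * pxPerNs;
const km = px / 96 * 2.54 / 100 / 1000; // px -> inches -> cm -> m -> km
console.log(km.toFixed(1)); // 79.4 -- kilometers of screen, at 96 dpi
```

That absurdity is exactly the feeling a log scale hides.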
Pages that I think use this technique to really good effect:
https://xkcd.com/1732/
https://mkorostoff.github.io/1-pixel-wealth/
The title is missing the word "Latency", which would surface many other results when searching. My go-to is this one[0] because it's plain text and includes "Syscall" and "Context switch".
Latency numbers every programmer should know
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Syscall on Intel 5150 ...................... 105 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Context switch on Intel 5150 ............. 4,300 ns = 4 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
Round trip within same datacenter ...... 500,000 ns = 0.5 ms
Read 1 MB sequentially from SSD* ..... 1,000,000 ns = 1 ms
Disk seek ........................... 10,000,000 ns = 10 ms
Read 1 MB sequentially from disk .... 20,000,000 ns = 20 ms
Send packet CA->Netherlands->CA .... 150,000,000 ns = 150 ms
* Assuming ~1 GB/sec SSD
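A common way to make these magnitudes tangible (my own rescaling, not part of the quoted list): stretch time so that a 0.5 ns L1 hit takes one second.

```javascript
// Rescale: 0.5 ns of real time becomes 1 s of "human" time.
const humanSeconds = ns => ns / 0.5;
console.log(humanSeconds(100));                 // main memory reference: 200 s
console.log((humanSeconds(150e6) / 86400 / 365).toFixed(1)); // CA->NL->CA: ~9.5 years
```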
I don't get how expressing these numbers in time units is useful.
I've been a developer of embedded systems in the telecom industry for nearly two decades now, and until today I had never met anyone using anything other than "cycles" or "symbols"... except, obviously, for the mean RTT US<->EU.
Because it's something very different. I was expecting standalone numbers that would hint to the user something is wonky if they showed up in unexpected places - numbers like 255 or 2147483647.
I bring criticism: The first few bars on my screen cannot be read, as the text is hidden behind the floating HUD. If I click on the next few bars, to bring those below the box, then the bar becomes too small and the text is cropped, so I cannot read it either.
It is also a bit uncomfortable to read 90° text. It's fun to click the bars and play with the UI, but not to actually read what they say. It's a nice visualization, but it suffers from form over function! I can't comfortably use it to learn about the numbers I should know :(
I appreciate the feedback! I'm trying to get better and comments like this genuinely help.
Are you reading on a landscape tablet? I know the sizes of stuff are wrong on that form factor. Desktop and mobile shouldn't have the first couple of bars obscured.
The sideways text is meant to be a subtle nod to the fact the page scrolls sideways, but I agree it's not as nice to read as it would be were the text the right way around.
>> Desktop and mobile shouldn't have the first couple of bars obscured.
I am on a desktop with a huge monitor in ultra high res. It is pretty bad.
>> The sideways text is meant to be a subtle nod to the fact the page scrolls sideways, but I agree it's not as nice to read as it would be were the text the right way around.
Then the subtle nod is lost on me... why not turn the text when I click, or have hover text, or make the whole page rotated 90 degrees?
Like the original responder, I found it fun for one second, and then I realized I can't read this stuff, or it's painful to.
I'm reading on a 1080p desktop. Although accounting for the browser chrome (bookmarks, tabs on the side), my window.inner{Width,Height} comes out as 1583x950
Don’t forget Grace Hopper (1906 – 1992), American computer scientist, mathematician, and United States Navy rear admiral.
> Hopper became known for her nanoseconds visual aid. People (such as generals and admirals) used to ask her why satellite communication took so long. She started handing out pieces of wire that were just under one foot long—11.8 inches (30 cm)—the distance that light travels in one nanosecond. She gave these pieces of wire the metonym "nanoseconds." She was careful to tell her audience that the length of her nanoseconds was actually the maximum distance the signals would travel in a vacuum, and that signals would travel more slowly through the actual wires that were her teaching aids. Later she used the same pieces of wire to illustrate why computers had to be small to be fast. At many of her talks and visits, she handed out "nanoseconds" to everyone in the audience, contrasting them with a coil of wire 984 feet (300 meters) long, representing a microsecond. Later, while giving these lectures while working for DEC, she passed out packets of pepper, calling the individual grains of ground pepper picoseconds.
(Source: https://en.wikipedia.org/wiki/Grace_Hopper)
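Her wire length falls straight out of the speed of light:

```javascript
// Distance light travels in vacuum in one nanosecond.
const c = 299792458;                 // m/s
const metersPerNs = c * 1e-9;        // ~0.2998 m
console.log((metersPerNs * 100).toFixed(1)); // 30.0 (cm) -- Hopper's ~11.8 inch wire
```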
Some of these have always been quite counterintuitive to me, particularly the networking ones. Google Stadia was always an exercise in edge cases in expectations on these numbers for me.
It felt weird that a gaming computer in a datacenter could be "faster" than a computer on my network, but one frame takes ~16ms to render, bandwidth is big enough to stream, network latency might only be another ~frame, and suddenly the image is on my machine within 2 or 3 frames. There were unexpectedly slow parts, though! The controller actually connected over WiFi directly, so inputs went straight to the server rather than via Bluetooth. Compared with Xbox Cloud on a Bluetooth controller this made a huge difference, which makes sense because Bluetooth's latency might be 1-2 frames by itself. It's counterintuitive to me that the latency from my controller to my computer, less than 1m away, might be higher than the latency from my computer, to my router, to my ISP, to Google's DC, and to a server. Similarly, the latency over HDMI from a computer to my TV is in the same ballpark of a few frames, because of all the processing my cheap TV does to look good.
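The frame arithmetic above can be sketched with assumed round numbers (illustrative only, not measurements):

```javascript
// End-to-end budget for cloud gaming, in units of 60 fps frames.
const frameMs = 1000 / 60;      // ~16.7 ms per frame
const renderMs = frameMs;       // assume one frame to render server-side
const networkMs = 15;           // assumed RTT to a nearby datacenter
const encodeDecodeMs = frameMs; // assume ~a frame for encode/stream/decode
const totalFrames = (renderMs + networkMs + encodeDecodeMs) / frameMs;
console.log(totalFrames.toFixed(1)); // 2.9 -- i.e. "within 2 or 3 frames"
```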
Man, I had such high hopes for Stadia. I was an SRE at Google when it was being built and knew some of the traffic folks working on the networking parts of it. Some of the absolute best people. Such a shame.
I’d never have considered adding WiFi to the controller to _reduce_ latency, that’s absolutely wild. Thanks for sharing!
I'm not sure why you'd find it wild. Any gamer with decent tech knowledge avoids Bluetooth wireless devices (mouse, keyboard, headset, etc.) for gaming precisely for this reason. Sites like RTINGS measure latency for the same reason.
It is indeed a year. The latencies are based on the calculations done by Colin Scott in https://github.com/colin-scott/interactive_latencies and support projecting out into the future. Sorry it's not as obvious as it could be.
L1 cache reference = 1ns
Branch mispredict = 3ns
L2 cache reference = 4ns
Mutex lock/unlock = 17ns
Send 1K bytes over 1 Gbps network = 44ns
Main memory reference = 100ns
Compress 1K bytes with Zippy = 2us
Read 1 MB sequentially from memory = 3us
Read 4K randomly from SSD = 16us
Read 1 MB sequentially from SSD = 49us
Round trip within same datacenter = 500us
Read 1 MB sequentially from disk = 825us
Disk seek = 2ms
Send packet CA->Netherlands->CA = 150ms
Can we discuss the actual material now?
1. The vertical text is difficult to read despite its size, because it's vertical.
2. When we click on it a large part of the text disappears below the bottom margin of the page.
Problem number 1 is not so bad but the combination with 2 kills the UX. The text in the clicked bar should appear somewhere on screen, horizontally.
Edit: if anybody, like me, wonders what Zippy is: it's a C++ compression library from Google. It's called Snappy now [1].
[1] https://en.wikipedia.org/wiki/Snappy_(compression)
[0] https://colin-scott.github.io/personal_website/research/inte...
Doubt.
40ms - average human thinks the operation is instant.
15s - user gets frustrated and closes your app or website.
On big computers, cycles are squishy (HT, multicore, variable clock frequency, so many clock domains) and not what we're dealing with.
If we're making an architectural choice between local storage and the network, we need to be able to make an apples to apples comparison.
I think it's great this resource is out there, because the tradeoffs have changed. "RAM is the new disk", etc.
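As a sketch of that apples-to-apples comparison, using the figures from the list upthread (illustrative arithmetic only):

```javascript
// Is remote RAM faster than local disk? Compare in the same unit (µs),
// using the quoted figures: 500 µs datacenter round trip, 3 µs to read
// 1 MB from memory, 825 µs to read 1 MB sequentially from disk.
const remoteRamUs = 500 + 3;    // RTT + remote sequential read
const localDiskUs = 825;
console.log(remoteRamUs < localDiskUs); // true -- "RAM is the new disk"
```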
It is big but unreadable on a 4K 32-inch screen.
- Peter Norvig (original (?)) - http://norvig.com/21-days.html#answers
- Jeff Dean (slides) - https://www.cs.cornell.edu/projects/ladis2009/talks/dean-key...
- Colin Scott - https://github.com/colin-scott/interactive_latencies?tab=rea...
- this post