"This prototype implementation generates full-entropy bit-strings and posts them in blocks of 512 bits every 60 seconds. Each such value is sequence-numbered, time-stamped and signed, and includes the hash of the previous value to chain the sequence of values together and prevent even the source to retroactively change an output package without being detected."
People here were joking about putting time on the blockchain, and, well, NIST is already doing it.
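The chaining is cheap to verify on the client side. A minimal sketch, assuming each record is a dict with hypothetical "output" and "prev_hash" fields (the real beacon record has more fields and a precisely specified hash input, so treat this as the shape of the check, not the spec):

    import hashlib

    def verify_chain(records):
        # Each record: {"output": <hex of the 512-bit value>, "prev_hash": <hex SHA-512>}.
        # Field names are made up for illustration; NIST's actual record format differs.
        for prev, cur in zip(records, records[1:]):
            expected = hashlib.sha512(bytes.fromhex(prev["output"])).hexdigest()
            if cur["prev_hash"] != expected:
                return False  # someone (even the source) altered history
        return True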
It's not a blockchain, but a single-writer Merkle DAG. No consensus necessary. Much like a git repository with a single author.
If each block contains the hash of the previous block, then I think that it is a blockchain (regardless of whether there are multiple authors or only a single one). A git repository is a blockchain, too.
People keep saying "Merkle DAG" whenever someone calls a linear chain of recursively hashed data blocks a blockchain.
I don’t understand.
My understanding of the Merkle tree is that it’s a recursive hash: the leaf nodes are the data, and each layer up the tree is the hash of its child nodes.
In a Merkle tree, only the leaf nodes store (or reference) data; everything else is just a hash.
Is there another Merkle structure I don’t know about?
https://en.wikipedia.org/wiki/Merkle_tree
If the nodes with hashes contain data, it's not a Merkle tree.
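To make that concrete, a minimal sketch of building a Merkle root (SHA-256 chosen arbitrarily): data appears only at the leaf level, and every interior node is nothing but a hash of its children.

    import hashlib

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def merkle_root(leaves: list[bytes]) -> bytes:
        level = [h(leaf) for leaf in leaves]   # leaves: hashes of the actual data
        while len(level) > 1:
            if len(level) % 2:                 # odd count: duplicate the last node
                level.append(level[-1])
            # interior nodes hold no data, only the hash of their two children
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    print(merkle_root([b"block 1", b"block 2", b"block 3"]).hex())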
But shouldn't we want decentralized consensus for this?
What if NIST's key(s) were to get compromised, or the org were to disband or become corrupt/dysfunctional?
Hmm. Just because something's a Merkle DAG doesn't make it usable on the Internet. A single-writer blockchain, perhaps?
Can someone give an example use case of this? I'm not sure I understand why a very public long string of random characters on a blockchain is useful, except as a way to prove an event didn't happen prior to a certain time.
Mostly, it's so the public can verify events that were supposed to be random really were random. The executive summary gives plenty of examples, but think of a pro sports draft lottery. Fans always think those are rigged. They could simply use these outputs and a hashing function that maps a 512-bit block to some set with cardinality equal to the number of slots and pre-assign slots to participating teams based on their draft weight. Then fans could verify using this public API that the draw the league claims came up randomly really did come up randomly.
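As a toy sketch of that scheme (the seeding, the weights, and every name here are hypothetical; a real lottery would have to publish the exact procedure before the beacon value is emitted):

    import random

    def draft_order(beacon_hex, weights):
        # Seed a deterministic PRNG with the public 512-bit beacon output,
        # then do a weighted draw without replacement. Anyone can re-run
        # this with the published beacon value and check the result.
        rng = random.Random(int(beacon_hex, 16))
        pool, order = dict(weights), []
        while pool:
            teams, w = zip(*pool.items())
            pick = rng.choices(teams, weights=w, k=1)[0]
            order.append(pick)
            del pool[pick]
        return order

    print(draft_order("ab" * 64, {"team A": 14, "team B": 11, "team C": 3}))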
People always think polls are rigged. This could be used to publicly produce random population samples for polling.
This was also used to prove a Bell inequality experiment worked with no loopholes.
It would be very useful to have a trusted source of time, with a few keys that are meant to never change, that anyone can rebroadcast.
We could have zero configuration clocks that get the time from the nearest phone or computer without any manual setup!
What's amazing is that if your computer is not set to automatically sync its time, you can see how fast it's drifting.
My main desktop is 1.7 seconds ahead at the moment; I probably haven't updated the clock in a few weeks, which isn't that long. Other systems will drift much more.
As to "why" it's not setting the time using NTP automatically: maybe I like to see how quickly it drifts, maybe I want as few services running as possible, maybe I've got an ethernet switch right in front of me which had better not blink too much, maybe I like to be reminded of what "breaks" once the clock drifts too much, maybe I want to actually reflect on the marvel of atomic drift when I "manually" update it, etc. Basically the "why" is answered by: "because I want it that way".
Anyway: many computers' internal clocks/crystals/whatever-thingamajigs are not precise at all.
There’s a fun thing about quartz wristwatches: one of the biggest contributions to frequency fluctuations in a quartz oscillator is temperature. But if it is strapped to your wrist, it is coupled to your body’s temperature homeostasis. So a quartz watch can easily be more accurate than a quartz clock!
Really good watches allow you to adjust their rate, so if it runs slightly fast or slow at your wrist temperature, you can correct it.
One of the key insights of John Harrison, who won the Longitude prize, was that it doesn’t matter so much if a clock runs slightly fast or slightly slow, so long as it ticks at a very steady rate. Then you can characterise its frequency offset, and use that as a correction factor to get the correct GMT after weeks at sea.
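A toy version of Harrison's correction, with made-up numbers: characterise the steady rate ashore, then subtract the accumulated offset at sea.

    # Clock characterised before departure: gains a steady 2.5 seconds per day.
    RATE_S_PER_DAY = 2.5

    def corrected_gmt(raw_clock_s, days_since_set):
        # A steady error is predictable, so it can simply be subtracted out.
        return raw_clock_s - RATE_S_PER_DAY * days_since_set

    # After 42 days the clock reads 105 s fast; the correction recovers GMT.
    print(corrected_gmt(raw_clock_s=1_000_000.0 + 105.0, days_since_set=42))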
It kinda makes you wonder why desktop computers don't use the AC frequency as a stable-ish time source. Short-term accuracy is pretty poor, but it can definitely do better than 12 seconds over a week!
Where are you getting that 12 from?
After a week, 20 ppm would drift 20 * 10^-6 * 7 * 24 * 60 * 60 ≈ 12 seconds.
Your motherboard probably has a CR2032 keeping it powered when unplugged.
Crystals: https://www.digikey.com/en/products/filter/crystals/171?s=N4...
Can't say too much, but I saw an IoT product where, if NTP failed, the devices would all slowly fall behind. I really appreciated this, because fixing NTP would then jump the clocks forward, leaving a gap in perceived time instead of living the same moment twice.
So I assumed that, like how speedometers purposely read a little high, the crystals must purposely read a little slow so that computers don't slip into the future.
I worked on an embedded video system once where no one took into account the slight difference between the encoder and decoder clock frequencies. If the encoder was slower, no problem, you dropped a frame every once in a while. If the encoder was faster, you would buffer frames up to the memory limit of the system. Since these systems were typically left on for weeks at a time, the pilots would eventually discover the video they were using to fly a vehicle was several seconds in the past. Luckily this was discovered before initial shipment and the code was changed to drop everything in the buffer except the latest video frame.
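The fix amounts to a drain-to-latest buffer on the decoder side; a sketch of the shape of it (the real system was embedded, so the names and structure here are invented):

    import queue

    class LatestFrameBuffer:
        def __init__(self):
            self._q = queue.Queue()

        def put(self, frame):
            self._q.put(frame)

        def get_latest(self):
            frame = self._q.get()        # block until at least one frame exists
            while True:                  # then discard everything but the newest,
                try:                     # so a fast encoder can't build up latency
                    frame = self._q.get_nowait()
                except queue.Empty:
                    return frame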
That's a neat way of ensuring that time never jumps backwards on your system!
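If you want that guarantee explicitly, you can clamp any clock source so it never reads backwards; a crude sketch (real systems do this properly with monotonic clocks and NTP slewing):

    import time

    class ForwardOnlyClock:
        def __init__(self, source=time.time):
            self._source = source
            self._last = source()

        def now(self):
            t = self._source()
            if t > self._last:    # normal case: time moved forward
                self._last = t
            return self._last     # after a backward step, hold until real time catches up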
Reminds me of the same idea but applied in the opposite way in some train station clocks: Their second hands take slightly less than a minute to complete one rotation, after which they stop and wait for a signal sent from a central clock to be released simultaneously.
Making a clock run slightly slow or fast is much easier than making it run just about correctly :)
When setting up a mini PC as a home server about 40 days ago, I did not realize Fedora Server does not configure NTP synchronization by default. In only two weeks I managed to accumulate 30 seconds worth of drift. Prometheus was complaining about it but I had erroneously guessed that the drift alert was due to having everything on a single node. Then when querying metrics and seeing the drift cause errors, I compared the output of date +'%s' on the server and my own laptop. The difference was well over 30 seconds.
> Typical crystal RTC accuracy specifications are from ±100 to ±20 parts per million (8.6 to 1.7 seconds per day), but temperature-compensated RTC ICs are available accurate to less than 5 parts per million.[12][13] In practical terms, this is good enough to perform celestial navigation, the classic task of a chronometer. In 2011, chip-scale atomic clocks became available. Although vastly more expensive and power-hungry (120 mW vs. <1 μW), they keep time within 50 parts per trillion.
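Those ppm figures convert directly to drift per day; for instance:

    SECONDS_PER_DAY = 24 * 60 * 60

    def drift_per_day(ppm):
        return ppm * 1e-6 * SECONDS_PER_DAY

    print(drift_per_day(100))    # 8.64 s/day   (cheap crystal RTC)
    print(drift_per_day(20))     # ~1.7 s/day   (good crystal RTC)
    print(drift_per_day(5e-5))   # ~4.3 us/day  (50 parts per trillion = 5e-5 ppm)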
Interesting breakdown. But this format is horrible for conveying information. An improvement would be removing the slides, crafting some coherent paragraphs and then reinserting some of the more crucial images for support.
After someone gives a talk in person, some of the things they can do online:
1. Mention that they gave a talk
2. Post a video recording online
3. Post the slides online, as-is (PDF or whatever) with no explanation
4. Lay out the slides on a HTML page, with accompanying text (what would have been said by the speaker), so that it's easier to read — while still being clear you're “reading” a talk.
5. Redo/rewrite the whole thing into text form, paragraphs and all.
The author here has done 1 to 4, and you're complaining they've not also done 5, but that's a lot of work and I don't begrudge someone not doing that. I'll be grateful someone presented their talk in a readable form in the first place.
[I do agree this page was hard to read, at least on mobile and at least in its initial version—it's much better now—but I've seen many others post these "annotated talks" online and the format itself is not necessarily bad: for instance see https://idlewords.com/talks/ (example: https://idlewords.com/talks/superintelligence.htm) or https://noidea.dog/talks (example: https://noidea.dog/impostor) or https://simonwillison.net/tags/annotatedtalks/ (example: https://simonwillison.net/2022/Nov/26/productivity/) — maybe just some minor tweaks to CSS like putting the text to the right of the images would make it easier to read.]
When I gave the talk, I showed the slide before I talked about it. It’s normal to show the speaker notes below the slides in software like Keynote or Powerpoint.
It's simply not intuitive in the way it was presented that the line of text was a footer for the picture. The text and pictures are mistakenly read as belonging to the same "layer", sequentially, which is not what the author intended. It's obvious what that intent was, but it's not structured correctly to be properly interpreted.
"Here's a picture of an NTP packet"
[picture of a man sitting at a desk]
I liked it.
I was really bothered that on the website version, the NTP packet diagram is largely illegible. I hope that when they gave this talk on slides, you could read it.
Especially because half of the text just repeats what's on the slides and ultimately I didn't see an easy way to make the slides bigger. Like the NTP packet format slide was mostly unreadable.
Shout out also to the NTP Pool, a volunteer group of NTP servers that is the common choice for a lot of devices. Particularly open source stuff. Microsoft, Apple, and Google all run their own time servers but the NTP Pool is a great resource for almost everything else. https://www.ntppool.org/en/
https://pages.cs.wisc.edu/~plonka/netgear-sntp/
I was in the pool for a while using a RIPE NCC GPS-synced PCI card. It was fun, but machine-room dynamics made keeping a dome antenna attached hard: they hate special cables, and roof access is a security and water nightmare.
A rubidium clock is pretty cheap these days anyway.
Now I'm on Bert Hubert's GPS drift and availability thing, with a Raspberry Pi measuring visibility and availability out my home office window. Much more fun.
It's an interesting situation when instruments or measurements become more precise, stable, or reliable than the reference material.
And when someone (usually an individual) finally discovers that it has happened, or in some cases makes it so.
>the ephemeris second is based on an astronomical ephemeris, which is a mathematical model of the solar system
>the standard ephemeris was produced by Simon Newcomb in the late 1800s
>he collected a vast amount of historical astronomical data to create his mathematical model
>it remained the standard until the mid 1980s
>in 1952 the international astronomical union changed the definition of time so that instead of being based on the rotation of the earth about its axis, it was based on the orbit of the earth around the sun
>in the 1930s they had discovered that the earth’s rotation is not perfectly even: it slows down and speeds up slightly
>clocks were now more precise than the rotation of the earth, so the ephemeris second was a new more precise standard of time
>in 1952 the international astronomical union changed the definition of time so that instead of being based on the rotation of the earth about its axis, it was based on the orbit of the earth around the sun
>in the 1930s they had discovered that the earth’s rotation is not perfectly even: it slows down and speeds up slightly
Yeah, I remember studying that back in high school, but I wonder... what was the actual duration of the second they used before? And also, since it was based on the rotation of the Earth, what kind of data was the "vast amount of historical astronomical data" Newcomb collected? How can you reliably capture and store a length of time if you can only base it on the Earth's rotation speed, which varies over time? I would guess the data compared it to other natural phenomena?
When time was based on earth rotation, astronomers used “transit instruments” to observe when certain “clock stars” passed directly overhead. The clock stars had accurately known positions, so if you routinely record the time they pass overhead according to your observatory’s clock, then you can work out how accurate your clock is.
Newcomb’s data would have been accurately timed observations, as many as he could get hold of, going back about two and a half centuries.
That's pretty much what we already have, isn't it?
True Time™ is determined by essentially averaging dozens of atomic clocks from laboratories all over the world. It doesn't really get any more "community-maintained" and "democratized" than that!
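As a cartoon of the ensemble idea (the numbers and weighting are invented; the real UTC computation via TAI is far more elaborate):

    def ensemble_offset(readings):
        # readings: list of (offset_seconds, weight) pairs, one per lab clock.
        total_w = sum(w for _, w in readings)
        return sum(off * w for off, w in readings) / total_w

    # Hypothetical offsets of three lab clocks against a common reference:
    print(ensemble_offset([(1.2e-9, 3.0), (-0.4e-9, 5.0), (0.3e-9, 2.0)]))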
The article, and this comment, makes me wonder what impact a coordinated attack on the root time-keeping mechanisms might have. It seems like there's a fair bit of redundancy / consensus, but what systems would fail? On what timeline? How would they recover?
It's probably possible to calibrate your clock using a clear night sky and a modern cell phone camera. I bet second accuracy isn't an absurd expectation. Now it'd probably take an unreasonable amount of time to calibrate...
DARPA are funding the Robust Optical Clock Network (ROCkN) program, which aims to create optical atomic clocks with low size, weight, and power (SWaP) that yield timing accuracy and holdover better than GPS atomic clocks and can be used outside a laboratory.
https://www.darpa.mil/news-events/2022-01-20
Most of the big cloud providers have deployed the equivalent of the Open Compute Time Card, which sources its time from GPS but can maintain accurate time during periods of GPS unavailability.
Note that for NTP it’s better to use a Raspberry Pi 4 than older boards. The old ones have their ethernet port on the wrong side of a USB hub, so their network suffers from millisecond-level packet timing jitter. You will not be able to get microsecond-level NTP accuracy.
For added fun, you can turn the Raspberry Pi into an oven compensated crystal oscillator (ocxo) by putting it in an insulated box and running a CPU burner to keep it toasty. https://blog.ntpsec.org/2017/03/21/More_Heat.html (infohazard warning: ntpsec contains traces of ESR)