I just finished reading "A Deepness in the Sky", a 1999 SF book by Vernor Vinge. It's a great book with an unexpected reference to seconds since the epoch.
>Take the Traders' method of timekeeping. The frame corrections were incredibly complex - and down at the very bottom of it was a little program that ran a counter. Second by second, the Qeng Ho counted from the instant that a human had first set foot on Old Earth's moon. But if you looked at it still more closely ... the starting instant was actually about fifteen million seconds later, the 0-second of one of Humankind's first computer operating systems.
That is one of my favorite books of all time. The use of subtle software references is really great.
I recommend the Bobiverse series for anyone who wants more "computer science in space", or Permutation City for anyone who wants more "exploration of humans + simulations and computers".
I’ll second the Bobiverse series, one of my favorites. Its descriptions of new technologies are at just the right level and depth, I think, and it’s subtly hilarious.
Without fail, if I read about timekeeping, I learn something new. I had always thought of Unix time as the simplest way to track time (as long as you consider rollovers). I knew of leap seconds, but somehow didn’t think they applied here. Clearly I hadn’t thought about it enough. Good post.
I also read the link for “UTC, GPS, LORAN and TAI”. It’s an interesting contrast that GPS time does not account for leap seconds.
Saying that something happened x-number of seconds (or minutes, hours, days or weeks) ago (or in the future) is simple: it’s giving that point in time a calendar date that’s tricky.
> Saying that something happened x-number of [...] days or weeks) ago (or in the future) is simple
It's not, actually. Does 2 days and 1 hour ago mean 48, 49 or 50 hours, if there was a daylight saving jump in the meantime? If it's 3PM and something is due to happen in 3 days and 2 hours, the user is going to assume and prepare for 5PM, but what if there's a daylight saving jump in the meantime? What happens to "in 3 days and 2 hours" if there's a leap second happening tomorrow that some systems know about and some don't?
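To make the ambiguity concrete, here is a minimal Python sketch (the 2024 US spring-forward date and the America/New_York zone are just an illustrative choice, not from the comment above):

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    NY = ZoneInfo("America/New_York")
    # US clocks sprang forward at 02:00 local on 2024-03-10.
    later = datetime(2024, 3, 11, 15, 0, tzinfo=NY)   # Monday, 3 PM

    # "2 days and 1 hour earlier" as wall-clock (calendar) arithmetic:
    wall = later - timedelta(days=2, hours=1)

    # The same delta measured on the absolute timeline (49 real hours):
    absolute = (later.astimezone(timezone.utc)
                - timedelta(days=2, hours=1)).astimezone(NY)

    print(wall)      # 2024-03-09 14:00:00-05:00
    print(absolute)  # 2024-03-09 13:00:00-05:00 -> an hour of disagreement

Which of the two the user actually meant depends entirely on context, which is the point.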
You rarely want to be thinking in terms of deltas when considering future events. If there is an event that you want to happen on jan 1, 2030 at 6 PM CET, there is no way to express that as a number of seconds between now and then, because you don't know whether the US government abolishes DST between now and 2030 or not.
To reiterate this point, there is no way to make an accurate, constantly decreasing countdown of seconds to 6PM CET on jan 1, 2030, because nobody actually knows when that moment is going to happen yet.
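One common way to honour that is to store the future event as civil time plus zone and recompute the countdown only at display time, against whatever tz rules are installed right now. A rough sketch, with Europe/Paris standing in for CET as an assumption:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    EVENT_WALL = datetime(2030, 1, 1, 18, 0)    # "6 PM", civil time
    EVENT_ZONE = ZoneInfo("Europe/Paris")       # rules may change before 2030

    def seconds_remaining(now_utc: datetime) -> float:
        # Resolve the wall time with whatever tzdata is current *today*.
        event = EVENT_WALL.replace(tzinfo=EVENT_ZONE)
        return (event - now_utc).total_seconds()

    print(seconds_remaining(datetime.now(timezone.utc)))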
But because of the UNIX time stamp "re-synchronization" to the current calendar dates, you can't use UNIX time stamps to do those "delta seconds" calculations if you care about the _actual_ number of seconds since something happened.
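A worked example of the gap, using the leap second at the end of 2016 (the timestamps are the standard POSIX values for those UTC instants):

    t_before = 1483228799   # 2016-12-31 23:59:59 UTC
    t_after  = 1483228800   # 2017-01-01 00:00:00 UTC

    posix_elapsed = t_after - t_before    # 1 second, per POSIX arithmetic
    actual_elapsed = posix_elapsed + 1    # 2 SI seconds: 23:59:60 also happened
    print(posix_elapsed, actual_elapsed)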
Simple, as long as your precision is at the millisecond level and you don’t account for space travel.
We can measure the difference in the speed of time between a valley and a mountain (“just” take an atomic clock up a mountain and wait for a bit, then bring it back to your lab, where the other atomic clock is now out of sync).
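Back-of-the-envelope, using the weak-field approximation delta_t/t ≈ g*h/c^2 and an assumed 1 km of altitude difference:

    g = 9.81                  # m/s^2
    h = 1000.0                # metres between the valley and the mountain lab
    c = 299_792_458.0         # m/s

    rate = g * h / c**2       # fractional rate difference, ~1.1e-13
    print(f"{rate:.2e}", f"{rate * 86400 * 1e9:.1f} ns/day")   # ~9.4 ns/day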
I have come to the conclusion that TAI is the simplest and that anything else should only be used by conversion from TAI when needed (e.g. representation or interoperability).
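A minimal sketch of what "conversion from TAI when needed" could look like, assuming you ship and regularly update a leap-second table; the two entries are real but the list is truncated and the boundary handling simplified:

    from datetime import datetime, timedelta, timezone

    # (UTC instant when the new offset takes effect, TAI-UTC in seconds)
    LEAP_TABLE = [
        (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
        (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
    ]

    def tai_to_utc(tai: datetime) -> datetime:
        offset = 35   # offset in force before the first entry above
        for effective_utc, tai_minus_utc in LEAP_TABLE:
            if tai - timedelta(seconds=tai_minus_utc) >= effective_utc:
                offset = tai_minus_utc
        return tai - timedelta(seconds=offset)

    # tzinfo below just keeps the datetimes comparable; the value is a TAI instant.
    # TAI 2017-01-01 00:00:37 is UTC 2017-01-01 00:00:00 (37 s apart ever since).
    print(tai_to_utc(datetime(2017, 1, 1, 0, 0, 37, tzinfo=timezone.utc)))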
> There’s an ongoing effort to end leap seconds, hopefully by 2035.
I don't really like this plan.
The entire point of UTC is to be some integer number of seconds away from TAI to approximate mean solar time (MST).
If we no longer want to track MST, then we should just switch to TAI. Having UTC drift away from MST leaves it in a bastardized state where it still has historical leap seconds that need to be accounted for, but those leap seconds no longer serve any purpose.
---
However, this proposal is not entirely pointless. The point is:
1. Existing UTC timekeeping is unmodified. (profoundly non-negotiable)
2. Any two timestamps after 2035 differ by an accurate number of physical seconds.
---
Given that MST is already a feature of UTC, I agree removing it seems silly.
In the ideal world, you are right: computer systems should've been using TAI for time tracking, converting it to UTC/local time using TZ databases.
But in the real world a lot of systems made the wrong choice (UNIX being the biggest offender) and it got deeply encoded in many systems and regulations, so it's practically impossible to "just switch to TAI".
So it's easier to just re-interpret UTC as "the new TAI". I will not be surprised if some time in the future we will get the old UTC, but under a different name.
There is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
In most (all?) countries, civil time is based on UTC. Nobody is going to set all clocks in the world backwards by about half a minute because it is somewhat more pure.
GPS time also has an offset compared to TAI. Nobody cares about that. Just like nobody really cares about the Unix epoch. As long as results are consistent.
> There is no such thing as TAI. TAI is what you get if you start with UTC and then subtract the number of leap seconds you care about. TAI is not maintained as some sort of separate standard quantity.
There is, though? You can easily look at the BIPM's reports [0] to get the gist of how they do it. Some of the contributing atomic clocks are aligned to UTC, and others are aligned to TAI (according to the preferences of their different operators), but the BIPM averages all the contributing measurements into a TAI clock, then derives UTC from that by adding in the leap seconds.
[0] https://webtai.bipm.org/ftp/pub/tai/annual-reports/bipm-annu...
The only thing we can be certain of is that the Summer Solstice occurs when the midsummer sun shines through a trilithon at Stonehenge and strikes a certain point. From there we can work outwards.
The logical thing to do is to precisely model Stonehenge to the last micron in space. That will take a bit of work involving the various sea levels and so on. So on will include the thermal expansion of granite and the traffic density on the A303 and whether the Solstice is a bank holiday.
Oh bollocks ... mass. That standard kilo thing - is it sorted out yet? Those cars and lorries are going to need constant observation - we'll need a sort of dynamic weigh bridge that works at 60mph. If we slap it in the road just after (going west) the speed cameras should keep the measurements within parameters. If we apply now, we should be able to get Highways to change the middle of the road markings from double dashed to a double solid line and then we can simplify a few variables.
... more daft stuff ...
Right, we've got this. We now have a standard place and point in time to define place and time from.
No we don't and we never will. There is no absolute when it comes to time, place or mass. What we do have is requirements for standards and a point to measure from. Those points to measure from have differing requirements, depending on who you are and what you are doing.
I suggest we treat time as we do sea level, with a few special versions that people can use without having to worry about silliness.
Provided I can work out when to plant my wheat crop and read log files with sub-microsecond precision for correlation, I'll be happy. My launches to the moon will need a little more funkiness ...
The hack is literally trivial. Check once a month to see if UTC ≠ ET. If not then create a file called Leap_Second once a month, check if this file exists, and if so, then delete it, and add 1 to the value in a file called Leap_Seconds, and make a backup called 'LSSE' Leap seconds since Epoch.
You are not expected to understand this.
It keeps both systems in place.
If you want, I could make it either a hash or a lookup table.
Note also that the modern "UTC epoch" is January 1, 1972. Before this date, UTC used a different second than TAI: [1]
> As an intermediate step at the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2.
So Unix times in the years 1970 and 1971 do not actually match UTC times from that period. [2]
[1] https://en.wikipedia.org/wiki/Coordinated_Universal_Time#His...
[2] https://en.wikipedia.org/wiki/Unix_time#UTC_basis
A funny consequence of this is that there are people alive today who do not know (and never will know) their exact age in seconds [1].
This is true even if we assume the time on the birth certificate was precise down to the second. It is because the length of what was considered a second during part of their lives differed significantly from what we (usually) consider a second now.
[1] Second as in 9192631770/s being the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom
There's a certain exchange out there that I wrote some code for recently; it runs on top of VAX, or rather OpenVMS, and has an epoch of November 17, 1858 - the first time I've seen a non-Unix epoch mentioned in my career. Fortunately, it is abstracted to the Unix epoch in the code I was using.
It’s called the modified Julian day (MJD). And the offset is 2,400,000.5 days.
In the Julian day way of counting, each day ended at noon, so that all astronomical observations done in one night would be the same Julian day, at least in Europe. MJD moved the epoch back to midnight.
https://www.slac.stanford.edu/~rkj/crazytime.txt
To make these dates fit in computer memory in the 1950s, they offset the calendar by 2.4 million days, placing day zero on November 17, 1858.
https://en.wikipedia.org/wiki/Julian_day
https://www.joelonsoftware.com/2006/06/16/my-first-billg-rev...
The macOS/Swift Foundation API NSDate.timeIntervalSinceReferenceDate uses an epoch of January 1, 2001.
edit: Looks like Wikipedia has a handy list https://en.wikipedia.org/wiki/Epoch_(computing)#Notable_epoc...
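As a side note on the MJD offset above, the relationship to Unix time reduces to a fixed day offset (MJD 40587 is 1970-01-01); a quick sketch that ignores leap seconds, just as POSIX time does:

    MJD_UNIX_EPOCH = 40587          # MJD of 1970-01-01

    def mjd_to_unix(mjd: float) -> float:
        return (mjd - MJD_UNIX_EPOCH) * 86400

    def unix_to_mjd(unix_ts: float) -> float:
        return unix_ts / 86400 + MJD_UNIX_EPOCH

    print(mjd_to_unix(40587.5))     # 43200.0 -> noon, 1970-01-01 UTC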
Well, at least there isn't any POSIX timestamp that corresponds to more than one real time point. So, it's better than the one representation people use for everything.
That'd be like saying some points in time don't have an ISO 8601 year. Every point in time has a year, but some years are longer than others.
If you sat down and watched https://time.is/UTC, it would monotonically tick up, except that occasionally some seconds would be very slightly longer. Like 0.001% longer over the course of 24 hours.
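Quick sanity check on that figure: one leap second smeared evenly over a day stretches each second by 1/86400.

    stretch = 1 / 86400
    print(f"{stretch:.4%}")   # 0.0012%, i.e. roughly the "0.001%" above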
When storing dates in a database I always store them in Unix Epoch time and I don't record the timezone information on the date field (it is stored separately if there was a requirement to know the timezone).
Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
I know that timezones are a field of landmines, but again, that is a human construct where timezone boundaries are adjusted over time.
It seems we need to anchor on absolute time, and then render that out to whatever local time format we need, when required.
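A small sketch of that split, assuming epoch seconds in storage and zoneinfo for rendering; the zone name is only an example:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def store(dt_utc: datetime) -> int:
        # What goes in the database: an absolute instant, no zone attached.
        return int(dt_utc.timestamp())

    def render(epoch_seconds: int, zone_name: str) -> str:
        # Rendering applies whatever tz rules are current at display time.
        local = datetime.fromtimestamp(epoch_seconds, tz=ZoneInfo(zone_name))
        return local.isoformat()

    ts = store(datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc))
    print(render(ts, "Australia/Sydney"))   # 2024-06-01T22:00:00+10:00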
> Should we instead be storing time stamps in TAI format, and then use functions to convert time to UTC as required, ensuring that any adjustments for planetary tweaks can be performed as required?
Yes. TAI or similar is the only sensible way to track "system" time, and a higher-level system should be responsible for converting it to human-facing times; leap second adjustment should happen there, in the same place as time zone conversion.
Unfortunately Unix standardised the wrong thing and migration is hard.
No, almost always no. Most software is written to paper over leap seconds: it really only happens at the clock synchronization level (chrony, for example, implements leap second smearing).
All your clocks are therefore synchronized to UTC anyway: it would mean you’d have to translate from UTC to TAI when you store things, then undo it when you retrieve them. It would be a mess.
Smearing is alluring as a concept right up until you try and implement it in the real world.
If you control all the computers that all your other computers talk to (and also their time sync sources), then smearing works great. You're effectively inventing your own standard to make Unix time monotonic.
If, however, your computers need to talk to someone else's computers and have some sort of consensus about what time it is, then the chances are your smearing policy won't match theirs, and you'll disagree on _what time it is_.
Sometimes these effects are harmless. Sometimes they're unforeseen. If mysterious, infrequent buggy behaviour is your kink, then go for it!
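For a sense of what a smear is, here is a sketch of a 24-hour linear smear into the end-of-2016 leap second; the window length and shape are illustrative, and real deployments have differed on both, which is exactly how two sites end up disagreeing:

    LEAP_UTC = 1483228800      # Unix timestamp of 2017-01-01 00:00:00 UTC
    WINDOW = 86400             # smear the extra second over the final day

    def smear_applied(unix_ts: float) -> float:
        """Fraction of the leap second already absorbed at this instant."""
        start = LEAP_UTC - WINDOW
        if unix_ts <= start:
            return 0.0
        if unix_ts >= LEAP_UTC:
            return 1.0
        return (unix_ts - start) / WINDOW   # linear ramp: offset grows by 1/86400 per second

    # A host smearing over a different window would disagree with this one
    # by up to a fraction of a second while the smear is in progress.
    print(smear_applied(LEAP_UTC - 43200))  # 0.5 -> half a second absorbed by noon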
> and I don't record the timezone information on the date field
Very few databases actually make it possible to preserve the timezone in a timestamp column. Typically the DB either has no concept of time zone for stored timestamps (e.g. SQL Server) or has “time zone aware” timestamp column types where the input is converted to UTC and the original zone discarded (MySQL, Postgres).
Oracle is the only DB I’m aware of that can actually round-trip nonlocal zones in its “with time zone” type.
Maybe, it really depends on what your systems are storing. Most systems really won't care if you are one second off every few years. For some calculations being a second off is a big deal. I think you should tread carefully when adopting any format that isn't the most popular and have valid reasons for deviating from the norm. The simple act of being different can be expensive.
Seconded. Don't mess around with raw timestamps. If you're using a database, use its date-time data type and functions. They will be much more likely to handle numerous edge cases you've never even thought about.
I think this article ruined my Christmas. Is nothing sacred? seconds should be seconds since epoch. Why should I care if it drifts off solar day? Let seconds-since-epoch to date representation converters be responsible for making the correction. What am I missing?
The way it is is really how we all want it. 86400 seconds = 1 day. And we operate under the assumption that midnight UTC is always a multiple of 86400.
We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
You never worried or thought about it before, and you don’t need to! It’s done in the right way.
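That assumption is what keeps day-level arithmetic on Unix timestamps trivial; a tiny sketch:

    import datetime

    def utc_midnight_before(unix_ts: int) -> int:
        # Only valid because POSIX time pretends every day has 86 400 seconds.
        return (unix_ts // 86400) * 86400

    ts = 1483228800 + 12 * 3600     # 2017-01-01 12:00:00 UTC
    assert utc_midnight_before(ts) == 1483228800
    print(datetime.datetime.fromtimestamp(
        utc_midnight_before(ts), datetime.timezone.utc))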
> We don’t want every piece of software to start hardcoding leap second introductions and handling smears and requiring a way to update it within a month when a new leap second is introduced.
That kind of thing is already needed for timezone handling. Any piece of software that handles human-facing time needs regular updates.
I think it would make most of our lives easier if machine time was ~29 seconds off from human time. It would be a red flag for carelessly programmed applications, and make it harder to confuse system time with human-facing UK time.
I don't want it this way: it mixes a data model concern (timestamps) with a UI concern (calendars). As others have said, it would be much better if we used TAI and handled leap seconds at the same level as timezones.
But most software that would need to care about that already needs to care about timezones, and those already need to be regularly updated, sometimes with not much more than a month's notice.