My favorite fun fact about Y2K38 involves the VMS operating system.
> While the native APIs of OpenVMS can support timestamps up to the 31st of July 31086, the C runtime library (CRTL) uses 32-bit integers for time_t. As part of Y2K compliance work that was carried out in 1998, the CRTL was modified to use unsigned 32-bit integers to represent time; extending the range of time_t up to the 7th of February 2106.
This "fix" also made it impossible to represent times before 1970.
Admittedly that's probably a rare problem, but I can imagine it could have broken something
(An interesting tidbit: Since 2018-07-22, a 32-bit signed time_t with an epoch of 1970-01-01 has been able to represent the date of birth of every living human. Chiyo Miyako was born 1901-05-02 and died 2018-07-22 at the age of 117 years, 81 days. 2^31-1 seconds before the epoch was 1901-12-13. Reference: https://en.wikipedia.org/wiki/List_of_the_verified_oldest_pe...)
Probably due to concerns about changing the size of many structs that include time_t values, which could cause a lot of bugs in C code and similar where the size had been assumed. Moving to unsigned types doesn't change the storage requirements.
While it's handled in XFS (the filesystem/kernel aspect), to this day you have to opt into it when formatting:
mkfs.xfs -m bigtime=1 ...
I have filesystems from ~2002 still in use - I have a feeling I'm going to run into this when I inevitably forget how they're dragging their feet, lol.
I was surprised to see `xfs filesystem being remounted at [dir] supports timestamps until 2038` on a brand new Fedora install recently. Wonder what the hold up is.
I've been keeping tabs on that as well, long time Fedora user here.
I wish I could tell you! Usually this implies some lingering stability concern, yet I don't recall any specific warnings when I was researching opting in.
There may be some insights on the upstream XFS/Red Hat mailing lists (most of the developers are there, IIRC)
I've replaced all of my old XFS filesystems to address this and haven't noticed any problems... but filesystem things can be subtle.
The XFS bigtime feature was "experimental" until 5.15, and is still disabled by default in mkfs.xfs. Filesystems with that enabled can't be mounted by older kernels which don't support it, and until 5.15 that causes a big fat "EXPERIMENTAL big timestamp feature in use. Use at your own risk!" warning.
In the embedded space I'm still seeing plenty of 2038-problematic code being produced today. 32 bit microcontroller + unix timestamps = future problems.
I think this will bite us hard - way harder than Y2K (which happened when software was far less prevalent). There must be millions or even billions of embedded devices out there which are susceptible to this problem - the only real question is how badly they'll fail.
Similar mistakes are being made today on different scales. For instance, 3G cellular service is being removed in many places to free bandwidth for 5G services. My 2015 automobile has a 3G cellular modem embedded for several services, including over-the-air updates. So the backwards steps may include plugging in a USB card if updates are needed.
Oh definitely. How long did the Python 3 transition take? We're still on IPv4. But let's chill. In 2038 I should be retired, but I'll come out of retirement and make bank.
A friend of mine was telling me how he spun up a consultancy and made enough to retire off the back of Y2K. Must be time to spin up one for Y2K38 and make bank again.
You should just make the right npm package. You could call it: "probably_correct_time" and it'd just add 137 to the year if it was less than 1937 or so. Then raise some money to fund AI research to decipher if it's about the great depression or something to guess the right year. I joke but honestly someone is going to build this and then get acquired by Grammarly.
Some companies used this approach for Y2K, deciding to use a pivot similar to the above to decide whether to prefix 19 or 20 (or 21 or 22). It made the fix easy, but they didn't expand the field size from YY to CCYY, so all kinds of stupid errors await us in the future.
I was young at the time, and I did what management told me to do, but I felt icky about it because I knew it was hugely inferior and they didn't know what they were doing. I was just a tiny cog in the team doing all the changes, most of whom thought it was fine... Left that place long ago.
What you recommend is an easy fix, and might work most of the time, but it's wrong and will cause infuriating problems, probably leaving others to suffer for your past deeds. Just do it properly the first time.
Do you remember the mass hysteria and people running out and buying generators and flashlights and survival supplies? My aunt thought it would be the end of the world and bought a bunch of supplies in case everything stopped working.
>Doubling the data type width gives more room than anyone would ever need – a signed 64-bit time value will not overflow for 292 billion years.
This sentence reminds me of "The Last Question" by Asimov.
Well, it's good enough for us, but the "humans" 292 billion years from now will still have to deal with it.
Meh. Even if the transition to a 128 bit time value takes a billion years to roll out, it's no big deal. There's no reason to think it would be drastically more challenging for whatever intelligence to deal with it then than it is now.
One interesting idea in Vernor Vinge's A Deepness in the Sky was the job of "code archaeologist". In that far-future society, programmers were saddled with the ongoing maintenance of centuries-old code and needed to be able to figure out what those historical programmers were trying to do.
"OpenBSD is one of the first operating systems to be safe from the “Year 2038 Problem”. 64-bit time was introduced in 2013, so you don’t have to worry about the Unix Epoch 32-bit issue."
Yes. OpenBSD famously has an official policy of not providing binary compatibility. You can always break userspace for the benefit of code quality and safety. ABI has changed? No big deal, just recompile. Furthermore, since OpenBSD is an entire operating system, not just a kernel, the project has the power to apply technical decisions across userspace, so it can make such changes at will.
IIRC, not if they were using the OS time type. BSD is compiled entirely from a shared repository, so the only software that would break is external applications (anything outside their 'ports' package repo), plus possibly save files with an incompatible timestamp baked into the file format.
Can't this just be Epoch 0 and the next one be Epoch 1, and so on?
Adding a designation before the number, or having date parsers know to separate the first or last digit from the rest, would mean it's not just one overflowing large number but two numbers.
You might as well just use a 64 bit int if you're going to be changing code anyways
But this blog post is about how code which is already using 64 bit ints can have code which silently truncates. Another case of C's loose handling of types, whereas in Rust there would've been an explicit cast required (granted, in this case the macro expands to having explicit type casts)
> You might as well just use a 64 bit int if you're going to be changing code anyways
That's such a waste though. We could keep the current Unix time as is, and use a single additional bit to indicate whether it's the original epoch or the new one. Imagine the net savings worldwide from using only 33 bits rather than 64 for the next 68 years.
For clarity: 'unsigned' and 'signed' int types with minimum bit sizes are defined either by macros or a header, and only 'signed' types are sign-extended on read.
Up to 255 standards may be defined; each standard defines, at minimum, its epoch and how the 56 bits of the timedata field are used to store data.
Standard 255 should be reserved as invalid (a guard against timestamp manipulation error).
Standards 254 down through 248 should be reserved in case there is a need for a program specific format that should not be handled by a standard library. The seven slots are reserved to allow up to that many revisions / variations of new timestamp types without further complicating the software.
Standard 0 might be reserved or assigned to Unix epoch timestamps. (see also 255 for folding naive manipulations back to a 56 bit timestamp.)
Standard 1 might be 56-bit milliseconds (0.001 seconds), with a Unix epoch?
Standard 2 might be 56-bit microseconds (0.000001 seconds), with a Unix epoch?
Standard 3 might be 56-bit nanoseconds (0.000000001 seconds), with a Unix epoch?
Those are (approximately) the currently popular timestamp resolutions, and the reason for supporting 1e+0, 1e-3, 1e-6, and 1e-9 is that they leave approximately 30, 20, 10, and 0 extra bits of range compared to the nanosecond option.
When writing that out, however, I realized that even at a full 64 bits, a nanosecond-precision timestamp has a woefully inadequate number of bits (only ~584 years of range, according to someone who did the math ( https://stackoverflow.com/questions/43451565/store-timestamp... )).
If storage isn't a concern, it might be better to use 128 or more bits for a timestamp to ensure there's sufficient room for the desired range. At that point, though, application-specific choices make much more sense.
A timestamp in many applications should probably include a reference timezone anyway.
Not really related other than being a different mess, but I've been running with DNSSEC enabled for a few months and it seems developer.apple.com is one of the few that fail DNSSEC validation (due to two different issues :/) and has been for a while (maybe always):
What's kind of funny is that there seems to be a time related bug with the comments on the bug report page. All the timestamps look like [22 Dec 2021 9:43], except the last one, apparently made in 2022, which has a time of 20:22, and the year is missing: [3 Jan 20:22]
Maybe it's intentional that the year is omitted for current year, and just a coincidence that the time is 20:22?
> While the native APIs of OpenVMS can support timestamps up to the 31st of July 31086, the C runtime library (CRTL) uses 32-bit integers for time_t. As part of Y2K compliance work that was carried out in 1998, the CRTL was modified to use unsigned 32-bit integers to represent time; extending the range of time_t up to the 7th of February 2106.
A Y2K38 problem fixed during Y2K!
I think it's so you can monitor the fuel level from inside your RV or something.
https://why-openbsd.rocks/fact/64bit-time/
It's certainly one way to do it, but I'm pretty sure this would be unacceptable to most kernels.
Any undesignated timestamp would be part of epoch 0.
Apple has their version. https://developer.apple.com/documentation/corefoundation/154...
https://dnsviz.net/d/developer.apple.com/dnssec/
My favorite DNSSEC issue is that Verisign's DNSSEC checker fails DNSSEC:
https://dnsviz.net/d/dnssec-analyzer.verisignlabs.com/dnssec...
At least there is an easy solution to this mess, just remove it entirely. It is obvious that almost no one uses it.
https://bugs.mysql.com/bug.php?id=12654