> And what’s worse, malicious threat actors can manipulate time synchronization protocols in many cases to trigger this vulnerability at the time of their choosing.
If you switch to 64-bit timestamps, and the network protocol supports dates > 2038, can you then just trigger the rollover bugs by pretending it's 2^64 - 1 seconds after the epoch?
Also, if the actions are potentially so severe, and NTP (or whatever is used) so vulnerable, why haven't we seen many such attacks in the wild?
Update: to be clear I'm not arguing that there isn't a problem, I've already run into it myself. I'm trying to understand how severe it is, how exploitable, and how robust a solution could be.
I think this effort would benefit from trying to qualify what “unpredictable ways” actually means. If anyone is testing devices, a catalog of test results describing the actual failure modes that were revealed would help make this whole thing more concrete.
I think many software engineers know that if you want to make any organization care about this type of issue, you need to be ready to demonstrate the severity and impact.
Even if the system boots properly, there are various critical systems that depend on having the correct time. Say goodbye to things like HTTPS and SSL/TLS certificate validation.
Will the root-certificates still be trusted in 12 years? Will we largely use the same TLS versions? And if systems can be updated to account for that, shouldn't they also be able to be updated to deal with the timestamps limitation?
It means 'anywhere between being bricked and no problem at all, and we can't give you any idea of how severe or how likely any of those possibilities are'. The only way you can really know is to thoroughly audit your system and/or test it. Preferably both.
Y2K showed that you don't need details beyond vague threats of "medication administered at the wrong time" and "planes falling out of the air" to get organizations and the public to care. No idea how that's going to tie into the conspiracy-heavy media landscape we inhabit now.
(Note I do think this is a serious issue that needs to be addressed. And I'd love to see specific examples. I'm just pushing back against the idea that examples would make much difference to advocacy efforts)
What? Y2K did have many demonstrable problems... Having a 2 digit year did obviously cause problems. The reason nothing happened is because a shit ton of time and money was spent making sure it didn't.
“Planes falling out of the sky” still gets used both as an example of overblown Y2K fear-mongering AND as the reason why all those quiet preparations were necessary.
I fixed a recent Y38 bug in some classic ASP code. The bug was nothing more than a simple `Date() + 5000` computation (adding 5000 days to the current date) as a sort of expiry date applied to something; I don't recall the exact details. VB6 did not take kindly to computing any date value beyond the Y38 max and threw an error. In practice this error ended up denying service to everyone even though the Y38 max date was 14 years in the future. You never know what little bugs like that are lurking in such legacy code.
Goes back at least slightly before that, as I've had 2038epochalypse.com registered since March 2017, but I can't recall whether I thought I was being clever or whether I heard it somewhere else.
Telling home users to check that their existing smart devices will still work in 13 years seems like overkill. It seems unlikely that more than a tiny fraction of them will still be in use then, if any.
Businesses installing new smart infrastructure and devices will need to pay attention to this, and in 10-15 years they'll need to work out what to replace, of course.
Agreed. A serious approach to this problem would be: Identify critical computers which are currently 13+ years old (most likely embedded systems). Assume that the same sorts of systems will be 13+ years old in 2038. Focus on raising awareness with that particular target audience, e.g. give talks about the 2038 problem at embedded systems conferences. Try to get it included in university curricula. Etc.
15 years ago I was working at a startup in SV and a kid we hired was saying how he was sad he missed out on Y2K because he was too young. I filled him with joy when I mentioned the 2038 bug. lol
Are they doing anything to fix it or just raising awareness?
Here's an example of measuring which packages report warnings for suspicious conversions: compile with `-Wconversion` against both 32-bit and 64-bit time_t, and see what the difference is.
https://github.com/mkj/yocto-y2038
That is using yocto, but you could probably do something similar with other less-embedded distros too, if you can rebuild the world.
FWIW I didn't find much interesting with that apart from busybox dhcpd.
It looks mostly like a project for self-promotion by the two authors. Maybe they offer some consulting services.
The funniest part is that one of them wrote that they "learned about it after Y2K bug".
I thought one learns about this overflow in an "introduction to programming" class...
It also says nothing about a formal education, just that he has worked in IT since his teens. I didn't hear of the 2038 problem myself until the whole Y2K debacle, but then, I was in my teens at the time.
However, I’m not sure how you make a 2038 test environment. It assumes that the OS/kernel etc. are de facto frozen at 2025 (or whatever increment) until 2038.
What was the Y2K solution for the people who implemented those fixes in the 90s?
It intercepts system calls to get the time and reports a fake time to the application.
https://github.com/wolfcw/libfaketime
https://en.wikipedia.org/wiki/Time_formatting_and_storage_bu...