The major argument you get back when you ask "why are you using Windows 7" is exactly this: infrastructure companies argue that they still get a supported operating system in return (despite the facts, despite EOL, despite the reality of MS not actually patching it and just disclosing new vulnerabilities).
And currently there's a huge migration problem because Microsoft Windows 11 is a non-deterministic operating system, and you can't risk a core meltdown because of a popup ad in explorer.exe.
I have no idea why Microsoft is so asleep at the wheel; literally every big industry customer I've been at in Europe tells me the exact same thing, and almost all of them were Windows customers and are now migrating to Debian for those reasons.
(I'm a proponent of Linux, but if I were a proponent of Windows I'd ask myself wtf Microsoft has been doing for the last 10 years since Windows 7)
I read somewhere a post by an alleged MS Windows developer about a software crisis in the company, namely that young programmers do not want to ReleaseHandle(HWND*), so they rewrite everything (probably in C#). He gave an example where a well-done piece of functionality was rewritten without a lot of the security checks. But I am unable to find it right now. It might have been on Reddit... or somewhere else...
As much as I’d love that picture to be true, how many “big industry” players are moving a sizable number of Windows machines to Debian? And how many Windows machines did they even have to begin with relative to Linux?
On the client side where this “non-deterministic” OS issue is far more advanced, moving away is so rare it’s news when it happens. On the data center side I’ve seen it more as consolidation of the tech stack around a single offering (getting rid of the few Windows holdouts) and not substantially Windows based companies moving to Linux.
It’s about growth. Are any developers choosing to base their new backend on Windows in 2025? Or is Windows only really maintaining the relationships they already have, incapable of building a statistically significant network of new ones?
Even Azure, Microsoft's major new revenue stream, is built on Linux!
It's good as long as the software you run on it still supports that OS; for example, the big one would be anything built upon the Chromium (or Electron) framework, which deprecated Win7 support when Microsoft ended ESU support (EOL + 3 years).
LTS, long-support versions and the like are all confessions of technical and organisational failure.
If you are not able to upgrade your stuff every 2 to 3 years, then you will not be able to upgrade your stuff after 5, 10 or 15 years. After such a long time, that untouched pile of cruft will be considered legacy, built by people long gone. It will be a massive project, an entire rebuild/refactor/migration of whatever you have.
"If you do not know how to do planned maintenance, then you will learn with incidents"
I don't agree, and this feels like something written by someone who has never managed actual systems running actual business operations.
Operating systems in particular need to manage the hardware, manage memory, manage security, and otherwise absolutely need to shut up and stay out of the fucking way. Established software changes SLOWLY. It doesn't need to reinvent itself with a brand new dichotomy every 3 years.
Nobody builds a server because they want to run the latest version of Python. They build it to run the software they bought 10 years ago for $5m and for which they're paying annual support contracts of $50k. They run what the support contracts require them to run, and they don't want to waste time with an OS upgrade because the cost of the downtime is too high and none of the software they use is going to utilize any of the newly available features. All it does is introduce a new way for the system to fail in ways you're not yet familiar with. It adds ZERO value because all we actually want and need is the same shit but with security patches.
Genuinely I want HN to understand that not everyone is running a 25 person startup running a microservice they hope to scale to Twitter proportions. Very few people in IT are working in the tech industry. Most IT departments are understaffed and underfunded. If we can save three weeks of time over 10 years by not having to rebuild an entire system every 3 years, it's very much worth it.
Just for context, I am employed by a multi-billion-dollar company (which has more than 100k people)
Here, I'm in charge of some low-level infrastructure components (the kind that absolutely everything relies on; 5 sec of downtime = 5 sec of everything being down)
In one of my scopes, I've inherited a 15-year-old junkyard
The kind with a yearly support contract
The kind that costs millions
The kind that is so complex, and has seen so little evolution over the years, that nobody knows it anymore (even the people who were there 15y ago)
The kind that slows everybody else because it cannot meet other teams' needs
Long story short, I've got a flamethrower and we are purging everything
Management is happy, customers are happy too, my mates also enjoy working with sane tech (and not braindamaged shit)
Having started my IT career in manufacturing, this 100%. We didn't have a choice sometimes. Our support contracts would say Windows XP is the supported OS. We had lines that ran on DOS 5 because replacing them would have cost several million in hardware and software, not counting downtime of the line, and there was no guarantee the new stuff would even be compatible with the PLCs and other items.
> .. they don't want to waste time with an OS upgrade because the cost of the downtime is too high and none of the software they use is going to utilize any of the newly available features
Oopsie you got pwned and now your database or factory floor is down for weeks. Recovery is going to require specialists and costs will be 10 times what an upgrade would have cost with controlled downtime.
I can't upvote this hard enough. It's nice to know there's at least one other person who feels this way out there.
Also, this is the most compelling reason I've seen so far to pay a subscription. For any business that merely relies upon software as an operations tool, it's far more valuable business-wise to have stuff that works adequately and is secure, than stuff that is new and fancy.
Getting security patches without having feature creep trojan-horsed into releases is exactly what I need!
I'm reminded of the services that will rebuild ancient electric motors to the exact spec so they can go back on the production line like nothing happened. For big manufacturing operations, it's not even worth the risk of replacing with a new motor.
I'm not sure why there's a need to update anything every 2-3 years. In fact, the pace of change becomes exhausting in itself. In my day-to-day life, things are mostly well designed systems and processes; there's a stable code of practice when driving cars, going to the shops, picking up the shopping, paying for the items and then storing them.
What part of that process needs to change every 2-3 years? Because some 'angel investor' says we need growth which means pushing updates to make it appear like you're doing something?
old.reddit has worked the same for the last 10 years now, new.reddit is absolutely awful. That's what 2-3 years of 'change' gets you.
In fact, this website itself remains largely the same. Why change for the sake of it?
Why do you need to clean your house every week/couple of weeks?
Why not clean only once a year?
Why not clean the room only once every 2-3 years?
Keeping your infrastructure/code somewhat up to date ensures:
- each time you have to upgrade, it is not a big deal
- you have fewer breaking changes at each iteration, thus less work to do
- when you must upgrade for some reason, the step is, again, not so big
- you are sure you own the infrastructure, that the current people own it (versus people who left the company 8 years ago)
- you benefit from innovation (yes, there is some) and/or performance improvements (yes, there are)
Keeping your stuff rotting in a dark room brings nothing good
What kind of argument is "upgrade your stuff every 2 to 3 years"? What are you upgrading for? If the software runs fine and does its job without issues, what "stuff" is there to upgrade?
So that whoever is doing the upgrade can justify their salary and continued employment.
Consider that the average CTO is about 50† and that, roughly, people expect to retire at 65 and die at 80.
If you can get away with one or zero overhauls of your infra during your tenure then that's probably a hell of a lot easier than every two to three years.
† https://www.zippia.com/chief-technology-officer-jobs/demogra...
Not necessarily. There are cases where hardware support ends and trying to get drivers for a newer kernel is basically impossible without a lot of work. For example, one of my HighPoint HBAs is completely unusable without running an older kernel. I imagine there's more custom-designed hardware with the same problem out there.
What happens if your software takes 2 years to develop?
If your software needs the world to stop moving for 2 years so that it can be prepared and released successfully, I am afraid your software will never be released :(
Sure. But infrastructure will always be seen as a one-time cost, because enshittification ensures every company with merit transitions from merit leaders to MBA leaders.
This happens so often it's basically a failure of capitalism.
The person having to maintain this must be in a world of hurt. Unless they found someone who really likes doing this kind of thing? Still, maintaining such an old codebase while the rest of the world moves on...ugh...
Maybe I'm the odd one out but I love doing stuff that has long term stability written all over it. In fact the IT world moving as fast as it does is one of my major frustrations. Professionally I have to keep up so I'm reading myself absolutely silly but it is getting to the point where I expect that one of these days I'll end up being surprised because a now 'well known technique' was completely unknown to me.
I agree. We are going as far as being asked to release our public app on a self-hosted Kubernetes cluster in 9 months, with no Kubernetes experience and nobody with a CKA in a 2.5-person ops team. "Just do it, it's easy" is the name of the game now; if you fail you're bad, if you offer stability and respect delivery dates you are old-fashioned, and the discussion comes back every week, with every warning and concern ignored.
I remember a long time ago one of our clients was a bank; they had 2 datacenters with a LACP router, SPARC machines, Solaris, VxFS, Sybase, and a Java app. They survived 20 years with app, OS and hardware upgrades and 0 seconds of downtime. And I get lectured by a developer with 3 years of experience that I should know better.
> I love doing stuff that has long term stability written all over it
I also love doing stuff that has long term stability written all over it. In my 20 year career of trying to do that through various roles, I've learnt that it comes with a number of prerequisites:
1. Minimising & controlling your dependencies. Ensuring code you own is stable long term is an entirely different task to ensuring upstream code continues to be available & functional. Pinning only goes so far when it comes to CVEs.
2. Start from scratch. The effort to bring an inherited codebase that was explicitly not written with longevity in mind into line with your own standards may seem like a fun challenge, but it becomes less fun at a certain scale.
3. Scale. If you're doing anything in (1) & (2) to any extent, keep it small.
Absolutely none of the above is remotely applicable to a project like Ubuntu.
> Unless they found someone who really likes doing this kind of thing?
There are more people like that than one might think.
There's a sizable community of people who still play old video games. There are people who meticulously maintain 100 year old cars, restore 500 year old works of art, and find their passion in exploring 1000 year old buildings.
The HN front page still gets regular posts lamenting loss of the internet culture of the 80s and 90s, trying to bring back what they perceive as lost. I'm sure there are a number of bearded dudes who would commit themselves to keeping an old distro alive, just for the sake of not having to deal with systemd for example.
> There's a sizable community of people who still play old video games.
I went to the effort of reverse engineering part of Rollercoaster Tycoon 3 to add a resizeable windowed mode and fix its behaviour with high-poll-rate mice... It can definitely be interesting to make old games behave on newer platforms.
> I'm sure there are a number of bearded dudes who would commit themselves to keeping an old distro alive, just for the sake of not having to deal with systemd for example.
I don't think so: there are Debian forks that aspire to fight against the horrors of GNOME, systemd, Wayland and Rust, but they don't attract people to work on them.
On the other hand: dealing with 14.04 is practically cutting edge compared to stuff still using AIX and HPUX, which were outdated even 20 years ago lol
It's because they stopped development in the late 90s. Before Windows 95 (Chicago) came out, HP-UX with VUE was really cutting edge. IBM kinda screwed it up when they created CDE out of it though.
And besides the GUI, all the unixes were way more cutting edge than anything Windows except NT. Only when NT went mainstream with XP did it become serious.
I know your 20-year timeframe is after XP's release, but I just wanted to point out there was a time when the unixes were way ahead. You could even get common software like WP, Lotus 1-2-3 and even Internet Explorer and the consumer Outlook (I forget the name) for them in the late 90s.
Well I look at it from the relativistic perspective. See, AIX or HPUX are frozen in time and there is no temptation whatsoever within those two environments.
Being stuck in Ubuntu 14.04 you can actually take a look out the window and see what you are missing by being stuck in the past. It hurts.
Sure, you won't get the niceties of modern developments, but at least you have access to all of the source code and a working development environment.
As someone who actively maintains old RHEL, the development environment is something you can drag forward.
The biggest problem is fixing security flaws with patches that don't have 'simple' fixes. I imagine that they are going to have problems with accurately determining vulnerability in older code bases where the code is similar, but not the same.
The unfortunate problem is that the more popular software is, the more it gets looked at and its code worked on. But forked branches, as they age, become less and less likely to get looked at.
Imagine a piece of software that is on some LTS, but it's not that popular. Bash is going to be used extensively, but what about a library used by one package? And the package is used by 10k people worldwide?
Well, many of those people have moved on to a newer version of a distro. So now you're left with 18 people in the world, using 10 year old LTS, so who finds the security vulnerabilities? The distro sure doesn't, distros typically just wait for CVEs.
And after a decade, the codebase has often diverged enough that vulnerability researchers looking at newer code won't be helpful for older code. They're basically unique codebases at that point. Who's going through that unique codebase?
I'd say that a forked, LTS apache2 (just an example) on a 15-year-old LTS is likely used by 17 people and someone's dog. So one might ask, would you use software which is a security concern, let's say an HTTP server or whatnot, if only 18 people in the world looked at the codebase? Used it?
And are around to find CVEs?
This is a problem with any rarely used software. Fewer hands on means less chance of finding vulnerabilities. A 15-year-old LTS means all of its software is rare.
And even though software is rare, if an adversary finds out it is so, they can then play to their heart's content, looking for a vulnerability.
You typically need to maintain much newer C++ compilers because things from the browser world can only be maintained through periodic rebases. Chances are that you end up building a contemporary Rust toolchain as well, and possibly more.
(Lucky for you if you excluded anything close to browsers and GUIs from your LTS offering.)
When I'm writing new software, I kinda hate having to support old legacy stuff, because it makes my life harder, and means I cannot depend on new library or language features.
But that's not what happens here; this is probably mostly backporting security fixes to older versions. I haven't done that to any meaningful amount, but why wouldn't you find a sense of purpose in it? And if you do, why wouldn't it be fun?
I'm wondering how the maintenance effort would be organised.
Would it be existing teams in the main functional areas (networking, file systems, user space tools, kernel, systemd &c) keeping the packages earmarked as 'legacy add-on' as they age out of the usual LTS, old LTS, oldold LTS and so on?
Or would it in fact be a special team, with people spending most of their working week on the legacy add-on?
Does Canonical have teams that map to each release, tracking it down through the stages or do they have functional teams that work on streams of packages that age through?
IME (do note, the things I've dealt with were obsolete for a much shorter time), such work isn't particularly ugly even though the idea of it is. Some of it will feel like cheating because you just need to paraphrase a fix, some of it will be difficult because critical parts don't exist yet. Maybe you'll get to implement a tiny version of a new feature.
It's extra fun because it's not their own codebase; it's a bunch of upstreams that never planned to support it for that long. If they're lucky, some of them will even receive the bug reports and complaints directly...
Why would old code be worth less? To me it's a great achievement for a codebase to maintain stability and usability over a long timespan.
If you venture even five feet into the world of enterprise software (particularly at non-tech companies) you will discover that fifteen years isn't a very long time. When you spend many millions on a core system that is critical to your business, you want it to continue working for many, many years.
I've used Canonical's free 3-seat Extended Security Maintenance (ESM) support on my one 14.04 LTS machine for a long time. It's so nice having a stable target for more than a decade for my personal projects. I have so much software-defined radio software that absolutely does break in ways I can't fix on a newer version of any Debian-alike. The ESM program has been a provider of peace of mind when still leaving that SDR machine connected to the internet and running javascript.
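(For anyone wondering how to sanity-check that a machine like this is actually pulling packages from the ESM archive rather than the long-dead standard one, a rough sketch along these lines works. The package names are placeholders and the esm.ubuntu.com origin check is an assumption about a typical Ubuntu Pro/ESM setup, so treat it as illustrative rather than authoritative.)

    import subprocess

    # Placeholder packages; swap in whatever your SDR stack actually depends on.
    PACKAGES = ["openssl", "openssh-server"]

    def has_esm_source(package: str) -> bool:
        # `apt-cache policy <pkg>` lists every repository a version of the
        # package is available from; ESM archives are hosted on esm.ubuntu.com.
        out = subprocess.run(
            ["apt-cache", "policy", package],
            capture_output=True, text=True, check=True,
        ).stdout
        return "esm.ubuntu.com" in out

    if __name__ == "__main__":
        for pkg in PACKAGES:
            label = "has an ESM source" if has_esm_source(pkg) else "no ESM source listed"
            print(f"{pkg}: {label}")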
>30-day trial for enterprises. Always free for personal use.
>Free, personal subscription for 5 machines for you or any business you own
This "Pro" program also being free is a suprise to be sure, but a welcome one.
LTS releases are great. I only use LTS releases on my servers. Problem is, if you need PCI compliance (credit card industry requirements, largely making no sense), some credit card processors will tell you to work with companies like SecureMetrics, who "audit" systems.
SecureMetrics will scan your system, find an "old" ssh version and flag you for non-compliance, even though your ssh was actually patched through LTS maintenance. You will then need to address all the vulnerabilities they think you have and provide "proof" that you are running a patched version (I've been asked for screenshots…).
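A low-effort way to produce that kind of "proof" is to check the Debian changelog shipped with the installed package, since Ubuntu's backported security updates list the CVE IDs they address there. A minimal sketch, assuming a stock Debian/Ubuntu doc layout; the package and CVE below (the CVE is the one from the Ubuntu tracker link further down) are just placeholders:

    import gzip
    from pathlib import Path

    # Placeholders: substitute whatever the scanner flagged.
    PACKAGE = "openssh-server"
    CVE_ID = "CVE-2023-28531"

    def changelog_mentions_cve(package: str, cve: str) -> bool:
        # Debian/Ubuntu ship the package changelog (usually gzipped) under
        # /usr/share/doc/<package>/; security uploads list the CVEs they fix.
        doc_dir = Path("/usr/share/doc") / package
        for name in ("changelog.Debian.gz", "changelog.gz"):
            path = doc_dir / name
            if path.exists():
                with gzip.open(path, "rt", errors="replace") as fh:
                    return cve in fh.read()
        return False

    if __name__ == "__main__":
        if changelog_mentions_cve(PACKAGE, CVE_ID):
            print(f"{PACKAGE}: shipped changelog mentions {CVE_ID}")
        else:
            print(f"{PACKAGE}: {CVE_ID} not found in shipped changelog")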
That's normal in any compliance process, and why you typically want to vet the vendor that does the compliance monitoring, and the auditor (some auditors are really overzealous). Took us a while to find the right ones.
Even worse, someone is overzealous, because you will get SecureMetrics on your back even if you are below the PCI thresholds.
There's the CVE tracker you can use to ~argue~ establish that the versions you're using either aren't affected or have been patched.
https://ubuntu.com/security/cves
https://ubuntu.com/security/CVE-2023-28531
so ymmv
For Debian there is Extended Long Term Support (ELTS): a commercial offering to further extend the lifetime of Debian releases to 10 years (i.e. 5 supplementary years after the 5 years offered by the LTS project).
https://wiki.debian.org/LTS/Extended
I'm now deploying all my projects in Incus containers (LXC). My base system is upgradeable and ZFS-based; in the future it will be IncusOS, but for now it's just Ubuntu. Incus is connected in a cluster so I can back up/copy projects, move them between machines, etc.
Containers reuse the host system's newer kernel, while inside I get Ubuntu 22.04. I don't see a good reason, if 22.04 gets 15-year support, to upgrade it much. It's a perfect combination for me, keeping the project on 22.04 essentially forever, as long as my 22.04 build container can still build the new version.
Imagine the world of pain when the time comes to upgrade the software to Ubuntu 37.04.
Isn't Incus/LXD separate from and running on top of LXC?
People sometimes seem to use the names interchangeably, which can be annoying because I run just plain LXC, but when I look stuff up and come across "this is how you do XYZ on LXC", they are actually talking about LXD and it doesn't really apply.
I can't recall what it was last time, but this has happened a couple of times already...
It's a container with a full OS: systemd, journald, tailscale, ssh inside. No need to learn the new Docker world, just install the deb with your software inside.
In cluster mode, you can move a container to another machine without downtime, back it up in full, etc., also via one command.
In theory, when using ZFS or btrfs, you can do an incremental backup of the snapshot (send only the diff), but I've never tried it.
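For the incremental part, the usual ZFS approach is to snapshot the container's dataset and `zfs send -i` only the delta since the previous snapshot into `zfs receive` on the backup machine. A minimal, untested sketch; the dataset, snapshot and host names are hypothetical, and it assumes the backup target already holds the previous snapshot:

    import subprocess

    # Hypothetical names: adjust to your actual pool/dataset layout and backup host.
    DATASET = "tank/incus/containers/myproject"
    PREV_SNAP = "backup-2024-01-01"
    NEW_SNAP = "backup-2024-02-01"
    BACKUP_HOST = "backup.example.net"
    TARGET = "backuppool/myproject"

    def incremental_backup() -> None:
        # Take the new snapshot locally.
        subprocess.run(["zfs", "snapshot", f"{DATASET}@{NEW_SNAP}"], check=True)

        # `zfs send -i old new` streams only the blocks changed between the two
        # snapshots; the receiving dataset must already contain PREV_SNAP.
        send = subprocess.Popen(
            ["zfs", "send", "-i", f"{DATASET}@{PREV_SNAP}", f"{DATASET}@{NEW_SNAP}"],
            stdout=subprocess.PIPE,
        )
        subprocess.run(
            ["ssh", BACKUP_HOST, "zfs", "receive", TARGET],
            stdin=send.stdout, check=True,
        )
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("zfs send failed")

    if __name__ == "__main__":
        incremental_backup()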
Should be mandatory for home automation systems. Support must outlive the home warranty.
Home automation customers (the end users) probably are going to balk at the yearly subscription price of Ubuntu Pro. Especially for gadgets that likely cost less to buy upfront than a single year of Ubuntu Pro.