IBM over the last two decades really seems to be a story of managed decline.
Its proprietary systems were riding high in the 90s (even if market-share wasn't the absolute largest, their "Big Iron" had a good reputation amongst 'serious' IT folks), but were superseded by Linux and commodity hardware at some point in the 00s. They sold off the ThinkPad business as non-core, and they sold off their commodity server business (xSeries) at some point too.
Both hardware and software solutions have been de-emphasised in favour of 'services', and while that's fine in a business sense, it's so sad from the perspective of all that big blue has done for our industry over the years.
Yes, they now own RedHat, but large acquisitions are part of this story. Each one stems the decline for a while, but cost-cutting and streamlining inside big blue eventually manages the new addition into a shadow of its former self. Maybe this one will be different ... I hope so.
> They sold off the ThinkPad business as non-core, and they sold off their commodity server business (xSeries) at some point too.
In a sane world, they would have been spun off and allowed to prosper as their own entities.
> Both hardware and software solutions have been de-emphasised in favour of 'services', and while that's fine in a business sense, it's so sad from the perspective of all that big blue has done for our industry over the years.
It looks like MBA philosophy screwing up everything by optimizing 'numbers' as if those numbers have no connection to real life. Very optimal in the short run, but catastrophic irrelevancy in the long run. But hey - at least IBM shareholders got maximum returns for some time, and that's all that matters, right...
For me, one of the main reasons that Linux replaced proprietary Unix in a lot of places is ease of learning.
It's simple and cheap to get started with Linux and as a result there are lots of people who know about it, so it's very easy to hire people with Linux skills.
In comparison, getting started with Solaris/HPUX/AIX can be expensive, you might need a physical workstation, getting patches without paying might be tricky etc.
Mainframes have the same problem. I tried to learn more about mainframe security back in the early 2000's and it was really difficult to get any access to a mainframe to practice/look at things, despite working for a large bank which had multiple mainframes.
This is what I think. It's the old business model, which worked when everything was expensive, only really needed by big corps with deep pockets, and any machine a teen would have at home had nothing in common with the big irons at some insurance company. IBM would make big money, consulting firms would make big money, you being good with AIX would make big money.
Then with Linux, you could have what could run on any big server, for free, as a teen in your bedroom. You could poke at everything, look at the source code, ask around how you do this or that, since knowing how to do X wasn't some well-guarded secret to keep an advantage over the competition, but something fun to share. Eventually those teens would get older and look for jobs or go to university, while at the same time Linux kept on maturing, and now if you as a company want to build some system from the ground up, or just replace something ancient, you can pick that expensive, well-established system from IBM, requiring expensive experts to maintain and program for it, expensive software, ... or go with that free OS that a lot of people know their way around and who ask for a much lower salary.
Of course, this didn't happen overnight; especially the "it's free but there is nobody to yell at if it breaks" aspect of open source was very strange to $BIGCORP and seen as an unacceptable risk. But there was a steady shift towards that, in large part because it was pioneered by all those late-90s/early-2000s tech startups that were created by exactly those "Linux teens". Because that's what you tinkered with in college, not some proprietary OS that you couldn't even afford, or get updates for, or ask anyone for help with if you got stuck.
Not so sure. I learned commercial Unix back in the late 1990s on discarded SPARC hardware which was available in skips and Yahoo auctions for virtually no money at all. In fact it was generally cheaper than the boxed Linux distributions you had to spend on because you only had a dialup.
The killer with the commercial Unixes is that the documentation was orders of magnitude better. That is still true today. Most Linux knowledge I have to sift through today comes from dubious-quality manpages, partially incomplete or out-of-date documentation, and random blog posts.
Having gone from Linux to HP-UX and now partially back to Linux as the OS I make my money supporting, I'm not sure the learning gap between Linux and commercial Unix(tm) is big enough that there was ever a real problem training/recruiting Unix(tm) admins.
I think the real reason why Linux supplanted commercial Unix(tm) lies in the hardware market: around the time Linux got good enough to compete directly with commercial Unix on stability, we also saw x86_64 systems getting good enough to compete with Power/SPARC/Itanium-based systems on most workloads.
And as the hardware vendors challenging the commercial Unix market with cheaper Linux boxen were often the same vendors who sold commercial Unix boxen, the transition was often managed more than fought.
Wouldn't it be in these companies' best interest to release free "dev" versions of Solaris/HP-UX/AIX/etc.? I guess even then the problem is that there's still a moat to begin with: the company would have you fill out various info to get a license, whereas you can download Ubuntu/Debian/CentOS without any such restriction.
I also wonder about mainframes and why IBM hasn't come out with some sort of "emulation layer" for x86 machines. Yes, mainframes are expensive, but wouldn't you want to do everything you can to get mainframe software that people can learn with into as many hands as possible?
Aren't QNX and VxWorks intended for hard RT workloads? I'm not sure the PREEMPT_RT patchset goes that far.
One thing that Linux is doing, albeit only in makeshift, uncoordinated fashion, is making userspace μkernel-like implementation possible for things that used to be exclusive to the kernel. Combine this with full "containerization"/namespacing of all kernel interfaces, live snapshotting and migration of containerized workloads, maybe distributed shared memory allowing for even multiple threads of a single process to be run seamlessly on the same or different nodes with full location independence. This gives you pretty much everything that network-distributed OS's were designed to do in the 1990s, and allows Linux to extend seamlessly from small embedded to datacenter-scale workloads that used to be exclusive to proprietary OS's.
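If it helps to ground the namespacing part, here is a tiny sketch (using the util-linux unshare tool; the hostname and commands are just placeholders) of giving one process its own PID, mount, UTS, and network namespaces, i.e. the building blocks containers are made of:

    # Needs root; unshare(1) comes from util-linux.
    sudo unshare --pid --fork --mount-proc --uts --net --mount /bin/sh -c '
      hostname demo-ns   # changes the hostname only inside this UTS namespace
      ip link            # network namespace: only a fresh loopback device is visible
      ps aux             # PID namespace (with /proc remounted): this shell is near PID 1
    '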
And to add: the footprint of devices running Linux with the PREEMPT_RT patches is generally much greater than the footprint of a device running something like VxWorks. WindRiver actually has its own PREEMPT_RTed distribution[1].
There’s room in the market for a commercially supported *Nix.
Linux is ‘good enough’ for a lot of people, but its interface is inconsistent (something a lot of OSes suffer from). A set of tools where all of the commands used the same argument structure in the same order would be hugely beneficial.
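To make that inconsistency concrete, here's a small illustration (nothing authoritative, just a handful of everyday commands) of how differently the standard tools express roughly the same "source/target plus options" idea:

    tar -xf backup.tar -C /restore       # dashed flags; archive, then destination via -C
    dd if=backup.img of=/dev/sdb bs=4M   # key=value operands, no dashes at all
    find /var/log -name '*.gz' -delete   # path first, then predicate-style options
    ps aux; ps -ef                       # BSD-style and System V-style options in one tool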
The market segments changed when businesses realized they didn't need to spend $$$$ on expensive UNIX hardware when cheap x86 systems for $ could do the same thing. It's just that market dynamics changed.
Ironically, it only got there because many UNIX vendors saw in it a way to reduce their own UNIX development costs, thus helping to kill their own products in the process.
HP-UX is kind of an interesting tale in that it's always been underrepresented in the enthusiast community, perhaps because HPE never tried to put up HP-UX on Itanium as an alternative to Linux on ProLiant, or maybe because its core market seems to have been in the manufacturing sector, where it supplanted HP 3000 minicomputers and ran boring logistics software and factory control systems, and never really made it big in education, research, or web hosting.
By 2004 HP-UX was actually pulling off a successful migration from PA-RISC to Itanium, and it lived on profitably for over a decade after that, until HPE finally published a roadmap for when they were going to end HP-UX development, which is currently scheduled to happen around 2025/26, with the last new Itanium systems sold around 2018.
IRIX was killed off by Windows around the time the PC industry caught up with SGI's graphics capabilities. And I can't recall what happened to Digital's Tru64 (but it never really survived the merger of DEC into first Compaq and then HP).
> I think linux has been the great proprietary unix killer
For the server market, completely agree. But for the workstation/desktop, not so much, with macOS being the last viable alternative, which is still being developed.
It's worth noting that IBM i has a kind of dependency on AIX. IBM i has an AIX binary compatibility environment called PASE. Think of it like WSL, but for AIX binaries and on IBM i. Of course, if they created this compatibility layer today they'd probably choose to make it compatible with Linux rather than AIX, but they chose AIX and now they're stuck with it.
This means AIX, or at least the AIX ABIs as supported by IBM i, has to be kept alive for as long as IBM i is alive. So either this bodes badly for IBM i or they consider the amount of ongoing maintenance that PASE needs to be so small it can be handled on an ongoing basis by the i or the new skeleton AIX team. I suspect the latter rather than them canning IBM i though.
IBM i PASE is just running (some of) the AIX userspace on top of a radically different kernel. Same basic idea as WSL1 (as opposed to WSL2). They could have it run the Linux userspace instead, but that would be a lot of work to emulate its differences from AIX, and it would break backward compatibility with existing PASE applications. It isn't clear what the benefit of making that change would be (as of today, as opposed to back when they first developed it; we can't change the past). What it does mean, though, is that IBM i PASE is reliant on AIX userspace development for its own progress (if there will be any).
In the industrial manufacturing world, the documentation, stability, and relative smallness and comprehensibility of FreeBSD are attractive. A public facing example is Beckhoff, who moved from WinCE to FreeBSD. https://www.beckhoff.com/en-us/products/ipc/software-and-too...
> I wonder. Is there any enterprise that looks to shift work into non Linux, non windows, in 2023?
Does Serverless (FaaS/CaaS/WASM) count?
There might be organisations looking to move some workloads to *BSD (for instance, storage or networking; famously, Netflix runs FreeBSD for their networking).
With regards to Windows, is there anyone switching workloads to Windows? I was under the impression that doesn't really happen anymore, with Windows Server being kind of a legacy product (MS retired the slimmest deployment, Nano, and features in new releases are nothing special), Azure supporting Linux well, and .NET Core supporting Linux well.
There is still a fair amount of small-business .NET software being written that kind of requires Windows Server, but that's mostly targeting standalone desktop applications or spreadsheet abuses; for anything that needs high-end servers and advanced storage to meet performance requirements, Linux is pretty much all of the market right now.
There are some niches in the network space where xBSD plus custom ASICs plays a role, due to licensing concerns, but more and more vendors are finding a way to do something similar with Linux.
And as FaaS/CaaS in practice depends on a set of Linux kernel APIs, those deployments are still Linux clusters underneath all of the obfuscating complexity layers that 90% of people don't actually need, nor benefit from.
That's a good question. I think it's fair to call some of the true cloud native services an OS, in that you are loading programs into a framework, which stores state in a particular cloud-specific API database, and storing data in cloud-specific API objects (S3).
I was thinking more about the lower level though; that is, the OS of the bare metal.
Last year we signed one of the large Indian outsourcing firms as a customer. The team we worked with ran fleets of AIX boxes for their customers, running their legacy systems.
There was a strict requirement not to change the disk image the boxes were generated from.
Our most popular integration method is a cross-platform Go binary. Unfortunately, we used some key dependencies that would not compile for AIX, so we had to abandon that route.
We ended up extending our shell scripting integration to use the OpenSSL client as an HTTP client instead of the usual curl. It meant that when sending requests we literally had to prepare and concatenate all the headers, but it works, and we are monitoring all the background jobs on the very old machines, giving the team a way to address operational problems without waiting for reports from their customer.
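In case it's useful to anyone in a similar spot, here's a rough sketch of the general technique as I understand it (the host, path, and payload are invented for illustration, and the real integration surely differs): assemble the request line, headers, and body by hand, then hand the whole thing to openssl s_client, which only takes care of the TLS connection.

    #!/bin/sh
    # Hypothetical endpoint and payload, purely for illustration.
    host="monitoring.example.com"
    body='{"job":"nightly-batch","status":"ok"}'

    # Build the raw HTTP request: request line, headers, blank line, body.
    request=$(printf 'POST /api/checkin HTTP/1.1\r\nHost: %s\r\nContent-Type: application/json\r\nContent-Length: %s\r\nConnection: close\r\n\r\n%s' \
        "$host" "${#body}" "$body")

    # openssl s_client handles TLS; everything curl would normally add is manual here.
    printf '%s' "$request" | openssl s_client -quiet -connect "$host:443" -servername "$host"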
Seems the plan is to milk captive customers as long as possible with minimal investment.
Also, a few years ago they announced that the XL series of compilers were being rebased on LLVM/clang. Of course they claimed it was to enable innovation or some similar PR mumbo-jumbo and not cost-cutting, but, well..
Ideally yes, but has anybody seen any indication that this is actually happening in this case? Until we see such a thing, I think people have reason to be skeptical.
> Reproducing LLVM requires some crazy motivation?
Oh, absolutely. While an LLVM monopoly isn't desirable either, unless you have some different vision of how to architect a compiler, reinventing the LLVM wheel probably isn't particularly useful.
Looking forward to seeing PREEMPT_RT merged. This will certainly put a lot of pressure on QNX and VxWorks in the future.
[1] https://www.windriver.com/products/linux
This is just my opinion.
Sun's open-sourcing of Solaris probably extended its lifespan but Oracle isn't what turned it into a niche platform.
AIX 5L was released in 2001 and the ‘L’ stood for Linux.
So the writing has been on the wall for a very long time.
I wonder. Is there any enterprise that looks to shift work into non Linux, non windows, in 2023?
Yeah maybe now it's ok to move to Linux. But hey, sure, you do you