https://en.wikipedia.org/wiki/IBM_i#TIMI
To illustrate why the AS/400 had its market niche: the Cali Cartel ran a very successful AS/400 installation, apparently using it for both "business analytics" and back-office tasks.
https://www.vice.com/en/article/the-cartel-supercomputer-of-...
The software translation layer (TIMI, the Technology Independent Machine Interface) has been a feature of the platform since the System/38 days, and was specifically intended to allow the CPU architecture to change without breaking software compatibility.
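A minimal sketch of the idea, with invented names (toy code, not the real MI instruction format): the program object permanently carries a hardware-neutral form, and native code is regenerated from it whenever the underlying ISA changes.

    # Toy model of the TIMI idea: programs are kept in a hardware-neutral
    # intermediate form, and native code is (re)generated from it when the
    # underlying ISA changes. All names here are invented for illustration.

    MI_PROGRAM = [                      # hardware-neutral "template"
        ("load", "r1", 2),
        ("load", "r2", 3),
        ("add",  "r0", "r1", "r2"),
        ("ret",  "r0"),
    ]

    def translate(mi_program, target_isa):
        """Pretend back end: lower MI to 'native' code for one ISA."""
        return [f"{target_isa}: {' '.join(map(str, insn))}" for insn in mi_program]

    class ProgramObject:
        def __init__(self, mi):
            self.mi = mi            # kept forever, below the MI boundary
            self.native = None      # disposable, regenerated as needed
            self.native_isa = None

        def run(self, current_isa):
            # First activation, or a hardware migration: retranslate.
            if self.native_isa != current_isa:
                self.native = translate(self.mi, current_isa)
                self.native_isa = current_isa
            return self.native

    prog = ProgramObject(MI_PROGRAM)
    print(prog.run("impi-cisc"))    # original 48-bit CISC hardware
    print(prog.run("ppc64"))        # same program survives the RISC move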
IBM i also has the "PASE" layer, a binary compatibility layer for AIX. Those applications run directly on the Power hardware and do not go through the translation layer.
For many years, the 64-bit extension of the original S/360/370/390 architecture was emulated in a software layer via static binary translation (a toy sketch of the idea follows below) – just like the AS/400 / iSeries line has been doing since its inception – and there was no fully native implementation in silicon for quite a long time.
If my understanding is correct, with Telum processors IBM has gone back to implementing the ISA in silicon, although the publicly available details on Telum are scarce.
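To make the static-translation idea above concrete, here is a toy translator with completely made-up opcodes and encodings (not actual S/390 or z/Architecture code): the legacy instruction stream is walked once, ahead of time, and an equivalent native program is emitted, so there is no per-instruction interpretation cost at run time.

    # Toy static binary translator: one ahead-of-time pass over a
    # "legacy" 32-bit instruction stream emits an equivalent "native"
    # 64-bit program. Opcodes are invented for illustration only.

    LEGACY = [
        ("L",  1, 100),   # load reg1 from address 100
        ("A",  1, 104),   # add the word at 104 into reg1
        ("ST", 1, 108),   # store reg1 to address 108
    ]

    def translate_static(legacy_program):
        native = []
        for op, reg, addr in legacy_program:
            if op == "L":
                # Widen the 32-bit load into the 64-bit register file.
                native.append(f"ld32zx r{reg}, [{addr}]")
            elif op == "A":
                native.append(f"add32 r{reg}, r{reg}, [{addr}]")
            elif op == "ST":
                native.append(f"st32 [{addr}], r{reg}")
            else:
                raise ValueError(f"untranslatable opcode: {op}")
        return native

    for insn in translate_static(LEGACY):
        print(insn)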
I would have expected CDE as a first-class citizen, and maybe OpenLook.
And it says that it is mainly for 32-bit SPARC and 32-bit x86, and later that "Important: 32-bit hardware support now completely removed.".
Nice effort though.
Ditto with market control: it's not some permanent crown you achieve. Companies have to keep performing to keep their market share.
E.g., if you opened an account at a major bank, and your transactions started failing, would you keep banking there?
A lot of people who land in that situation do keep banking there, since they are either tied to the bank through loans/debt or lack the time/energy to move elsewhere.
But this does make me wonder: how much of a difference is there really between the A chips and the M chips? Clearly they are similar enough if either can run iPadOS or macOS. Or is this a case of the operating systems having shared components that make this easier?
But then it raises the question: why have the distinction in the first place if they are going to use the chips in other hardware? Originally I thought the distinction was that the M series was meant to avoid the impression that the Mac line was "underpowered", running mobile chips like the iPhone does.
I suspect that was the original intention. My understanding is that the higher-end M chips are essentially multiple lower-end M chips glued together (e.g., the M1 Ultra is two M1 Max dies joined by an interconnect). I suspect the jump from the A series to the M series is similar.
Perhaps one use is to compete with GPUs, but even a multi-core CPU is unlikely to match a GPU in the sheer number of arithmetic/vector units.
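A rough back-of-envelope comparison (all figures are hypothetical round numbers, just to show the scale of the gap):

    # Peak FP32 lanes per clock for a hypothetical CPU vs. GPU. The CPU
    # figures model a 16-core chip with two 128-bit SIMD pipes per core;
    # the GPU figures are in the ballpark of a midrange discrete part.
    # These are illustrative assumptions, not measurements.

    def fp32_lanes(cores, simd_units_per_core, simd_width_bits):
        return cores * simd_units_per_core * (simd_width_bits // 32)

    cpu = fp32_lanes(cores=16, simd_units_per_core=2, simd_width_bits=128)
    gpu = fp32_lanes(cores=32, simd_units_per_core=4, simd_width_bits=1024)

    print(f"CPU: {cpu} FP32 lanes/clock")   # 128
    print(f"GPU: {gpu} FP32 lanes/clock")   # 4096
    # Before clock speed and memory bandwidth even enter the picture,
    # the GPU has ~32x more arithmetic lanes.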