The reason the mainframe persists is that it's a pretty slick development and deployment environment. A lot of things you might cobble together as dependencies - like a database or a message queue, or observability facilities, or even deployment strategies like hot-hot deployments - are all just built into the platform. That means they're trivial to consume and they're fully supported by one vendor. It's like the world's most comprehensive application development framework.
Going back to the hardware that everyone likes to focus on, it's less radically different from normal servers today than it was historically. The mainframe today is a 19 inch rack like any other. By that I mean it is not only a 19 inch rack like your x64/ARM servers are, but also has the same power density (32 or 64 amp racks), cooling requirements, etc.
The most interesting bit is the software, not the hardware. There are cool hardware aspects too - but focus on them and you miss the real reason these things are popular in certain environments.
Most of this Cloud technology we like to go all atwitter about is a retread or a redesign of software that's been on mainframes for thirty years. The old farts must laugh themselves silly looking at us acting like we discovered a large white object orbiting the earth.
Oh yeah, in a past life I was a CPU logic designer for walk-in, refrigerated mainframes. I was always amused by youngsters thinking they had invented things like pipelines and branch prediction. Seymour Cray and Gene Amdahl were prolific inventors.
I'm not old enough (and haven't been in the appropriate industry) to have worked with mainframes. I started in the heyday of the workstations (Sun, HP-UX, AIX, SGI), which arose as a reaction to the annoyances of mainframes.
Why pay for disk and compute access when you can have a workstation! Why have limited access to the system when you can have your own! That was the vibe.
It is amusing/sad that in the AWS world we're right back to basically mainframes: centrally hosted massive compute over which you don't really have full control or access, and for which you pay by usage time.
I wonder when the next swing back to personal computing arrives?
All this is true, but you forget the cost. Mainframes do not let you start small. I have no idea how much the cheapest mainframe costs, but it's probably more than a dirt-cheap x86 server with a bunch of free software - which is so important for student projects, training, hobbies, start-ups and cost-constrained industries.
> Going back to the hardware that everyone likes to focus on, it's less radically different from normal servers today than it was historically. The mainframe today is a 19 inch rack like any other.
The Main Frame of yore was ... one of the 19 inch racks the computer was made out of (the main one, typically with the ALU and registers in it). And those racks were used because that was how phone systems were built (phone systems had a lot more repeated / regular structure than something like a computer, though of course everything has converged). The "frame" nomenclature came from the phone system too.
So "normal servers today" got the rack from the mainframe; the mainframe didn't adopt it because of how datacenters were built. All the familiar stuff (raised floors, chillers, backup power, fire suppression, controlled access doors and so on) comes from the mainframe world.
The granddaddy of mainframes, System/360, was never in 19" racks. IBM first introduced the use of 19" racks in their midrange lines. What we currently call IBM z didn't adopt standard racks until the 21st century.
The hardware is quite radically different. It's not the mechanical form factor or bulk electrical and cooling of it, but the silicon.
The processor checkpoints state, and if an error is detected it can be rolled back and the checkpoint moved to another physical processor and re-started transparently to software. Stores are tracked all the way down to "RAID memory" (actually called RAIM), whereas in other CPUs they get punted off at the L1 cache. Practically every component and connection is redundant. There are stories of mainframe drawers being hauled to particle accelerators and the beam switched on while they're running. Quite amazing machines.
Not to say that has more value than the software (I don't know one way or the other), but the hardware is no gimmick.
I think its hardware is what enables such reliable software.
Hot plug CPUs, memory modules, and DASDs. All survive the failure scenario gracefully.
Good luck trying to do that on a rack of x86 servers.
As in most engineering things, they make a different trade-off and cater to a different niche, while x86 servers are the mass market consumables that most workloads -should- be using. The mainframe CPUs go for the widest IO capabilities, insane cache sizes and hardware offload because that's what their target audience (banks, insurance companies, airline ticket systems) needs.
Their parallel sysplex clustering solution is an engineering marvel but also tightly coupled to their hardware.
In some ways, the IBM mainframe is the Apple of niche critical enterprise computing. They are also the ancestor of cloud computing in paving the way for “pay per use” cost models and multi tenancy.
x86 just made it so the whole computer could drop out of the "cluster", which greatly simplifies the hardware design, but the result is NOT a "virtual mainframe": the software becomes exponentially more complicated if you want "all power".
It persists in many places due to the deeply ingrained belief that, for one reason or another, it would be technically, practically, or economically impossible to migrate functionality off the mainframe. Somehow the people in charge of these systems have managed to convince large enterprises of this for decades. And now we are in a situation where their long held beliefs have become true because nobody is around that understands how or why the software works the way it does. It's a disgrace.
> It persists in many places due to the deeply ingrained belief that, for one reason or another, it would be technically, practically, or economically impossible to migrate functionality off the mainframe.
It is never technical or practical reasons, always economic ones. To use contemporary vocabulary, you "just" need to replicate a "multi tenant HA cloud environment" to migrate off a mainframe, at great cost.
> And now we are in a situation where their long held beliefs have become true because nobody is around that understands how or why the software works the way it does.
Statistically unlikely. Most of the business software that goes onto said mainframe is in fact very simple and straightforward and can be easily ported out; the real deal is all the integration and high availability logic built into "the mainframe platform". You're not porting that out without replicating said "multi tenant HA cloud environment" at great cost.
Or the people in charge are of the belief that things work and they're a tiny part of the overall organization's costs--and any massive migration will run over schedule and budget, won't necessarily be better at the end, and will be unnecessary and disruptive. They may or may not be correct but it's not an irrational belief whatever those advocating for moving everything to a distributed platform think.
Have you ever migrated a decades old application off of a mainframe?
Most of these kinds of systems have decades of codified knowledge that applies the legal requirements of many disparate jurisdictions in these systems. Laws vary from city to city, county to county and state to state, and that's only for the US. Some of these systems implement code that applies legal requirements for dozens of nations.
In general, you're going to have to spend at least a year just to document the existing system. Another six months to a year generating test cases and scripts for validation. And that's before even starting to build the new system. And that is ALL optimistic, assuming everything goes right.
I once had to write a relatively simple application for "vacation bidding", wherein employees get, in turn, to request certain weeks off during the year. This was a legal requirement and happened to involve the company, two other companies that had been merged in, and three unions, each with different rules for how seniority is calculated. The contractual specifics ran over two thousand pages. That's just one very small piece of software in one of these very large companies, and it took the better part of a year to implement. Now imagine taking that a couple decades later, after it's evolved, and migrating to an entirely new platform and paradigm. Now imagine the cost to do that.
Migrating a suite of programs written with COBOL, CICS, and DB2 from z/OS to Linux is fairly straightforward. Micro Focus COBOL compiles IBM COBOL, and DB2 runs on Linux (or you can swap in another RDBMS). The CICS part requires a simple runtime, easily implemented in C, plus a new scanner to replace the calls in your programs.
I was part of a 4-person team that knocked this out in under a year: thousands of mainframe programs running without change (at the time) on Unix. This was in the 90s.
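The scanner mentioned above can be surprisingly small for the common cases. Here is a toy sketch in Python - the `CICSRT` runtime name and the rewrite rule are invented for illustration, and a real scanner would handle dozens of CICS verbs, option clauses, and copybooks:

```python
import re

# Toy illustration: rewrite embedded "EXEC CICS ... END-EXEC" blocks into
# plain COBOL CALL statements against a hypothetical C runtime ("CICSRT").
EXEC_CICS = re.compile(r"EXEC\s+CICS\s+(.*?)\s*END-EXEC",
                       re.DOTALL | re.IGNORECASE)

def rewrite(source: str) -> str:
    """Replace each EXEC CICS block with a CALL into the runtime."""
    def to_call(match: re.Match) -> str:
        verb, *options = match.group(1).split()
        args = " ".join(options)
        return f"CALL 'CICSRT' USING '{verb.upper()}' {args}"
    return EXEC_CICS.sub(to_call, source)

print(rewrite("EXEC CICS RETURN TRANSID('MENU') END-EXEC"))
```

The real work, of course, is in the C runtime that has to faithfully emulate CICS semantics (transactions, commareas, BMS maps), not in the text rewrite itself.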
The greatest hurdle to replacing a system is that it works. I assure you that users of IBM mainframes would be looking at at least a multi-million-dollar project to perform a decommission, and even if approved it would be first on the chopping block when projects get paused or eliminated.
I have worked on an RFP for migration. It persists because the cost for migrating can be quite high -- many mainframe environments have many applications with complex sets of dependencies (after decades, not surprising). No one doubts that it could be migrated, but it is not a trivial undertaking.
When folks have replaced significant enough chunks with other apps, or otherwise made the prospect less daunting, then it may happen. It just comes down to cost and risk management.
Notice they didn't mention how incredibly outdated all these applications are? Our financial money transfer system is an ancient joke. The IRS has systems that are hopelessly out of date and limited in capabilities.
It's really easy and safe to sign a contract for a newer mainframe; porting a legacy app is fraught. Middle-upper management goes for what's safe, not what could torpedo their career.
The hardware might be whiz-bang but the legacy applications are slow, outdated, and extremely hampered in functionality.
In my state, the birth records system is electronic, and so is our RMV. But in order to get a RealID, I had to get a paper copy printed out to bring to the RMV because they cannot, or will not, integrate the two systems. Meanwhile other countries have things like electronic ID cards with personal certs so you can electronically sign documents and identify yourself.
The whole thing is a giant puff piece for IBM, reading like a sales presentation transcript.
Inline SQL is something I've always missed elsewhere. Not sure it's viable when you've got a billion different incompatible databases supported on a platform, so some of the limitations of mainframes have advantages.
I've always looked at IBM like an even more extreme version of Microsoft. I'd probably have us on their path if the cost and knowledge weren't a massive barrier to entry. I know IBM has some cloud thing for us baby startups, but I sure as hell can't figure out how to use it or if our customers are even aware of it.
Vertical integration of the business into one magic box is a perpetual dream of mine. Bonus points if I can yell at exactly one vendor if anything goes wrong.
I never worked on z/OS, but I did work on the AS/400 (System i, or whatever it's called now).
I think the main thing missing is how much IBM really brings to the table here.
> If one crashes or has to go down for maintenance, the other partitions are unaffected.
Effectively, IBM often does that for you. The machine detects an issue, and calls IBM, who sends someone out (in my day it was immediately), and they fix it. Then it's fixed and they leave and most of your staff has no idea they were there at all.
Plus, there's not enough that can be said about using a dedicated stack of hardware and software owned by one company. If there's an issue, it's IBM's. They are the ones who need to fix it. No trying to get HP vs Microsoft to agree to take the blame (which can take literal weeks). Just call IBM, and they take care of it. (In theory)
Oxide seems to be trying to build a similar arrangement with customers. That's half their motivation for switching to open firmware for the little computers hidden in your machine. There's a big game of fingerpointing these days where you call your vendor and they blame one of their vendors and can't/won't hunt down the issue for you.
The ways I've heard that explained sound exhausting. Paying anyone to whom you can say "your machine broke, come fix it" - and they actually do - is probably worth the money. Right now Cloud providers and IBM are the only ones really providing that service. I suspect history will say that people were not running to the Cloud so much as running away from bullshit hardware vendors.
Yes! There are lots of ways in which we are nothing like these big IBM systems (we are, for example, entirely open -- like POWER, but very unlike AS/400 or Z series machines), but we definitely admire the robustness of these machines!
Indeed, arguably to a fault, as I likened us to the AS/400 in a VC pitch[0] -- which (despite the potential accuracy of the historical analogue) is as ill-advised as it sounds...
There was one other aspect of these machines I missed. They run forever.
We had a machine in there that by the time I left was running nonstop for 10 years. I was their only IT person for 5 of those years, and I basically visited them a few times a year to work on MSAccess reports, and never touched the server (I may have never even logged into it!). It just chugged along running their entire business with no maintenance.
Throughout my career, the term I heard most often for this type of scenario was "which neck to choke when stuff fails", or something like "the least number of necks to choke when stuff fails", etc. lol :-D
I've always heard that as "scheduled uptime" or "unscheduled outages". When I worked in a mainframe shop, they used to IPL (reboot) the mainframe every Sunday morning. That down time was never considered as part of the SLA.
If you zoom out, the ecosystem is very comparable to running on AWS or similar. It's an opinionated environment with proprietary answers for how to do things like scheduling, short-lived tasks, long-lived tasks, storage allocation, networks, monitoring, forced version upgrades, etc.
I suspect that for most organizations that use mainframes today, there are so many integration points and so much data is involved that the economics that drove a lot of early-on timesharing no longer apply.
It's one of the major ways to get a mainframe these days; even companies you'd expect to have one on premises might actually have a lease on one running in an IBM datacenter, with a VPN to the internal network.
I believe the real reason for its survivability is the fact that you can pull a tape from the seventies and those binaries will run without any modification.
It's not only that you can easily recompile your COBOL from the '70s, the binary is still compatible. You've never been pushed to migrate to another technology.
Imagine the effort and 'knowledge' embedded in those evolved programs. The banks don't even know - though they are conscious of the fact - how many laws and regulations they have encoded in there.
As someone stated in another comment, the software is the impressive part.
This is both good and bad. You have to consider bugs a kind of feature like anything else. One of my coworkers showed me a bug report he'd opened 30 years prior that IBM still refused to fix because people depended on the broken behavior. So porting off the mainframe also means bringing along those quirks or rewriting to specs that provably don't regress performance and behavior. Writing or rewriting software is easy, but migrations despite how they first appear are not really "green field" development.
I've never used or even seen a mainframe in 26 years in the tech industry. My brother in law works for a bank and basically the business runs on it.
The hardware and software are certainly impressive but does anyone use a mainframe for a new project and not just upgrading or expanding an existing system?
I'm in integrated circuit design and we have compute clusters with thousands of CPUs. For some jobs we use a single machine with 128 CPUs and over 2TB RAM. Some steps get split over 30 machines in parallel. All of this EDA / chip design software migrated from Sun and HP workstations in the 1980's and 90's to Linux x86 in the 2000's. I think some of it ran on mainframes in the 70's but does anyone use mainframes for scientific style calculations or is it just financial transaction / database kind of stuff?
You'll most likely find it in large companies that operated in the 60s or 70s that haven't switched to anything new, mostly because their core business runs on it.
I know of two companies, and at least one still uses it; I had several summer jobs there. They make sheet metal rolls by flattening out train-car-sized hunks of steel, and while the mainframe system didn't run the machines (operators and PLCs handled that), it kept track of everything: inventory, logistics and planning. I used it to plan rail shipments - where to put each roll of sheet metal on a train - and loaded them up.
Oh, fun times. I've been on the other side of that business in a past life, where I had to "revive" a business-critical program written in VB3 (yes, for Windows 3.x) after a computer migration. It calculated the weight of an aluminum or steel coil/roll from its dimensions so the result could be input into the PLC for the feeder mechanism at the beginning of a production line that did forming/extruding of metals.
So on one end mainframes, and on the other end software written for DOS and Windows 3.x still being used (at the time, the 2010s) to keep critical manufacturing infrastructure running.
That could have been a mainframe, but I think factories are much more likely to be using AS/400 aka IBM i. That runs on regular IBM Power servers these days.
> I've never used or even seen a mainframe in 26 years in the tech industry.
40 years here. Although in theory I've used a mainframe in college, but it wasn't IBM (Burroughs) and just seemed like "a computer" at the time. I also worked a bit on a terminal emulator for ICL mainframes, so probably at least logged in. So 40 years of saying "I wonder if the mainframe people already have a solution to this?"
Probably mainframes haven't been much used for HPC since the transition to Cray in the 70s.
I started my tech career as a student worker at my school district's office. They had a Unisys mainframe managing student and employee records; around 1998, they replaced the old system which was about 3 feet tall and maybe 20 feet long, with a 4U dual processor Pentium Pro running NT 4 with a Unisys emulator. Seemed to work just about as well, but the operator console seemed a lot less fun. Still interfaced with the giant impact printer to print out grades and payroll.
"Today’s IBM mainframe CPUs are based on a highly evolved version of the POWER architecture that IBM has been developing for over 30 years. "
I've heard a lot of largely clueless people who weren't aware of the differences between zSeries and pSeries say something like this, but it's generally been entirely false (especially 15-30 years ago, which overlaps with some time I myself spent at IBM). Given the rest of the article, I wouldn't presume the author is in this category.
So has something changed? Or is the implication stretching the truth? I mean, I guess you could strip a POWER core down so it only runs s390 microcode/etc., but that likely won't yield a performant machine, and the process of evolving it would likely fundamentally change the microarchitecture of whatever RISC-ish core they started with.
I mean, they are entirely different architectures, in the past utilizing entirely different microarchitectures. I can see some sharing of RTL for maybe a vector unit, or a cache structure, or a group doing layout, but that doesn't make the zSeries processors any more derivative of POWER than Itanium was derivative of x86, etc.
PS: the bit about z/OS partitions supporting Linux seems like a bit of confusion too. Earlier the article is correct that LPARs are capable of running Linux directly, but z/OS isn't providing the LPAR functionality; it's generally just another guest alongside Linux, z/TPF, and various other more esoteric "OSs" that can run natively. There is a UNIX System Services in z/OS, but that isn't a Linux kernel/etc.
I wonder if the article meant that IBM had worked on the POWER architecture for over 30 years? IBM did work on the eCLipz Project [0][1], combining / sharing tech from IBM Power/pseries, AS/400/iseries, and Zseries. This was around 2005-ish. I assume that collaboration has continued, but I don't know if that counts as 'based on ...'.
"The z10 processor was co-developed with and shares many design traits with the POWER6 processor, such as fabrication technology, logic design, execution unit, floating-point units, bus technology (GX bus) and pipeline design style, i.e., a high frequency, low latency, deep (14 stages in the z10), in-order pipeline.
However, the processors are quite dissimilar in other respects, such as cache hierarchy and coherency, SMP topology and protocol, and chip organization. The different ISAs result in very different cores – there are 894 unique z10 instructions, 75% of which are implemented entirely in hardware. The z/Architecture is a CISC architecture, backwards compatible to the IBM System/360 architecture from the 1960s. "[2]
Yeah, I don't know where the author is getting the POWER arch connection.
I thought the IBM Z Architecture was the CISC based System 360 / 390 architecture from the 1960's. At least that is what I remember my one friend who has some mainframe experience was telling me.
> Mainframes descended directly from the technology of the first computers in the 1950s. Instead of being streamlined into low-cost desktop or server use, though, they evolved to handle massive data workloads.
I think the first sentence is 100% correct, but the second one not so much: current desktops and servers (not to mention laptops, tablets, smartphones etc. etc.) evolved from the first microcomputers introduced in the 1970s with the idea of having a computer (albeit initially a not very capable one) that anyone could afford. These then quickly evolved during the 1980s and 1990s to cover most of the applications for which you would have needed a mainframe a few years earlier.
It's still somewhat true. Desktop CPUs have hardware acceleration for things like video decoding, mainframes have hardware acceleration for things like encryption / decryption, compression / decompression, fixed-decimal arithmetic, etc.
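The fixed-decimal point is easy to demonstrate: binary floating point can't represent most decimal fractions exactly, which is why financial code wants decimal arithmetic - the kind z hardware accelerates natively. A quick illustration using Python's software `decimal` module:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.10 exactly, so summing
# ten payments of $0.10 drifts away from $1.00.
float_total = sum(0.10 for _ in range(10))

# Decimal arithmetic keeps the value exact (mainframes do this in hardware
# for packed-decimal types; here it's emulated in software).
exact_total = sum(Decimal("0.10") for _ in range(10))

print(float_total == 1.0)                   # False
print(exact_total == Decimal("1.00"))       # True
```

Note that `Decimal` must be constructed from a string; `Decimal(0.10)` would inherit the binary rounding error.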
Each person has a box in which they are essentially the sole tenant, versus a big box that has to have a bunch of sophistication to handle multitenancy.
E.g. how many variants of IRC have we had now? Usenet? VMs?
https://learn.microsoft.com/en-us/dotnet/csharp/programming-...
https://hackage.haskell.org/package/postgresql-typed-0.6.2.4...
[0] https://www.youtube.com/watch?v=5P5Mk_IggE0&t=2216s
That being said, usually I am the one replacing that part on my own infrastructure, which is usually a 30-minute drive plus an hour to maybe three of my time.
advantage - one throat to choke
disadvantage - they've got you by the balls when it comes time to pay the licensing and maintenance fees
Very wrong. Five nines is about five minutes and 15 seconds of cumulative downtime in a year[0].
Three seconds of downtime in a year is seven nines[1].
[0] - https://uptime.is/five-nines
[1] - https://uptime.is/99.99999
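The arithmetic behind those figures is easy to check: allowed annual downtime is just the unavailability fraction times the seconds in a year. A quick sketch (assuming a 365-day year):

```python
# Allowed annual downtime for an availability target of n "nines".
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

def downtime_seconds(nines: int) -> float:
    """Cumulative downtime budget per year for n nines of availability."""
    unavailability = 10 ** -nines  # e.g. five nines -> 0.00001
    return unavailability * SECONDS_PER_YEAR

print(downtime_seconds(5))  # about 315 seconds, i.e. ~5 minutes 15 seconds
print(downtime_seconds(7))  # about 3.15 seconds
```

So a weekly Sunday-morning IPL only fits a five-nines SLA if scheduled downtime is excluded from the calculation, as the mainframe shops did.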
I suspect that for most organizations that use mainframes today, there are so many integration points and so much data is involved that the economics that drove a lot of early-on timesharing no longer apply.
As someone stated in another comment, the software is the impressive part.
The hardware and software are certainly impressive, but does anyone use a mainframe for a new project, rather than just upgrading or expanding an existing system?
I'm in integrated circuit design, and we have compute clusters with thousands of CPUs. For some jobs we use a single machine with 128 CPUs and over 2 TB of RAM. Some steps get split over 30 machines in parallel. All of this EDA / chip design software migrated from Sun and HP workstations in the 1980s and '90s to Linux x86 in the 2000s. I think some of it ran on mainframes in the '70s, but does anyone use mainframes for scientific-style calculations, or is it just financial transaction / database kinds of workloads?
I know of two companies, and at least one still uses it; I had several summer jobs there. They make rolls of sheet metal by flattening out train-car-sized hunks of steel, and while the mainframe system didn't run the machines (operators and PLCs handled that), it kept track of everything: inventory, logistics, and planning. I used it to plan rail shipments, deciding where to put each roll of sheet metal on a train, and loaded them up.
So on one end mainframes, and on the other end software written for DOS and Windows 3.x, still being used (at the time, the 2010s) to keep critical manufacturing infrastructure running.
40 years here. Technically I did use a mainframe in college, but it wasn't IBM (Burroughs) and it just seemed like "a computer" at the time. I also worked a bit on a terminal emulator for ICL mainframes, so I probably at least logged in. So: 40 years of saying "I wonder if the mainframe people already have a solution to this?"
Mainframes probably haven't been much used for HPC since the transition to Cray in the '70s.
I've heard a lot of largely clueless people who weren't aware of the differences between the zSeries and pSeries say something like this, but it's generally been entirely false (especially 15-30 years ago, which overlaps with some time I myself spent at IBM). Given the rest of the article, I wouldn't presume the author is in this category.
So has something changed, or is the implication stretching the truth? I mean, I guess you could strip a POWER core down so it only runs s390 microcode/etc., but that likely won't actually yield a performant machine, and the process of evolving it would likely fundamentally change the microarchitecture of whatever RISC-ish core they started with.
I mean, they are entirely different architectures, in the past using entirely different microarchitectures. I can see some sharing of RTL for maybe a vector unit, or a cache structure, or a group doing layout, but that doesn't make the zSeries processors any more derivative of POWER than Itanium was derivative of x86.
PS: the bit about z/OS partitions supporting Linux seems like a bit of confusion too. Earlier, it's correct that the LPARs are capable of running Linux directly, but z/OS isn't providing the LPAR functionality; it is generally just another guest alongside Linux, z/TPF, and various other, more esoteric "OSs" that can run natively. There is a UNIX System Services in z/OS, but that isn't a Linux kernel/etc.
"The z10 processor was co-developed with and shares many design traits with the POWER6 processor, such as fabrication technology, logic design, execution unit, floating-point units, bus technology (GX bus) and pipeline design style, i.e., a high frequency, low latency, deep (14 stages in the z10), in-order pipeline.
However, the processors are quite dissimilar in other respects, such as cache hierarchy and coherency, SMP topology and protocol, and chip organization. The different ISAs result in very different cores – there are 894 unique z10 instructions, 75% of which are implemented entirely in hardware. The z/Architecture is a CISC architecture, backwards compatible to the IBM System/360 architecture from the 1960s. "[2] For the POWER
[0] https://www.realworldtech.com/eclipz/
[1] https://news.ycombinator.com/item?id=18494225
[2] https://en.wikipedia.org/wiki/IBM_z10
I thought the IBM Z Architecture was the CISC-based System/360 / 390 architecture from the 1960s. At least that is what I remember my one friend who has some mainframe experience telling me.
I think the first sentence is 100% correct, but the second one not so much: current desktops and servers (not to mention laptops, tablets, smartphones, etc.) evolved from the first microcomputers, introduced in the 1970s with the idea of a computer (albeit initially not a very capable one) that anyone could afford. These then quickly evolved during the 1980s and 1990s to cover most of the applications for which you would have needed a mainframe a few years earlier.
https://en.wikipedia.org/wiki/AES_instruction_set
https://en.wikipedia.org/wiki/Intel_SHA_extensions
Each person has a box in which they are essentially the sole tenant, versus a big box that has to have a bunch of sophistication to handle multitenancy.