CraigJPerry · 3 years ago
The reason the mainframe persists is that it's a pretty slick development and deployment environment. A lot of things you might cobble together as dependencies - like maybe a database or a message queue, or observability facilities, or even deployment strategies like hot-hot deployments - are all just built in to the platform. That means they're trivial to consume and they're fully supported by one vendor. It's like the world's most comprehensive application development framework.

Going back to the hardware that everyone likes to focus on, it's less radically different from normal servers today than it was historically. The mainframe today is a 19 inch rack like any other. By that I mean it is not only a 19 inch rack like your x64/ARM servers are, but also the same power density (32 or 64 amp racks), cooling requirements, etc.

The most interesting bit is the software, not the hardware. There are cool hardware aspects too - but focus on them and you miss the real reason these things are popular in certain environments.

hinkley · 3 years ago
Most of this Cloud technology we like to go all atwitter about is a retread or a redesign of software that's been on mainframes for thirty years. The old farts must laugh themselves silly looking at us acting like we discovered a large white object orbiting the earth.
dbcurtis · 3 years ago
Oh yeah, in a past life I was a CPU logic designer for walk-in, refrigerated mainframes. I was always amused by youngsters thinking they had invented things like pipelines and branch prediction. Seymour Cray and Gene Amdahl were prolific inventors.
queuebert · 3 years ago
I posit that most tech companies make money by poorly recreating old technologies for use by younger people ignorant of the old ones.

E.g. how many variants of IRC have we had now? Usenet? VMs?

jjav · 3 years ago
I'm not old enough (or wasn't in the appropriate industry) to have worked with mainframes. I started in the heyday of the workstations (Sun, HP-UX, AIX, SGI), which arose as a reaction to the annoyances of mainframes.

Why pay for disk and compute access when you can have a workstation! Why have limited access to the system when you can have your own! Was the vibe.

It is amusing/sad that in the AWS world we're right back to basically mainframes: centrally hosted massive compute that you don't really have full control over or access to, and that you have to pay for by usage time.

I wonder when the next swing back to personal computing arrives?

vyrotek · 3 years ago
Can confirm. My old father who works on these mainframes laughs every time.
skywal_l · 3 years ago
All this is true, but you forget the cost. Mainframes don't let you start small. I have no idea what the cheapest mainframe costs, but it's probably more than a dirt cheap x86 server with a bunch of free software, which is so important for student projects, training, hobbies, start-ups and cost-constrained industries.
jjtheblunt · 3 years ago
Sun used to say "The Network is the Computer"... that always reminds me of what has been called "cloud" for the last 10+ years.
nikau · 3 years ago
just you wait, one day google cloud pub/sub will have a queue pause feature!
gumby · 3 years ago
> Going back to the hardware that everyone likes to focus on, it's less radically different from normal servers today than it was historically. The mainframe today is a 19 inch rack like any other.

The Main Frame of yore was ... one of the 19 inch racks the computer was made out of (the main one, typically with the ALU and registers in it). And those racks were used because that was how phone systems were built (phone systems had a lot more repeated / regular structure than something like a computer, though of course everything has converged). The "frame" nomenclature came from the phone system too.

So "normal servers today" got the rack from the mainframe, the mainframe didn't adopt it because of how datacenters were built. All the familiar stuff (raised floors, chillers, backup power, fire suppression, controlled access doors and so on) come from the mainframe world.

electroly · 3 years ago
The granddaddy of mainframes, System/360, was never in 19" racks. IBM first introduced the use of 19" racks in their midrange lines. What we currently call IBM z didn't adopt standard racks until the 21st century.
throwawaylinux · 3 years ago
The hardware is quite radically different. It's not the mechanical form factor or bulk electrical and cooling of it, but the silicon.

The processor checkpoints state, and if an error is detected it can be rolled back and the checkpoint moved to another physical processor and re-started transparently to software. Stores are tracked all the way down to "RAID memory" (actually called RAIM), whereas in other CPUs they get punted off at the L1 cache. Practically every component and connection is redundant. There are stories of mainframe drawers being hauled to particle accelerators and the beam switched on while they're running. Quite amazing machines.

Not to say that has more value than the software (I don't know one way or the other), but the hardware is no gimmick.

reacharavindh · 3 years ago
I think its hardware is what enables such reliable software.

Hot plug CPUs, memory modules, and DASDs. All survive the failure scenario gracefully.

Good luck trying to do that on a rack of x86 servers.

I think, as in most engineering things, they make a different trade-off and cater to a different niche, while x86 servers are the mass-market consumables that most workloads -should- be using. The mainframe CPUs go for the widest IO capabilities, insane cache sizes and hardware offload because that's what their target audience (banks, insurance companies, airline ticket systems) needs.

Their parallel sysplex clustering solution is an engineering marvel but also tightly coupled to their hardware.

In some ways, the IBM mainframe is the Apple of niche critical enterprise computing. They are also the ancestor of cloud computing in paving the way for “pay per use” cost models and multi tenancy.

bombcar · 3 years ago
x86 just made it so the whole computer could drop out of the “cluster” which greatly simplifies the hardware design but the result is NOT a “virtual mainframe” as the software becomes exponentially more complicated if you want “all power”.
mberning · 3 years ago
It persists in many places due to the deeply ingrained belief that, for one reason or another, it would be technically, practically, or economically impossible to migrate functionality off the mainframe. Somehow the people in charge of these systems have managed to convince large enterprises of this for decades. And now we are in a situation where their long held beliefs have become true because nobody is around that understands how or why the software works the way it does. It's a disgrace.
ElectricalUnion · 3 years ago
> It persists in many places due to the deeply ingrained belief that, for one reason or another, it would be technically, practically, or economically impossible to migrate functionality off the mainframe.

It is never for technical or practical reasons, always economic ones. To use contemporary vocabulary, you "just" need to replicate a "multi tenant HA cloud environment" to migrate from a mainframe, at great cost.

> And now we are in a situation where their long held beliefs have become true because nobody is around that understands how or why the software works the way it does.

Statistically unlikely: most of the business software that goes onto said mainframe is in fact very simple and straightforward and can be easily ported out. The real deal is all the integration and high availability logic built into "the mainframe platform". You're not porting that out without replicating said "multi tenant HA cloud environment" at great cost.

ghaff · 3 years ago
Or the people in charge are of the belief that things work and they're a tiny part of the overall organization's costs--and any massive migration will run over schedule and budget, won't necessarily be better at the end, and will be unnecessary and disruptive. They may or may not be correct but it's not an irrational belief whatever those advocating for moving everything to a distributed platform think.
tracker1 · 3 years ago
Have you ever migrated a decades old application off of a mainframe?

Most of these kinds of systems have decades of codified knowledge that applies the legal requirements of many disparate jurisdictions in these systems. Laws vary from city to city, county to county and state to state, and that's only for the US. Some of these systems implement code that applies legal requirements for dozens of nations.

In general, you're going to have to spend at least a year just to document the existing system. Another six months to a year generating test cases and scripts for validation. And that's before even starting to build the new system. And that is ALL optimistic, assuming everything goes right.

I once had to write a relatively simple application for "vacation bidding", wherein employees get a position in line to request certain weeks off during the year. This was a legal requirement and happened to involve the company, two other companies that had been merged in, and three unions, each with different rules for how seniority is calculated. The contractual specifics ran over two thousand pages. That's just one very small piece of software in one of these very large companies. It took the better part of a year to implement. Now imagine taking that a couple of decades later, after it's evolved, and migrating it to an entirely new platform and paradigm. Now imagine the cost to do that.

ibiza · 3 years ago
Migrating a suite of programs written w/ COBOL, CICS, DB2 from z/OS to Linux is fairly straightforward. MicroFocus COBOL compiles IBM COBOL, DB2 runs on Linux or you can swap out another RDBMS. The CICS part requires a simple runtime easily implemented in C plus a new scanner to replace the calls in your programs.

I was part of a 4 person team that knocked this out in under a year. 1,000's of mainframe programs running w/o change (at the time) on Unix. This was in the 90s.
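A toy illustration of the "scanner" step described above: statically rewriting `EXEC CICS ... END-EXEC` blocks into plain calls against a small shim runtime. This is a sketch in Python with invented names (`CICSHIM`, `rewrite`), not the actual tooling that team used:

```python
import re

# Hypothetical rewriter: turn "EXEC CICS <verb> <args> END-EXEC" blocks
# into plain CALLs against a shim runtime ("CICSHIM" is an invented name).
EXEC_CICS = re.compile(r"EXEC\s+CICS\s+(\w+)(.*?)END-EXEC", re.S | re.I)

def rewrite(source: str) -> str:
    def repl(match: re.Match) -> str:
        verb = match.group(1).upper()
        args = match.group(2).strip()
        return f"CALL 'CICSHIM' USING '{verb}' {args}".rstrip()
    return EXEC_CICS.sub(repl, source)

# rewrite("EXEC CICS SEND TEXT FROM(MSG) END-EXEC")
#   -> "CALL 'CICSHIM' USING 'SEND' TEXT FROM(MSG)"
```

A real migration scanner would of course parse COBOL properly rather than regex-match, but the shape of the transformation is the same.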

jmartrican · 3 years ago
Before we even get to the costs of a migration, what is the reason for the migration?
zitterbewegung · 3 years ago
The greatest hurdle for a project to be replaced is that it works. I assure you that the users of IBM mainframes would be looking at at least a multi-million dollar project to perform a decommission, and even if approved it would be first on the chopping block when it came time to pause or eliminate projects.
adamc · 3 years ago
I have worked on an RFP for migration. It persists because the cost for migrating can be quite high -- many mainframe environments have many applications with complex sets of dependencies (after decades, not surprising). No one doubts that it could be migrated, but it is not a trivial undertaking.

When folks have replaced significant enough chunks with other apps, or otherwise made the prospect less daunting, then it may happen. It just comes down to cost and risk management.

KennyBlanken · 3 years ago
Notice they didn't mention how incredibly outdated all these applications are? Our financial money transfer system is an ancient joke. The IRS has systems that are hopelessly out of date and limited in capabilities. It's really easy and safe to sign a contract for a newer mainframe; porting a legacy app is fraught. Middle-upper management goes for what's safe, not what could torpedo their career.

The hardware might be whiz-bang but the legacy applications are slow, outdated, and extremely hampered in functionality.

In my state, the birth records system is electronic, and so is our RMV. But in order to get a RealID, I had to get a paper copy printed out to bring to the RMV because they cannot, or will not, integrate the two systems. Meanwhile other countries have things like electronic ID cards with personal certs so you can electronically sign documents and identify yourself.

The whole thing is a giant puff piece for IBM, reading like a sales presentation transcript.

xahhkakappy11 · 3 years ago
Inline SQL is something I've always missed elsewhere. Not sure it's viable when you've got a billion different incompatible databases supported on a platform, so some of the limitations of mainframes have advantages.
SoftTalker · 3 years ago
pl/SQL has good support for inline SQL.
bob1029 · 3 years ago
I've always looked at IBM like an even more extreme version of Microsoft. I'd probably have us on their path if the cost and knowledge weren't a massive barrier to entry. I know IBM has some cloud thing for us baby startups, but I sure as hell can't figure out how to use it or if our customers are even aware of it.

Vertical integration of the business into one magic box is a perpetual dream of mine. Bonus points if I can yell at exactly one vendor if anything goes wrong.

rbanffy · 3 years ago
I have tried numerous times to get a z/OS LPAR and an AIX VM without any luck. It's a complete mystery to me.
redandblack · 3 years ago
Used to be great for vector processing for numerical analysis, although not sure anymore
js8 · 3 years ago
Since z13 vector instructions are supported again. But I doubt anybody uses them.
larrik · 3 years ago
I never worked on z/OS, but I did work on AS400 (Series I, or whatever it's called now).

I think the main thing missing is how much IBM really brings to the table here.

> If one crashes or has to go down for maintenance, the other partitions are unaffected.

Effectively, IBM often does that for you. The machine detects an issue, and calls IBM, who sends someone out (in my day it was immediately), and they fix it. Then it's fixed and they leave and most of your staff has no idea they were there at all.

Plus, there's not enough that can be said about using a dedicated stack of hardware and software owned by one company. If there's an issue, it's IBM's. They are the ones who need to fix it. No trying to get HP vs Microsoft to agree to take the blame (which can take literal weeks). Just call IBM, and they take care of it. (In theory)

hinkley · 3 years ago
Oxide seems to be trying to build a similar arrangement with customers. That's half their motivation for switching to open firmware for the little computers hidden in your machine. There's a big game of fingerpointing these days where you call your vendor and they blame one of their vendors and can't/won't hunt down the issue for you.

The ways I've heard that explained sound exhausting. Paying anyone to whom you can say "your machine broke, come fix it" and who actually does is probably worth the money. Right now Cloud providers and IBM are the only ones really providing that service. I suspect history will say that people were not running to the Cloud so much as running away from bullshit hardware vendors.

bcantrill · 3 years ago
Yes! There are lots of ways in which we are nothing like these big IBM systems (we are, for example, entirely open -- like POWER, but very unlike AS/400 or Z series machines), but we definitely admire the robustness of these machines! Indeed, arguably to a fault, as I likened us to the AS/400 in a VC pitch[0] -- which (despite the potential accuracy of the historical analogue) is as ill-advised as it sounds...

[0] https://www.youtube.com/watch?v=5P5Mk_IggE0&t=2216s

Melatonic · 3 years ago
Not always true - there are lots of hardware vendors that have pretty decent support, automatically call home, and same-day ship a part.

That being said, usually I am the one replacing that part on my own infrastructure, which is usually a 30 minute drive and an hour to maybe three of my time.

neverartful · 3 years ago
"using a dedicated stack of hardware and software owned by one company"

advantage - one throat to choke

disadvantage - they've got you by the balls when it comes time to pay the licensing and maintenance fees

Yasuraka · 3 years ago
One has to wonder if the optimal throat-choking to ball-holding ratio can be modeled
larrik · 3 years ago
There was one other aspect of these machines I missed. They run forever.

We had a machine in there that by the time I left had been running nonstop for 10 years. I was their only IT person for 5 of those years, and I basically visited them a few times a year to work on MSAccess reports, and never touched the server (I may have never even logged into it!). It just chugged along running their entire business with no maintenance.

mxuribe · 3 years ago
Throughout my career, the term i heard most often for this type of scenario was: "Which neck to choke when stuff fails...", or something like "...the least number of necks to choke when stuff fails...", etc. lol :-D
kkielhofner · 3 years ago
"They’re designed to process large amounts of critical data while maintaining a 99.999 percent uptime—that’s three seconds of outage per year."

Very wrong. Five nines is five minutes and 13 seconds of cumulative downtime in a year[0].

Three seconds of downtime in a year is seven nines[1].

[0] - https://uptime.is/five-nines

[1] - https://uptime.is/99.99999
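The conversion is easy to check: at N nines of availability, allowed downtime is 10^-N of the year. A quick sketch:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 (ignoring leap years)

def downtime_per_year(nines: int) -> float:
    """Seconds of allowed downtime per year at the given number of nines."""
    return 10 ** -nines * SECONDS_PER_YEAR

# downtime_per_year(5) -> ~315 s, a bit over five minutes
# downtime_per_year(7) -> ~3.15 s
```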

ooterness · 3 years ago
Looks like they've updated the article to correct this. It now says, "a bit over five minutes' worth of outage per year".
coleca · 3 years ago
I've always heard that as "scheduled uptime" or "unscheduled outages". When I worked in a mainframe shop, they used to IPL (reboot) the mainframe every Sunday morning. That down time was never considered as part of the SLA.
kmoser · 3 years ago
Wow, even Windows boxes can run longer than a week without having to be rebooted. I figured a mainframe would be able to last almost indefinitely.
eeegnu · 3 years ago
I would guess that they did the calculations, but interpreted it as "5 nines after the decimal point", when it's really "5 nines in total".
ptman · 3 years ago
0.99999/1 (no %)
tyingq · 3 years ago
If you zoom out, the ecosystem is very comparable to running on AWS or similar. It's an opinionated environment with proprietary answers for how to do things like scheduling, short-lived tasks, long-lived tasks, storage allocation, networks, monitoring, forced version upgrades, etc.
jmartrican · 3 years ago
I wonder if IBM can offer a mainframe in the cloud. Its everything a mainframe provides but all off-site and priced for my size.
ghaff · 3 years ago
You mean timesharing? :-)

I suspect that for most organizations that use mainframes today, there are so many integration points and so much data is involved that the economics that drove a lot of early-on timesharing no longer apply.

p_l · 3 years ago
It's one of the major ways to get a mainframe these days, even companies you'd expect to have one on premises might actually have a lease on one running in IBM datacenter with VPN to internal network.
madmulita · 3 years ago
I believe the real reason for its survivability is the fact that you can pull a tape from the seventies and those binaries will run without any modification. It's not only that you can easily recompile your COBOL from the '70s; the binary is still compatible. You've never been pushed to migrate to another technology. Imagine the effort and 'knowledge' embedded in those evolved programs. The banks themselves don't even know how many laws and regulations they have encoded in there, and they're aware of that.

As someone stated in another comment, the software is the impressive part.

technofiend · 3 years ago
This is both good and bad. You have to consider bugs a kind of feature like anything else. One of my coworkers showed me a bug report he'd opened 30 years prior that IBM still refused to fix because people depended on the broken behavior. So porting off the mainframe also means bringing along those quirks or rewriting to specs that provably don't regress performance and behavior. Writing or rewriting software is easy, but migrations despite how they first appear are not really "green field" development.
lizknope · 3 years ago
I've never used or even seen a mainframe in 26 years in the tech industry. My brother in law works for a bank and basically the business runs on it.

The hardware and software are certainly impressive but does anyone use a mainframe for a new project and not just upgrading or expanding an existing system?

I'm in integrated circuit design and we have compute clusters with thousands of CPUs. For some jobs we use a single machine with 128 CPUs and over 2TB RAM. Some steps get split over 30 machines in parallel. All of this EDA / chip design software migrated from Sun and HP workstations in the 1980's and 90's to Linux x86 in the 2000's. I think some of it ran on mainframes in the 70's but does anyone use mainframes for scientific style calculations or is it just financial transaction / database kind of stuff?

Hikikomori · 3 years ago
You'll most likely find it in large companies that operated in the 60s or 70s that haven't switched to anything new, mostly because their core business runs on it.

I know of two companies, and at least one still uses it; I had several summer jobs there. They make sheet metal rolls by flattening out train-car-sized hunks of steel, and while the mainframe system didn't run the machines (operators and PLCs handled that), it kept track of everything: inventory, logistics and planning. I used it to plan rail shipments, deciding where to put each roll of sheet metal on a train, and loaded them up.

tristor · 3 years ago
Oh, fun times. I've been on the other side of that business in a past life. I had to "revive" a business-critical program written in VB3 (yes, for Windows 3.x) after a computer migration. It calculated the weight of an aluminum or steel coil/roll from its dimensions so it could be fed into the PLC for the feeder mechanism at the beginning of a production line that did forming/extruding of metals.

So on one end mainframes, on the other ends software written for DOS and Windows 3.x still being used in (at the time the 2010s) to keep critical infrastructure for manufacturing running.

pdw · 3 years ago
That could have been a mainframe, but I think factories are much more likely to be using AS/400 aka IBM i. That runs on regular IBM Power servers these days.
dboreham · 3 years ago
> I've never used or even seen a mainframe in 26 years in the tech industry.

40 years here. Although in theory I've used a mainframe in college, but it wasn't IBM (Burroughs) and just seemed like "a computer" at the time. I also worked a bit on a terminal emulator for ICL mainframes, so probably at least logged in. So 40 years of saying "I wonder if the mainframe people already have a solution to this?"

Probably mainframes haven't been much used for HPC since the transition to Cray in the 70s.

osullivj · 3 years ago
Also 40 years, 25 in banking. Done several Unix to z/OS, and Windows to VME integration projects.
toast0 · 3 years ago
I started my tech career as a student worker at my school district's office. They had a Unisys mainframe managing student and employee records; around 1998, they replaced the old system which was about 3 feet tall and maybe 20 feet long, with a 4U dual processor Pentium Pro running NT 4 with a Unisys emulator. Seemed to work just about as well, but the operator console seemed a lot less fun. Still interfaced with the giant impact printer to print out grades and payroll.


StillBored · 3 years ago
"Today’s IBM mainframe CPUs are based on a highly evolved version of the POWER architecture that IBM has been developing for over 30 years. "

I've heard a lot of largely clueless people who weren't aware of the differences between the zSeries and pSeries say something like this, but it's generally been entirely false (especially 15-30 years ago, which overlaps with some time I myself spent at IBM). Given the rest of the article I wouldn't presume the author is in this category.

So has something changed? or is the implication stretching the truth? I mean I guess you could strip a POWER core down so it only runs s390 microcode/etc, but that likely won't actually yield a performant machine and the process of evolving it would likely fundamentally change the microarch of whatever RISCish core they started with.

I mean they are entirely different Arches, in the past utilizing entirely different microarches. I can see some sharing of a RTL for maybe a vector unit, or cache structure, or a group doing layout, but that doesn't make the zeries processors any more derivative of POWER than Itanium was derivative of x86, etc.

PS: the bit about z/OS partitions supporting Linux seems like a bit of confusion too. Earlier it's correct about the LPARs being capable of running Linux directly, but z/OS isn't providing the LPAR functionality; it is generally just another guest alongside Linux, z/TPF, and various other more esoteric "OSs" that can run natively. There is a UNIX System Services in z/OS, but that isn't a Linux kernel/etc.

sillywalk · 3 years ago
I wonder if the article meant that IBM had worked on the POWER architecture for over 30 years? IBM did work on the eCLipz Project [0][1], combining / sharing tech from IBM Power/pseries, AS/400/iseries, and Zseries. This was around 2005-ish. I assume that collaboration has continued, but I don't know if that counts as 'based on ...'.

"The z10 processor was co-developed with and shares many design traits with the POWER6 processor, such as fabrication technology, logic design, execution unit, floating-point units, bus technology (GX bus) and pipeline design style, i.e., a high frequency, low latency, deep (14 stages in the z10), in-order pipeline.

However, the processors are quite dissimilar in other respects, such as cache hierarchy and coherency, SMP topology and protocol, and chip organization. The different ISAs result in very different cores – there are 894 unique z10 instructions, 75% of which are implemented entirely in hardware. The z/Architecture is a CISC architecture, backwards compatible to the IBM System/360 architecture from the 1960s. "[2]

[0] https://www.realworldtech.com/eclipz/ [1] https://news.ycombinator.com/item?id=18494225 [2] https://en.wikipedia.org/wiki/IBM_z10

lizknope · 3 years ago
Yeah, I don't know where the author is getting the POWER arch connection.

I thought the IBM Z Architecture was the CISC based System 360 / 390 architecture from the 1960's. At least that is what I remember my one friend who has some mainframe experience was telling me.

rob74 · 3 years ago
> Mainframes descended directly from the technology of the first computers in the 1950s. Instead of being streamlined into low-cost desktop or server use, though, they evolved to handle massive data workloads.

I think the first sentence is 100% correct, but the second one not so much: current desktops and servers (not to mention laptops, tablets, smartphones etc. etc.) evolved from the first microcomputers introduced in the 1970s with the idea of having a computer (albeit initially a not very capable one) that anyone could afford. These then quickly evolved during the 1980s and 1990s to cover most of the applications for which you would have needed a mainframe a few years earlier.

dralley · 3 years ago
It's still somewhat true. Desktop CPUs have hardware acceleration for things like video decoding, mainframes have hardware acceleration for things like encryption / decryption, compression / decompression, fixed-decimal arithmetic, etc.
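For a sense of why fixed-decimal hardware matters to the banks and insurers these machines serve: binary floating point can't represent most decimal fractions exactly, which is unacceptable for money. An illustration using Python's software `decimal` module (on z this kind of arithmetic is an instruction set, not a library):

```python
from decimal import Decimal

# 0.1 and 0.2 have no exact binary representation, so float sums drift:
assert 0.1 + 0.2 != 0.3            # actually 0.30000000000000004

# Decimal arithmetic keeps cents exact, which is the behavior the
# dedicated fixed-decimal hardware provides natively on the mainframe:
assert Decimal("0.10") + Decimal("0.20") == Decimal("0.30")
```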
peterfirefly · 3 years ago
Plenty of x86 CPUs have had crypto instructions in the last decade or so.

https://en.wikipedia.org/wiki/AES_instruction_set

https://en.wikipedia.org/wiki/Intel_SHA_extensions

hinkley · 3 years ago
Centralization versus distribution.

Each person has a box in which they are essentially the sole tenant, versus a big box that has to have a bunch of sophistication to handle multitenancy.