ThinkBeat · 2 years ago
The AS/400 was the most reliable server I have ever worked with. They tended to just work month after month.

I am not saying it is the most reliable server in the world, because there are a lot I have not worked with, but it was far more reliable than anything else I have worked on.

If something did go wrong, they would at times call IBM and ask for service before the customer had any knowledge of an issue.

My mother became the AS/400 sysadmin at her company, and she had no idea about servers or the technical side of computers. She changed the backup tapes on a schedule and that was it.

In many places nobody knew where the box was, since nobody interacted with it on a physical level. Green screen terminals or terminal apps on Windows were the norm.

Some businesses had them for inventory / sales / accounting and so on.

There were a couple of simple games (not from IBM) you could download and play, but it wasn't worth it.

There were some drawbacks, but I'd love to have one to play with.

elorant · 2 years ago
It surely worked, but when it didn't, it took ages to resolve the issue. I remember back in the mid-nineties, IBM brought in a whole team to troubleshoot an issue, and they stayed at the company where I worked for almost a week to find out what went wrong.
Pamar · 2 years ago
Well, exactly the same thing happened in a project I worked on in the mid-2000s.

Except it was not an AS/400 or anything hardware... it was a problem with an Oracle product, and it took a bit over a week, with higher and higher gurus being parachuted in by Oracle Europe.

And in the early 90s I had another similar episode, this time with DEC/VAX. Not sure anymore, but I think they finally had to patch the OS.

My point: nothing made by humans can guarantee 100% uptime. The main difference is how far a vendor will go to solve your problems.

ThinkBeat · 2 years ago
I never saw that anywhere I was, but every system will have its share of issues for sure.

Something similar happened to a client a couple of years ago. They wanted to set up their "devops pipeline" on AWS.

First they tried to solve it in house. After two months they gave up and brought in an AWS specialist. Two weeks later we had two more AWS specialists, including one really expensive one. A month and a half after that we had the pipeline going, but it was fragile as hell.

I have not been able to piece together how it could possibly have taken so long. I did not work on this part of the project myself at all.

Given that the entire back-end was in C#, Visual Studio project configurations and all, it would have taken less than a week on Azure if you kept some of the presets.

But yeah, Azure CI/CD can suck as well, and it's less configurable, which is a curse or a benefit depending on your standpoint.

randrus · 2 years ago
In the 90s I worked across the aisle from our AS/400 dev - after an office power outage we’d spend hours running fsck on our unixen and he’d take a long lunch. Every time.
bongodongobob · 2 years ago
As someone who works heavily in infra, months of uptime is absolutely the norm. If you find yourself having to reboot servers or they are crashing, you're doing something wrong.
Suppafly · 2 years ago
...or they are Microsoft servers, and you still have to reboot them once a month to apply security updates, for some reason.
tyingq · 2 years ago
Some of that reliability is just the homogeneous software ecosystem, where most applications developed for the AS/400 use roughly the same stack for "app server", "database", and so on. That closed ecosystem means core bugs get ironed out via customer feedback pretty quickly. There's a limited number of unique combinations of software parts and config.
senectus1 · 2 years ago
Totally agree. We had one at the mine site I was admin at. It ran JD Edwards ERP. Was rock. solid. In a time when NT4 was king.

Also had a Domino mail server (for Lotus Notes). It was way ahead of its time (at the time) and also super rock solid. Neither ever gave me any trouble... even when the Domino server ran out of disk space, it never died or got corrupted. It just let me know, and when I fixed it, it carried on like there was no issue.

HankB99 · 2 years ago
> There were some drawbacks, but I'd love to have one to play with.

One of the shops I worked at had an AS/400 and related equipment sitting on a pallet by the elevator bank. I could have had it for free.

Drawbacks? I had no way to transport a pallet of computer equipment. And IIRC it required 3 phase power so care and feeding would have been well beyond my means. (But I still have and occasionally use the IBM Model M that they were going to discard and another time I carried a retired Sun pizza box home on the train.)

safeimp · 2 years ago
> IF something did go wrong, they would at times call IBM and ask for service before the customer had any knowledge of an issue.

I always assumed their call-home service was triggered by SNMP, but I may be wrong. Regardless, IBM exposed a lot of metrics via SNMP, so it was always easy to query it for metrics and/or accept traps from their devices/OSes on failures.

chasil · 2 years ago
The modern "i" series machines are just POWER processors on a standard server.

It's unlikely that anything under a 14nm process node is used for the CPU, if that.

IBM has deep knowledge in high-availability systems, but it's unlikely that you could call these more advanced than a modern 5nm Ryzen server.

saiya-jin · 2 years ago
You don't understand where the AS/400's real power comes from: the DB is integrated directly into the filesystem. You optimize your work for how the whole system has been designed, and you get massive benefits and robustness.

The performance difference between junior-level basic code and heavily optimized complex queries, even on just Oracle, can easily be 1000x; just ask any data warehouse guy (I've seen it a few times myself, even though I don't do warehouses). You can maybe add another 0 for the AS/400 in extreme cases.

Yes, if you end up doing things wrong and expect that CPU to do some heavy math, then it will be dusted quickly. Otherwise, not so much.

But this goes against most modern 'SV principles', so I don't expect much love from the younger generations here. Businesses love it though, and secretly wish all IT could be as reliable and predictable, although that ship sailed a long time ago. One of those 'they don't make them like they used to' things.

sillywalk · 2 years ago
POWER 10 is on 7nm.

It also has a ton of RAS features.[0]

ECC beyond SECDED, processor instruction retry, CRC on all fabrics (part of which can fail with the system degrading to half bandwidth), and hot-swap and/or redundant everything, including the LCD panel on the op panel.

I'm actually curious what kind of RAS features EPYC and Xeon have, and hope somebody can link to the info.

[0] https://www.ibm.com/downloads/cas/2RJYYJML

internet101010 · 2 years ago
I use AS/400 in 2024, though not that much anymore, because we only have a few things left in there that haven't been migrated. What we have moved to is less reliable, and getting data out of it is always a Herculean effort that requires a bunch of service tickets and meetings.

One of the great features of the AS/400 is, I think, Shift+Esc, which lets you quickly view the list of tables (or "files", as they call them) being used to populate the current screen. This should be a standard function for Tableau/Power BI workbooks that have live database connections.

I have a love-hate relationship with it. It's like an old pickup with 1M miles that refuses to die.

ako · 2 years ago
How do you use that info? To validate that it's using the correct data, or as a reference when writing a new app? I often wished I had the same when building something on top of SAP/Salesforce/any other app: something that would show the source of every field on a page, ideally also showing the API that would give that data.
internet101010 · 2 years ago
When people say "I want <insert metric> from G42", with G42 being a certain screen, it's helpful to be able to quickly see everything that populates that screen. It's not exact, but over the course of 30 years a lot of tables get created, and 7-8 character limits on table and field names don't help.

I am not sure whether descriptions are required when creating a table, but they are all there in my org. So go to the screen to get the tables, then join that with a "give me the list of all tables, table descriptions, columns, column descriptions" SQL query. EZ.
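As a generic sketch of that join, here is a toy version using an in-memory SQLite stand-in for the catalog. (On IBM i the real catalog views are, if I remember right, QSYS2.SYSTABLES / QSYS2.SYSCOLUMNS, with descriptions in TABLE_TEXT / COLUMN_TEXT; every table, column, and screen name below is made up for illustration.)

```python
import sqlite3

# Miniature stand-in for the system catalog views; only the shape of
# the "list every table/column with its description" join matters here.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE systables (table_name TEXT, table_text TEXT);
CREATE TABLE syscolumns (table_name TEXT, column_name TEXT, column_text TEXT);

INSERT INTO systables VALUES
  ('ORDHDR', 'Order header'),
  ('ORDDTL', 'Order detail');
INSERT INTO syscolumns VALUES
  ('ORDHDR', 'ORDNO',  'Order number'),
  ('ORDHDR', 'CUSTNO', 'Customer number'),
  ('ORDDTL', 'ORDNO',  'Order number'),
  ('ORDDTL', 'ITEMNO', 'Item number');
""")

# Given the tables behind a screen (e.g. found via Shift+Esc), pull
# every column plus its human-readable description in one query.
screen_tables = ("ORDHDR", "ORDDTL")
rows = con.execute("""
    SELECT t.table_name, t.table_text, c.column_name, c.column_text
    FROM systables t
    JOIN syscolumns c ON c.table_name = t.table_name
    WHERE t.table_name IN (?, ?)
    ORDER BY t.table_name, c.column_name
""", screen_tables).fetchall()

for row in rows:
    print(row)
```

With cryptic 7-8 character names, those description columns are often the only documentation you get, which is why the join is so handy.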

After seeing how useful these descriptions are, I firmly believe that every organization should make them a requirement even if the RDB itself does not. It's a simple form of documentation at creation time that provides context while still letting you enforce rigid naming structures.

ljoshua · 2 years ago
What I find most interesting from this clip is that, if you just swapped out some of the product names and acronyms, it would largely sound like a technical presentation that could be given today. “Here’s a database, we want to connect it with SQL to another product for visualizing the data, and we’re going to add some automation to take care of business process X.”

Surely things have gotten faster and (in some cases!) more efficient, but we are doing the same thing 30 years later that we were excited about in 1993. In a way that’s both comforting and darkly funny.

duxup · 2 years ago
Choosing software, prioritizing the right things, and connecting it all is hard. We're not good at it, and we have to re-connect the dots every time we shuffle the deck chairs.

I think I've worked on the problem of "we want our sales guys to get an email whenever the sale they made ships" a countless number of times. Without fail, the first and biggest hurdle is that the customer does not capture or connect their salespeople's email addresses in any kind of reliable way. I'm sure they do when it comes to paying bonuses, but not for sending the email when the thing ships. It makes no sense, it should be easy, but they don't. So we work on it on and on, and then again with the next customer, and so on.

And honestly I'm not sure any of these sales folks read the dang email.

StevePerkins · 2 years ago
The Mother of All Demos was given in 1968, and still warrants its moniker today.

https://en.wikipedia.org/wiki/The_Mother_of_All_Demos

asveikau · 2 years ago
The fact that this is server-side and data storage probably contributes to this. Consumer-grade client devices in 1993 were large and bulky and didn't even do multitasking or memory protection very well. Now they're cheap, small, ubiquitous, and have lots of very high-level programming interfaces.
mdgrech23 · 2 years ago
I work in the auto industry. Cars have changed so much but at the end of the day they're still just internal combustion engines that spin wheels. I suppose EVs represent true innovation but I'm still dubious on their future.
lizknope · 2 years ago
https://en.wikipedia.org/wiki/Cadillac_Type_51

> The 1916 Type 53 was the first car to use the same control layout as modern automobiles- with the gear lever and hand brake in the middle of the front two seats, a key started ignition, and three pedals for the clutch, brake and throttle in the modern order.

I've seen a clip on Top Gear where they drove some early cars and the controls were completely different like levers for acceleration and braking.

How long would it take someone familiar with this 108 year old Cadillac to learn how to drive a modern car from point A to point B? 1 minute?

douglasisshiny · 2 years ago
Why are you dubious about the future of EVs?
Max-q · 2 years ago
Now that internal-combustion-powered cars are down to less than 20% of new car sales in quite a few countries, and have been for several years, I think we can safely say that the EV is not a fad, and that fossil-fuel technology is going the same way the steam engine went in the 60s. The modern electric motor is a superior technology in almost every way. The battery, however, is just good enough, and for the replacement to be complete, better battery technology will be needed. For those last 15-20%, the energy density of Li-ion is just not good enough.
eitally · 2 years ago
Honestly, the world was simpler then (with fewer layers of abstraction separating the database from the presentation) and I suspect the majority of the time it was far more efficient then than now.

... speaking as someone who worked in enterprise BI, where data sources were primarily Progress databases on AS/400, from around 2000 to 2010, when we finally migrated to a Linux / PostgreSQL stack.

sillywalk · 2 years ago
Out of curiosity, why weren't you using the built-in DB2 on AS400?
schoen · 2 years ago
I had a summer internship in high school with an AS/400 application development shop. I found the machine and its development environment annoying and unpleasant compared to Unix, but the support from IBM was incredible. Absurdly bureaucratic, but also so thorough and detailed and accessible.

I still have this memory that IBM sent out an Engineering Change for the AS/400 that consisted of a twist tie to help customers who had purchased an Ethernet card for it coil their Ethernet cables more reliably. (The twist tie of course then having a specific IBM part number.) I would love to be able to substantiate this memory.

IBM was also seemingly very open to supporting new technologies, both hardware and software, on the AS/400, including some that were invented decades after the machine was introduced. Usually for a fee, of course!

easton · 2 years ago
IBM lists a cable tie as a part for Power5 here: https://www.ibm.com/docs/en/power5?topic=catalog-system-part...

I don't know if 9119-590 is compatible with AS/400 though.

neilv · 2 years ago
That twist-tie ECO process sounds aerospace-grade.

I saw things like that from IBM, just from their later Power AIX workstations.

For mission-critical computing companies, there was lots of process intended to ensure nothing slipped through the cracks.

The first memory that comes to mind was actually DEC (or was it HP?), who, to send a single sheet of paper (e.g., for a license key), would routinely use an entire shipping box. Paper would be shrink-wrapped or poly-bagged, with a sheet of cardboard. Plus a packing list, itemizing not only that one sheet of paper, but also itemizing all the shipping supplies to be used.

Not very efficient in some sense, but if it avoided a single mission-critical incident for a customer, I suppose it was worthwhile.

euroderf · 2 years ago
It would certainly put a roomful of people on alert that something important was up. Making it harder to foul up.
bank_daddy · 2 years ago
I supported core banking software and did some RPG II development on the AS/400 in the early 1990s. That system lives on in banking and has since been rebranded to iSeries and then Power (and the OS/400 operating system rebranded to "i").

Chances are if you bank in a community or regional bank, the core banking software is run on Power hardware. Long live the AS/400!

sakopov · 2 years ago
Yep, banks and insurance companies love the AS/400.
EricE · 2 years ago
Casinos too! IBM sponsored an AS/400 lab at the Clark County (Las Vegas) community college in the 90s when I went there.
tr33house · 2 years ago
In '93, I was a baby. I'm now at least a senior engineer in many orgs. It's impressive, at the very least, that Satya stayed at Microsoft that long.
Cthulhu_ · 2 years ago
It feels like few people stay at one company for more than a few years these days, but then conversely, it also feels like companies are set up in a way that makes most people replaceable.

Note that my take is biased, I've been a "consultant" for most of my career which is a glorified temp, and you end up in projects and organizations that hires temps. I tried an old fashioned product company once, it wasn't for me (nothing in common with my colleagues who were 20-30 years my senior, and they and the company were happy just plodding along until retirement)

mynameisash · 2 years ago
When I moved to the PNW, it was for Amazon, and during my interview loop, I asked everyone, "How long have you been at the company?" They all had pretty much the same answer: three months, five months, eight months, and some that had been there for a few years.

But I decided to get out of Dodge and interviewed at Microsoft, and I asked everyone the same question. The responses were shocking to me: five years, ten years, 12 years, 22 years. One person even told me that he hadn't been with the company very long: only 18 years. And he wasn't being coy.

I've been at Microsoft for more than ten years, and I still feel like the new guy. I started at the tail end of the Ballmer days, and I'm sure it was a real grind back then, but I'm glad to see a company that -- in my experience -- treats people well enough that they'll stick around.

greggsy · 2 years ago
In complex technical environments, it often doesn’t make sense to have an in-depth expert assigned to every system. You get a consultant in to set it all up and get the gears moving, but you can often get less experienced, or more versatile, people in to keep it running.

Also, the demand for skills has been silly for the past 20-odd years, so there's less incentive to stick around, other than money. Loyalty rarely pays off, unless you're talking shares.

opportune · 2 years ago
Microsoft is one of the few companies that seems to do a good job allowing employees to rise through the ranks from top to bottom. Most of the Partners I have met were cultivated internally. Facebook is also like this, with the added bonus of allowing much faster progression for high performers than most other companies (I know someone who became an E6 Engineering Manager/Tech Lead 3 years out of college. I don’t think it was necessarily for “bullshit” either, his work was very fundamental and important to the company).

But this is rather rare and most companies have a soft ceiling for growth internally. At Google for example, for years they have been filling most Director positions externally, and so most employees find it very hard to get there and progress past that. Progression is also often subject to norms that make the sheer number of promotions required to make it high up in the company impossible.

robertlagrant · 2 years ago
> but then conversely, it also feels like companies are set up in a way that makes most people replaceable

How else would you set up a business that's robust to people moving on to new things?

ponector · 2 years ago
Also, the best strategy for the last 10+ years has been to move around frequently: change jobs every year (or two, if you can change projects within one job). As a result, you get much more salary and experience.

But everyone will bullshit you into staying as long as possible with little salary rise.

But in the current market, it is better to stay longer if you have an "okay" project.

vsnf · 2 years ago
What's even more impressive than his tenure is his trajectory from sales grunt to CEO.
TillE · 2 years ago
I dunno exactly what a "technical marketing manager" is, but Nadella had a master's in computer science at that point, so I doubt he was just a sales guy.
ryandrake · 2 years ago
That's what struck me, too. Here's a guy who looks and talks like some rando young, really smart technical dude, like the hundreds I've met over my career. He doesn't have those Ivy Leaguer mannerisms; he's not some tweed-blazer-wearing, popped-collar McKinsey banking consultant, who you always expect to end up as all your C-level execs. The guy doing this demo could have been you or me. Yet here I still am in my late 40s, still an IC worker bee at the bottom of the org chart, and here he is running the most valuable company in the world. How can you not believe in randomness?
mytailorisrich · 2 years ago
It's actually a tried-and-tested way to go up the ladder.
tambourine_man · 2 years ago
Is it? It really depends on the organization. I've seen leadership being hired from the outside many times and it's devastating for morale.
hulitu · 2 years ago
Truly impressive, taking into account Microsoft's toxic culture of disposing of those who did not perform so well after the yearly review.
beastman82 · 2 years ago
He seems pretty polished even at 26, and he's CEO now, so I doubt they were an issue at any point.
osrec · 2 years ago
Impressive in that he was probably quite political and thick-skinned?
burnerburnson · 2 years ago
Personally I find it depressing. He could have retired years ago to work on a passion project. Why is he still dealing with the soul-sucking internal politics of a mega-corp? Unless he somehow enjoys walking on egg shells all day, it doesn't make sense to me.
KMag · 2 years ago
I was surprised that Sun/Oracle's JVM (or Apple, after their 3rd architecture migration) never took a page from the AS/400's TIMI (Technology Independent Machine Interface) and compiled an architecture-independent representation to native code at installation time.

As the Android Runtime later demonstrated, nothing prevents you from distributing an architecture-independent representation, AoT-compiling to native code at installation time, instrumenting the native code, and then re-optimizing at runtime and/or in a background batch process if your instrumentation statistics deviate substantially from what was available the last time you re-optimized.
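That loop (ship a portable representation, AoT-compile at install time, instrument, and re-optimize when live statistics drift from the profile) can be sketched as a toy in Python. Nothing below mirrors any real runtime's API; every name and threshold is invented purely to show the control flow:

```python
# Toy model of the ship-portable-IR / compile-at-install / reoptimize
# pipeline. The "IR" is a list of op strings and "native code" is just
# those ops annotated hot/cold; only the decision logic is the point.

PORTABLE_IR = ["load x", "add y", "store z"]  # architecture-independent form

def aot_compile(ir, profile):
    """Pretend to lower portable IR to 'native' code, marking the op the
    profile says is hottest so later stages could specialize it."""
    hot = max(profile, key=profile.get)
    native = [f"{op} [{'hot' if op == hot else 'cold'}]" for op in ir]
    return {"native": native, "profile_at_compile": dict(profile)}

def needs_reoptimize(binary, live_profile, threshold=2.0):
    """Recompile when any op's observed frequency deviates from the
    compile-time profile by more than `threshold` times."""
    baseline = binary["profile_at_compile"]
    for op, count in live_profile.items():
        if count / baseline.get(op, 1) > threshold:
            return True
    return False

# Install time: compile against a flat default profile.
binary = aot_compile(PORTABLE_IR, {op: 1 for op in PORTABLE_IR})

# Runtime: instrumentation shows "add y" is far hotter than assumed,
# so a background pass recompiles with the live statistics.
live = {"load x": 1, "add y": 10, "store z": 1}
if needs_reoptimize(binary, live):
    binary = aot_compile(PORTABLE_IR, live)

print(binary["native"])
```

A real system would of course compare much richer profiles and lower to actual machine code, but the "recompile in the background only when the stats deviate substantially" check is the same shape.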

Apart from the extra disk space for both bytecode and native representations, something like TIMI allows for the best of both worlds as far as AoT and JIT. (It's also nice that TIMI doesn't require a garbage collector to be running.)

I'm not aware of modern AS/400 dynamically re-optimizing binaries, but doing so wouldn't break anything. Given the conservative nature of mainframe users, I imagine dynamic re-compilation would need to be an opt-in feature.

skissane · 2 years ago
> I was surprised that Sun/Oracle's JVM (or Apple, after their 3rd architecture migration) never took a page from the AS/400's TIMI (Technology Independent Machine Interface) and compiled an architecture-independent representation to native code at installation time.

GraalVM supports AOT compilation – https://www.graalvm.org/latest/reference-manual/native-image...

But, I think "TIMI" has significantly less value today than it once did. Due to PASE (the AIX compatibility environment), there is ever more code on contemporary IBM i systems which runs outside of the MI bytecode layer. Shifting IBM i to something other than POWER would require recompiling all that code from source.

sillywalk · 2 years ago
Beyond recompiling, wouldn't porting 'i' to another architecture, say x86-64 or ARM64, require adding some sort of PowerPC AS tags-active mode - instructions to set the "this is a valid pointer" bits?

RandallBrown · 2 years ago
If I'm understanding correctly, Apple actually did do something similar with Bitcode: the developer would submit one app as serialized LLVM intermediate representation, and Apple would recompile it and serve the correct version for your architecture in the App Store.
Koshkin · 2 years ago
For those who would like to experience an AS/400:

https://pub400.com/

rootusrootus · 2 years ago
If you're really into it, there are usually a few on eBay. A few years back I almost picked one up for my dad as a fun gift; he worked for IBM for 25 years, and many of his later years were focused on the AS/400. He spoke quite fondly of it.
nxobject · 2 years ago
I wonder what the licensing situation would be – especially if they'd wiped the disk. Although I'm now tempted to look that up...
sillywalk · 2 years ago
Here's a deep dive book on the architecture of the AS/400 up to the PowerPC port, written by Frank Soltis, one of the main people behind the system.

https://archive.org/details/insideas4000000solt

WorksOfBarry · 2 years ago
Combine that with [1] and you're set for development.

[1] https://marketplace.visualstudio.com/items?itemName=HalcyonT...