gumby · 3 years ago
It's crazy and destructive that we are still using the unix paradigm in the cloud.

In the 70s we had transparent network filesystems, and by the 80s I had a more advanced cloud-native environment at PARC than is available today.* The Lispms were not quite as cloud native as that, but you could still have the impression that you just sat down at a terminal and had an immediate window into an underlying "cloud" reality, with its degree of "nativeness" depending on the horsepower of the machine you were using. This is quite different from, say, a Chromebook, which is more like a remote terminal to a mainframe.

I was shocked when I encountered a Sun workstation: what an enormous step backwards. The damned thing even ran sendmail. Utterly the wrong paradigm, in many ways much worse than mainframe computing. Really we haven't traveled that far since those days. Cloud computing is really still "somebody else's computer."

There's no "OS" (in the philosophical sense) for treating remote resources truly abstractly, much less a hybrid local/remote one. Applications have gone backwards to being PC-like silos. I feel like none of the decades of research in these areas is reflected in the commercial clouds, even though the people working there are smart and probably know that work well.

* Don't get me wrong: these environments ran on what are, by today's standards, small, slow machines, and mostly only on the LAN.

rsync · 3 years ago
"It's crazy and destructive that we are still using the unix paradigm in the cloud."

  # ssh user@rsync.net "test -f fileThatExists"
  # echo $?
  0
... from my cold, dead hands ...

otabdeveloper4 · 3 years ago
"Treating remote resources truly abstractly" doesn't work in practice. Too many points of failure in our systems, and you really, really don't want to paper them over with abstraction if you want to build a fault-tolerant system.
gumby · 3 years ago
That has been true for as long as I have been programming. But:

Similar statements have been made about high-level programming languages. Nowadays most devs don't understand how the CPU works, but write on top of a tower of abstractions and nobody bats an eye. Many of those abstractions are quite complex!

I can imagine that the same could apply to certain kinds of network activities. Look at how people use HTTP as a magical, secure, and robust inter-process communication channel with no understanding of how it works at all.

Lambda is half a baby step in this direction.

Another problem is tollbooths. The phone system uses a lot of bandwidth simply to charge the customer money. My phone company charges me overseas rates if I make a phone call outside the country, even if both endpoints are using WiFi, not the cellular network! I’m afraid of the same with fine-grained and abstract distributed computing, but perhaps the magical hand wave abstractions I posit above can help.

This tollbooth nightmare btw is the dream of the web3 bros.

crazygringo · 3 years ago
S3 is a transparent network filesystem.
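
You can even mount a bucket as a directory if you want the illusion to be complete. A rough sketch using the s3fs-fuse tool (the bucket name, mount point, and credentials file are placeholders):

  # mount an S3 bucket as a local directory over FUSE
  s3fs my-bucket /mnt/my-bucket -o passwd_file=${HOME}/.passwd-s3fs
  # then treat it like any other filesystem
  ls /mnt/my-bucket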

Unix is only the paradigm for computing servers. Which makes sense because different apps have wildly different ways of scaling.

There's no effective paradigm for abstracting away thousands of CPUs in a general-purpose way.

I really don't have any idea what you're looking for here that is possible and that cloud services don't already do.

fragmede · 3 years ago
The paradigm for abstracting away a thousand CPUs is AWS Lambda/GCP or Azure Functions/K8s' implementation of serverless. It's not a total drop-in replacement, because a plain lift-and-shift can't change your paradigm, but cloud functions are very much a Cloud 2.0 (or at least 1.5) paradigm.
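
To make it concrete, the developer-facing surface shrinks to roughly this (a sketch only; the function name and payload are made up, and with AWS CLI v2 the binary-format flag is needed for a raw JSON payload):

  # invoke a function without thinking about the machines underneath
  aws lambda invoke --function-name resize-image \
      --cli-binary-format raw-in-base64-out \
      --payload '{"bucket": "photos", "key": "cat.jpg"}' response.json
  cat response.json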
gwm83 · 3 years ago
S3, yes, a network-accessible FS. Unix - "only" is a telling word - it is the /defining/ paradigm for that. Is there a better one yet? 1,000 CPUs? I mean, uh, Hadoop, Spark, etc. What?
fulafel · 3 years ago
Unix did make this work much better a bit later: organizations used to run NFS and single sign-on with Kerberos on workstations, automated from-scratch reprovisioning of workstations (so you could just reinstall from a tftp server, with org-specific software customizations included, if the previous user had messed the box up), smooth remote access to all machines including GUI apps, etc.
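
From the user's side it looked roughly like this (a sketch; hostnames, realm, and paths are placeholders):

  # single sign-on: one Kerberos ticket, then no more passwords
  kinit alice@EXAMPLE.COM
  # home directories served from a central file server over Kerberized NFS
  mount -t nfs4 -o sec=krb5 fileserver.example.com:/export/home /home
  # GUI apps run on any machine in the org, displayed on your workstation
  ssh -X build-host.example.com xterm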

It just went away due to Microsoft crowding it out.

hinkley · 3 years ago
People forget that a good deal of “cloud” logic existed in a form on mainframes as well.
vbezhenar · 3 years ago
Mainframes are very expensive. You can buy a mainframe with a very fast CPU and RAM interconnect and scale it by buying more hardware. Or you can spend 100x less and buy a number of server blades. The interconnect will be very slow, so you can't just run some kind of abstracted OS; you need to run a separate OS on every server blade, and you need to design your software with that slow interconnect in mind. But in the end it's still 100x cheaper and it's worth it.

Also, mainframes have a growth limit. You can buy very powerful ones, but you can't buy one that's as powerful as an entire datacenter of server blades.

gwm83 · 3 years ago
amen, and those principles still serve us well today. software developers are still as bad at writing code, maybe worse.
m463 · 3 years ago
I completely understand what you're saying.

I'm sort of reminded of how the US government is the worst (except for all the rest), when having an absolute ruler should be so MUCH more efficient. Problems would be fixed by fiat.

Or maybe, why does lisp persist with its horrible user-unfriendly syntax?

:)

I guess we will just have to invent it. (and you should do your part by reminding people with examples of old systems that elegantly solved the papercuts of today)

mellavora · 3 years ago
> why does lisp persist with its horrible user-unfriendly syntax?

why persists lisp if user-unfriendly else horrible syntax?

sounds more like python :)

imwillofficial · 3 years ago
“by the 80s I had a more advanced cloud-native environment at PARC than is available today.*”

This statement is entirely false, as admitted in your footnote.

gumby · 3 years ago
Architecturally and conceptually more advanced. There's a lot of literature about that environment so you can read what I was referring to.
zasdffaa · 3 years ago
Abstractions usually (always?) have a cost because physics.

> The damned thing even ran sendmail

and?

> Cloud computing is really still "somebody else's computer."

That's the definition of 'the cloud'. Unless you run it locally, in which case it's your computer. What's your point?

> There's no "OS" (in the philosophical sense) for treating remote resources truly abstractly

It's unclear what you're asking for. Treating stuff truly abstractly is going to get you appalling and horribly variable scalability. If you're aware of that, why don't you tell us what you want to see instead.

Edit: ok, this is from gumby, now I recognise the name. This guy actually knows what he's talking about, so please tell us what things you would like to see implemented.

gumby · 3 years ago
>> Cloud computing is really still "somebody else's computer."

> That's the definition of 'the cloud'. Unless you run it locally in which case it's your computer.

Forget the stupid framing of idiotic marketers in the early 00s and go back to the original “cloud” definition (that engineers were still using in those ‘00s but was distorted for a buck).

The term was introduced (by Vint Cerf perhaps) in the original Internet protocol papers, literally with a picture of a cloud with devices at the edge. It was one of the revolutionary paradigms of IP: you push a packet into the cloud (network) but don't need to / can't look into it, and the network worries about how to route the packet - on a per-packet basis! You don't say "the cloud is other peoples' routers".

Today’s approach to remote computing requires developers to know too much about the remote environment. It’s like the bad old days of having to know the route to connect to another host.

gwm83 · 3 years ago
Excuse me sir, are you going to pay for those?
gumby · 3 years ago
You don’t pay per packet even though a huge amount of computation is done on devices between you and the machine you’re connecting to in order to transmit each one.

See my comment about tollbooths above.

pjc50 · 3 years ago
> Utterly the wrong paradigm

When you emerge from the jungle, you may notice that not only has UNIX conquered the world, but even "worse" paradigms like Windows and iOS have proliferated. You have to ask why a situation that is supposedly so much worse is so popular: is it really everyone else who is wrong?

pdimitar · 3 years ago
Appealing to the currently incumbent solution is not convincing. Very often the majority of people simply choose the lesser evil -- not what has the best quality or offers the biggest productivity.

Or need I remind you that hangings at sunrise and sunset were commonplace and people even brought their kids to them?

I'm sure back then people defended it as well, and it's likely that if you heard their arguments you'd facepalm.

gumby · 3 years ago
There are still a lot of fossil-powered automobiles being driven around by people. Doesn't mean they are the future.
gonehome · 3 years ago
Have you heard of urbit?
klibertp · 3 years ago
I did. Its use of an intentionally obscure language that makes APL seem readable and consistent in comparison, just because "only smart folks should be able to code in this", is something I simply can't accept. And I love obscure languages!
gwm83 · 3 years ago
Good replies by others here. What's "crazy and destructive" is that you have no idea how computers work today, or that computers are still computers. Your ignorance about how things like Sun workstations relate to literally everything today - I mean, you have no idea about modern computing lol
scandox · 3 years ago
As a developer I have an adversarial relationship with the Cloud even though I use it all the time. The reason for that is money / billing.

As soon as a credit card is in the relationship, exploration and experimentation are over for me.

My local Linux machine may go on fire but it will never send me an invoice no matter what I do.

cube00 · 3 years ago
A cynical view would be that the billing is designed to trip you up.

As an example, if you use Azure with a Visual Studio subscription which includes credit, once the credit is used all of your services are suspended and no further charges are incurred.

As a pay-as-you-go customer this option does not exist. You can set a billing "alert" but that doesn't stop the charges.

fragmede · 3 years ago
It's kind of weird that it's not just a built-in toggle in the billing console, but GCP has the primitives to let you suspend things when a spending threshold is met.

https://cloud.google.com/billing/docs/how-to/notify#cap_disa...
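
The rough shape of it, as a sketch rather than a finished setup (the account ID, project ID, and amount are placeholders; the budget publishes notifications, and whatever subscribes to them pulls the kill switch):

  # create a budget so spending past a threshold generates notifications
  gcloud billing budgets create --billing-account=0X0X0X-0X0X0X-0X0X0X \
      --display-name="hard-cap" --budget-amount=100USD
  # the kill switch a notification handler can pull: detach billing from the
  # project, which stops paid resources (and may tear them down, so use with care)
  gcloud billing projects unlink my-project-id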

gosukiwi · 3 years ago
Yeah that is quite scary. I'd feel much better if I could put a limit on the billing. Just shut everything down if I go beyond X amount of money.
gw99 · 3 years ago
It’s even worse than that, really, because it only takes a small slip into some of the cloud-native services and that adversarial relationship becomes entirely unavoidable and unportable, and you are stuck with it. Which is exactly what the providers demand in exchange for the best cost-benefit relationship in the short term. Of course the human race is entirely cursed by short-term thinking.

The true cost of all technology is only apparent when you get the exit fee invoice.

mgraczyk · 3 years ago
Interestingly, at Google the typical developer workflow (google3) is very cloud native.

Most devs write code in VS Code in the browser. Many (most?) devs don't have a physical desktop any more, just a cloud VM. The code lives in a network-mounted filesystem containing a repository. The repository is hosted remotely (everyone can see edits you make to any file nearly immediately). Builds are done remotely with a shared object cache. Tests typically run in the cloud (Forge).

Facebook has similar infrastructure, although more pieces run locally (builds were mostly done on your VM circa 2020).

For my personal projects, I try to do most development on a cloud instance of some kind, collocated with the rest of the infrastructure.

hulitu · 3 years ago
> Many (most?) devs don't have a physical desktop any more,

That would explain the (bad) design of their software.

CSDude · 3 years ago
I prefer the ability to run and debug locally, coupled with a good IDE. I know VSCode is popular and people customize the shit out of Vim, but IntelliJ just works for me when I'm writing Java, Kotlin, or TypeScript/React. Refactoring and debugging are not comparable. And I know most think it's hard on resources, but we have 200k lines of code and it works very well on a 16GB M1 Air, leaving more than enough spare resources for the system.
mgraczyk · 3 years ago
What? Doesn't even make sense. Why would lacking a physical desktop cause developers to make bad software?
sahila · 3 years ago
Not sure this follows. Their designs might be bad(?), but certainly for any UI-driven applications they do use native and emulated devices.

What OP means is that you ssh into a cloud machine for development.

subradios · 3 years ago
Having heard complaints from Google developers, the problem with this is the limitations of Chromium and the browser more generally. Browsers are utterly terrible at letting users script their own shortcuts, etc.
exitheone · 3 years ago
It's perfectly possible, and in fact quite pleasant, to work with IntelliJ inside Google. At least for JVM languages.

Disclaimer: I work for Google

gwm83 · 3 years ago
The problem is that developers have no idea how to run systems at a scale larger than their local Mac and iPhone.
fdewrewrewf · 3 years ago
> VS code at Google

MS have done a fantastic job of getting developers everywhere hooked on VS Code, whether they are writing for the Windows ecosystem or not.

shafyy · 3 years ago
I also switched all my dev work to Gitpod a year ago and I don't want to go back to developing locally anymore. I curse and swear every time I need to work on a project locally.
vegasje · 3 years ago
I've had interest in trying this dev flow out, but I haven't been able to determine how it would work for multiple projects that work in concert.

For example, a web dashboard project with its own backend that also communicates with an API, which is a separate project.

Does Gitpod (or Codespaces) support projects (repositories) that work together?

api · 3 years ago
I can't think about the cloud without immediately grasping its huge downsides: absolutely no privacy at all, data lock-in, forced migration, forced obsolescence, and things just vanishing if the rent is not continuously paid.

I have files on my computer from the 1990s and 2000s. If we lived in the cloud-centric world those projects that I did back then would probably be gone forever since I'm not sure I would have kept paying rent on them.

There's also no retrocomputing in the cloud. I can start DOSBox on my laptop and run software from the DOS era. That will never be possible in the cloud. When it's gone it's gone. When a SaaS company upgrades, there is no way to get the old version back. If they go out of business your work might be gone forever, not just because you don't have the data but because the software has ceased to exist in any runnable form.

It all seems like an ugly dystopia to me. I don't think I'm alone here, and I think these things are also factors that keep development and a lot of other things local in spite of the advantages of "infinite scalability" and such.

I'm not saying these things are unsolvable. Maybe a "cloud 2.0" architecture could offer solutions like the ability to pull things down and archive them along with the code required to access them and spin up copies of programs on demand. Maybe things like homomorphic encryption or secure enclaves (the poor man's equivalent) can help with privacy.

... or maybe having a supercomputer on my lap is fine and we don't need this. Instead what we need is better desktop and mobile OSes.

furyofantares · 3 years ago
> I have files on my computer from the 1990s and 2000s. If we lived in the cloud-centric world those projects that I did back then would probably be gone forever since I'm not sure I would have kept paying rent on them.

On the other hand, I don't have any of my 90s/2000s projects because I would occasionally lose a hard drive before transferring everything to my new machine, or would occasionally transfer not-everything and then later regret it.

I guess dropbox isn't "the cloud", but I haven't lost anything since I started paying for dropbox when it came out, and things wouldn't just vanish if the rent is not continuously paid.

I sure wouldn't mind more cloud services that improve and add to the local computing experience rather than deliver themselves only through a browser and a web connection.

api · 3 years ago
With local stuff you can lose it. With cloud you will lose it eventually if it's dependent on any form of SaaS that you don't control.
gabereiser · 3 years ago
I agree with you that a cloud 2.0 architecture is needed. I don’t agree with you that you can’t run DOSBox in the cloud. You totally can. In fact, you can containerize a dosbox app and forward the output over websockets or tcp. I have files from 1990s and 2000s as well. I keep backups, as everyone should when dealing with cloud/internet/not-my-machine.
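
A rough sketch of what that can look like inside such a container (assuming dosbox, Xvfb, x11vnc, and websockify are installed in the image; display and port numbers are arbitrary):

  # headless X server for the DOS app to draw on
  Xvfb :1 -screen 0 1024x768x16 &
  # run the old binary against that display
  DISPLAY=:1 dosbox /games/GAME.EXE &
  # expose the display over VNC, then bridge VNC to websockets for a browser client
  x11vnc -display :1 -forever -nopw &
  websockify 6080 localhost:5900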
api · 3 years ago
I can run DOSBox in the cloud. What I can't do is run an old version of Google Docs, Salesforce, Notion, or Alexa.

I can run old commercial software that I paid for in DOSBox or a VM because I have the software, even if it's just in binary form. I have the software and the environment and I can run it myself.

That's the difference. The cloud is far more closed than closed-source commercial software.

I can also run the software with privacy. When I run something locally there's nobody with back-end access that can monitor every single thing I do, steal my data, scan my data to feed into ad profile generators or sell to data brokers, etc.

togs · 3 years ago
I used to agree about paying rent for my old files, until I realized that it costs me anyway to keep those files available over a long time.
deathanatos · 3 years ago
First,

> I never ever again want to think about IP rules. I want to tell the cloud to connect service A and B!

Dear God this 1000 times. My eyes bleed from IP-riddled firewalls foisted upon my soul by security teams.

If I could also never NAT again, that'd be nice.
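
For what it's worth, the closest thing today is referencing groups instead of addresses, e.g. on AWS (a sketch; the group IDs are placeholders):

  # "service B may talk to service A on 443" - no CIDR blocks in sight
  aws ec2 authorize-security-group-ingress \
      --group-id sg-0aaaaaaaaaaaaaaaa \
      --protocol tcp --port 443 \
      --source-group sg-0bbbbbbbbbbbbbbbb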

> Why do I need to SSH into a CI runner to debug some test failure that I can't repro locally?

Hey, I can answer that one. Because an infra team was tasked with "make CI faster" and couldn't get traction getting the people responsible for the tests to write better tests (and often just hit a brick wall getting higher-ups to understand: "CI is slow" does not mean the CI system is slow; CI's overhead is negligible), and instead did the only thing generally available: threw money at the problem.

Now CI has a node that puts your local machine to shame (and in most startups, it's also running Linux, vs. macOS on the laptop) (hide the bill), and is racing those threads much harder.

I've seen people go "odd, this failure doesn't reproduce for me locally" and then reproduced it, locally, often by guessing it is a race, and then just repeated the race enough times to elicit it.
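
The brute-force version of "repeat it enough times" is nothing fancier than this (the test command is a placeholder):

  # hammer a suspected-flaky test locally until the race shows up
  for i in $(seq 1 200); do
    ./run_tests --only FlakyNetworkTest || break
  done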

Also, sometimes CI systems do dumb things. Like Github Actions has stdin as a pipe, I think? It wreaks havoc with some tools, like `rg`, as they think they're in a `foo | rg` type setup and change their behavior. (When the test is really just doing `rg …` alone.)
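
The usual workaround is to make the tool's input explicit so it can't guess wrong, e.g. giving rg a path (or an empty stdin) instead of letting it decide between stdin and the working directory:

  # in CI, stdin is a pipe, so a bare `rg pattern` reads stdin instead of searching files
  rg "some pattern" ./ < /dev/null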

Also, dev laptops have a lot of mutated state, and CI will generally start clean.

Those last two are typically hard failures (not flakes) but they can be tough to debug.

> Do we need IP addresses, CIDR blocks, and NATs, or can we focus on which services have access to what resources?

We need IP addresses, but there's not really a need for devs to see them. Nobody understands PTR records though. CIDR can mostly die, and no, NAT could disappear forever in Cloud 2.0, and good riddance.

Let me throw SRV records in there so that port numbers can also die.
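
SRV records already carry the port along with the host, so a client can discover both in one lookup (sketch; the service name and domain are placeholders):

  # who serves this, and on which port?
  dig +short _myservice._tcp.example.com SRV
  # answer: "10 5 8443 host1.example.com." - priority, weight, port, target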

Because it's bothering me: that graph is AWS services, not EC2 services.

wink · 3 years ago
> Now CI has a node that puts your local machine to shame

A nice problem to have, I only know the opposite side. Developer laptop being twice the speed of CI.

deathanatos · 3 years ago
I'll admit it depends a bit. We're moving to Github Actions and their runners are … slow. There are custom runners, but they're a PITA to set up. There's a beta for bigger runners, but you have to be blessed by Github to get in right now, apparently.
tomphoolery · 3 years ago
Saying that Spotify is "producer-friendly" must be couched in the context of the times. 100% of $0 is still $0, and at the time most people were just pirating music so you weren't making anything off of recordings. If Spotify wanted to give you literally fractions of a cent instead of $0, you were going to take that. I wouldn't say it was ever really friendly to producers...mostly to consumers and, in order to be friendly to consumers, they had to win over record labels. And I think Spotify made a _lot_ of compromises in order to do that, including taking money that should really be going to producers and paying off the RIAA/labels so they continue to put their catalogs on there.

Source: I was a producer when Spotify started and I still am.

dmitriid · 3 years ago
Spotify pays up to 70% of their revenue to copyright holders.

So, your beef should be with them, not Spotify. Which you should already be aware of if you truly are a music producer.

However, it costs you nothing to bash Spotify. It may cost you your career to bash the actual greedy leeches who control the money flows in music.

minusf · 3 years ago
there are countless musicians who own the copyright to their own stuff and get peanuts from spotify even for a non-trivial number of streams.

until spotify pays the artists I listen to from MY subscription, they will not see money from me. bandcamp all the way.

fleddr · 3 years ago
I started my career in simpler times. Developers would produce a zip and hand it over to an admin guy. Dev and Infra/Ops were clearly separated. No CI, sometimes not even a build step.

I understand the power and flexibility of the cloud, but the critical issue is the dependency on superhumans. Consider a FE or mobile app developer. They already greatly struggle just to keep up with developments in their own field. Next, you add this massive toolset on top of it, ever-changing and non-standardized.

A required-skillset overload, if you will. Spotify concluded the same internally. They have an army of developers and realized that you can't expect every single one of them to be such a "superhuman". They internally built an abstraction on top of these services/tools to make them more accessible and easier to use.

bak3y · 3 years ago
And you're glossing over the pain points that drove the industry to coin DevOps - those times when the zip didn't contain everything it needed to run in production properly and the admin guy had to call the dev multiple times in the middle of the night because their app didn't start properly on deployment. Or the install/startup procedure wasn't documented properly. Or it changed and the document didn't get updated. Or there was a new, required environment variable that didn't get mentioned in documentation anywhere. Or a new, required library was on the dev's local workstation and not on the server. etc etc
fleddr · 3 years ago
Never had such issues; you can still do decent coordination on such hand-overs. Honestly, the only issue was the inflexibility of the hardware.
CharlieDigital · 3 years ago
> Developers would produce a zip and handed it over to an admin guy

This is literally what the cloud is now for a fraction of the cost of the admin guy.

Current gen serverless containers basically deliver that promise of ease of use, scalability, and low cost.

For me, Google Cloud Run, Azure Container Apps, and AWS App Runner fulfill the promise of the cloud. Literally any dev can start building on these platforms with virtually no specialized knowledge.

https://www.youtube.com/watch?v=GlnEm7JyvyY
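
The "zip handed to the admin guy" now looks roughly like this on Cloud Run (a sketch; the service name and region are placeholders):

  # build from source and deploy a scale-to-zero HTTPS service in one command
  gcloud run deploy my-service --source . --region us-central1 --allow-unauthenticated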

bak3y · 3 years ago
And implement them poorly and then wish for an admin guy or SRE when things go sideways at 3AM and production is down.
moduspol · 3 years ago
I'm just not sure how you define what goes into that zip in a way that does not make it substantially harder to solve tough problems than it would be to be familiar with cloud services.

Of course it'll cover you up to a point. If it's a CRUD web app that runs on a single server (or multiple stateless ones) and uses a relational database, you can have a zip file whose contents cover your needs. But if you have anything that justifies Kafka, Cassandra, or distributed storage, the "I'll just throw it over the fence to ops" paradigm isn't likely to fit as well.

asim · 3 years ago
I grew up in that era. Where symlinks and a SIGHUP to hot-reload with zero downtime were an innovation!
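
For the youngsters, the whole "deploy" fit in two lines (the paths and pid file are placeholders):

  # point the `current` symlink at the new release, then ask the server to reload
  ln -sfn /srv/app/releases/2023-01-15 /srv/app/current
  kill -HUP "$(cat /srv/app/app.pid)"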
JayStavis · 3 years ago
Maybe there is a name for this phenomenon, but it feels like when we add so much productivity via layers of abstraction, even more person-effort gets allocated to the higher levels of abstraction. Because 1. that's where people are most productive / happy / compensated / recognized / safe 2. businesses can confidently project return on investment

How many engineers get to work on a part of the stack that has some room for fundamental breakthroughs or new paradigms? The total number has maybe grown in the last 50 years, but not the proportion?

It's hard to justify an engine swap once there's so much investment riding on the old one, so just not a lot of people are researching how to make that new OS.

That is until a Tesla comes around and shows the market what could be better/faster/cheaper.

fhd2 · 3 years ago
Probably not the name you're looking for, but I typically talk about this stuff in terms of local and global maxima. Low-risk optimisation efforts typically get trapped on some local maximum over time, while bold efforts get closer to the global one - the minority that doesn't fail, that is. Applies to build vs buy decisions and business in general quite nicely.

From what I've seen, businesses and projects usually become less risk averse the more established they are - they are economically incentivised towards that.

The silver lining for me is that there is always room for disruptors in this scenario.

codetrotter · 3 years ago
> businesses and projects usually become less risk averse the more established they are - they are economically incentivised towards that.

You mean the other way around, right? Businesses and projects usually become more risk averse the more established they are.