A lot of people miss the fact that Plan 9 was a real distributed operating system. It's not just UNIX with a couple features ("ooh everything is a file" "ooh UTF8"). You can effortlessly execute any program across multiple hosts on a network. You can use any resource of any host on the network, including files, processes, graphics, networks, disks. It all just works like magic, using a single message-oriented protocol.
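That single protocol is 9P. To illustrate how small it is, here is a sketch (not a complete client, assuming the 9P2000 layout: size[4] type[1] tag[2] body, all little-endian, with Tversion as type 100 carrying the reserved NOTAG) of packing the version handshake:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of 9P2000 framing. Every message is
 * size[4] type[1] tag[2] body, little-endian, and size counts
 * itself. Strings are a 2-byte length followed by the bytes,
 * with no NUL terminator. */

static int put2(uint8_t *p, uint16_t v) {
    p[0] = (uint8_t)v;
    p[1] = (uint8_t)(v >> 8);
    return 2;
}

static int put4(uint8_t *p, uint32_t v) {
    p[0] = (uint8_t)v;
    p[1] = (uint8_t)(v >> 8);
    p[2] = (uint8_t)(v >> 16);
    p[3] = (uint8_t)(v >> 24);
    return 4;
}

/* Pack a Tversion message into buf; returns the total length. */
static int pack_tversion(uint8_t *buf, uint32_t msize, const char *version) {
    uint16_t vlen = (uint16_t)strlen(version);
    uint32_t total = 4 + 1 + 2 + 4 + 2 + vlen;
    uint8_t *p = buf;
    p += put4(p, total);   /* size[4]: whole message, including this field */
    *p++ = 100;            /* type: Tversion */
    p += put2(p, 0xFFFF);  /* tag: NOTAG, used only for version negotiation */
    p += put4(p, msize);   /* msize[4]: largest message we will accept */
    p += put2(p, vlen);    /* version[s]: 2-byte length... */
    memcpy(p, version, vlen); /* ...then the bytes */
    return (int)total;
}
```

Everything a Plan 9 server exports is driven by a handful of such messages (Twalk, Topen, Tread, Twrite), which is what lets one protocol cover files, windows, and networks alike.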
If Linux worked like this today, you would not need Kubernetes. You could run systemd on a single host and it would just schedule services across all the hosts on the network. Namespaces would already be independent and distributed, so containers would only be used for their esoteric features (network address translation, chroot environment, image layers). Configurations, files, secrets, etc would just be regular files that any process in any container on any host could access using regular permissions models. About 50 layers of abstraction BS would disappear.
I think because nobody's actually seen how it can work, they can't imagine it being this simple.
Yes, and it's helpful to remember why it's a distributed system. Plan 9 was created to support people working together in groups at the project or department level. The Plan 9 creators - the original Unix guys - liked the idea of a time-shared computer, where there is just one system to administer and everyone can easily access all the files and other resources. Then it became feasible to use many computers instead of just one, but they wanted to use them with not much more administrative effort than a single time-shared computer, and no additional barriers to sharing files etc. So in the original Plan 9 installations the computers used as terminals were stateless - you could walk up to any terminal and log in to your own customized environment that mounted just the file systems and other resources you wanted.
Also, they made use of specialized computers - the ones with nice displays were terminals, there were compute servers with powerful CPUs and file servers with big disks. Some computers were quite specialized, like the ones with WORM drives that backed the file server's nightly dump (and, later, the Venti archival store), providing seamless automatic backups and even a sort of version control.
Now Plan 9 lives on, used (as far as I know) mostly by lone individuals. So now the Plan 9 terminal, file server, and compute server usually all run on the same computer. It works, but it's not the original vision.
I think one of the reasons Linux but not Plan 9 took off, besides licensing, is that this vision of a project-scale distributed system fell out of style. Many of the people who adopted Linux in the 1990s wanted a largely self-contained computer they could run themselves, they didn't want a terminal to connect to a distributed system. The original Plan 9 stateless terminals don't really fit in a world where everyone is carrying around their own laptop.
So now we have a world with a lot of mostly self-contained individual computers, that use cloud services far away run by huge corporations. The intermediate scale organized around projects and small groups isn't explicitly supported by the computer systems themselves. Plan 9 can live on in this world, but it's not the world it was originally designed for.
Part of that magic is the trust in the network computer. This could work very well for a corporate setting with thin clients working with a distributed cluster of network services where everything is owned by the corporation.
I'm not so sure that this model of trust works with the way computers have evolved since then.
Comparing the issues that, e.g., X11 has against modern workarounds for direct user I/O in games, I also wonder how the security model and the composition of file layers could negatively impact the experience.
Taking the ideas of Plan 9 as inspiration, the more realtime elements could be filtered in the kernel and message-passed to other processes under a single centralized security model. That might also include exposing shared memory via a memory-mapped file, or possibly via a higher-level message-passing abstraction.
> Part of that magic is the trust in the network computer
All network connections are authenticated. Thanks to the way everything in Plan 9 is implemented, using it on a network is closer to how a VPN works: your "view" of the network is of only trusted computers, and traffic to the outside world goes through a machine that can act as a firewall.
> That might also include exposing shared memory via a memory mapped file, or possibly via a higher level message passing abstraction.
Distributed shared memory needs quite a bit more than the simple "read" and "write" primitives that something like 9P provides. You basically need to replicate a low-level coherency protocol in software. Of course, expect it to be quite slow.
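To make the point concrete, here is a toy sketch (hypothetical, modeled loosely on an MSI-style invalidation protocol, not on any real DSM system) of the extra bookkeeping a write requires: gaining exclusive ownership and invalidating other holders' copies, traffic that 9P's read and write verbs have no way to express.

```c
#include <assert.h>

/* Toy sketch of per-page coherence state in a hypothetical DSM layer.
 * Reads and writes are no longer just data transfers: they drive a
 * state machine that generates extra protocol messages. */
typedef enum { INVALID, SHARED, MODIFIED } PageState;

typedef struct {
    PageState state;
    int invalidations_sent;  /* messages beyond plain read/write traffic */
} Page;

/* A read: fetch the page if we hold no valid copy, else serve locally. */
static void dsm_read(Page *p) {
    if (p->state == INVALID)
        p->state = SHARED;  /* a real system would issue a fetch message here */
}

/* A write: must first gain exclusive ownership, invalidating every
 * other node's copy of the page. */
static void dsm_write(Page *p, int other_holders) {
    if (p->state != MODIFIED) {
        p->invalidations_sent += other_holders;
        p->state = MODIFIED;
    }
}
```

The point is only that each write can fan out into invalidation messages, a pattern that has to be layered on top of a file protocol rather than expressed within it.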
>So we went to dinner, Ken figured out the bit-packing, and when we came back to the lab after dinner we called the X/Open guys and explained our scheme. We mailed them an outline of our spec, and they replied saying that it was better than theirs (I don't believe I ever actually saw their proposal; I know I don't remember it) and how fast could we implement it? I think this was a Wednesday night and we promised a complete running system by Monday, which I think was when their big vote was.
>So that night Ken wrote packing and unpacking code and I started tearing into the C and graphics libraries. The next day all the code was done and we started converting the text files on the system itself. By Friday some time Plan 9 was running, and only running, what would be called UTF-8. We called X/Open and the rest, as they say, is slightly rewritten history.
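The bit-packing Ken worked out that evening is what makes UTF-8 self-synchronizing: the first byte's high bits announce the sequence length, and every continuation byte begins with 10. A minimal illustrative encoder for the 1-to-3-byte range (the full scheme extends the same pattern to longer sequences):

```c
#include <assert.h>
#include <stdint.h>

/* Encode one code point r as UTF-8 into out; returns the number of
 * bytes written. Covers only the 1-to-3-byte range for brevity. */
static int utf8_encode(uint32_t r, uint8_t *out) {
    if (r < 0x80) {                    /* 0xxxxxxx: plain ASCII */
        out[0] = (uint8_t)r;
        return 1;
    }
    if (r < 0x800) {                   /* 110xxxxx 10xxxxxx */
        out[0] = (uint8_t)(0xC0 | (r >> 6));
        out[1] = (uint8_t)(0x80 | (r & 0x3F));
        return 2;
    }
    /* 1110xxxx 10xxxxxx 10xxxxxx */
    out[0] = (uint8_t)(0xE0 | (r >> 12));
    out[1] = (uint8_t)(0x80 | ((r >> 6) & 0x3F));
    out[2] = (uint8_t)(0x80 | (r & 0x3F));
    return 3;
}
```

A decoder landing mid-stream can resynchronize simply by skipping 10xxxxxx continuation bytes, and ASCII passes through unchanged; those two properties are a large part of why the scheme won.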
Plan 9 elicits in my head the same kind of thoughts as Lisp -- I find them both extremely appealing on an intellectual level, and can't help but wonder why there isn't more of them around in the "real world".
As a software developer, my main use of my computer is writing / building / running code and surfing the web, mostly in read-only mode but also to interact with others via stuff like slack. I have always wondered how difficult it would be to make Plan 9 a viable platform for these requirements, and why this is not so today (or maybe it is and I just don't know where to look): does the difficulty lie in porting programs that already run on other operating systems? Are there other, deeper reasons? Is it possible today to run stuff like vim, gcc / g++, python, java, node, zig, rust, firefox on Plan 9? If not, is it possible to port these, or are there fundamental architectural reasons against it?
Note: I am willing and happy to try other paradigms, such as acme for editing, but also find it quite baffling that, if it is technically possible, you could not install vim / emacs / vscode alongside acme.
> Can't help but wonder why there isn't more of them around in the "real world".
I think the biggest strength (and weakness) of the ideas present in both Lisp and Plan 9 is their consistency and internal integration. And I see two big challenges with that.
One is technical: it is not easy to reinvent everything in such a way that the result is more consistent than existing systems. If we believe Conway's law, such an effort would require a team as small as possible, optimally just a single person. Note that Plan 9, for example, does not fully integrate: the programming language and standard library are not composed of the same building blocks as the underlying system; there is a divide there.
The second is economic / political:
While such consistency and internal integration is desired by users and developers, it is not very beneficial to business. Imagine if all components were actually integrated with one another. How would management divide that into projects? How would you make marketable products from it? How would you implement your branding and vendor lock-in? Where would SaaS and subscription models fit in?
I cannot tell if you are responding tongue in cheek (in what seems to be a Plan 9 tradition) or being serious. My points are not that any of these tools are better or worse than the others; I am trying to say that I would like to play with Plan 9 as a working environment, but cannot because it lacks most of the tools I want / need to use. I don't use Go. I use C, some C++, and Typescript (so, Node). Why would I have to avoid those? Most importantly, if I have to (seriously) avoid Firefox, what is the alternative that will allow me to surf the modern web, which (again) I NEED to do daily?
If the answer is "nope, Plan 9 is not intended for this type of user" then I guess that's fine, although sad to me, because I cannot play around with something that is appealing to me. And, again, if this is the case, it would be interesting (to me) to understand WHY: why is there no Firefox (or any modern browser) for Plan 9. From another comment, I learned there IS vim ported to it, so I guess that means it is fundamentally possible to port medium-complexity software. Maybe nobody else cares about having these things in Plan 9, which again, is fine. Cheers.
Plan 9 and Inferno are shining examples of not being opened early enough. They could have conquered the world, but their licenses were not open enough at the critical time, and the technically inferior but open Linux ate their lunch, along with their dinner.
I don't understand how "everything is a file" could apply to everything in practice. For example, Linux has ioctl(), which in practice acts like a side channel. Granted, Linux doesn't apply this API philosophy too thoroughly.
I guess the "everything is a file" might have multiple meanings. For example:
(1) Everything is represented by a (file) descriptor
(2) Same as (1) and the descriptor has a file-like API (think read(), seek(), write(), etc.)
(3) Everything is a "byte-addressable blob of bytes"
Meaning (1) is OK. But it says nothing about the API the (file) descriptor itself would use. It could be a fixed set (as in meaning (2)), or vary depending on something else (like the (file) descriptor type).
Meaning (2) looks too restrictive and inefficient to me and is the one I really have trouble accepting as a general OS primitive.
Meaning (3) surely can't be used for everything in practice, right? It's too generic, like "every computer architecture can be emulated by a universal Turing machine." And it also seems too inefficient. But it could be very useful if the blob of bytes had an API like (2) or any other, including an API depending on the "file type".
Is option (3) that folks are meaning when talking about "everything is/should be a file"?
It's more like #2, but without ioctl() and similar brain-damaged abortions. If you open /dev/mouse, for example, you get a stream of events (encoded as blobs of bytes), which you can get by read(), not a byte-addressable blob of bytes.
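For illustration, here is a sketch of parsing one such event. Per my reading of mouse(3), each read of /dev/mouse returns a 49-byte message: the letter 'm' followed by four decimal fields (x, y, buttons, msec), each 11 characters wide and followed by a blank; treat the exact widths as an assumption.

```c
#include <assert.h>
#include <stdio.h>

/* One decoded /dev/mouse event, per my reading of mouse(3). */
typedef struct { int x, y, buttons, msec; } MouseEvent;

/* Parse a single message; returns 0 on success, -1 otherwise.
 * Only plain 'm' events are handled in this sketch (window-system
 * variants of the file can deliver other leading letters). */
static int parse_mouse(const char *msg, MouseEvent *ev) {
    if (msg[0] != 'm')
        return -1;
    if (sscanf(msg + 1, "%d %d %d %d",
               &ev->x, &ev->y, &ev->buttons, &ev->msec) != 4)
        return -1;
    return 0;
}
```

The contrast with ioctl() is the point: the same bytes arrive whether the file is local or mounted from another machine over 9P.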
But lots of things in Plan9 present an interface that isn't just a single file. The 8½ window system, for example, presents not only /dev/mouse but also /dev/cons (character-oriented I/O), /dev/bitblt (to which you write requests for 2-D accelerated drawing operations), /dev/rcons (keystroke-oriented character I/O), and /dev/screen (the contents of the window—which is just a byte-addressable blob of bytes). http://doc.cat-v.org/plan_9/4th_edition/papers/812/ explains in more detail.
And, of course, file storage servers similarly provide an arbitrary tree of files, and when you remotely log into a CPU server, you mount your local filesystem on the CPU server so that processes running on the CPU server can access your local files transparently—including /dev/screen, /dev/mouse, and /dev/bitblt, if you so choose.
That's actually really cool. This democratization of useful information probably opens the door to lots of interesting interactions between distributed systems.
Even on my local Linux system I wouldn't know how to get hold of the mouse data without using an X Window System API (or SDL in a console-only app before X is run).
And what's the advantage vs an API via function calls? It's the same thing, no? Calling 0x42 with a given ABI vs interpreting bytes as an ABI seems oddly similar.
I think "everything is a file" also means: everything is addressable by a file path. Per-process namespaces are part of what makes this possible.
In a way it's similar to HTTP REST, which is also organized by file paths, except instead of the HTTP verbs GET, POST etc. you get open, read, write as your verbs.
It may be useful to see Russ Cox's "A Tour of Acme" video [1]. Acme is a Plan 9 text editor that applies the "everything is a file" philosophy pretty deeply. It doesn't answer your ioctl question (that I remember), but maybe it'll give you a better sense of how other things can be accessed as files.
It's more or less (3). Not everything is a blob, but everything is a stream of bytes. I think the confusion for most of us (me, initially) is that a file suggests a blob that you read in, change, and then write out more or less atomically. But the Unix originators understood files as streams. So, for example, the input stream from your mouse is a file. "Everything is a file" really means everything is a stream.
I agree with you. A type amounts to the sum of operations that are valid on an object conforming to that type.
A file object is a very basic, general type, that allows open, read bytes, write bytes, close, maybe seek, maybe some ops are restricted (read-only, write-only) etc.
I don’t think it is generally appreciated how far it gets you to have a unifying simple interface. You can always add a complex one, you know?
It's interesting that both of you had different answers. I realize the STREAMS interface was a System V thing, but would Unix's "everything is a file" generally be option (2) in the OP's comment then? I've heard the phrase so much and just always assumed it was (2).
This is great news altogether - I've been dabbling with Plan9 for a fair bit (mostly on Raspberry Pis of late as they are nicer "disposable" machines and I have plenty of them), so am hopeful that this will lead to more modern versions (especially something whose UX does not rely on mouse chording, which is a chore on modern machines).
If Inferno had taken off, it could have fulfilled the original promise of Java. Many of their approaches are quite similar, but Inferno also had working relocation of processes between hosts and working IPC ("RMI") out of the box.
What's a good way to try it? Cluster of raspberry pi's, or just any given home-lab setup?
Plan 9 Foundation: https://p9f.org/
Wikipedia: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs
There's still a pretty active community around Plan 9, too.
9Front, a popular fork of Plan 9: http://9front.org/
Interviews with some Plan 9 community members: https://0intro.dev/
Some videos: [A Tour of Acme](https://www.youtube.com/watch?v=dP1xVpMPn8M), [Peertube channel of sigrid's](https://diode.zone/video-channels/necrocheesecake/videos)
I personally learned about it from GSoC many years ago.
https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt
[1]: https://vmsplice.net/9vim.html
>Gcc/g++
Just get plan9's C.
>Python
They have mercurial, so yes.
>Node, Firefox.
Avoid that, seriously.
You have a Go compiler, BTW.
A side channel is then just a directory entry.
[1] https://www.youtube.com/watch?v=dP1xVpMPn8M