I know the last Multics site shut down in 2000: DND in Halifax, with 5 CPUs.
1960s - the military realizes that a single computer cannot handle data from different levels of classification. (This was related to planning classified flight operations during the Vietnam conflict: the flights themselves had to avoid enemy SAM sites, and knowledge of those sites was Top Secret, even more secret than the flights, so multiple levels of classification were involved.) Research to solve this problem was done, and progress was underway to build this into Multics... when Unix took off, and distracted everyone. There have been some niche secure systems available, but widespread knowledge of them never happened. Security of that level wasn't seen as necessary, and eventually was seen as impossible anyway. Note that the solution to general purpose secure computing was found, and proven to work, decades ago!
1970s - general purpose personal computing came along, again without security in mind. BBSs arose, along with UUCP, FidoNet, etc. in the public sphere.... ARPAnet in the Military/Educational area.
1980s - the IBM XT (or clone) with MS-DOS and dual floppy diskettes was the pinnacle of secure general purpose computing. The shareware revolution happened, and most PC users were happy to "buy" $2-3 floppy disks in bulk with various programs from strangers at computer shows, and just try things out.
Why was it secure? A floppy diskette full of data is a coarse-grained "capability". You know (because you insert/remove them, and attach write-protect labels) exactly which disks are in the system, you can make backups of them easily, and it's effectively impossible to mess up your computer with a bad program.
You also had BBSs from which you could download software to try out. This was peak computer user freedom, even though the machines were slow and the diskettes weren't perfectly reliable. You could just try things, without worry. Nobody has that freedom any more, no matter what OS they run.
The Windows Era - The adoption of hard drives and GUI interfaces brought an end to users having transparent and full knowledge of where and how their data was stored. The need to "install" software transformed what was once a matter of copying a boot floppy into an impossible-to-replicate system state. Hard drives were expensive, and fixed... you couldn't just copy them freely, like you could with diskettes. This was the first step in the descent.
Still, there were some great tools introduced at this point. With the Mac, you had HyperCard; on Windows machines, you could get Visual Basic or Delphi, and build applications to do CRUD or interact with custom hardware fairly easily. Documentation was included, complete, comprehensive, and amazing.
Then the .NET era happened. This made software slower, there was always a new .NET library to load, and things crashed far more often. While it might have been a good move in preparation for the migration away from the Intel instruction set, that has taken decades, not years, and the framework has been through several incompatible iterations along the way. We lost VB6 and Delphi and Hypercard along the way.
Simultaneously, the Internet was released for commercial use. Eventually, we came to have systems with persistent internet, but operating systems intended for the classroom or small corporate environment. Any thought of security was layered on top, not built in.
Then the web hit, and we shifted from high-performance, easy-to-build-and-distribute desktop applications to a model where everything is shoved through a stateless protocol, across firewalls and proxies, to end users on machines they don't fully control, own, or understand. It's a huge mess, and it can't be cleaned up, because none of the computers at the edges are secure enough to run random code.
We could fix this... and I've been trying to push that message wherever it seems like the ideas might take hold. If we abandoned the flawed concept of ambient authority that underlies Windows, MacOS, Linux, etc., and went with a model that defaults to no access, such as the ever-delayed Hurd, or Genode, then it would at least be possible to get back the ability to run mobile code without risk.
Once that almost impossible task is done, then we can take the code generating tools we built for Windows back in the 1980s-90s, like Visual Basic 6 and Delphi, and recast them to generate code to run directly on the phones, tablets, laptops, desktops, etc. The end user can easily manage security with the powerbox facilities that capability-based OSs provide. (They look just like the file open/save dialogs we're all used to, but then grant the application access only to the files the user chose.)
Note that this is NOT the same as "permission management" on your tablet/smartphone.
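The powerbox idea can be sketched in ordinary code. This is a hypothetical, simplified illustration (the class and function names are mine, not from any real capability OS): the application holds no ambient authority to open paths; only the trusted powerbox touches the filesystem, and the app receives just a handle to the one file the user picked.

```python
import io

class Powerbox:
    """Trusted shell component; the only code allowed to touch the filesystem."""

    def open_for_app(self, user_choice: str) -> io.BufferedReader:
        # In a real capability OS, this is where the open/save dialog runs
        # and the user makes a choice. The returned file object is the
        # capability: it grants read access to exactly this file, nothing else.
        return open(user_choice, "rb")

def untrusted_app(document: io.BufferedReader) -> bytes:
    # The app sees only the handle it was granted: no path, no directory
    # listing, and no way to name or open anything else.
    return document.read()
```

The point of the design is that the familiar open-file dialog doubles as the security decision, so the user never answers a separate permission prompt.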
We could be heading towards a bright secure future, where we all own our own hardware again, and things just work, quickly, without bloat, without virus scanners, the way we want them to...
or not
I think we've got a 0.1% chance for the former at this point in time. I'll do whatever I can to get that up to 0.2%.
Hint: It's not Windows. It's not Linux. It's not Mac OS. It's not FreeBSD. It's not illumos...
Multics (as noted above) was written in PL/I.
"With a single-level storage the entire storage of a computer is thought of as a single two-dimensional plane of addresses, pointing to pages. Pages may be in primary storage (RAM) or in secondary storage (disk); however, the current location of an address is unimportant to a process. The operating system takes on the responsibility of locating pages and making them available for processing. If a page is in primary storage, it is immediately available. If a page is on disk, a page fault occurs and the operating system brings the page into primary storage. No explicit I/O to secondary storage is done by processes: instead, reads from secondary storage are done as the result of page faults; writes to secondary storage are done when pages that have been modified since being read from secondary storage into primary storage are written back to their location in secondary storage."
IOW this is classic VM behaviour. Which bit of the wiki article is revelatory?
(NB. I've actually used multics).
With single level store, the program pages are mapped in, not copied. Writing to the page alters the disk image. All processes running the same program share the memory pages.
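That mapped-not-copied behaviour can be imitated on a conventional OS with a shared memory mapping. A minimal Python sketch (this is the analogous `mmap` mechanism, not Multics itself): a store into the mapped page changes the file's disk image, with no explicit write() issued by the program.

```python
import mmap
import os
import tempfile

def demo() -> bytes:
    # Create a small file standing in for a segment's disk image.
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"hello, segment!")
        # Map the file's pages into the address space (shared, writable).
        with mmap.mmap(fd, 0, access=mmap.ACCESS_WRITE) as m:
            m[0:5] = b"HELLO"   # an ordinary store into the mapped page...
            m.flush()           # ...and the on-disk image now reflects it
        os.close(fd)
        with open(path, "rb") as f:
            return f.read()     # reads back the modified disk image
    finally:
        os.unlink(path)

# demo() returns b"HELLO, segment!"
```

Two processes mapping the same file this way also share the physical pages, which is the same effect the comment above describes for processes running the same program.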
It would be fun to have an FPGA implementation of the hardware, and/or a port of the system to modern (or relatively modern) processors that are generally available (perhaps some x86 processors that support segmentation).
The X86 segmentation facility resembles Multics segments in name only.
Also: PL/I may not have been beautiful, but at least it had a safe memory model!
https://multics-wiki.swenson.org/index.php/Main_Page
There are also several public access Multics systems up and running.
OK, but at the time they started working on Unix, Multics had not yet been delivered. Nor was it clear that it would ever be delivered. So the idea that an OS could be successfully written in a high-level language was not yet proven.
AFAIK only Multics used 4 9-bit characters per 36-bit word; I believe 5 7-bit ASCII characters per word was fairly common later on in the PDP-7/10 lifetime.