Readit News
unused0 commented on We'd be better off with 9-bit bytes   pavpanchekha.com/blog/9bi... · Posted by u/luu
aidenn0 · 7 months ago
The PDP-10 didn't really have bytes; it had 36-bit words.

AFAIK only Multics used four 9-bit characters on the PDP-10s; I believe five 7-bit ASCII characters per word were fairly common later in the PDP-7/10 lifetime.

unused0 · 7 months ago
Multics ran on Honeywell 6180 and DPS8/M machines. They had 36-bit words like the PDP-10. They also had instructions that would operate on 6- or 9-bit characters within a word.
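A quick sketch of the arithmetic behind that: a 36-bit word divides evenly into four 9-bit or six 6-bit characters. The `pack`/`unpack` helpers below are invented for illustration; they only model the bit layout, not the actual Honeywell instructions.

```python
# Illustrative sketch: packing fixed-width character codes
# into a single 36-bit word.

def pack(chars, width):
    """Pack 36 // width character codes into one 36-bit word."""
    assert 36 % width == 0 and len(chars) == 36 // width
    word = 0
    for c in chars:
        assert 0 <= c < (1 << width)   # each code must fit its field
        word = (word << width) | c
    return word

def unpack(word, width):
    """Split a 36-bit word back into its fixed-width codes."""
    n = 36 // width
    mask = (1 << width) - 1
    return [(word >> (width * (n - 1 - i))) & mask for i in range(n)]

# Four 9-bit characters per word (the Multics convention) ...
codes = [ord(c) for c in "unix"]
assert unpack(pack(codes, 9), 9) == codes
# ... or six 6-bit (BCD-style) characters per word.
assert pack([0o77] * 6, 6) == 0o777777777777
```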
unused0 commented on Multics Simulator   multicians.org/simulator.... · Posted by u/teleforce
sillywalk · 3 years ago
Does anybody know how "fast" a real DPS8M was? Compared to say a VAX? Is that a fair comparison?

I know the last Multics site shut down in 2000: DND in Halifax, with 5 CPUs.

unused0 · 3 years ago
The VAX had byte-addressable memory; the DPS8/M was 36-bit word addressable, so I suspect that byte-oriented instructions might have an edge on the VAX. Contrariwise, the DPS8/M memory bus width was 72 bits, so it might have had an edge in double-wide operations. I suspect the dominant factor would be that the DPS8/M used core memory with a 1 µs access time; I don't know the memory bandwidth of the VAX, but I would assume it was faster.
unused0 commented on Multics Simulator   multicians.org/simulator.... · Posted by u/teleforce
sillywalk · 3 years ago
Does anybody know how "fast" a real DPS8M was? Compared to say a VAX? Is that a fair comparison?

I know the last Multics site shut down in 2000: DND in Halifax, with 5 CPUs.

unused0 · 3 years ago
According to Wikipedia, the 6000s ran at about 1 MIPS and the DPS8/Ms topped out at about 1.7 MIPS. Talking with people who worked on Multics, they generally say the 6000s were about 1 MIPS.
unused0 commented on Ask HN: For older devs, do you feel like you have missed your prime time?    · Posted by u/abdnafees
mikewarot · 3 years ago
I'm 59. (I got long COVID, my brain is fuzzy, and I can't work more than a few minutes at a time; all things considered, I'm doing OK, not great.) On the other hand, I feel that the world has taken several wrong turns. I'm interested in correcting course, but feel my ability to help that happen is almost nil. Here's the chain of events, as I see them.

1960s - the military realizes that a single computer cannot handle data from different levels of classification. (This was related to planning classified flight operations during the Vietnam conflict: the flights themselves had to avoid enemy SAM sites, the knowledge of which was Top Secret, even more secret than the flights, and those were different levels of classification.) Research to solve this problem was done, and progress was underway to build this into Multics... when Unix took off and distracted everyone. There have been some niche secure systems available, but widespread knowledge of them never happened. Security of that level wasn't seen as necessary, and eventually was seen as impossible anyway. Note that the solution to general-purpose secure computing was found, and proven to work, decades ago!

1970s - general purpose personal computing came along, again without security in mind. BBSs arose, along with UUCP, FidoNet, etc. in the public sphere.... ARPAnet in the Military/Educational area.

1980s - the IBM XT (or clone) with MS-DOS and dual floppy diskettes was the pinnacle of secure general purpose computing. The shareware revolution happened, and most PC users were happy to "buy" $2-3 floppy disks in bulk with various programs from strangers at computer shows, and just try things out.

Why was it secure? A floppy diskette full of data is a coarse-grained "capability". You know exactly which disks are in the system (because you insert/remove them and attach write-protect labels), can make backups of them easily, and it's effectively impossible to mess up your computer with a bad program.

You also had BBSs from which you could download software to try out. This was peak computer user freedom, even though the machines were slow and the diskettes weren't perfectly reliable. You could just try things, without worry. Nobody has that freedom any more, no matter what OS they run.

The Windows Era - The adoption of hard drives and GUI interfaces brought an end to users having transparent and full knowledge of where and how their data was stored. The need to "install" software transformed what was once a matter of copying a boot floppy into an impossible-to-replicate system state. Hard drives were expensive and fixed; you couldn't just copy them freely, like you could with diskettes. This was the first step in the descent.

Still, there were some great tools introduced at this point. With the Mac you had HyperCard; on Windows machines you could get Visual Basic or Delphi, and build applications to do CRUD or interact with custom hardware fairly easily. Documentation was included: complete, comprehensive, and amazing.

Then the .NET era happened. This made software slower (there was always a new .NET library to load), and things crashed far more often. While it might have been a good move in preparation for the migration away from the Intel instruction set, that migration has taken decades, not years, and the framework has been through several incompatible iterations along the way. We lost VB6, Delphi, and HyperCard in the process.

Simultaneously, the Internet was released for commercial use. Eventually we came to have systems with persistent internet connections, but operating systems intended for the classroom or small corporate environment. Any thought of security was layered on top, not built in.

Then the web hit, and we shifted from high-performance, easy-to-build-and-distribute desktop applications to a model where everything is shoved through a stateless protocol, through firewalls and proxies, to end users on machines they don't fully control, own, or understand. It's a huge mess, and it can't be cleaned up, because none of the computers at the edges are secure enough to run random code.

We could fix this... and I've been trying to push that message wherever it seems like the ideas might take hold. If we abandoned the flawed concept of ambient authority that underlies Windows, MacOS, Linux, etc., and went with one that defaults to no access, such as the ever-delayed Hurd, or Genode, then it would at least be possible to get back the ability to run mobile code without risk.

Once that almost impossible task is done, we can take the code-generating tools we built for Windows back in the 1980s-90s, like Visual Basic 6 and Delphi, and recast them to generate code that runs directly on phones, tablets, laptops, desktops, etc. The end user can easily manage security with the powerbox facilities that capability-based OSes provide. (They look just like the file open/save dialogs we're all used to, but then grant the application access to only those files.)

Note that this is NOT the same as "permission management" on your tablet/smartphone.

We could be heading towards a bright secure future, where we all own our own hardware again, and things just work, quickly, without bloat, without virus scanners, the way we want them to...

or not

I think we've got a 0.1% chance for the former at this point in time. I'll do whatever I can to get that up to 0.2%.

unused0 · 3 years ago
Multics achieved security by building it into the hardware. That made the hardware more expensive and slower. The systems market these days is all about the price/performance ratio, and the collective decision has been not to include security as a performance metric.
unused0 commented on Privilege drop, separation, and restricted-service operating mode in OpenBSD   sha256.net/privsep.html... · Posted by u/brynet
chungy · 3 years ago
Now name the OS that isn't written in C.

Hint: It's not Windows. It's not Linux. It's not Mac OS. It's not FreeBSD. It's not illumos...

unused0 · 3 years ago
IIRC, Primos was written in FORTRAN (with language extensions, including the ability to pass a statement number as a parameter, allowing longjmp()-like behavior).
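That FORTRAN extension amounts to the caller handing the callee a label to jump back to. A rough Python sketch of the same control flow, using an exception as the non-local transfer (the `AltReturn` class and `parse` function are invented for illustration, not anything from Primos):

```python
# Sketch: emulating a FORTRAN "alternate return" (jumping to a
# caller-supplied statement label) with an exception.

class AltReturn(Exception):
    """Raised by a callee to transfer control to a caller-designated point."""

def parse(line):
    if not line.strip():
        raise AltReturn()          # take the caller's "error label"
    return line.split()

def caller():
    try:
        return parse("")           # empty input triggers the alternate return
    except AltReturn:
        return "error path"        # the statement the label pointed at
```

The exception unwinds the stack back to the caller, which is the same effect the statement-number parameter (or C's longjmp()) achieves.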

Multics (as noted above) was written in PL/I.

unused0 commented on Ask HN: Is there any functionality of OSes like Multics, Burroughs MCP you miss?    · Posted by u/kokojumbo
unused0 · 3 years ago
Multics: Security.
unused0 commented on The Talos II, Blackbird POWER9 systems support tagged memory   devever.net/~hl/power9tag... · Posted by u/hlandau
zasdffaa · 3 years ago
I don't understand that. Quite literally it is (or seems to be). In the 'design' section:

"With a single-level storage the entire storage of a computer is thought of as a single two-dimensional plane of addresses, pointing to pages. Pages may be in primary storage (RAM) or in secondary storage (disk); however, the current location of an address is unimportant to a process. The operating system takes on the responsibility of locating pages and making them available for processing. If a page is in primary storage, it is immediately available. If a page is on disk, a page fault occurs and the operating system brings the page into primary storage. No explicit I/O to secondary storage is done by processes: instead, reads from secondary storage are done as the result of page faults; writes to secondary storage are done when pages that have been modified since being read from secondary storage into primary storage are written back to their location in secondary storage."

IOW this is classic VM behaviour. Which bit of the wiki article is revelatory?

(NB. I've actually used multics).

unused0 · 3 years ago
Non-single-level stores are copy-on-demand. Typically, a process runs a program by starting with an empty address space and mapping the executable code into that space. The program is started and the first instruction fetched; since the address space is empty, a page fault occurs and the page is copied in. If the page is modified, that change is local to the process and the disk image is unchanged. Each process has its own copy of writable pages.

With single level store, the program pages are mapped in, not copied. Writing to the page alters the disk image. All processes running the same program share the memory pages.
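The two behaviors above can be approximated with memory-mapping access modes. A small Python sketch using the `mmap` module (the temp file stands in for a program's disk image): `ACCESS_COPY` gives each process a private copy-on-write view, while `ACCESS_WRITE` maps the pages so that stores reach the disk image, roughly like a single-level store.

```python
# Sketch: copy-on-demand vs single-level-store semantics via mmap modes.
import mmap
import os
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"original")           # the "disk image", 8 bytes
    path = f.name

fd = os.open(path, os.O_RDWR)

# ACCESS_COPY ~ conventional behavior: writes stay private to the
# process; the disk image is unchanged.
private = mmap.mmap(fd, 8, access=mmap.ACCESS_COPY)
private[:] = b"modified"
private.close()
after_private = open(path, "rb").read()    # still b"original"

# ACCESS_WRITE ~ single-level store: the pages are mapped, not
# copied, so writing memory alters the disk image.
shared = mmap.mmap(fd, 8, access=mmap.ACCESS_WRITE)
shared[:] = b"modified"
shared.flush()
shared.close()
after_shared = open(path, "rb").read()     # now b"modified"

os.close(fd)
os.unlink(path)

assert after_private == b"original"
assert after_shared == b"modified"
```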

unused0 commented on So! You want to use Multics? (1979) [pdf]   bitsavers.org/pdf/virgini... · Posted by u/khaledh
musicale · 3 years ago
I've had it running in an emulator for a while, but not natively sadly.

It would be fun to have an FPGA implementation of the hardware, and/or a port of the system to modern (or relatively modern) processors that are generally available (perhaps some x86 processors that support segmentation.)

unused0 · 3 years ago
An FPGA implementation is underway.

The x86 segmentation facility resembles Multics segments in name only.

unused0 commented on So! You want to use Multics? (1979) [pdf]   bitsavers.org/pdf/virgini... · Posted by u/khaledh
musicale · 3 years ago
I want to use Multics - but I'm stuck with Unix! (Well, BSD and Linux I guess...)

Also: PL/I may not have been beautiful, but at least it had a safe memory model!

unused0 · 3 years ago
Run your own Multics:

https://multics-wiki.swenson.org/index.php/Main_Page

There are also several public access Multics systems up and running.

unused0 commented on The case for a modern language   jeang3nie.codeberg.page/c... · Posted by u/bshanks
AnimalMuppet · 4 years ago
> Unix was written in C because Thompson and Ritchie had been working on Multics, which was written in PL/1 in the 1960s. So the idea of an OS written in a high level language was hardly obscure and had nothing to do with C.

OK, but at the time they started working on Unix, Multics had not yet been delivered. Nor was it clear that it would ever be delivered. So the idea that an OS could be successfully written in a high-level language was not yet proven.

unused0 · 4 years ago
The Burroughs system for the B5000 was written in Algol and preceded Multics.
