kelnos · a year ago
When I was much younger I used to find this funny and entertaining, but nowadays I just find it boring. It always seems fashionable to hate on things, and I'm just tired of the depressing, defeatist attitudes that support that fashion.

I do wonder, though, if this were (re)written today, how much of it would be the same, how much of it would be outdated, and how much new stuff the haters would come up with.

The chapter on file systems is mostly no longer relevant (some file systems do still suck, but the defaults are pretty solid now). NFS still sucks (IMO it sucks more than it used to), but far fewer people need to use it nowadays. C and C++ are still unfortunately prevalent, but there are at least quite a few systems programming alternatives, and they're gaining ground. Sendmail isn't the only game in town anymore, and I expect most outfits use something else these days, and USENET is a distant memory for most people, so there go another two chapters.

But then the terminal/TTY situation has barely changed, and how all that works is just as (if not more) divorced from the reality of daily usage. Security has improved, but most people still have a god-mode root account, and most of the security improvements have come out of necessity; the world of networked computing is much more "dangerous" today than it was in the early 90s. Documentation is still often poor, and many systems still seem designed more for programmers than less-technical users.

I wonder what they'd think of systemd and Wayland!

skissane · a year ago
I think some of its criticisms (see e.g. pages numbered 20-21, PDF pages 60-61) remain valid:

(1) Unlike OpenVMS/TOPS-20/LispMachines/etc, mainstream POSIX file systems lack built-in versioning (there are a small number of POSIX systems which implemented it, e.g. SCO OpenServer's HTFS, but for whatever reason more mainstream systems like Linux or *BSD never did)

(2) Directly related to (1), the fact that the unlink() system call is normally irreversible, and the rm command is (normally) a direct interface to unlink()

(3) The simplicity of the interface between the shell and the commands it runs (just a list of strings, only slightly better than the DOS/Windows approach of a single string), and the potential issues induced by the fact that wildcards are implemented by the shell not the command itself.

Regarding (3), OpenVMS DCL, IBM OS/400 and Microsoft PowerShell are all examples of systems where option parsing/etc happens in the shell, and the command is passed a parsed command line structure instead. However, although they are better in this regard, they have other disadvantages (the first two are super-proprietary; PowerShell is open source, but weighed down by the heaviness of .Net)

I think a lot of historical issues with Unix are due to the fact that shared libraries didn't exist until much later, so seemingly obvious things like "put wildcard parsing in a shared library like getopt() is" weren't possible at the start. Also, Unix has never had any kind of standard structured data format (JSON would work, but it didn't exist for the first 30 years of Unix's existence), which is a problem for ideas like passing command arguments as structured data.

> But then the terminal/TTY situation has barely changed,

POSIX TTYs are a horrific pile of kludges, especially when one considers stuff like ECMA-48 (which isn't technically part of the POSIX TTY stack, but de facto is). Someone should just redo it as something more sensible, like exchanging null-terminated JSON packets. But getting everyone to agree on that is probably too hard.

nusuth31416 · a year ago
From the point of view of an end user, I remember versioning on the Vax using EDT when I spent hours on end entering data from our local emergency department in quite large text files. Over the years, every now and then I have been looking for this functionality on Windows.

Dropbox and OneDrive do this nowadays, in that I can just right click a file and see the different versions and change to a previous one. I work with long documents I write, and this functionality has saved the day a few times.

msla · a year ago
> Unlike OpenVMS/TOPS-20/LispMachines/etc, mainstream POSIX file systems lack built-in versioning

We have git, which is strictly better.

> Directly related to (1), the fact that the unlink() system call is normally irreversible, and the rm command is (normally) a direct interface to unlink()

You'd be complaining about how a really_really_unlink_now_i_mean_it() syscall was irreversible, too.

> and the potential issues induced by the fact that wildcards are implemented by the shell not the command itself.

I like not having to rely on arbitrary commands implementing (or not) wildcards.

(Plus, even in the best of worlds, all commands would implement wildcards by linking in the same library, which brings us back to square one.)

anthk · a year ago
OpenDCL for Unix:

https://github.com/johnsonjh/PC-DCL

patch: https://0x0.st/XoDG.patch

         git clone https://github.com/johnsonjh/PC-DCL
         cd PC-DCL
         wget -O corr.patch https://0x0.st/XoDG.patch
         git apply corr.patch
         make NDEBUG=1 
Enjoy.

eadmund · a year ago
> Also, Unix has never had any kind of standard structured data format (JSON would work, but it didn't exist for the first 30 years of Unix's existence), which is a problem for ideas like passing command arguments as structured data.

JSON is a serialisation of structured data, not structured data itself.

Its data model is not great, either: maps have no inherent canonical serialisation (one has to assert things such as ‘keys are sorted in Unicode lexicographical order’), and there is no way to shadow a map value.

A list-based s-expression format would be preferable, as it immediately lends itself to canonicalisation and associative lists support shadowing (e.g. ((a 123) (b 456) (a 789))).

anthk · a year ago
I think the OpenVMS's DCL interface has a libre implementation. I've seen things like mpsh and the ITS debugger/shell (DDT) ported to Unix...
Bene592 · a year ago
For (1) there's stuff like BTRFS snapshots, or you could use Git on top of the FS
yjftsjthsd-h · a year ago
> (1) Unlike OpenVMS/TOPS-20/LispMachines/etc, mainstream POSIX file systems lack built-in versioning (there are a small number of POSIX systems which implemented it, e.g. SCO OpenServer's HTFS, but for whatever reason more mainstream systems like Linux or *BSD never did)

Linux does have that in NILFS2, it's just that almost nobody cares to use it: https://www.kernel.org/doc/html/latest/filesystems/nilfs2.ht... / https://man.archlinux.org/man/nilfs.8.en

anthk · a year ago
On Unix and the rest of the comment: plan9/9front superseded it well:

- 9p+encryption on top instead of NFS, much better.

- C under plan9 is far better and easier than POSIX C. Also, Go on Unix brought much of that better C philosophy back to Unix as a better systems language.

- Usenet/IRC still work, and you'll find far more trolls on the web.

- The terminal handles most cases better than freezing UIs or Emacs; see my other post. But 9front doesn't use terminals: it's graphical from the start, and composable.

- On security, plan9/9front uses namespaces and factotum plus decoupled servers/devices for hardware, a much better design.

- On documentation, the rest of the OSes have it far worse. But it was much the same with ITS and Macsyma/Maclisp, where you had a reference book rather than starter guides to ease learning the language. GNU Texinfo at least gave us an Elisp intro, and Maxima is far better documented, with on-line guides and examples.

- systemd is a disaster, and Wayland destroys any scriptability/hacks with wmctrl/xdotool/custom WMs/DEs, or something as simple as remapping keys on a broken keyboard (I map the "<>" key to "\ |" because my physical one is broken, and I already have < and > near 'm'), and it works.

jclulow · a year ago
> 9p+encryption on top instead of NFS, much better

NFS with IPsec for authentication and privacy seems similar in principle, with the added benefit that it's widely available.

rho4 · a year ago
Went through the same progression with Dilbert comics. Colleagues with a positive general attitude are priceless.
boxed · a year ago
> The chapter on file systems is mostly no longer relevant (some file systems do still suck, but the defaults are pretty solid now)

Really? Afaik filesystems still do `rm` immediately and with no recourse. GUIs on top like Finder and Explorer do something more sane though, but that doesn't save us terminal users.

POSIX shell expansion is just as crazy as it has ever been too.

Those are the two gigantic foot guns I can recall from memory from having read this 20 years ago.

isametry · a year ago
I know this is a discussion of Unix in general, but on your own Mac, you can get `trash` packages for the terminal [0] [1].

I use the former, I haven’t tried the latter. But afaict, they should be pretty much identical – they both supply `trash <path>` which could, for most intents and purposes, probably be aliased as `rm`.

One thing to note is that none of these tools seem to support the “Put Back” feature of Finder. Trashed files don’t remember their original locations. But I’ll personally still choose that over being nervous before every `rm`.

[0] – https://formulae.brew.sh/formula/trash [1] – https://formulae.brew.sh/formula/macos-trash

msla · a year ago
I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open standards and, ultimately, open source. Compatibility and not being dependent on a single company are good things!
rini17 · a year ago
When the book was written most of the UNIX world was proprietary too, GNU and BSD existed but were marginal.
lubutu · a year ago
> NFS still sucks (IMO it sucks more than it used to)

Any chance you could elaborate?

kelnos · a year ago
Several of the criticisms the book lists are still true today. File locking is unreliable, deletions are weird, security is either garbage (in that you set it up in a way where there's very little security) or trash (in that you have to set up Kerberos infrastructure to make it work, and no one wants to have to do that).

Perhaps I was a bit hyperbolic about it sucking more nowadays. At least you can use TCP with it and not UDP, and you can configure it so you can actually interrupt file operations when the server unexpectedly goes away and doesn't come back, instead of having to reboot your machine to clear things out. But most of what the book says is still the NFS status quo today, 30 years later.

nurettin · a year ago
> C and C++ are still unfortunately prevalent

Come now, they look nothing like they did 30 years ago.

anthk · a year ago
Don't confuse C with C++; C99 covers a lot, and it's 25 years old.
gnabgib · a year ago
(1994) First submitted 11 years ago[0], discussions in 2014[1] (128 points, 50 comments), 2017[2] (382 points, 308 comments), 2019[3] (284 points, 158 comments), 2022[4] (189 points, 86 comments), 4 months ago (141 points, 139 comments) and 28 days ago (52 points, 45 comments)

[0]: https://news.ycombinator.com/item?id=5125613

[1]: https://news.ycombinator.com/item?id=7726115

[2]: https://news.ycombinator.com/item?id=13781815

[3]: https://news.ycombinator.com/item?id=19416485

[4]: https://news.ycombinator.com/item?id=31417690

wolverine876 · a year ago
Nobody on HN submitted the Unix Haters Handbook until 2013? I find that hard to believe.
flanked-evergl · a year ago
Quite often I have had to help someone on Windows, who prefers Windows to Linux, to use their Windows computers properly. I have never seen someone who prefers Windows to Linux help someone to use their Linux computers properly. Anecdotal, sure, but things be how they be.
ndsipa_pomu · a year ago
It's to be expected as Linux is more niche on the desktop and so Linux people (like me) tend to be either enthusiasts or have some expertise with different systems. A similar example would be that you can see more pilots helping people fix their cars than you see a driver helping to fix a plane (despite there being more planes stuck on the ground than there are cars stuck in the sky).
flanked-evergl · a year ago
To be clear, I'm talking about software engineers, which is where the analogy breaks down, because it would be closer to Toyota engineers helping Ford engineers. I don't expect someone who is an accountant to know as much as a software engineer, but if a software engineer tells me they prefer Windows to Linux, yet they can't use Windows, then I suspect their problem is not caused by any aspect of their operating system.
adl · a year ago
I prefer to use Windows as my daily driver. I also help people (friends, family, co-workers, etc.) with Linux (desktop or server) all the time. I have almost 30 years of experience using Linux. (using it since 1995)
gnufx · a year ago
I saved a requirement from an old MIT AI lab job on Usenet, perhaps from the early '90s: "Applicants must also have extensive knowledge of C and UNIX, although they should also have sufficiently good programming taste to not consider this an achievement."
arp242 · a year ago
Dupe:

The Unix-Haters Handbook (1994) [pdf] - https://news.ycombinator.com/item?id=38464715 - Nov 2023 (139 comments)

rawgabbit · a year ago
Is there a similar article that explains the origins of Linux and its design choices?
msla · a year ago
I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open systems and, ultimately, open source. Compatibility and not being dependent on a single company are good things!
spc476 · a year ago
It was printed in 1994. Linux and BSD (either FreeBSD or 386BSD, I can't recall which) where around, but Open Source (TM) as we now know it wasn't. Other, proprietary Unix systems were still viable and in use (AIX, IRIX, SunOS/Solaris, HP-UX and SCO just to name a few). Running Unix was not cheap [1]. Most X servers were proprietary and cost money (I had some friends that sold X servers for a variety of OSes and video cards at the time). The criticisms at the time were, in my opinion, decent enough but not enough to stop the duopoly we have now with POSIX and Windows.

[1] In college, early 90s---I got to use IRIX on a machine that cost $30,000 in 1991. PCs caught up and passed it pretty much by the mid-to-late 90s. Also, I did some consulting work at a bank which used SCO. They paid God knows how much for every conceivable package available for SCO. The base system? Pretty much just a shell and some commands in `/bin`. Compiler? Pay. Network? Pay. TCP/IP for said network? Pay.

WesolyKubeczek · a year ago
> The base system? Pretty much just a shell and some commands in `/bin`. Compiler? Pay. Network? Pay. TCP/IP for said network? Pay.

Vladimir Barmin had a very hilarious writeup about getting SCO to work at all (http://lib.ru/UNIXOID/scomastdie.txt, in Russian).

lelanthran · a year ago
>> proprietary systems got killed by open systems and, ultimately, open source.

> It was printed in 1994. Linux and BSD (either FreeBSD or 386BSD, I can't recall which) where around, but Open Source (TM) as we now know it wasn't.

I think the "open systems" phrase is more important than the "open source" phrase in the GP's comment.

In 1994, Unix systems, even though proprietary, were still more open than the competing systems. Sure, it'll be another 5 years or so before the writing was on the wall for all non-FLOSS systems, but in 1994 it was both a) easier and cheaper to get your hands on a Unix system, and b) easier and cheaper to program it.

Sure, they weren't open-source, but they were a hell of a lot more open to hacking than the (for example) Lisp systems, or VMS, etc.

anthk · a year ago
You could use GNU alternatives for that, most people installed GNU coreutils/*utils, Emacs and GCC and I think Irix supported networking and TCP/IP in base without additions. But, yes, the rest of the Unixen were like that if not worse.
ahefner · a year ago
"were around". Not "where around".

Sorry to nitpick but I've seen this grammatical error in comments here three times just this morning and it begins to grate.

tivert · a year ago
> I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open systems and, ultimately, open source.

Could the reason that idea is so hard to get over be because it's just wrong?

I don't think these people "hated" UNIX because it was "open," I think they hated it because they thought it was bad.

msla · a year ago
They didn't hate Unix because it was open, but their hatred of a more open system does make them look a bit ridiculous: Any system tied to a single company is doomed anyway, either to the company dying or the company discontinuing it and/or turning it into something you can't stomach. (Microsoft lives, but how happy are MS-DOS partisans these days?) It kinda taints their technical points with a whiff of fanboyism, a naïve partisanship and attachment to entities that didn't give a shit about them.
lelanthran · a year ago
> I can't get over the idea that this is a bunch of people who were angry their favorite proprietary systems got killed by open systems and, ultimately, open source.

I think that they were angry that the inferior product was winning/had won. The "worse is better" essay didn't come out until a decade later, IIRC.

To be clear, the Unix systems were worse in all the ways they point out, but better in the way that mattered - multiple companies provided Unix, and the skills from one Unix to another were easily transferable.

The "winning" system won not because it was superior, but because it was easier to hire for. Need an operator for your SCO Unix? Someone experienced with Sun boxes could onboard faster than someone experienced with VMS.

panick21_ · a year ago
It's not as simple as that, really. The X server being worked on so much, and being partially open, was because DEC and co. were in a panic that Sun would turn NeWS into a second NFS.

NeWS also could and did run on some other Unixes.

So really a lot of the history comes down to complex pissing contests between different vendors that eventually escalated into the full-blown Unix wars.

DannyBee · a year ago
They weren't angry their systems got killed by open systems. They were angry that the replacements sucked.
msla · a year ago
They were better in some ways and worse in others. ITS had PCLSRing and no concept of subdirectories, not to mention SIXLTR NAMING; Lisp machines were great as long as they didn't have to be fast or run continuously without GC; VMS was freaking VMS are you serious? The snide derision doesn't help, especially coming from people who were being snide on the behalf of for-profit companies that never gave a shit about them.

Deleted Comment

gattilorenz · a year ago
In my mental image, most Unices used in industry and research in 1994 are not open.

What am I missing?

smackeyacky · a year ago
Your recollection matches mine. I think I might have had Yggdrasil installed on my home PC for doing after hours support on Solaris machines, but back then if I could have afforded a sparc workstation I would have bought one in a heartbeat.

Company owned Unices still dominated the landscape in 1994

Bene592 · a year ago
Nothing, the book is mostly about proprietary Unices
msla · a year ago
Open standards, not open source.
gryn · a year ago
It's not about objectivity, it's about teams/tribes: if another team is winning, then you're losing. Simple as that; you can see it in a lot of places. MMOs, console wars, phones, sports, politics, ...
anthk · a year ago
Well:

- ksh is far better than sh, and I think Perl filled the need for a "medium" system scripting language.

- Usenet is still fun and a far better source than the web/Stack Overflow for some programming languages. Slrnpull is a godsend.

- Current X is far better, but Xpra/X2Go should have been part of X.org; they have far better features over the network.

- GNU Emacs is the MIT/ITS/PDP-10 alternative to Unix's "worse is better"/KISS, but it's slower and error prone, and M-x customize mangles the .emacs file by itself - it deleted my (use-package) functions. If Emacs' customize set its variables inside the (use-package) forms instead, that would be a great start.

We need an "Emacs Haters Handbook" (and I like Emacs itself as a concept, but it needs polishing):

(defun rant-start ()

" - Gnus is dog slow on current-day-sized mailboxes (> 100MB) and will take an hour on big mailing lists/Usenet spools. Mbsync/slrnpull help, but as I said elsewhere, Maildir support in Gnus is broken and will only show some directories, if any - even when 'new', 'cur' and 'tmp' are already there.

- Rmail should have supported Maildir long ago. No, movemail is not an option, and current-sized mailboxes will choke on the Unix mbox format.

- Unfocusing the minibuffer prompt shouldn't cancel it. With a Mastodon password prompt (or any other), switching to a pane/window in EXWM (almost mandatory) forces you to repeat the mastodon.el login process from the start. That's atrocious.

- Displaying (relatively) "big" images is slow, dog slow. IDK how pdf-tools does it (it works really fast and well), but doc-view is a disaster, and reading big CBZ files brings Emacs to a crawl. Inb4 "Emacs is an editor, focus on the text": Emacs ships Calc, which has plotting support via Gnuplot, and OFC it needs a proper image-display method.

- Eww should support minimal CSS rules, to parse at least simple pages such as HN.

- Stop locking on I/O, period.

- The UI, even in the Lucid/Athena port, isn't smooth at all, even with some tweaks, on "legacy" 32-bit machines that were once top-tier, such as N270 netbooks. I'm missing something for sure.

- Emacs notifications shouldn't be bound to D-Bus; the notification system should allow alerting the user with messages and a beep/sound file, or a custom script or Elisp code.

" )

sph · a year ago
Apart from your minibuffer issue, everything else is the fault of third-party packages. Even IO: I'm told you can do async IO in Elisp, but practically no package does it.

So none of those issues are limitations of core.

shiomiru · a year ago
> Eww should support minimal CSS rules to parse at least simple pages as HN.

HN does not need CSS to display more or less correctly, just tables. I hear there is a w3m emacs package, maybe try that?