glangdale · 7 years ago
There was a strange and mutually self-supporting pair of ideas in the Plan 9 community at the time:

1) "Shared libraries are bogus."

2) "Anyone who likes normal-looking user interfaces rather than plain boxes with text in them is a poopipants."

Both of these propositions are contentious to say the least, but what bothered me was that the two propositions were mutually supporting while being (to my mind) orthogonal. The most obvious and compelling examples of shared libraries on the Unixen at the time were the various UI libraries (Motif and other abominations; all of them huge and unwieldy). It seemed necessary to accept that these libraries were obviously Completely Unnecessary to buy into the Plan 9 idea that shared libraries didn't do anything worth mentioning.

I'm sure it's possible to design a better UI library (or maybe even a wacky user level file system for user interfaces; in fact, my honours project in 1993!) but at the time the way people made interfaces that looked vaguely like what other people expected computer programs to look like was to use big-ass shared libraries on Linux.

This was also the way (dragging in one of those godawful blobs like Motif etc) that anyone might have exerted themselves to port across a (not completely pitiful) web browser, but the degree of aggressive disinterest in supporting Other Peoples Code was extreme (Howard Trickey's "ANSI POSIX Environment" or "APE" didn't get much love as far as I could tell).

It was quite undignified to watch people struggling with text-based web browsers or firing them up on non-P9 boxes because of the inability to support the web at the time.

brianberns · 7 years ago
> "Shared libraries are bogus."

Are you implying that "shared" = dynamically linked? Was there not a version of Motif that could be statically linked?

In the 90's, the most popular UI library for Windows was MFC (Microsoft Foundation Classes), which could be linked either statically or dynamically.

mwcampbell · 7 years ago
For the most part, MFC was and is a thin wrapper over the user32 and comctl32 libraries that ship with Windows. So the true Windows equivalent of statically linking Motif would be to statically link user32 and comctl32, and that's impossible.
pjmlp · 7 years ago
Actually it was OWL and VCL, until Borland lost their way and we were forced to migrate to MFC.
nemetroid · 7 years ago
"Shared library" is a synonym for "dynamically linked library".
pmoriarty · 7 years ago
"Was there not a version of Motif that could be statically linked?"

The GP seems to be confused about the meaning of "shared library", and seems not to be aware of the existence of static libraries -- which would (as you point out) be perfectly usable in the case of Motif.

astine · 7 years ago
It seems to me that the issue isn't between shared code and no shared code, but between statically and dynamically linking that code. It's a question of bundling dependencies. You can generally statically link a GUI toolkit; you just have to deal with the extra disk and network usage. You can dynamically link the same library, but then you have to deal with all the familiar dependency issues you've probably dealt with (DLL hell and the like). It's a trade-off, but in either case you're still sharing code.
erik_seaberg · 7 years ago
DLL hell happened because Windows used to lack a package manager, leaving apps to bundle their dependencies and overwrite other versions (without semver names!) in common directories.
tomatocracy · 7 years ago
Static libraries are a potentially bigger problem when your library is tightly coupled to a particular version of some other process (eg an X windows server).

The real distinction in my view is between tightly coupled components and loosely coupled ones. Loosely coupled as a design paradigm is harder to design and maintain for the programmer and in some cases has a performance impact, but a lot nicer for the admin and feels ‘cleaner’ to me, at least.

erikpukinskis · 7 years ago
As a UI person, I think shared UI libraries are bad for usability in many ways. They lead people to design UIs that happen to be made of the widgets that are available, rather than designing something truly suitable to the user task.

This is one of the reasons the web eclipsed native applications. The web only provided the most basic common widgets, so designers were forced to reimplement the UI toolkit, but also given the freedom to have their own take on it.

I personally would prefer to see a UNIX-like UI library made of many composable parts, with independent codebases.

In that world, having a single giant dynamically linked UI blob doesn’t help.

I’m not saying standardization is bad, just that forced standardization at the architectural level is bad.

ridiculous_fish · 7 years ago
The theory of UI libraries is that users can take their knowledge between applications. When you start using a new app, you already know mostly how it will behave, because it shares the UI vocabulary with other apps.

The web has been terribly violent to this idea. Native UIs are expected to support keyboard navigation, keyboard shortcuts, a menu bar, type select, drag and drop, accessibility, scripting... And in any given interaction there are advanced modes: multiple selection, modifier keys, etc.

Hardly any of this works on the web; even if it did, you wouldn't think to try it. Does type select work in Gmail's custom context menu? Can you command-C copy a message in one folder and paste it in another? Would it even occur to you to try those things?

That stuff is ancient I know, and it would be one thing if the web were pioneering new UI vocabulary that displaced the old. But it's not. There's nothing new that has taken hold, no replacement for what is lost. Gmail has its janky, app-specific keyboard shortcuts, which is at least something, but there's no mechanism for it to spread.

We're in a Dark Age. Every web page now has its own custom CSS-style menu that only supports the most basic hover-and-click, and the bar for good UI is just lying on the floor.

int_19h · 7 years ago
As a person who has to use those web apps that supposedly "eclipsed" native applications, I hope this stage will pass, and we're back to the sanity of consistent UI across apps and frameworks.
hawski · 7 years ago
Isn't it also a reason why it's said that accessibility sucks on the web?
pizza · 7 years ago
Just so I'm following properly, do you mean something like how you have {React | React Native | React VR} + a cornucopia of community-supplied custom components? imo it's a system that works well - you have a common system + easily extensible custom bits.

(take my experience with a grain of salt, I've only used React on side-projects, never anything complicated or that I was forced to develop on because of work, and never ran into any perf issues)

pcwalton · 7 years ago
> I personally would prefer to see a UNIX-like UI library made of many composable parts, with independent codebases.

OK, and those parts ought to be shared libraries.

gameswithgo · 7 years ago
you can statically link a ui library.
adriveatrain · 7 years ago
I found during my ten years on Plan9 that it was the most usable OS I have experienced in my 35 years of computing. My only problem with it is that every other platform is now ruined, because I gnash my teeth and say "would have been easy in Plan9".
jolmg · 7 years ago
> my ten years in Plan9

Using it primarily? I would find that very difficult with today's dependence on the web, especially with how complex the web has become today.

Q6T46nT668w6i3m · 7 years ago
I've assumed the 2018 version of the "dynamic libraries are bad" view is that non-operating-system dynamic libraries are bad (and operating systems should provide a GUI library), or that you should just use a web browser or the command line.
sprash · 7 years ago
> 1) "Shared libraries are bogus."

Starwman, nobody said that. Shared libraries make sense when a program wants to load additional code at run time. A classical example is loading and unloading plugins.

For everything else, not so much.

2) "Anyone who likes normal-looking user interfaces rather than plain boxes with text in them is a poopipants."

Most people I know use a tiling window manager with terminals. So plain boxes with text in them seems to make sense for many people.

glangdale · 7 years ago
It's hard to recall conversations from 1994, but I came away with the overwhelming sense that the Plan 9 guys thought shared libraries were bogus - certainly that they were bogus for the non-plugin use case (which is the one that pertains to this discussion; libraries like Motif were not plugins). So you've accused me of raising a strawman (actually a "starwman" - "he'd like to cwome and mweet us, but he thinks he'd blow our mwinds") by raising a point absolutely not germane to the discussion, and then you refute my point with this devastating rebuttal:

"For everything else, not so much".

Most people I know also use the web and menus and buttons and dialog boxes and so forth and expect all this stuff to look vaguely like other computers do. The fact that the Plan 9 folks were trundling over to non-P9 machines to read the web seemed to suggest that they also liked seeing things that weren't just text in plain boxes.

The point remains that these two propositions are both at least contentious, and only by combining these unrelated points could anyone really take Plan 9's approach to shared libraries (at the time) seriously.

theoh · 7 years ago
Here's a page which contains some authoritative discussion of the Plan 9 attitude to shared libraries: http://harmful.cat-v.org/software/dynamic-linking/

Your "most people I know" comment is bizarre: even hardcore terminal users need to browse the web, and it's been years since lynx or w3m could be claimed to be adequate for most web tasks.

But the Plan 9 folks consciously made it difficult to write or port web browsers and other consumer software. I think I remember Tom Duff stating that writing a web browser was a "fool's errand", which I guess captures the sense of heightened seriousness, privilege, and lack of concern for the average mainstream user that informed Plan 9's design.

amalcon · 7 years ago
> Most people I know use a tiling window manager with terminals. So plain boxes with text in them seems to make sense for many people.

As my human-computer interaction prof said back in undergrad: You are not normal. There is a reason that no modern operating system outside the "Other Linux/BSD" bucket ships with this as a default.

hollerith · 7 years ago
Most here seem to know that the motivation for adding DLLs to unix was to make it possible for the X windowing system to fit in the memory of a computer of that time, but many comment writers here seem not to know something that the participants in the discussion that is the OP all knew:

Plan 9 has an alternative method for sharing code among processes, namely the 9P protocol, and consequently never needed -- and never used -- DLLs. So for example instead of dynamically linking to Xlib, on Plan 9 a program that wanted to display a GUI used 9P to talk to the display server, which is loosely analogous to a Unix process listening on a socket.

AnIdiotOnTheNet · 7 years ago
The problem with that is that it's less efficient than a shared library, but still susceptible to the same issue: if you change the interface, things break.

Edit: It does of course get you extra functionality, like being usable over a network connection. Trade offs.

erikpukinskis · 7 years ago
How is it less efficient?
tedunangst · 7 years ago
How is that different than talking to the X11 server over a socket?
hollerith · 7 years ago
I am repeating stuff I learned over the years from internet discussions, e.g., on the 9fans mailing list, rather than from direct experience writing GUIs in Plan 9 and in X. I think when the decision was made to add DLLs to Unix, Xlib, the library a program would use to talk over the socket, was itself too big to fit in memory if a separate copy got statically linked into every program that displays a GUI. (The Wikipedia page for Xlib says that one of the two main aims of the XCB library, an alternative to Xlib, was "reduction in library size".)

I'm not advocating for removing DLLs from our OSes, BTW. Nor am I advocating for Plan 9.

wbl · 7 years ago
One protocol for just about everything. You didn't need Xlib.
arbitrage · 7 years ago
It's not.
malkia · 7 years ago
So basically a sidecar, a service mesh :)
astine · 7 years ago
This sounds a lot like COM, which is one of my least favorite features of Windows programming.
int_19h · 7 years ago
COM uses vtables and normal calls via function pointers, unless the caller and the callee are in incompatible contexts that require marshaling (remoting, different processes, or different threading apartments in the same process).
teddyh · 7 years ago
SunOS before 4.0, when it still used SunView¹ instead of X11, did not have dynamic linking. Hence this email rant by John Rose titled "Pros and Cons of Suns" from 1987 (as included in the preface of The UNIX-HATERS Handbook²):

[…]

What has happened? Two things, apparently. One is that when I created my custom patch to the window system, to send mouse clicks to Emacs, I created another massive 3/4 megabyte binary, which doesn’t share space with the standard Sun window applications (“tools”).

This means that instead of one huge mass of shared object code running the window system, and taking up space on my paging disk, I had two such huge masses, identical except for a few pages of code. So I paid a megabyte of swap space for the privilege of using a mouse with my editor. (Emacs itself is a third large mass.) The Sun kernel was just plain running out of room. Every trivial hack you make to the window system replicates the entire window system.

[…]

1. https://en.wikipedia.org/wiki/SunView

2. https://web.mit.edu/~simsong/www/ugh.pdf

exitcode00 · 7 years ago
Oh my, each app is going to be 20 mb bigger! This mattered 30 years ago, but now I would say we have a huge problem for end users with all of these "package managers" and "dependency managers" getting tangled up because there are 5 versions of Perl needing 3 versions of Python and so on... I would be a much happier Linux user if I were able to drag and drop an exe. 100mb be damned
nwellnhof · 7 years ago
> Oh my, each app is going to be 20 mb bigger! This mattered 30 years ago

CPU caches aren't that big so it still matters today, at least for desktop applications.

bunderbunder · 7 years ago
This seems easier in both Windows and OS X. On both, a native application gets its own directory, and will first look in there for any shared libraries it needs. It gives you a nice middle ground between static linking everything and dynamic linking everything that still avoids the "we have to choose between pervasive system-wide dependency hell and Dockerizing everything" situation that seems to exist on Linux.
bepvte · 7 years ago
That's what AppImage is for, but if every single thing in /bin/ was 100MB you would have an OS much larger than Windows
dtzWill · 7 years ago
This is actually an area of very current research. We have implemented a form of software multiplexing that achieves the code size benefits of dynamically linked libraries, without the associated complications (missing dependencies, slow startup times, security vulnerabilities, etc.). Our approach works even where build systems support only dynamic and not static linking.

Our tool, allmux, merges independent programs into a single executable and links an IR-level implementation of application code with its libraries, before native code generation.

I would love to go into more detail and answer questions, but at the moment I'm entirely consumed with completing my prelim examination. Instead, please see our 2018 publication "Software Multiplexing: Share Your Libraries and Statically Link Them Too" [1].

1: https://wdtz.org/files/oopsla18-allmux-dietz.pdf

acqq · 7 years ago
How does your tool handle dynamic libraries loaded on demand during the life of the program? Specifically, the case where the application, depending on user input, dynamically loads only one out of a set of shared libraries that are all built to be linked with the main application and use the same interface, but are designed to be "the only one" loaded. That is, both the application and each library in the set expect a 1-1 relation (only one library loaded at a time). Edit: OK, reading further into your article, I found: "our approach disables explicit symbol lookup and other forms of process introspection such as the use of dlsym, dlopen, and others."

If you managed to implement that too, then it seems that really big projects could be packed together.

Tepix · 7 years ago
The page is down for me but archive.org comes to the rescue:

https://web.archive.org/web/20190215103117/https://9p.io/wik...

Or if you prefer google groups: https://groups.google.com/forum/#!topic/comp.os.plan9/x3s1Ib...

The headline could use a "(2004)" suffix.

snarfy · 7 years ago
In Linux, if libssl is compromised, you install a new libssl. In Plan 9, if libssl is compromised, you re-install Plan 9. That's static linking for you.
IshKebab · 7 years ago
Yeah, that works if your app is in Debian's software repository (and Ubuntu's and Red Hat's and Gentoo's and so on). If that is the case, it is trivial to update all apps that depend on libssl anyway, even if they use static linking.

In practice it is far easier for a lot of software to distribute Windows-like binaries where all but the most basic dependencies are included (e.g. Flatpak or Snappy). In that case dynamic linking doesn't help at all.

fao_ · 7 years ago
Exactly.

Modern Ubuntu systems rely on Snap or Flatpak for a lot of software. What these systems do (as I understand it) is package a large amount of the dynamic libraries that would be provided by the operating system, and stick them in a compressed file (or virtual file system, whatever).

So what you essentially get is a 200MiB 'binary' without any of the benefits of dynamic linking (being able to swap out a library given a vulnerability without recompiling) OR static linking (a single file, with the extraneous code removed, etc. etc.).

vetinari · 7 years ago
Flatpak has a concept of runtimes, shared among multiple applications, which are basically a well-defined bundles of libraries. So yes, dynamic linking helps there.
ddevault · 7 years ago
The entire plan9 system takes less than 10 minutes to compile from scratch on one core of my 11 year old laptop. OpenSSL alone takes 2-3x that on two cores.
bakul · 7 years ago
On the original Raspberry Pi it took a minute to recompile the kernel from scratch and 4 minutes to recompile all the standard programs. In comparison it took 10 to 11 hours to recompile the Linux kernel. Cross-compiling the plan9 kernel on a 2009-era amd64 computer took 20 seconds. And rebooting took a few seconds.
simias · 7 years ago
I think it's silly to dismiss static linking like TFA seems to do, but I don't think your point is very fair. Assuming that you have a proper package manager, upgrading all applications that link to libssl would definitely be a much larger download than merely libssl.so, but it could be handled automatically and without too much fuss.
AnIdiotOnTheNet · 7 years ago
> In Linux, if libssl is compromised, you install a new libssl.

And then you pray that the interface and behavior have not changed enough to break things that depend on it.

muraiki · 7 years ago
> One of the primary reasons for the redesign of the Plan 9 security infrastructure was to remove the authentication method both from the applications and from the kernel. Cryptographic code is large and intricate, so it should be packaged as a separate component that can be repaired or modified without altering or even relinking applications and services that depend on it. If a security protocol is broken, it should be trivial to repair, disable, or replace it on the fly. Similarly, it should be possible for multiple programs to use a common security protocol without embedding it in each program.

> Some systems use dynamically linked libraries (DLLs) to address these configuration issues. The problem with this approach is that it leaves security code in the same address space as the program using it. The interactions between the program and the DLL can therefore accidentally or deliberately violate the interface, weakening security. Also, a program using a library to implement secure services must run at a privilege level necessary to provide the service; separating the security to a different program makes it possible to run the services at a weaker privilege level, isolating the privileged code to a single, more trustworthy component.

The paper goes on to explain how the various cryptographic services are exposed as a file server. This is the Plan 9 way of doing things: have lots of small programs that talk to one another.

http://doc.cat-v.org/plan_9/4th_edition/papers/auth

joshe · 7 years ago
The new model of One Version, all updated together is interesting in this context. Examples are iOS, Chrome, Firefox and node_modules. All super complicated with many dependencies. Update everything, fix broken stuff. Only maintain the one blessed dependency graph.

If you report an iOS or Chrome bug where you tried to revert a library upgrade and something broke, they'll just mark it "Won't fix: omg never ever look at this".

The dependency graph when everyone isn't updating all at once is brutal. Half of Unix life is/was "well I need to update X, but can't because Y depends on old X. Now we'll just create this special environment/virtualenv/visor/vm with exactly the brittle dependency graph we need and then update it, um, never."

We complain about One Version/Evergreen, and should, but it's got huge advantages. And might be an indicator that testing surface is the real complexity constraint.

One Version's success is a good indication that Plan 9 was at least not totally wrong.

B-Con · 7 years ago
Arch Linux's approach is similar. The only version of the OS that is blessed is the current version. Every package install should come with a full system update. Package downgrades aren't supported.

In the case of an irreconcilable "X requires Z v1 but Y requires Z v2" they fork package Z.

nrclark · 7 years ago
Shared libraries are a pain for sure. They also have a lot of really nice advantages, including:

  - You can upgrade core functionality in one location
  - You can fix security bugs without needing to re-install the world
  - Overall, they take up less disk-space and RAM
  - They can take much less cache, which is significant today
The cache aspect is one that I'm surprised not to see people talk about more. Why would I want to blow out my CPU cache loading 20 instances of libSSL? That slows down performance of the entire system.

shereadsthenews · 7 years ago
That is just not how CPU cache works on multi-user systems. There is no L1 icache sharing between programs.
twoodfin · 7 years ago
You don’t have to go much further down the hierarchy before you’re sharing one copy of common code & static data.
taeric · 7 years ago
I think the dream is you could have one SSL process that others communicated with. Message passing at large.
otterley · 7 years ago
It's not without cost, though. Moving messages around causes CPU data cache pressure and CPU cycles you wouldn't otherwise have spent if you merely referenced shared memory that's mapped into your process.
joejev · 7 years ago
So, like some sort of shared library that my programs dynamically communicate with? How is this functionally different from a shared object?