Arch-TK · a year ago
I have a theory that the "worse is better" approach begets an environment where the "worse is better" approach is better.

At least hypothetically, I think there's an approach which is not "the right thing" or "worse is better" but rather more like "the right foundations".

Most interface complexity in my experience seems to be inherited from underlying interface complexity, and it takes a lot of work to fix that underlying interface complexity. This, I think, is where "worse is better" shines. If you try to apply a "the right thing" approach to a system where you're dealing with shitty underlying interfaces (i.e. every popular operating system out there, including every Unix and NT system) you end up with endless complexity and performance loss. So obviously nobody will want to do "the right thing", and everyone who takes the "worse is better" approach will end up way ahead of you in terms of delivering something. People will be happy (because people are almost always happy regardless of how crap your product is).

On the other hand, designing something with "the right foundations" means that "the right thing" no longer needs to involve "sacrifice implementation simplicity in favour of interface simplicity" to anywhere near the same extent because your implementation can focus on implementing whatever interface you want rather than first paving over a crappy underlying interface.

But the difficulty of "the right foundations" is that nobody knows what the right foundations are the first 10 times they implement them. This approach requires being able to rip the foundations up a few times. And nobody wants that, so "worse is better" wins again.

wismi · a year ago
I think there's a lot of truth to this. It reminds me of an idea in economics about the "second-best". From the wikipedia page:

"In welfare economics, the theory of the second best concerns the situation when one or more optimality conditions cannot be satisfied. The economists Richard Lipsey and Kelvin Lancaster showed in 1956 that if one optimality condition in an economic model cannot be satisfied, it is possible that the next-best solution involves changing other variables away from the values that would otherwise be optimal. Politically, the theory implies that if it is infeasible to remove a particular market distortion, introducing one or more additional market distortions in an interdependent market may partially counteract the first, and lead to a more efficient outcome."

https://en.wikipedia.org/wiki/Theory_of_the_second_best

germandiago · a year ago
I hate intervention but I like analysis. Good insights there, I did not know about this theory.
pron · a year ago
That worse-is-better is self-reinforcing and that it's the only stable strategy in an environment with less-than-perfect cooperation (i.e. it's the only Nash equilibrium) may both be true at the same time. In fact, if the latter is true then the former is almost certainly true.

The real question is, then, whether doing "the right thing" is a stable and winning strategy at all, i.e. viable and affordable. As you yourself suspect, the answer may well be no. Not only because it takes a few tries to figure out the right foundations, but also because what foundation is right is likely to change over time as conditions change (e.g. hardware architecture changes, programming practices -- such as the use of AI assistants -- change etc.).

kagevf · a year ago
> the "worse is better" approach is better.

I think this ties back to the idea of "get it working, then once it's working go back and make it fast | performant | better, for whatever meaning of better".

I think much of the consternation towards "worse is better" comes from re-inventing things to achieve the "make it better" improvements from scratch instead of leveraging existing knowledge. Re-inventing might be fine, but we shouldn't throw away knowledge and established techniques if we can avoid it.

zombot · a year ago
That may be one failure mode, but another one is more prominent: Half-assing the next feature is more interesting than going back and making the last feature that you half-implemented actually work. That goes for both commercial and open-source software.
jpc0 · a year ago
I have a question about this premise.

How would you design a network interface using your right foundations model? I'm not talking about HTML or whatnot.

I have some sort of medium, copper, fiber, whatever, and I would like to send 10 bytes to the other side of it. What are the right foundations that would lead to an implementation which isn't overly complex?

Arch-TK · a year ago
Unfortunately, I am not a network engineer. So I don't know how I would approach this problem other than to try to make sure that the resulting hardware is easy to deal with from firmware and software.

I have worked with hardware directly and there is something inherently simple about some hardware APIs versus others.

What's more, the complexity doesn't entirely relate to the underlying hardware or protocol complexity.

The issue is, though, that reality is complicated. This is where the right foundations are important. It's not necessarily that the right foundations themselves have simple internals, but that the right foundations successfully tame the complexity of reality.

The best place to work on developing the right foundations is therefore precisely at such interfaces between the real world and the software world.

shuntress · a year ago
> I have some sort of medium, copper, fiber, whatever, and I would like to send 10 bytes to the other side of it. What are the right foundations that would lead to an implementation which isn't overly complex?

The bane of every project is understanding what you actually need to do.

For example, it is entirely possible that the "right foundation" for your proposed scenario is: hook one end up to a light switch, the other to a light bulb, and hire two operators trained in Morse code. Then, once the 10 bytes are sent, write them their cheques and shut it down.

me_again · a year ago
Not a direct answer, but Ethernet is sometimes brought up as a successful example of Worse is Better. At one point Token Ring was a serious competitor - it had careful designs to avoid collisions when the network was busy, prioritize traffic, etc. But it was comparatively slow and expensive. Ethernet just says "eh, retry on collision." And that simplistic foundation has carried on to where we have a standard for 800 Gigabit Ethernet.
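
A minimal sketch of that "retry on collision" foundation in C - classic truncated binary exponential backoff. Here try_send() and wait_slots() are hypothetical stand-ins for the MAC hardware, not a real driver API:

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical MAC helpers, not a real driver API. */
    extern int  try_send(const void *frame, size_t len); /* 0 = sent, -1 = collision */
    extern void wait_slots(unsigned n);                  /* wait n slot times */

    #define MAX_ATTEMPTS 16  /* 802.3 gives up after 16 tries */
    #define BACKOFF_CAP  10  /* the random range stops growing here */

    int send_frame(const void *frame, size_t len)
    {
        for (unsigned attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            if (try_send(frame, len) == 0)
                return 0;                        /* no collision, done */
            unsigned exp = attempt < BACKOFF_CAP ? attempt : BACKOFF_CAP;
            wait_slots(rand() % (1u << exp));    /* wait 0..2^exp - 1 slots */
        }
        return -1;  /* excessive collisions, report failure upward */
    }

The whole arbitration scheme fits in a dozen lines, which is rather the point.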
musicale · a year ago
Telegraph/Morse code would probably work fine.

For this application I might also consider classic serial/RS-232 (c. 1969), which can be implemented with one signal wire (tx) and can connect to modern USB.

I'm not entirely sure whether they qualify as "right foundation" but they've worked well in practice.
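
For a rough sense of how simple the framing is, here's a sketch of bit-banging one 8N1 byte out a single tx wire in C; set_tx() and delay_bit_time() are hypothetical board-specific helpers:

    #include <stdint.h>

    /* Hypothetical board-specific helpers. */
    extern void set_tx(int level);     /* drive the tx wire high or low */
    extern void delay_bit_time(void);  /* wait one bit period, e.g. 1/9600 s */

    void uart_send_byte(uint8_t b)
    {
        set_tx(0); delay_bit_time();          /* start bit */
        for (int i = 0; i < 8; i++) {         /* 8 data bits, LSB first */
            set_tx((b >> i) & 1);
            delay_bit_time();
        }
        set_tx(1); delay_bit_time();          /* stop bit; line idles high */
    }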

gpderetta · a year ago
> But the difficulty of "the right foundations" is that nobody knows what the right foundations are the first 10 times they implement them

Yes, and it's worse than that. The right foundations might change with time and changing requirements.

tightbookkeeper · a year ago
Good comment. But I question how much you can package up inherent complexity in a simple interface, due to leaky abstractions.

The biggest benefit of simplicity in design is when the whole system is simple, so it’s easy to hack on and reason about.

Arch-TK · a year ago
Abstraction leaks are usually a result of a "worse is better" approach. But yes, as I think I said in my original comment, it's very difficult to successfully completely pave over a poor base.

And yes, I agree that simplicity needs to start quite low down the stack (ideally at the hardware, or the firmware, or the drivers, or the kernel as a last resort) for the complexity not to explode as you keep adding layers.

ezekiel68 · a year ago
I'm always happy whenever this old article goes viral. For two reasons: First, learning to accept the fact that the better solution doesn't always win has helped me keep my sanity over more than two decades in the tech industry. And second, I'm old enough to have a pretty good idea what the guy meant when he replied, "It takes a tough man to make a tender chicken."
bbor · a year ago
I’m glad to know a new article that “everyone knows”! Thanks for pointing out the age.

And, at the risk of intentionally missing the metaphor: they do in fact make automated tenderizers, now ;) https://a.co/d/hybzu2U

hyggetrold · a year ago
It's a funny expression and it is rooted in advertising: https://en.wikipedia.org/wiki/Frank_Perdue
bccdee · a year ago
> Both early Unix and C compilers had simple structures, are easy to port, require few machine resources to run, and provide about 50%-80% of what you want from an operating system and programming language.

> Unix and C are the ultimate computer viruses.

The key argument behind worse-is-better is that an OS which is easy to implement will dominate the market in an ecosystem with many competing hardware standards. Operating systems, programming languages, and software in general have not worked this way in a long time.

Rust is not worse-is-better, but it's become very popular anyway, because LLVM can cross-compile for anything. Kubernetes is not worse-is-better, but nobody needs to reimplement the k8s control plane. React is not worse-is-better, but it only needs to run on the one web platform, so it's fine.

Worse-is-better only applies to things that require an ecosystem of independent implementers providing compatible front-ends for diverging back-ends, and we've mostly standardized beyond that now.

xiphias2 · a year ago
There are some differences between your examples in my opinion:

Rust started as an experiment by the Mozilla team at replacing C++ with something that would help them compete with Chrome by developing safe multi-threaded code more efficiently. It took a lot of experiments to get to the current type system, most of which give real advantages by using affine types, but the compiler is at this point clearly over-engineered for the desired type system (and there are already ideas on how to improve on it). It's still too late to restart, as it looks like it takes 20 years to productionize something like Rust.

As for React, I believe it's an over-engineered architecture from the start for most web programming tasks (and for companies/programmers that don't have separate frontend and backend teams), but low interest rates + AWS/Vercel pushed it on all newcomers (and most programmers are new programmers, as the number of programmers grew exponentially).

HTMX and Rails 8 are experiments in the opposite direction (moving back to the servers, nobuild, noSAAS), but I believe there's a lot of space to further simplify the web programming stack.

tightbookkeeper · a year ago
Rust is not very popular in terms of number of users. It's just over-represented in online discussion.
bccdee · a year ago
It made it into the Linux kernel, and it's still a relatively young language. I don't think any language has made such a large impact since Java or Javascript, both of which are nearly 30 years old now.
mplewis · a year ago
Rust isn’t popular in web dev. It’s very popular in embedded.
psychoslave · a year ago
Unix and C are still there, and while on a shallow level this can be more or less ignored, all abstractions end up leaking sooner rather than later.

Could the industry get rid of C and of ridiculously esoteric abbreviations in identifiers, it could almost be a sane world to wander.

karel-3d · a year ago
I remember when I had a lesson about OSI layers, where the teacher carefully described all the layers in detail

and then said something like "most of this is not important, these layers don't really exist, TCP/IP got popular first because it's just much simpler than OSI was"

pjc50 · a year ago
Oh, there's an entirely different feature-length article to be written/found about how packet switching beat circuit switching and the "Internet approach" beat the telco approach. The great innovation of being able to deploy devices at the edges without needing clearance from the center.

I don't think very many people even remember X.25. The one survivor from all the X standards seems to be X.509?

kragen · a year ago
The OSI stack was also designed using the packet-switching approach. Rob Graham's "OSI Deprogrammer" is a book-length article about how TCP/IP beat OSI, and how the OSI model is entirely worthless: https://docs.google.com/document/d/1iL0fYmMmariFoSvLd9U5nPVH...

I'm not sure he's right, but I do think his point of view is important to understand.

fanf2 · a year ago
LDAP is “lightweight” compared to the X.500 directory access protocol. LDAP DNs are basically the same as X.500 DNs.

SNMP is “simple” compared to X.711 CMIP. But SNMP also uses ASN.1 and X.660 OIDs.
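
For a concrete flavour of the overlap, a typical LDAP DN looks like

    cn=Jane Doe,ou=People,dc=example,dc=com

which is essentially the same attribute-value hierarchy X.500 defined, just written as a comma-separated string instead of ASN.1 structures. (The DN above is a made-up example, not from any real directory.)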

hiatus · a year ago
X.25 is still used in ham radio with AX.25.
scroot · a year ago
This might be a fiery take, but I think the X.400 standards for naming and messaging would have been a lot better than the chaotic email situation, and probably would have made more sense from a commercial/legal perspective than making DNS "the" global naming system.
jll29 · a year ago
I enjoyed how we got taught the two models in the 1990s, and why one has the four layers you need and the "standard" has seven layers instead.

The professor asked "How many layers do you count?" - "Seven." - "How many members do you think the ISO/OSI committee had that designed it between them?" - [laughter] - "Seven.".

PhilipRoman · a year ago
IMO the OSI layer system (even when the protocols taught are from the TCP/IP suite) has some merit in education. To most of us, the concept of layering protocols may seem obvious, but I've talked to people who are just learning this stuff, and they have a lot of trouble understanding it. Emphasizing that each layer is (in theory) cleanly separated and doesn't know about the layers above and below is a very useful step towards understanding abstractions.
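
A toy illustration of that separation in C: each layer only prepends its own header and treats everything beneath it as an opaque payload. (The "TCP"/"IP"/"ETH" strings are just placeholder labels, not real headers.)

    #include <stdio.h>

    /* Each "layer" only knows its own header; the payload is opaque to it. */
    static void wrap(char *out, size_t outsz, const char *hdr, const char *payload)
    {
        snprintf(out, outsz, "%s|%s", hdr, payload);
    }

    int main(void)
    {
        char app[16] = "hello", tcp[32], ip[48], eth[64];
        wrap(tcp, sizeof tcp, "TCP", app);  /* transport wraps app data */
        wrap(ip,  sizeof ip,  "IP",  tcp);  /* network wraps the segment */
        wrap(eth, sizeof eth, "ETH", ip);   /* link wraps the packet */
        printf("%s\n", eth);                /* prints ETH|IP|TCP|hello */
        return 0;
    }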
ahoka · a year ago
The problem is that this is not true. There are no such clean strictly hierarchical layers in most of the protocols that make up the internet.

supportengineer · a year ago
I've never done any kernel programming but I assumed the OSI model corresponded to Linux kernel modules or a similar division internally.
gpderetta · a year ago
> The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++.

And 30 years later they show few signs of letting go.

ezekiel68 · a year ago
Yep. And nary a tear is shed these days over the death of the so-called superior Lisp machines.
gpderetta · a year ago
The Spirit of the Machine still lives in some form in emacs.
rwmj · a year ago
Maybe not in any position to do anything about it, but I'm quite sad :-/
OhMeadhbh · a year ago
Meh. Lisp machines still exist. They're just simulated in various Lisps' runtime environments. It turns out that a RISC machine running a Lisp interpreter or an executable compiled from Lisp source tends to perform better than a tagged/cdr-coded Lisp Machine w/ hardware GC.

That being said... I've wanted to implement an old Explorer using an FPGA for a while. Maybe if I just mention it here, someone will get inspired and do it before I can get to it.

Jach · a year ago
At a certain level, sure, but C++ at least has definitely lost out. In the 90s it seemed like it might really take over all sorts of application domains; it was incredibly popular. Now, and for probably the last couple of decades, it and C have only kept around 10% of the global job market.
OhMeadhbh · a year ago
My gut feeling is there are still the same number of jobs for C++ today as there were in the 90s. It's just that they're hard to find because the total number of programming jobs has exploded. The reason you can't see the C++ jobs is because the newer, non-C++ jobs are crowding them out on job boards. [This is a hypothesis, one I haven't (dis)proven.]

For fun a few weeks ago I went looking for COBOL on VMS jobs. They're definitely still out there, but you do have to look for them. No one's going to send you an email asking if you're interested and if you don't hang out with COBOL/VMS people, you may not know they exist.

I think my point is the total number of C/C++ jobs today is probably the same or slightly higher than in 1994. But the total number of Java and C# jobs (or Ruby or Elixir or JavaScript jobs) is dramatically higher than in 1994, if for no other reason than the fact that these languages didn't exist in 1994.

[As an aside... if you're looking for a COBOL / VMS programmer/analyst... I spent much of the 80s as a VMS System Manager, coding custom dev tools in BLISS, and some of the 90s working on the MicroFocus COBOL compiler for AIX. And while you would be crazy to ignore my 30+ years of POSIX/Unix(tm) experience, I think it would be fun to sling COBOL on VMS.]

worstspotgain · a year ago
It's not C++ that has been replaced, it's VB.
pjmlp · a year ago
Depends on the market; even the C++ wannabe replacements are implemented in compiler toolchains written in C++.

It gets a bit hard to replace something that your compiler depends on to exist in the first place.

stonemetal12 · a year ago
Isn't "Worse is better" just a restatement of "Perfect is the enemy of Good", only slanted to make better\Perfect sound more enticing?

> The right thing takes forever to design, but it is quite small at every point along the way. To implement it to run fast is either impossible or beyond the capabilities of most implementors.

A deer is only 80% of a unicorn, but waiting for unicorns to exist is folly.

OhMeadhbh · a year ago
Yes and no. "Worse is Better" also implies you allow someone outside your problem domain to define the abstractions you use to decompose the problem domain (and construct the solution domain). So... I mean... that's probably not TOO bad if they're well-understood and well-supported. Until it isn't, and you have to waste a lot of time emulating a system that allows you to model the abstractions you want/need to use.

But at the end of the day, everyone knows never to assume stdio will write an entire buffer to disk and YOU need to check for EINTR, and C++ allows you to wrap arbitrary code in try...catch blocks so if you're using a poorly designed 3rd party library you can limit the blast radius. And it's common now to disclaim responsibility for damages from using a particular piece of software, so there's no reason to spend extra time trying to get the design right (just ship it, and when it kills someone you'll know it's time to revisit the bug list. (Looking at YOU, Boeing.))
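
The ritual in question, sketched as the usual POSIX retry loop around write(2). (write_all is just an illustrative name for the idiom, not a standard function:)

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Write the whole buffer, retrying short writes and EINTR. */
    ssize_t write_all(int fd, const char *buf, size_t len)
    {
        size_t done = 0;
        while (done < len) {
            ssize_t n = write(fd, buf + done, len - done);
            if (n < 0) {
                if (errno == EINTR)
                    continue;        /* interrupted by a signal: retry */
                return -1;           /* real error */
            }
            done += (size_t)n;       /* write() may accept fewer bytes */
        }
        return (ssize_t)done;
    }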

I do sort of wonder what happens when someone successfully makes the argument that C++ exceptions are a solution often mis-applied to the problem at hand, convinces a judge that Erlang-like supervision trees constitute the "right" way to do things, and using legacy language features comes to be considered "negligence" by the courts. We're a long way off from that, and the punch line here is that a decent lawyer can nail you on gross negligence even if you convinced your customer to sign a liability waiver (at least in most (all?) of the US.)

Which is to say... I've always thought there is an interplay between the "worse is better" concept and the evolution of tech law in the US. Tort is the water in which we swim; it defines the context for the code we write.

th43o2i4234234 · a year ago
The critical point of the article holds true of everything in human social networks (be it religion/culture/philosophy/apps/industry...).

If you don't achieve virality, you're as good as dead. Once an episteme/meme spreads like wildfire there's very little chance for a reassessment based on value/function - because the scope is now the big axis of valuation.

It's actually worse, because humanity is now a single big borg. Even 30-40 years back, there were sparsely connected pools where different species of fish could exist - not any more. The elites of every single country are part of the Anglosphere, and their populations mimic them (eventually).

This tumbling towards widespread mono-memetism in every single sphere of life is a deeply dissatisfying feature of modern human life, not just for PL/OS/... but also for culture etc.

Anthropocene of humanity itself.

esafak · a year ago
> If you don't achieve virality, you're as good as dead.

Are you? Maybe the worse solution peaks faster, but can be supplanted by a better solution in the future, like how Rust is displacing C/C++ in new projects. The better solution may never be popular yet persist.

pjmlp · a year ago
For Rust to fully displace C++, it needs to eventually bootstrap itself; until then, C++ will be around.

Additionally, there are no significant new projects being done in Rust for the games industry, AI/ML, HPC, HFT, compiler backends, hardware design, ...

th43o2i4234234 · a year ago
Rust is nowhere near displacing C++.

There's typically an "exhaustion" phase with mono-memetism/theories where everyone gets sick and tired of the "one and only way" and it becomes fashionable to try out new things (e.g. Christianity in Europe). We're not yet at the point where the olds can be toppled.

JohnFen · a year ago
"Worse is better" has become like "move fast and break things". They're both sayings that reveal an often-overlooked truth, but they have both been taken far too far and result in worse things for everybody.
ezekiel68 · a year ago
I see what you mean. Yet I feel like the first one (at least, as outlined in the article) is more about accepting an inevitability that you probably have little control over, while the second is more often adopted as a cultural process guideline for things you can control. But that's just my impression.
sesm · a year ago
And then it transformed into "move things and break fast".