cloogshicer · 2 years ago
I've long had a hunch that we're currently in the "wild west of abstraction".

I think we're missing an essential constraint on the way we do abstraction.

My hunch is that this constraint should be that abstractions must be reversible.

Here's an example: When you use a compiler, you can work at a higher layer of abstraction (the higher-level language). But this means you're now locked into that layer of abstraction. By that I mean, you can no longer work at the lower layer (assembly), even if you wanted to. You could of course, in theory, modify the compiler output after it's been generated, but then you'd have to somehow keep that work in sync manually whenever you want to regenerate it. Using an abstraction kinda locks you into that layer.

I see this problem appearing everywhere:

- Use framework <--> Write from scratch

- Use an ORM <--> Write raw SQL

- Garbage collection <--> Manual memory management

- Using a DSL <--> Writing raw language code

- Cross platform UI framework <--> Native UI code

- ...

I think we're missing a fundamental primitive of abstraction that allows us to work on each layer of abstraction without being locked in.

If you have any thoughts at all on this, please share them here!

jerf · 2 years ago
Abstractions work by restricting the domain of what you can do, then building on those restrictions. For example, raw hardware can jump anywhere, but structured programming constrains you to jump only to certain locations in order to implement if, for, functions, etc. It is precisely those restrictions that bring the benefits of structured programming; if you still frequently dipped into jumping around directly, structured programming would fail to provide the guarantees it is supposed to provide. CRUD frameworks provide their power by restricting you to CRUD operations, then building on that. Immutable data is accomplished by forbidding you from updating values even though the hardware will happily do it. And so on.

Escape hatches under the abstractions are generally there precisely to break the abstractions, and break them they do.

Abstractions necessarily involve being irreversible, or, to forestall a tedious discussion of the definition of "irreversible", necessarily involve making it an uphill journey to violate and go under the abstraction. There's no way around it. Careful thought can make using an escape hatch less painful than it might otherwise be (the alternative being something like the ORM that makes it virtually impossible to use SQL by successfully hiding everything about the SQL tables from you, so you're basically typing table and column names by dead reckoning), but that's all that can be done.

One thing to do about this: just as the programming community has, in the past few years, started to grapple with the fact that libraries aren't free but come with a cost that really adds up once you're pulling in a few thousand of them for a framework's "hello world", abstractions that look really useful but whose restrictions don't match your needs deserve to be looked at a lot more closely.

I had something like that happen to me just this week. I needed a simple byte ring buffer. I looked in my language's repos for an existing one. I found them. But they were all super complicated, offering tons of features I didn't need, like being a write-through buffer (which involved taking on restrictions I didn't want), or where the simple task of trying to understand the API was quite literally on par with implementing one myself. So I just wrote the simple thing. (Aiding this decision is that, broadly speaking, if this buffer fails or has a bug it's not terribly consequential; in my situation it's only for logging output, and effectively only at a very high DEBUG level.) It wasn't worth the restrictions to build up stuff I didn't even want.
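
For illustration, a minimal sketch in C of that kind of buffer: fixed capacity, single-threaded, no write-through, nothing else. This is not the actual code described above; all names are invented.

    #include <stddef.h>
    #include <stdint.h>

    /* Minimal byte ring buffer: fixed capacity, no locking, no write-through. */
    #define RB_CAP 4096

    typedef struct {
        uint8_t buf[RB_CAP];
        size_t  head;   /* next write position    */
        size_t  tail;   /* next read position     */
        size_t  len;    /* bytes currently stored */
    } ringbuf;

    /* Append up to n bytes from src; returns how many actually fit. */
    static size_t rb_write(ringbuf *rb, const uint8_t *src, size_t n) {
        size_t written = 0;
        while (written < n && rb->len < RB_CAP) {
            rb->buf[rb->head] = src[written++];
            rb->head = (rb->head + 1) % RB_CAP;
            rb->len++;
        }
        return written;
    }

    /* Copy up to n bytes into dst; returns how many were available. */
    static size_t rb_read(ringbuf *rb, uint8_t *dst, size_t n) {
        size_t got = 0;
        while (got < n && rb->len > 0) {
            dst[got++] = rb->buf[rb->tail];
            rb->tail = (rb->tail + 1) % RB_CAP;
            rb->len--;
        }
        return got;
    }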

cloogshicer · 2 years ago
> It is precisely those restrictions that bring the benefits [...]

Wouldn't it be possible to say "ok, I'll take those restrictions as long as they benefit me, but once I notice that they no longer do, I'll break them and drop down to the lower layer. But only for those parts that actually require it"?

> Abstractions necessarily involve being irreversible, or, to forestall a tedious discussion of the definition of "irreversible", necessarily involve making it an uphill journey to violate and go under the abstraction.

Why? Not being snarky, I'm genuinely trying to understand this better.

samatman · 2 years ago
> Here's an example: When you use a compiler, you can work at a higher layer of abstraction (the higher-level language). But, this means you're now locked into that layer of abstraction. By that I mean, you can no longer work at the lower layer (assembly), even if you wanted to.

Native-code compilers commonly allow embedding assembly directly, but now your source code isn't portable between CPUs. Many interpreted languages, even most, allow FFI code to be imported, modifying the runtime accordingly, but now your program isn't portable between implementations of that language, and you have to be careful to make sure the behavior you've introduced doesn't mess with other parts of the system in unexpected ways.

Generalizing, it's often possible to drill down beneath the abstraction layer, but there's often an inherent price to be paid, whether it be taking pains to preserve the invariants of the abstraction, losing some of the benefits of it, or both.

There are better and worse versions of this boundary; I would point to Lua as a language which is explicitly designed to cross the C/Lua boundary in both directions, and which did a good job of it. But nothing can change the fact that pure-Lua code simply won't segfault, whereas bring in userdata and it very easily can; the problems posed are inherent.

ihumanable · 2 years ago
Lots of abstractions have an escape hatch down to the lower level, you can put assembly in your C code, most ORMs have some way to just run a query, etc.
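
For the "assembly in your C code" case, the escape hatch is typically compiler-specific inline asm. A hedged sketch, assuming GCC/Clang on x86-64; read_tsc is an invented helper, not something from this thread:

    #include <stdint.h>

    /* Drop one layer down, in place: read the x86-64 timestamp counter
     * with GCC/Clang extended inline asm. Illustrative sketch only. */
    static inline uint64_t read_tsc(void) {
        uint32_t lo, hi;
        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }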

I think the question I have is, what benefit does this provide? Let's say we could wave a magic wand and you can operate at any layer of abstraction. Is this beneficial in some way? The article is about leaky abstractions and states

> One reason the law of leaky abstractions is problematic is that it means that abstractions do not really simplify our lives as much as they were meant to.

I think I'm just struggling to understand how this would help with that.

cloogshicer · 2 years ago
It would help because you could tackle the problem at hand always at the right layer of abstraction.

If a certain aspect of the problem can be solved easily in a higher layer of abstraction, great! Let's solve it at that layer, because it's usually easier and allows for more expressiveness.

But whenever we need more control, we can seamlessly drop down to the lower layer and work there.

I think we need to find a fundamental principle that allows this. But I see barely anyone working on it - instead we keep trying to find higher and higher layers of abstraction (LLMs being the most recent addition) in the hope that they will get rid of the need to deal with the lower layers. Which is a false hope, I feel.

yen223 · 2 years ago
There's a well-written article by Bret Victor on climbing the ladder of abstraction. It makes the same argument you made: that climbing "down" the ladder is just as important as going "up".

https://worrydream.com/LadderOfAbstraction/

cloogshicer · 2 years ago
Thank you for posting, I love that article. Bret Victor is a genius in my opinion, and his writings and talks have inspired many of the thoughts I've written above.

Veserv · 2 years ago
No, reversible abstractions are just one kind of abstraction. For instance, a machine code sequence to a linear sequence of assembly instructions is a reversible abstraction. Not every machine code sequence is expressible as a linear sequence of assembly instructions, but every linear sequence of assembly instructions has a trivial correspondence to a machine code sequence.

However, consider the jump to a C-like language. The key abstraction provided there is the abstraction of infinite local variables. The compiler manages this through a stack, register allocation, and stack spilling to provide the abstraction and consumes your ability to control the registers directly to provide this abstraction. To interface at both levels simultaneously requires the leakage of the implementation details of the abstraction and careful interaction.

What you can do easily is what I call a separable abstraction: an abstraction that can be restricted to just the places it is needed and removed where it is unneeded. In certain cases in C code you need to use some specific assembly instruction, sequence, or even function. This can be easily done by writing an assembly function that interfaces with the C code via the C ABI. What is happening there is that the C code defines an interface allowing you to drop down, or even exit, the abstraction hierarchy for the duration of that function. The ease of doing so makes C highly separable and is part of the reason why it is so easy to call out to C, but you hardly ever see anybody calling out to, say, Java or Haskell.
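
To make the C-ABI hand-off concrete, here is a hedged sketch assuming GCC/Clang on x86-64 Linux (System V ABI); add_one is an invented example, and the assembly body would normally live in its own .s file rather than in file-scope asm:

    /* System V AMD64 ABI: first integer argument arrives in %rdi,
     * the return value goes in %rax. The ABI is the whole interface. */
    __asm__(
        ".text                  \n"
        ".globl add_one         \n"
        "add_one:               \n"
        "    lea 1(%rdi), %rax  \n"   /* return x + 1 */
        "    ret                \n"
    );

    /* The C side drops below the abstraction for exactly one function... */
    extern long add_one(long x);

    /* ...and the rest of the program stays at the C layer. */
    int main(void) {
        return (int)add_one(41);   /* exits with status 42 */
    }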

Of course, that is just one of the many properties of abstractions that can make them easier to use, simpler, and more robust.

saghm · 2 years ago
> My hunch is that this constraint should be that abstractions must be reversible.

> Here's an example: When you use a compiler, you can work at a higher layer of abstraction (the higher-level language). But, this means you're now locked into that layer of abstraction. By that I mean, you can no longer work at the lower layer (assembly), even if you wanted to. You could in theory of course modify the compiler output after it's been generated, but then you'd have to somehow manually keep that work in sync whenever you want to re-generate. Using an abstraction kinda locks you into that layer.

Just to make sure I understand, you're proposing a constraint that would rule out every compiler in existence today? I feel like, overall, compilers have worked out well, but if I'm not misunderstanding and this is how you actually feel, I guess I should at least commend your audacity, because I don't think I'd be willing to seriously propose something that radical.

cloogshicer · 2 years ago
Yes, you understood correctly.

What I'm saying is extremely radical and would require rethinking and rebuilding practically everything we have.

yogorenapan · 2 years ago
A lot of languages allow inline assembly.

lmm · 2 years ago
The best paradigm for understanding abstractions is not the theory-and-model style (which requires hiding details irreversibly), but the equivalence style.

A good abstraction is, e.g., summing a list whose elements form a monoid: summing the list is equivalent to adding up all the elements in a loop. Crucially, this doesn't require you to "forget" the specific type of element that your list has. A bad version of this library would say that your list elements have to be subtypes of some "number" type, and the sum of your list would come back as a "number", permanently destroying the details of the specific type that it actually is. But with the monoid model your "sum" is still whatever complex type you wanted it to be; you've just summed it up in the way appropriate to that type.
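
A rough rendering of that idea, kept in C to match the rest of the thread (vec2 and friends are invented names): the "sum" of a list of 2-D vectors is still a 2-D vector, because all the fold needs is an associative combine and an identity; nothing gets coerced to a lossy "number" type.

    #include <stddef.h>

    /* 2-D vectors form a monoid under component-wise addition,
     * with (0, 0) as the identity element. */
    typedef struct { double x, y; } vec2;

    static const vec2 vec2_zero = { 0.0, 0.0 };

    static vec2 vec2_add(vec2 a, vec2 b) {            /* associative combine */
        return (vec2){ a.x + b.x, a.y + b.y };
    }

    /* "Summing" never forgets the element type: the result is a vec2. */
    static vec2 vec2_sum(const vec2 *xs, size_t n) {
        vec2 acc = vec2_zero;
        for (size_t i = 0; i < n; i++)
            acc = vec2_add(acc, xs[i]);
        return acc;
    }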

cloogshicer · 2 years ago
Interesting! Can you point me to some further reading or resources on these different paradigms?

drewm1980 · 2 years ago
You can probably find an IDE plugin that inlines the assembly for a C function. Most IDEs can show the assembly side by side with your C code, so it wouldn't be that much of a step. To fulfill your vision you would also need a decompiler (and an inliner) to convert a block of assembly back into C if a corresponding C routine exists.

titzer · 2 years ago
I work on programming languages and systems (virtual machines). A key thing with a systems programming language is that you need to be able to do things at the machine level. Here's a talk I gave a year ago about it: https://www.youtube.com/watch?v=jNcEBXqt9pU

eschneider · 2 years ago
That's not really true of C compilers, at least. Because compilers have ABIs and fixed calling conventions, it's straightforward, documented, and not uncommon (depending on your application area/deployment target) to drop down to the ASM layer if you need to.

It's definitely one of those things that makes C nice for bare metal programming.

cloogshicer · 2 years ago
Interesting, I'm curious though, once you do drop down to the ASM layer, how do you ensure that this code doesn't get overwritten by new compiler output? Or is this something you somehow include in the compile step?
wvenable · 2 years ago
Most ORMs give a way to integrate nicely with SQL if you need to reach down to that layer and still use the rest of the ORM features.

There is no silver bullet; everything is a trade off. Almost all of the time, the trade off is entirely worth it even if that gets you locked into that solution.

cloogshicer · 2 years ago
> Most ORMs give a way to integrate nicely with SQL if you need to reach down to that layer and still use the rest of the ORM features.

Agreed, that's a good thing, in my experience.

> Almost all of the time, the trade off is entirely worth it even if that gets you locked into that solution.

I wish this would match my experience.

samsquire · 2 years ago
This is something I think a lot about.

I spend a lot of time trying to think of something that composes. Monads are one answer.

I think we need advanced term rewriting systems that also optimize and equivalise.

I really enjoy Joel on Software blog posts from this era.

dhdjksosja · 2 years ago
Babel towers of macro eDSLs, a.k.a. learn Lisp. https://github.com/combinatorylogic/mbase

__s · 2 years ago
I think this is a good way to frame abstraction vs. macro.

mjw1007 · 2 years ago
I never liked the way he used TCP as an example here.

I don't think it's sensible to think of "make it reliable" as a process of abstraction or simplification (it's obviously not possible to build a reliable connection on top of IP if by "reliable" you mean "will never fail"). "You might have to cope with a TCP connection failing" doesn't seem to be the same sort of thing as his other examples of leaky abstractions.

TCP's abstraction is more like "I'll either give you a reliable connection or a clean error". And that one certainly does leak. He could have talked about how the checksum might fail to be sufficient, or how sometimes you have to care about packet boundaries, or how sometimes it might run incredibly slowly without actually failing.
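
The packet-boundary leak in particular is the classic one: TCP is a byte stream, so one send() on one end can arrive as several recv()s on the other, and application code has to reassemble messages itself. A hedged POSIX-sockets sketch (recv_exact is an invented helper):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <stddef.h>

    /* Loop until exactly len bytes have arrived, because TCP does not
     * preserve message boundaries. Illustrative sketch only. */
    static ssize_t recv_exact(int fd, void *buf, size_t len) {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
            if (n == 0) return 0;    /* peer closed: the "clean error" case    */
            if (n < 0)  return -1;   /* ...or a not-so-clean one (check errno) */
            got += (size_t)n;
        }
        return (ssize_t)got;
    }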

joe_the_user · 2 years ago
Indeed, his discussion seems to involve conflating a leaky network protocol with a leaky abstraction. Perhaps he wanted to meta-illustrate his concept by having the discussion itself be leaky.

lcuff · 2 years ago
I like the idea of TCP as a leaky abstraction because it points out the difficulty of engineering the abstraction we really want. It would be wonderful for TCP to be a guaranteed connection abstraction, but it turns out in today's world, the abstraction of a reliable connection is TCP + a network administrator + a guy with wire snips + solder (metaphorically). Maybe down the road, AIs and repair bots will be involved, and the guaranteed connection abstraction might become real or much much stronger. Although it gets more complicated because if a message takes hours to deliver, is that going to work for your application? Yes if you're archiving documents, no if you're trying to set up a video conference call or display a web page.

TCP is problematic in modern circumstances (think: inside a data center) because a response within milliseconds is what's expected to make the process viable. TCP was designed to accommodate some element of the path being a 300 baud modem, where a response time measured in seconds is possible as the modem dials the next hop, so the default TCP timeouts are unusable there. QUIC was developed to address this kind of problem. My point being, the abstraction of a guaranteed _timely_ connection is even harder.
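
As a concrete (and hedged) illustration of tuning that patience yourself rather than living with the defaults, assuming POSIX sockets; set_recv_timeout_ms and its parameters are invented for this sketch:

    #include <sys/socket.h>
    #include <sys/time.h>

    /* Make a blocked recv() on an already-connected TCP socket give up
     * after ms milliseconds instead of waiting out the default behavior. */
    static int set_recv_timeout_ms(int sock, long ms) {
        struct timeval tv = { .tv_sec = ms / 1000, .tv_usec = (ms % 1000) * 1000 };
        return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
    }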

I think Joel could have expanded his thoughts to include the degree of leak. SQL is a leaky abstraction itself, yes, but my own take is that ORMs are much leakier: Every ORM introduction document I've read explains the notation by saying "here's the sql that is produced". I think of ORMs as not a bucket with holes, but a bucket with half the bottom removed.

anonymous-panda · 2 years ago
> but it turns out in today's world, the abstraction of a reliable connection is TCP + a network administrator + a guy with wire snips + solder (metaphorically).

I think you've misunderstood the abstraction. TCP is not leaky merely because there are wire snips or cable cuts; in fact, BGP will route around physical failures. But aside from that, it abstracts all the various failure modes as a single disconnection error. A leaky abstraction would be one where you still needed to distinguish the error type and TCP wouldn't let you. A 100% reliable connection is physically impossible in any context (this is intrinsic to distributed systems, and every abstraction leaks it, including the CPU bus), so if that's your bar then all tech will be a leaky abstraction. It is, at some level, but not in a way that makes for a fruitful discussion.

an1sotropy · 2 years ago
I first learned about "leaky abstractions" from John Cook, who describes* IEEE 754 floats as a leaky abstraction of the reals. I think this is a good way of appreciating floating point for the large group of people whose experience is somewhere between numerical computing experts (who look at every arithmetic operation through the lens of numerical precision) and total beginners (who haven't yet recognized that there can't be a one-to-one correspondence between a point on the real number line and a "float").

* https://www.johndcook.com/blog/2009/04/06/numbers-are-a-leak...
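
A two-line illustration of that leak in C (standard IEEE 754 behavior, not an example taken from Cook's post): real numbers are exact and associative, doubles are neither.

    #include <stdio.h>

    int main(void) {
        printf("%d\n", 0.1 + 0.2 == 0.3);                       /* prints 0 */
        printf("%d\n", (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)); /* prints 0 */
        return 0;
    }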

davesque · 2 years ago
I feel like this article should be called "The Law of Bad Abstractions." I often see this cited as a blanket rejection of complexity in software. But complexity is unavoidable and even necessary. A skillful engineer will therefore design their abstractions carefully and correctly, balancing time spent thinking forward against time spent implementing a solution. I think Joel understands this, but it feels weird how he frames it as a "law", as though it's something he's discovered instead of a simple fact that arises from the nature of what abstractions are: things that stand in for (or mediate interaction with) some other thing without actually being that thing. What a surprise that the stand-in ends up not being the actual thing it's standing in for!

refactor_master · 2 years ago
A car is an implementation meant to deal with a problem (the weather), but never abstracts away physics or forces full buy-in to some alternate reality. You can’t just go around and say any imperfection in an implementation is a leaky abstraction. That’s not how it works.

My shoe is not abstracting away the terrain, nor is it leaky because it doesn’t handle all weather conditions. Well, it is leaky, but not in that sense.

samatman · 2 years ago
An analogy is an abstraction, and abstractions leak.

BoiledCabbage · 2 years ago
Young people should probably know that (as far as I recall) Joel more or less invented tech blogging as a form of advertising/recruiting for your company.

Namely, either listing out the process/perks that a good engineering team should have and how, conveniently, his company has them; or describing interesting and challenging problems they solved and how you can join them and solve problems like that too.

I don't recall anyone popular doing it before him, and it's pretty much industry standard now. (Although feel free to chime in if that's wrong; "popular" being a key word here.)

wvenable · 2 years ago
I loved this essay when it came out but I've come to dislike how "leaky abstraction" has become a form of low effort criticism that gets applied to almost anything.

simonw · 2 years ago
I love this essay so much. I read it 22 years ago and it's been stuck in my mind ever since: it taught me that any time you take on a new abstraction that you don't understand, you're effectively taking on mental debt that is likely to come due at some point in the future.

This has made me quite a bit more cautious about the abstractions I take on: I don't have to understand them fully when I start using them, but I do need to feel moderately confident that I could understand them in depth if I needed to.

And now I'm working with LLMs, the most opaque abstraction of them all!

Legend2440 · 2 years ago
>And now I'm working with LLMs, the most opaque abstraction of them all!

You put a black box around it to fit it into the world of abstractions that traditional programs live in.

But I'd say the most interesting thing about neural networks is that they do not have any abstractions within them. They're programs, but programs created by an optimization algorithm just turning knobs to minimize the loss.

This creates very different kinds of programs - large, data-driven programs that can integrate huge amounts of information into their construction. It's a whole new domain with very different properties than traditional software built out of stacked abstractions.