Rule of thumb: every 10% increase in complexity cuts your potential user base in half.
This is why people make backups by copy-pasting files. This is why Excel is so dominant. This is why systems like HyperCard and Git are not mainstream and never will be.
There is a large universe of tools people would love if only they would bother to learn how they work. If only. Most people will just stick to whatever tools they know.
For most people the ability to go back and forward in time (linear history) is something they grasp immediately. Being able to go back in time and make a copy also requires no explanation. But having a version tree, forking and merging, having to deal with multiple timelines and the graphs that represent them -- that's where you lose people.
I wouldn't frame it as "complexity", I would frame it as "cognitive load". You can lower cognitive load despite having high complexity. For example, you could (and many companies have done so) build a user-friendly version management system and UI on top of git, which on its surface is just "version 1", "version 2", "version 2 (final) (actually)" but under the hood is using commits and branches. You can have submenus expose advanced features to advanced users while the happy path remains easy to use.
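A minimal sketch of that idea, assuming the `git` CLI is available; the `SimpleVersions` class and its method names are invented for illustration. The user only ever sees a flat list of saved versions, while under the hood every operation is an ordinary commit or checkout.

```python
# Sketch: a "version 1, version 2" facade over an ordinary git repository.
# Assumes the `git` CLI is installed; all names here are invented.
import subprocess

class SimpleVersions:
    def __init__(self, folder):
        self.folder = folder
        self._git("init", "--quiet")
        # Local identity so commits work in a fresh environment.
        self._git("config", "user.email", "demo@example.com")
        self._git("config", "user.name", "Demo User")

    def _git(self, *args):
        return subprocess.run(["git", "-C", self.folder, *args],
                              capture_output=True, text=True,
                              check=True).stdout

    def _has_commits(self):
        probe = subprocess.run(
            ["git", "-C", self.folder, "rev-parse", "HEAD"],
            capture_output=True)
        return probe.returncode == 0

    def save(self):
        """What the user sees as 'Save version N' is just a commit."""
        self._git("add", "--all")
        name = f"version {len(self.list_versions()) + 1}"
        self._git("commit", "--quiet", "-m", name)
        return name

    def list_versions(self):
        if not self._has_commits():
            return []
        return self._git("log", "--pretty=%s", "--reverse").splitlines()

    def restore(self, name):
        """'Go back to version N' checks that commit's files back out."""
        for line in self._git("log", "--all", "--pretty=%H %s").splitlines():
            sha, _, subject = line.partition(" ")
            if subject == name:
                self._git("checkout", sha, "--", ".")
                return
        raise KeyError(name)
```

The happy path is just `save()` and `restore()`; branches, merges, and the rest of git stay available to anyone who opens the folder with real tooling.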
> Rule of thumb: every 10% increase in complexity cuts your potential user base in half.
I agree this is an accurate rule of thumb. However if the complexity lets users achieve more, then the complexity can earn its keep. Using version control is so beneficial that software engineers deal with the complexity. The ability to maintain a more complicated model in one's head and use it to produce more value is not something that all users are able to do. More sophisticated users can afford to use more complicated tools.
However the sophisticated users are reined in by network effects. If you want to work with people then everyone needs to be able to deal with the complexity. Programmers are more sophisticated than most office workers, which is why we ubiquitously version codebases, and not so much spreadsheets.
> This is why systems like HyperCard and Git are not mainstream and never will be.
We are moving towards a world where fewer humans are needed, and the humans that are needed are the most sophisticated operators in their respective domains. This means weaker network effects and less drag from unsophisticated users holding back the tooling. The worst drop off, and the population average increases.
I would not be surprised to see an understanding of version control and other sophisticated concepts become commonplace among the humans that still do knowledge work in the next few years.
> But having a version tree, forking and merging, having to deal multiple timelines and the graphs that represent them -- that's where you lose people.
There's no good reason why that should be the case. E.g., one could imagine the guts of the "copy-pasting files" UI being a VCS. That would keep the original 100% of the userbase plus allow whatever percentage to level up if/when the need arises (or go "back in time" in the event of a major screw-up).
It's just that software UX in 2025 is typically very bad. The real axiom: the longer you run an application, the more likely it will do the opposite of its intended purpose.
Oops, the word "stash" in git has an idiosyncratic meaning. That content has been removed from the history I was trying to keep. Fuck.
Oops, "Start" in Windows pauses interactivity and animation until ads are ready to be displayed in the upcoming dialog. Fuck!
Especially in the latter case, I don't think users are deterred by the cognitive load required to interact with the interface. It's probably more a case of them being deterred because the goddamned stupid thing isn't doing what it's supposed to.
In theory you can have these "zero cost abstractions" but in practice I don't think so. The user manual gets thicker. Concepts like 'delete permanently' and backup/restore get more complicated. Users will get confronted by scary "advanced users only" warnings in the interface. Some enthusiast blogger or youtuber will create content highlighting those advanced features and then regular users will get themselves in trouble. Customer support gets way more complicated because you always have to consider the possibility that the user has (unknowingly) used these advanced features. If you put buttons in the interface users will press those buttons. That's just a fact of life. Advanced features always come at a cost. Sometimes that cost is worth it, but only sometimes.
Part of the reason why so many people are disillusioned by AI. We are attempting to tame complexity that shouldn't exist in the first place.
I'm guessing lots of the code that was getting written was kind of verbose boilerplate; automating all that doesn't move the productivity needle all that much, because it shouldn't have existed at all to start with.
I think the author's ideas are likely too complex for a wide audience, but they could be a game changer for those who can handle that kind of complexity.
When we model systems and data models, we model them for the problem we're solving today, and with enough experience you can anticipate some future problems...but not all. Business is dynamic and software data models are relatively rigid. That's why business users love Excel - it's the javascript of the database world...you can slap something together without too much fuss and 60% of the time it'll work every time.
Feels like a part of this question is in how databases haven't evolved in the last 50 years. Databases don't treat types and domain models as first class citizens.
SQL generally lacks the expressiveness to really encode your intent in its type system, which puts more on developers and creates higher long-term database maintenance and lower productivity.
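A small illustration of that gap, using Python's built-in `sqlite3` (table and column names are invented): simple invariants can be bolted on as CHECK constraints, but richer intent has no first-class home in the schema.

```python
# Simple domain rules fit in CHECK constraints; richer intent does not.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders (
        id       INTEGER PRIMARY KEY,
        status   TEXT NOT NULL
                 CHECK (status IN ('draft', 'submitted', 'settled')),
        quantity INTEGER NOT NULL CHECK (quantity > 0)
    )
""")

def insert_ok(status, quantity):
    """True if the row satisfies the schema's (limited) invariants."""
    try:
        conn.execute(
            "INSERT INTO orders (status, quantity) VALUES (?, ?)",
            (status, quantity))
        return True
    except sqlite3.IntegrityError:
        return False

# The schema catches 'bogus' statuses and non-positive quantities,
# but a rule like "a settled order can never transition back to
# draft" can't be stated here, so it ends up in application code.
```

Every invariant that falls outside the schema's vocabulary becomes one more thing the API layer has to enforce by hand.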
We've been noodling around this space at TypeDB (built on the entity-relationship model and type theory, rather than set theory). Still fairly early in the journey, and not well known, but there's a strong core of early adopters using it in domains where the model is the system!
Our head of engineering gave a talk about database design a couple weeks ago at New York Systems, touching on this direction, if of interest: https://www.youtube.com/watch?v=w8n1TSqokF4
So… I agree with the problem, but the solution in my mind is that you want to treat the counterparties as first-class values: essentially interacting objects with user-space-defined capabilities, which winds up being a sort of transactional DB as runtime environment. Transactions plus first-class ownership and authorization and the ability to define incremental interactions between systems (which sounds like dependent session types but isn't quite, for a lot of little reasons).
I led building a first pilot system at JPMorgan in 2015-2018, but Ethereum et al. got mind share for buzzword compliance at the time.
Sure. The top-level context/motivation was to build a db/modelling tool that could easily describe all the different administrative and economic workflows and resources across all of finance.
You actually want to view possible economic or administrative exchanges as literally being partially applied anonymous functions that only counterparties can import, partially apply, and, if not fully applied, re-export for further steps.
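A toy sketch of that shape using `functools.partial`; all names are invented for illustration, and plain strings stand in for real cryptographic signatures.

```python
# An exchange as a partially applied function: each counterparty
# supplies its own argument and re-exports the partial result until
# the function is fully applied. Names are invented; the "signatures"
# are stand-in strings, not real cryptography.
from functools import partial

def transfer(asset, amount, seller_sig, buyer_sig):
    """A toy 'exchange': executable only once fully applied."""
    return f"{amount} {asset}: {seller_sig} -> {buyer_sig}"

# The seller proposes terms and signs, then re-exports the partial.
offer = partial(transfer, "bond-XYZ", 100, seller_sig="signed:alice")

# The buyer imports the partial and applies their signature, which
# completes the exchange. Until then, `offer` is inert data.
settled = offer(buyer_sig="signed:bob")
# -> "100 bond-XYZ: signed:alice -> signed:bob"
```

Calling `offer()` without the buyer's signature raises a `TypeError`, which is the point: a half-signed exchange simply cannot execute.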
You kinda want dependent types so you can express stuff like “this has been signed and accepted by Bob or some designated delegate on his/her behalf”
First-class signed values get a little subtle because you've gotta have a robust way to talk about a canonical binary serialization that plays nicely with cryptographic signatures. The simplest way is to require that every signed value have a globally unique nonce: a per-pubkey counter for each new thing signed by that identity. I'm saying that a little imprecisely; I mean the weaker per-pubkey sense of that. This is also a sane approach because it's the same as requiring per-identity sequential consistency OR explicitly coordinated nonce sharding.
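A hedged sketch of the canonical-serialization-plus-nonce idea in Python: `json.dumps` with sorted keys stands in for a real canonical binary encoding, HMAC stands in for a real public-key signature, and every name here is invented for illustration.

```python
# First-class signed values: canonicalize, attach a per-identity
# monotonically increasing nonce, then sign. HMAC is a stand-in for
# an asymmetric signature scheme; all names are invented.
import hashlib, hmac, json
from itertools import count

def _canonical(payload):
    """Deterministic byte serialization of a JSON-able value."""
    return json.dumps(payload, sort_keys=True,
                      separators=(",", ":")).encode()

class Identity:
    def __init__(self, name, secret):
        self.name = name
        self._secret = secret
        self._nonce = count(1)   # per-identity counter, never reused

    def sign(self, value):
        envelope = {"signer": self.name,
                    "nonce": next(self._nonce),   # makes each signing unique
                    "value": value}
        sig = hmac.new(self._secret, _canonical(envelope),
                       hashlib.sha256).hexdigest()
        return {"payload": envelope, "sig": sig}

def verify(signed, secret):
    expected = hmac.new(secret, _canonical(signed["payload"]),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])
```

Because the nonce is part of the signed envelope, signing the same value twice yields two distinct signed values, which is exactly the per-identity sequential-consistency property described above.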
Basically: if you just add first-class identity and signatures as values to any DBMS and have strict validation for any DB transaction to commit, the whole need for an API barrier around your application DB kinda goes away!
AFAICT, pretty much every API wrapper around a DB exists mostly because pretty much every DB has no native way to model and enforce an application-specific identity, authorization, and resource ownership model.
Allowing your full dbms to be exposed over an api while respecting application state and security is a pretty mind blowing perspective shift, and maybe I should revisit working on that.
There’s a lot of other cool pieces that I’ve not touched on that make it pretty fun/interesting/useful, but I think that partially expresses stuff
I assert that the idea of "authoritative data" for a lot of hypothetical questions people would explore will itself be a hindrance. That is, for most hypothetical scenarios, you explicitly want modeled data. Ideally, you would know the parameters used in generating the data, but the entire point of many hypotheticals is you are projecting into an area where you don't have data. Or building a counterfactual world and predicting what it would have changed.
Similarly, "invariant preserving operations" is not necessarily what you want, either? You want to know what other parameters you would need to adjust to keep some conditions. But you want to be able to edit anything free form and then interact with it to get back to a "solved" state. That is to say, when interacting with a system, you explicitly want to allow bad or incomplete states. (This is, ultimately, what kills many "code as an AST" ideas.)
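A toy illustration of "edit freely, then solve": the model tolerates an inconsistent state and only reconciles on demand, instead of rejecting every edit that breaks the invariant. All names are invented for the example.

```python
# Allow bad states; repair them on demand. The invariant here is
# area == width * height, and solve() recomputes whichever parameter
# the user chose NOT to keep fixed.
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.area = width * height

    def consistent(self):
        return self.area == self.width * self.height

    def solve(self, keep=("width", "height")):
        """Recompute the one parameter that is not in `keep`."""
        if "area" not in keep:
            self.area = self.width * self.height
        elif "height" not in keep:
            self.height = self.area / self.width
        else:
            self.width = self.area / self.height

r = Rectangle(4, 3)               # consistent: area = 12
r.area = 40                       # free-form edit; state is now "bad"
assert not r.consistent()
r.solve(keep=("area", "width"))   # adjust height to restore the invariant
assert r.consistent() and r.height == 10
```

The edit itself is never blocked; the system just knows (and can report) that it is currently unsolved, which is the FreeCAD-style workflow described below.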
FreeCAD has a good example to consider on this. When drafting, you can pretty much freeform draw a part. But if you want to finish the session, you have to fully specify all parameters. This doesn't mean you can't add to the drawing while it is unsolved. It does mean that you can't take it to the next phase without fully solving.
This can be difficult if you don't have training material on how to use an application. And while it can be frustrating that you can save something in a state where you can't click "next," it is also frustrating to have a system where you can't save what you currently have to send to someone else to look at.
So this is just an "idea"? It uses a lot of formal language terms, but doesn't specifically say how those are applied to the problem. Is this just "save all of the analysis state at every step, and allow subsequent steps to take different approaches?" I can see problems with data compatibility between steps depending upon the processing done.
A first class model of a supply chain for assembly manufacturing and an (even bitemporal) accounting database are just wildly different domains.
To stop open-ended rhetoric we need to make something fixed/constant.