Certainly, people manage to build poorly performing things even so, but at least at the base level your primitives may be stupid, yet they are generally fast.
The logic programming world works in a default space of O(n) operations that stack together more freely than in the imperative world, and that gives easy access to O(2^n). Since exponential blowup is essentially intractable, a great deal of work goes into trying to bring that back down, but you're always intrinsically starting behind the eight ball. It is easier to work up from O(1) operations than to write an exponential or super-exponential algorithm and then try to trim it back down to a complexity a normal programmer would consider decent.
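A hypothetical sketch of that stacking effect (the relation and function names here are made up for illustration): if each goal in a conjunctive query is resolved by a linear scan over a relation, then k conjoined goals nest into O(n^k) work, and once backtracking over choice points enters the picture the search space can grow exponentially.

```python
def edges(n):
    """A toy relation: pairs (i, i+1). Each lookup is an O(n) scan."""
    return [(i, i + 1) for i in range(n)]

def path3(rel):
    """Conjunction of three goals: edge(a,b), edge(b,c), edge(c,d).

    A naive resolution strategy nests the scans, so three goals cost
    O(n^3), and k goals cost O(n^k) -- the "stacking" of O(n) primitives.
    """
    return [(a, b, c, d)
            for (a, b) in rel
            for (b2, c) in rel if b2 == b
            for (c2, d) in rel if c2 == c]

print(len(path3(edges(10))))  # 8 three-step paths in a 10-node chain
```

Real Prolog engines use indexing and smarter join orders to avoid the worst of this, but that is exactly the "trimming back down" work described above.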
I think this is the root cause of why logic programming is not something we see a lot of. It's like a wild stallion; it may be powerful, but the effort you pour into just keeping it under control may exceed any benefit you get.
It isn't useless, of course. Amazing work has been done in the field of SAT solvers, and there's certainly a niche for it. Problems that are intrinsically higher-order polynomial or outright exponential are what they are, and if you're going to be stuck in that world, logic programming may offer you a much better toolset than conventional programming on its own. But there was a hope long ago, in the Prolog era, that it could become part of the normal toolkit of general-purpose programming, and I don't think that will ever happen, for this reason.
This is a bit tangential to the article, it's just what I happened to read that finally crystallized this in my mind.
The problem with logic programming is the 'logic' part.
Modern imperative and functional programming languages are constructive.
Logic programming is not, and the expressive power varies with the exact logic being used.
Elementary logic gives you the following intuition. Propositional logic -- easy. First order logic -- easy (with caveats). And (some kinds of) second order logic -- only easy if you get lucky with the problem you are trying to solve.
For logic based programming languages and systems, both the language implementer and the programmer have to be careful about supporting and using language constructs which boil down to tractable computation.
This is much more difficult than it seems.
For example, when using SMT solvers you learn quickly that multiplication and division by constants are very fast, while the same operations on variables can be intractable: reasoning about x * C is easy, while x * y is often (but not always) going to hang, because linear integer arithmetic is decidable and general nonlinear integer arithmetic is not.
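This isn't how an SMT solver actually works internally, but a toy sketch of the asymmetry: with a constant coefficient there is an algebraic shortcut (one division), while with two unknowns there is no general shortcut and a solver effectively falls back to search over the space of assignments.

```python
def solve_linear(c, k):
    """Solve x * c == k for a known constant c: one division, O(1)."""
    return k // c if c != 0 and k % c == 0 else None

def solve_nonlinear(k, bound):
    """Solve x * y == k with both x and y unknown: falls back to search.

    No algebraic shortcut exists for general nonlinear integer
    constraints, so this enumerates candidates -- O(bound) here, and
    exponential in the bit-width for the analogous SMT problem.
    """
    for x in range(1, bound):
        if k % x == 0 and k // x < bound:
            return (x, k // x)
    return None

print(solve_linear(7, 42))       # 6
print(solve_nonlinear(42, 100))  # (1, 42)
```

The gap shows up in practice as Z3 answering x * C queries instantly while superficially similar x * y queries time out.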
It also doesn't seem particularly interesting, because it doesn't allow the programs to get input. Obviously that makes things much more difficult with respect to proving program equivalence.
In most cases, you slice the program to isolate the pure computation, then optimize just that.
Most traditional compiler optimizations stick to that as well; the exceptions to this rule are carefully engineered.
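A minimal sketch of that slicing idea, using Python's `ast` module as a stand-in for a compiler IR (this is my illustration, not how Souper or LLVM is implemented): only subexpressions whose operands are all constants, and therefore pure and input-free, are rewritten; anything touching a variable is left alone.

```python
import ast

class FoldConstants(ast.NodeTransformer):
    """Fold pure constant arithmetic, leaving everything else intact."""

    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children bottom-up first
        if (isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)
                and isinstance(node.op, (ast.Add, ast.Mult))):
            op = {ast.Add: lambda a, b: a + b,
                  ast.Mult: lambda a, b: a * b}[type(node.op)]
            return ast.copy_location(
                ast.Constant(op(node.left.value, node.right.value)), node)
        return node  # involves a variable: outside the pure slice

tree = FoldConstants().visit(ast.parse("y = x + (2 + 3) * 4"))
print(ast.unparse(tree))  # y = x + 20
```

The `(2 + 3) * 4` slice is pure, so it folds to `20`; `x` may come from input, so the surrounding addition survives untouched.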
Related: I work on Souper (https://github.com/google/souper).
Feel free to reach out if anyone has questions!
Thanks for sharing! Really exciting stuff!