;; Premium customers with orders over 100 get free shipping;
;; otherwise the order passes through unchanged.
(defn apply-shipping-rules [order]
  (cond-> order
    (and (= :premium (:customer-type order))
         (> (:order-total order) 100))
    (assoc :shipping-cost 0)))
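For reference, here's roughly what that looks like at the REPL (the order maps are made up for illustration):

(apply-shipping-rules {:customer-type :premium :order-total 150})
;; => {:customer-type :premium, :order-total 150, :shipping-cost 0}

(apply-shipping-rules {:customer-type :standard :order-total 150})
;; => {:customer-type :standard, :order-total 150}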
Since there is an equivalence between types and propositions (the Curry-Howard correspondence), the Clojure program also models a "type", in the sense that the valid inputs to the program are constrained by what the program can successfully process. One ought, in principle, to be able to transform between the two, and generate (parts of) one from the other.
We do a limited form of this when we do type inference. There are also (more limited) cases where we can generate code from type signatures.
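As a rough sketch of that direction in Clojure itself, the implicit constraints on apply-shipping-rules could be written down with clojure.spec (the spec names below are my own invention, not part of the original code):

(require '[clojure.spec.alpha :as s])

(s/def ::customer-type #{:premium :standard}) ; assumed set of customer types
(s/def ::order-total number?)
(s/def ::order (s/keys :req-un [::customer-type ::order-total]))

(s/fdef apply-shipping-rules
  :args (s/cat :order ::order)
  :ret  ::order)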
I think the OP's point is that the Clojure code, which lays the system out as a process with a series of decision points, is closer to the mental model of the domain expert than the Haskell code, which models it as a set of types. This seems plausible to me, although it's obviously subjective (not all domain experts are alike!).
The secondary point is that the Clojure system may be more malleable: if you want to handle a new state, you just add code directly at the appropriate points in the process (see the sketch below). The friction here is genuinely lower. But this gives up some safety in cases where you have failed to grasp how the system works; a type system is more likely to complain if your change introduces an inconsistency.

The cost of that safety is that you have two representations of how the system works, the types and the logic, and you can't experiment with different logic in a REPL-like environment until you have fully satisfied the type-checker. A smarter system might allow the type-checker to be overridden in such cases (on a per-REPL-session basis, rather than by further editing the code), but I'm not aware of any systems that actually do this.
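To make that concrete: extending the earlier cond-> is just one more test/form pair. The :oversized? rule below is invented purely for illustration:

(defn apply-shipping-rules [order]
  (cond-> order
    (and (= :premium (:customer-type order))
         (> (:order-total order) 100))
    (assoc :shipping-cost 0)

    ;; hypothetical new rule: oversized items carry a surcharge
    (:oversized? order)
    (update :shipping-cost (fnil + 0) 25)))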
The intuition is simple: LLMs are a force multiplier for the coding part, which means that they will produce code faster than I will alone. But that means that they'll also produce _bad_ code faster than I will alone (where by "bad" I mean "code which doesn't really solve the problem, due to some fundamental misunderstanding").
Previously I would often figure a problem out by trying to code a solution, noticing that my approach doesn't work or has unacceptable edge-cases, and then changing tack. I find it harder to do this with an LLM, because it's able to produce large volumes of code faster than I'm able to notice subtle problems, and by the time I notice them there's a sufficiently large amount of code that the LLM struggles to fix it.
Instead, now I have to do a lot more "hammock time" thinking. I have to be able to give the LLM an explanation of the system's requirements that is sufficiently detailed and robust that I can be confident the resulting code will make sense. It's possible that some of my coding skills might atrophy - in a language like Rust, with lots of syntactic features, I might start to forget the precise set of incantations necessary to do something. But, correspondingly, I have to get better at reasoning about the system at a slightly higher level of abstraction, otherwise I'm unable to supervise the LLM effectively.
I generally assumed ML was compute-constrained, not code-monkey-constrained; i.e., I'd probably tell my top N employees they had more room for experiments rather than hiring employee N + 1, at some critical value N > 100 and N << 10000.
LLMs are still somewhat experimental, with various parts of the stack being new-ish, and therefore relatively un-optimised compared to where they could be. Let's say we took 10% of the training compute budget, and spent it on an army of AI coders whose job is to make the training process 12% more efficient. Could they do it? Given the relatively immature state of the stack, it sounds plausible to me (but it would depend a lot on having the right infrastructure and practices to make this work, and those things are also immature).
The bull case assumes there's some order-of-magnitude speedup available, or possibly several, but that finding it requires a lot of experimentation of the kind that tireless AI engineers might excel at. The bear case is that efficiency gains will be small, hard-earned, or specific to some rapidly obsolescing architecture - or that they will look good only until the low-hanging fruit is gone, at which point progress slows again.
- "everything is an expression" is a nicer solution for conditional assignments and returns than massive ternary expressions
- the pipe operator feels familiar from Elixir and is somewhat similar to Clojure's threading macros (see the sketch after this list).
- being able to use the spread operator in the middle of an array? Sure, I guess.
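For anyone unfamiliar with the Clojure comparison, here's what a -> pipeline looks like (using clojure.string functions so it runs as-is):

;; Nested calls read inside-out:
(clojure.string/upper-case (clojure.string/trim "  hello  "))

;; The -> macro threads each result in as the next first argument,
;; which reads left-to-right, much like a pipe operator:
(-> "  hello  "
    clojure.string/trim
    clojure.string/upper-case)
;; => "HELLO"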
I want to like the pattern matching proposal, but the syntax looks slightly too minimal.
The other proposals are either neutral or bad, in my opinion. Custom infix operators? Unbraced object literals? I'm not sure anyone actually has a problem that these things solve, other than wanting to minimize the number of characters in their source code.
Still, I'm glad that this exists, because allowing people to play with these things and learn from them is a good way to figure out which proposals are worth pursuing. I just won't be using it myself.