The author has made a CRDT. He denies that his algorithm constitutes a CRDT. It's a straightforward merge, not a "fancy algorithm".
What specific aspect of a CRDT does this solution not satisfy? The C? The R? The D? The T?
Still, every algorithm that's actually labeled a CRDT shares a magical property: if my replica has some changes, and your replica has some changes, our replicas can share their changes with each other and each converge closer to the final state of the document, even if other people have been editing at the same time, and different subsets of their changes have been shared with you or me. That is, you can apply people's changes in any order and still get the same result. I don't think it's useful to call anything without that property a CRDT.
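A minimal sketch of that property, assuming the textbook-simplest CRDT, a grow-only set (names and types here are mine, not the author's):

```rust
use std::collections::BTreeSet;

// Grow-only set (G-Set): the simplest state-based CRDT.
#[derive(Clone, Debug, PartialEq)]
struct GSet {
    items: BTreeSet<String>,
}

impl GSet {
    fn new() -> Self {
        GSet { items: BTreeSet::new() }
    }

    fn add(&mut self, item: &str) {
        self.items.insert(item.to_string());
    }

    // Merge is set union: commutative, associative, and idempotent.
    fn merge(&mut self, other: &GSet) {
        self.items.extend(other.items.iter().cloned());
    }
}

fn main() {
    let mut mine = GSet::new();
    let mut yours = GSet::new();
    mine.add("my edit");
    yours.add("your edit");

    // Exchange changes in either order...
    let mut a = mine.clone();
    a.merge(&yours);
    let mut b = yours.clone();
    b.merge(&mine);

    // ...and both replicas converge to the same state.
    assert_eq!(a, b);
}
```

Because merge is a union, it doesn't matter which subsets of changes reached which replica in which order; everyone converges.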
The expression problem is about being able to extend both data types (new cases) and operations (new functions) without modifying existing code while preserving static type safety.
C#'s extension methods can't be virtual, so you can't use them to add new operations that can be dynamically dispatched. They're just syntactic sugar for static methods, which in non-OOP languages you'd get by declaring a free function.
Rust traits let you add new types (just impl the trait), but they don't let you add new functions, so they don't do anything to solve the problem.
Each data type is a `struct`. Each operation is a trait. You `impl` each trait on each struct.
This works even if you're using a library that has declared `struct A` and `struct B` and `trait F` and `trait G`, and you want to add both a `struct C` and a `trait H`, and fill out the whole 3x3 grid without modifying the library.
The library says:

```rust
struct A { ... }
struct B { ... }

trait F { ... }
impl F for A { ... }
impl F for B { ... }

trait G { ... }
impl G for A { ... }
impl G for B { ... }

fn some_function<T: F + G>(data: T) { ... }
```
Your code says:

```rust
use library::{A, B, F, G};

struct C { ... }
impl F for C { ... }
impl G for C { ... }

trait H { ... }
impl H for A { ... }
impl H for B { ... }
impl H for C { ... }

fn another_function<T: F + G + H>(data: T) { ... }
```
Now `library::some_function()` can be called with an A, B, or C, even though C was defined by the user of the library. And `another_function()` can be called with an A, B, or C, even though A was defined by the library.
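To check that this grid really does compile, here's the same layout compressed into one file (the trivial method bodies are invented for illustration; everything else matches the names above):

```rust
mod library {
    pub struct A;
    pub struct B;

    pub trait F { fn f(&self) -> &'static str; }
    impl F for A { fn f(&self) -> &'static str { "A.f" } }
    impl F for B { fn f(&self) -> &'static str { "B.f" } }

    pub trait G { fn g(&self) -> &'static str; }
    impl G for A { fn g(&self) -> &'static str { "A.g" } }
    impl G for B { fn g(&self) -> &'static str { "B.g" } }

    pub fn some_function<T: F + G>(data: T) {
        println!("{} {}", data.f(), data.g());
    }
}

use library::{A, B, F, G};

// New data type: just impl the library's traits for it.
struct C;
impl F for C { fn f(&self) -> &'static str { "C.f" } }
impl G for C { fn g(&self) -> &'static str { "C.g" } }

// New operation: a local trait, implemented for every type in the grid.
trait H { fn h(&self) -> &'static str; }
impl H for A { fn h(&self) -> &'static str { "A.h" } }
impl H for B { fn h(&self) -> &'static str { "B.h" } }
impl H for C { fn h(&self) -> &'static str { "C.h" } }

fn another_function<T: F + G + H>(data: T) {
    println!("{} {} {}", data.f(), data.g(), data.h());
}

fn main() {
    library::some_function(C); // library function, user-defined type
    another_function(A);       // user function, library-defined type
}
```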
If you want to escape the limits of computability, you have to assume that the physical Church–Turing thesis is false: that reality allows mechanisms that cannot be modeled formally or simulated on any currently plausible computer.
Something that cannot be proven within which formal system? Every true statement can be proven by some formal system. No formal system can prove all true statements.
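In symbols, the quantifier order is the whole point (a sketch, with φ ranging over true arithmetic sentences and S over consistent, effectively axiomatized formal systems):

$$\forall \varphi\ \exists S\ (S \vdash \varphi) \qquad\text{but}\qquad \neg\,\exists S\ \forall \varphi\ (S \vdash \varphi)$$

The first holds almost trivially (take φ itself as an axiom); the second is Gödel's first incompleteness theorem.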
(This is all about Gödel incompleteness. Turing incomputability does apply, though; see the sibling comments! It's important to keep the two separate; they're saying different things.)
LLMs, like any model of computation, are subject to the limits of Turing Machines, so they cannot for example solve the halting problem.
Yes, that's correct, it's provably impossible to construct an LLM that, if you ask it "Will this program halt? <program>", answers correctly for all programs. Likewise humans, under the assumption that you can accurately simulate physics with a computer. (Note that QM isn't an obstacle, as you can emulate QM just fine with a classical computer with a mere exponential slowdown.)
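For anyone who wants the shape of that proof, here's the classic diagonal argument as a Rust sketch; `halts` is a hypothetical oracle assumed for contradiction, and all names here are mine:

```rust
// Hypothetical oracle, assumed for contradiction: returns true iff
// `program` halts when run on `input`. No such total function exists.
fn halts(_program: &str, _input: &str) -> bool {
    unreachable!("cannot actually be implemented")
}

// Do the opposite of whatever the oracle predicts about this program
// run on its own source code.
fn diagonal(program: &str) {
    if halts(program, program) {
        loop {} // predicted to halt, so loop forever instead
    }
    // predicted to loop forever, so halt immediately
}
```

Run `diagonal` on its own source and the oracle is wrong either way, so no total `halts` can exist, whether it's implemented as ordinary code or as an LLM.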
I would add context for 2025 about the fundamental limits this places on what (modern) AI is in principle capable of. Perhaps some "non-computable" features would need to be hard-coded into AI, so that it could at least better approximate the kinds of incomputable problems we might ask it to solve?
Also, a search of the text for "conscious" does not yield anything, which is probably a good thing. This text also reminds me of questions like "What does it mean to be conscious?", "How are human brains able to reason about (things like) incomputability, which in some sense computers as we currently understand them could never do?", and "What, specifically, beyond pure mathematics does a brain have or need in order to be conscious enough to reason about such things?"
Someone's going to think that, since you can formally model computer programs (and thus formally model how an LLM runs), Gödel's Incompleteness Theorem applies to computer programs and to LLMs. But that's irrelevant; being model-able doesn't make something a formal system!
"Formal system" has a very technical meaning: it has a set of consistent axioms, and the ability to enumerate formal proofs using those axioms. For example, Zermelo-Fraenkel set theory is a formal system [1]. It has nine formal axioms, which you can see listed in its Wikipedia article. Utterly different kind of thing than humans or LLMs. Comparing an LLM to ZFC is like comparing a particular watermelon to the number 4.
[1] https://en.wikipedia.org/wiki/Zermelo%E2%80%93Fraenkel_set_t...
Beginners to sorting are often still somewhat struggling with for loops (especially the three-part C kind), struggling with indexing an array, struggling with swaps. It's overwhelming.
This is kind of like pseudocode for a simple sort (sketched below): it has the same rough outline, but no tricky bits. It lets them get their brains around the basic algorithm, and only after that do you trace through an execution on the board to show them that it wastes a lot of time moving things around.
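Here's that sort as a Rust sketch (my transcription of the algorithm under discussion; the function name is mine):

```rust
// The whole algorithm: two unconditional full-range loops and one swap.
// No n-1 bounds, no shrinking inner range, nothing to trip beginners up.
fn simple_sort(a: &mut [i32]) {
    let n = a.len();
    for i in 0..n {
        for j in 0..n {
            // Counterintuitively, `<` (not `>`) yields ascending order here.
            if a[i] < a[j] {
                a.swap(i, j);
            }
        }
    }
}
```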
Then you can show them insertion sort as an optimization, and from that move on to fancier ones.
Why is there any difference? Does the entire difference come from the iteration where i = 1?
> Bubble Sort always does exactly n(n-1)/2 comparisons
Bubble sort can do fewer than this. When you're bubbling a value down into a sorted region and you find a value lower than the one you're carrying, you can assume that all values below that one are also lower, and terminate the inner loop early.
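That shortcut cuts the inner loop short; a related, more common textbook version stops the whole sort once a pass makes no swaps. A sketch of the latter (function name mine):

```rust
// Bubble sort with the standard early exit: if a full pass performs
// no swaps, the slice is already sorted and we can stop.
fn bubble_sort(a: &mut [i32]) {
    let n = a.len();
    for pass in 0..n {
        let mut swapped = false;
        for j in 0..n - 1 - pass {
            if a[j] > a[j + 1] {
                a.swap(j, j + 1);
                swapped = true;
            }
        }
        if !swapped {
            return; // no swaps this pass, so nothing left to do
        }
    }
}
```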
Sometimes it's more swaps, and sometimes it's fewer. The vague pattern is that Can't Believe Sort does slightly fewer swaps on larger lists, but that could also just be noise.
Why do you expect them to have exactly the same number of swaps?
> Bubble sort can do fewer than this.
Oh sorry yeah, I implemented bubble sort without any sort of early return, for a more direct comparison.