> It's not so amazing, the λ calculus is equivalent to a Turing machine.
It seems meta3 is a very basic implementation of the lambda calculus. But the equivalence of Turing machines and the lambda calculus is proven. The Church-Turing thesis claims a bit more: my understanding is that it conjectures there is nothing computable that can't be computed by a Turing machine (or, equivalently, the lambda calculus), so no model of computation more powerful than the lambda calculus exists. I'm not sure this article shows that.
If the combined theories are to serve as the definition of computability, that should hold for each theory in isolation. If a Turing machine couldn't solve every computable problem, why bother defining a Turing machine at all? What is the purpose of the definition if not to show that any computable problem can be solved using the device? The same goes for the lambda calculus.
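That the lambda calculus really does compute is easy to demonstrate concretely. Here's a minimal sketch of Church numerals in Python (the names `zero`, `succ`, `add`, `to_int` are mine, just for illustration), where a number n is represented as "apply f n times":

```python
# Church numerals: the number n is the function that applies f n times to x.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by applying "plus one" n times to 0.
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Everything on the right-hand sides is pure single-argument functions, which is all the lambda calculus gives you, yet arithmetic falls out.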
Basically, why is consciousness always attached to the same physical body? Why can't I ever wake up in someone else's consciousness? How does "my" consciousness know to come back into "my" brain whenever I lose it (through sleep or injury, etc).
The answer that I lean toward is that there is no such thing as you or me. There is only one consciousness and it is merely being filtered through each living (or perhaps nonliving) being in containerized modules.
So, to "me", it feels like I'm experiencing my own consciousness but in reality everyone is the same "me". You are me, I am you, etc, we are simply filtering consciousness through different atomic arrangements.
For example, let's say you read about a criminal who does a terrible thing and you can't imagine yourself ever doing that. But in reality, it is the same "you", only that your consciousness has been filtered through a different arrangement of atoms that has caused that "module" to act that way. It is the same YOU who committed that crime; all it took was a different filtering device to make you act that way.
Anyway, that's kind of what I'm thinking. I'm sure it's not an original thought, but I don't know what kind of philosophy this is called other than "one consciousness".
For example, two consumers pushing results to a single result queue is a race condition, because the order in which the results arrive depends on timing and is non-deterministic. That's a pretty normal thing to have.
Many languages people think of as safe and high-level are extremely racy. Erlang is my go-to example: two processes sending messages to a third is a race. Which message do you get first? It's non-deterministic. But it doesn't have to be a problem.
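A minimal sketch of the benign-race case in Python (the names `consumer`, `results` are mine): two threads push to one shared queue, the set of results is deterministic, their order is not.

```python
import threading
import queue

results = queue.Queue()  # thread-safe shared result queue

def consumer(name, n):
    # Each consumer pushes its own results to the shared queue.
    for i in range(n):
        results.put((name, i))

t1 = threading.Thread(target=consumer, args=("a", 3))
t2 = threading.Thread(target=consumer, args=("b", 3))
t1.start(); t2.start()
t1.join(); t2.join()

items = [results.get() for _ in range(6)]
# The multiset of items is always the same; the interleaving
# varies from run to run, and that's fine for this program.
print(items)
```

This is a race by the strict definition (outcome depends on scheduling), yet the program is correct because no invariant depends on the order.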
> The strongest form of the theory is linguistic determinism, which holds that language entirely determines the range of cognitive processes. The hypothesis of linguistic determinism is now generally agreed to be false.
https://news.ycombinator.com/item?id=22022599
https://news.ycombinator.com/item?id=22022842
https://news.ycombinator.com/item?id=22022896
https://news.ycombinator.com/item?id=22022836
The author of this piece was not engaging in a DRY activity, even if he thought he was. He admits as much himself, perhaps unwittingly:
> My code traded the ability to change requirements for reduced duplication, and it was not a good trade.
The acronym DRY was coined in The Pragmatic Programmer, and the authors of that book make it clear that DRY is about not repeating requirements in code. You definitely don't want to deduplicate similar-looking code if it serves multiple requirements, because then you entangle those requirements, which is what the author of this piece did.
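A toy sketch of the trap (function names and the discount rule are hypothetical, purely for illustration): two functions that happen to share a formula today, but answer to different requirements.

```python
def invoice_discount(total):
    # Requirement A: invoices over 100 get 10% off.
    return total * 0.9 if total > 100 else total

def loyalty_discount(total):
    # Requirement B: loyalty members get 10% off over 100.
    # Coincidentally the same formula as requirement A, for now.
    return total * 0.9 if total > 100 else total

# Merging these into one shared helper "deduplicates" code but not
# knowledge: the two requirements can diverge independently. If the
# loyalty threshold later changes to 50, the shared helper must be
# forked again or grow a flag threaded through every call site.
print(invoice_discount(200), loyalty_discount(80))
```

The duplication here is textual, not a repeated requirement, so collapsing it trades the ability to change one requirement for less code, exactly the bad trade the quoted author describes.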