Undecidability is not a sign that the foundation has cracks (i.e. that it isn't well founded), but it might be a sign that you put the foundation on wheels so you can drive it at highway speeds, with all the dangers that entails.
It's not a trade everyone would make, but the languages I prefer do.
Julia does not reassemble amino acids like Earth life does. But it does absorb disentropy from its prey. The extreme specificity of an interstellar spacecraft, its contents and occupants, is absorbed by Julia, so that it can move, grow, and attract more prey.
https://langdev.stackexchange.com/a/2072
My interpretation:
Decidability is of academic interest, and can hint at whether something is feasible.
But there are (1) ways of sidestepping undecidability, e.g. a valid C++/Rust program is one for which the type checker terminates within x steps without overflowing the stack (a sketch of this follows below),
and (2) things which are decidable but physically impossible to calculate, e.g. the last digit of the 10^(10^10)-th prime.
What matters is being able to reject all incorrect programs and accept most human-written valid programs.
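A minimal sketch of point (1), assuming nothing beyond Rust's real `recursion_limit` attribute (the `Wrap`/`Depth` names and the limit value are just illustrative): the compiler doesn't try to prove that trait resolution terminates, it simply rejects anything that recurses past a fixed depth.

```rust
// `recursion_limit` is a real attribute; the types below are made up.
// Rust doesn't prove termination of trait resolution, it caps the depth.
#![recursion_limit = "64"]

#[allow(dead_code)]
struct Wrap<T>(T);

trait Depth {
    const VALUE: u32;
}

impl Depth for () {
    const VALUE: u32 = 0;
}

// Each layer of `Wrap` adds one step of trait resolution; nest deeply
// enough and the compiler hits the limit and rejects the program, even
// though a human can see the answer exists.
impl<T: Depth> Depth for Wrap<T> {
    const VALUE: u32 = T::VALUE + 1;
}

fn main() {
    println!("{}", <Wrap<Wrap<Wrap<()>>> as Depth>::VALUE); // prints 3
}
```

Raising the limit admits more programs without ever making the general question decidable.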
Hoping they do it for April 1st one year.
What was your algorithm? Compute a bitset for every word, then for each word with 7 unique letters, check against every other word whether it has a subset of those letters? Surely there's a better-than-O(n^2) way.
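For concreteness, here's a rough sketch of the quadratic baseline as I read it (the word list is a placeholder, not your data): each word becomes a 26-bit mask, and word B can be spelled from word A's letters iff B's mask is a subset of A's.

```rust
// Quadratic baseline sketch: 26-bit masks plus a subset test.
fn letter_mask(word: &str) -> u32 {
    word.bytes()
        .filter(|b| b.is_ascii_lowercase())
        .fold(0u32, |mask, b| mask | (1 << (b - b'a')))
}

fn main() {
    let words = ["jukebox", "joke", "box", "zebra"];
    let masks: Vec<u32> = words.iter().map(|&w| letter_mask(w)).collect();

    for (i, &key) in masks.iter().enumerate() {
        // Treat words with exactly 7 distinct letters as the "key" words.
        if key.count_ones() == 7 {
            for (j, &other) in masks.iter().enumerate() {
                // Subset test: no letter of `other` falls outside `key`.
                if i != j && (other & !key) == 0 {
                    println!("{} fits inside {}", words[j], words[i]);
                }
            }
        }
    }
}
```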
~I watched a video series where someone did the techniques in isolation in reverse order and I thought it was a good idea. That way you're always learning on perfect ingredients. i.e. buy chips and learn to temper, then buy roasted nibs and learn to refine, then buy dried and learn to roast, then buy pods and learn to ferment and dry.~
A site for filtering word lists and solving word puzzles
Anagrams, regular expression search, and a crossword helper, as well as several NYT word games.
We can't know for every possible program whether it halts, but the complexity of programs whose halting behaviour we can determine keeps increasing as tools and techniques get better.
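As a toy contrast (my example, not the comment's): termination of the first loop below is trivial for today's tools, while whether the second halts for every starting value is the open Collatz conjecture.

```rust
fn counts_down(mut n: u64) {
    while n > 0 {
        n -= 1; // strictly decreasing non-negative counter: clearly halts
    }
}

fn collatz(mut n: u64) {
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
    }
}

fn main() {
    counts_down(10);
    collatz(27); // halts for 27, but nobody has proved it halts for all inputs
}
```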
To clear up a few misconceptions:
- Aristotle uses modern AI techniques heavily, including language modeling.
- Aristotle can be guided by an informal (English) proof. If the proof is correct, Aristotle has a good chance at translating it into Lean (which is a strong vote of confidence that your English proof is solid). I believe that's what happened here.
- Once a proof is formalized into Lean (assuming you have formalized the statement correctly), there is no doubt that the proof is correct. This is the core of our approach: you can do a lot of (AI-driven) search, and once you find the answer you are certain it's correct no matter how complex the solution is.
Happy to answer any questions!
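To make the "no doubt once it's in Lean" point concrete, here is a toy example of my own (not Aristotle's output): once the statement is written in Lean and a proof term typechecks, the kernel certifies it, regardless of how the proof was found, by hand or by AI-driven search.

```lean
-- The statement and the proof term are both checked by the Lean kernel.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```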
My limited understanding of Rust is that it applies a fixed set of rules to guarantee memory safety. The rules are somewhat simple and limiting, for ease of understanding and implementation, but also because of undecidability.
Programmers run into situations where they know that their code won't cause memory errors, but it doesn't follow the rules. Wouldn't it be cool if something like Aristotle were integrated into the compiler? Any code for which a proof of correctness could be written would pass/compile, without having to add more and more rules.
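A small illustration of the kind of code I mean (my own example, not a claim about any particular codebase): a human can argue the rejected version is memory-safe because the two mutable borrows touch disjoint elements, yet Rust's fixed rules turn it down; the accepted workaround leans on an `unsafe` implementation plus a human-written safety argument, which is roughly the machine-checkable "proof" I'm wishing for.

```rust
// A human can argue this commented-out version is memory-safe (the two
// mutable borrows touch disjoint elements), but the borrow checker rejects it:
//
//     let a = &mut v[0];
//     let b = &mut v[3]; // error[E0499]: cannot borrow `v` as mutable more than once
//     std::mem::swap(a, b);
//
// The sanctioned workaround, `split_at_mut`, is itself implemented with
// `unsafe` plus a human-written safety argument.
fn main() {
    let mut v = vec![1, 2, 3, 4];
    let (left, right) = v.split_at_mut(2);
    std::mem::swap(&mut left[0], &mut right[1]);
    println!("{:?}", v); // [4, 2, 3, 1]
}
```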
Nitpick: typo in the dimensions for A3.