Caveat: Coercions exist in Lean, so subtypes can in fact be used like the supertype, much as in other languages. This works by inserting an implicit casting operation wherever such a usage is encountered.
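A minimal sketch of that (the names `pos` and `double` are illustrative, not from the thread): the subtype value is accepted where the base type is expected, with the coercion to `Subtype.val` inserted behind the scenes.

```lean
-- A subtype of Nat: naturals carrying a proof of positivity.
def pos : {n : Nat // n > 0} := ⟨3, by decide⟩

def double (n : Nat) : Nat := n * 2

-- `pos` is not literally a Nat, but Lean coerces it (↑pos, i.e. pos.val):
#eval double pos
```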
I'm not quite following. According to the OP and the docs you linked, a subtype is defined by a base type and a predicate. In other words: You can view it as a subset of the set of elements of the base type. That's pretty much the standard definition of a subtype.
Object-oriented programming languages are not that different: the types induced by classes can easily be viewed as sets. A child class is a specialized version of its parent class, hence a subtype/subset thereof, if you define all the sets by declaring `instanceof` to be their predicate function.
B. Meyer made an attempt to formulate many concepts in programming using simple set theory. It might help in discussions like this. I say "might" since I'm not mathematically inclined enough to know for sure.
> You can view it as a subset of the set of elements of the base type.
Technically speaking, the elements of the supertype are all distinct from the elements of the subtype, and vice versa. Neither is a subset of the other, which is why it's improper to consider one a subtype of the other.
This is proof by exhaustion: the "proof" just computes the entire memo table for any n and compares the values in the table with the corresponding return values of the recursive definition. You could write this same proof in absolutely any language that supports recursion (or even without recursion, if you transform it to the bottom-up formulation).
In Lean you don't actually have to run this for every n to verify that the algorithm is correct for every n. Correctness is proved at type-checking time, without actually running the algorithm. That's something that you can't do in a normal programming language.
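As a small illustration (using a library lemma, `Nat.zero_add`, rather than the article's algorithm): the statement below covers every n, yet it is accepted purely by type-checking the finite proof term; nothing ever iterates over the naturals.

```lean
-- Holds for all n, verified at type-checking time without running anything.
example : ∀ n : Nat, 0 + n = n := Nat.zero_add
```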
> the "proof" just computes the entire memo table for any n
No, this is what would happen _if you ran the proof_, but proofs are not meant to be run in the first place! The usual goal is proving their correctness, and for that it's enough for them to _typecheck_.
That can't possibly be the case. What was concluded is that the statement holds for every n. Checking that exhaustively for _every_ n would require infinite time. Either their conclusion is incorrect, or your description of the proof is incorrect.
He doesn't mention it, but this is a form of proof by induction. (As you'd expect, really, for a universal statement.)
Induction is often taught as something you do with natural numbers. But it's actually something you do with sets that are defined inductively. Any time you have a set that is defined like so:
1. x is in the set.
2. for all y in the set, f(y) is in the set.
(where x and f are constants), you can apply induction. The base case is that you show some property is true of x. The inductive step is that you show that when the property is true of y, it is necessarily true of f(y).
If you have multiple elements that you guarantee will be in the set, each of them must be dealt with as a base case, and if you have multiple rules generating new elements from existing ones, each of them must be dealt with as another inductive step.
For the case of the natural numbers, x is 0, and f is the successor function.
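In Lean this correspondence is direct: `Nat` is declared with exactly those two rules, and the `induction` tactic produces one goal per rule. A minimal sketch (this particular fact is `Nat.zero_add` in the library):

```lean
example (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                              -- base case: 0 + 0 = 0 by computation
  | succ k ih => exact congrArg Nat.succ ih  -- step: lift `ih : 0 + k = k` through succ
```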
If you wanted to apply this model to the Fibonacci sequence, you could say that the tuple (0, 1, 1) is in the set [representing the idea "F_1 is 1"] and provide the generator f((a, b, c)) = (b, a+b, c+1). Then since (0, 1, 1) is in the set, so is (1, 1, 2) [or "F_2 is 1"], and then (1, 2, 3) ["F_3 is 2"], and so on.
(Or you could say that the two tuples (1, 1) and (2, 1) are both in the set, and provide the generator f( (i, x), (i+1, y) ) = (i+2, x+y). Now your elements are simpler, your generator is more complex, and your set has exactly the same structure as before.)
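In Lean, a set defined this way is exactly an inductive predicate: one constructor per base element, one per generator. A sketch of the first formulation (`FibSet` is an illustrative name):

```lean
-- Membership of (a, b, c) is meant to say: F_(c-1) = a and F_c = b.
inductive FibSet : Nat × Nat × Nat → Prop
  -- x is in the set: the tuple (0, 1, 1), i.e. "F_1 is 1".
  | base : FibSet (0, 1, 1)
  -- for all y in the set, f(y) is in the set.
  | step : ∀ a b c, FibSet (a, b, c) → FibSet (b, a + b, c + 1)

-- (1, 1, 2) ("F_2 is 1") is in the set because (0, 1, 1) is:
example : FibSet (1, 1, 2) := FibSet.step 0 1 1 FibSet.base
```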
The approach taken by the author's "improved solution" is to define a set consisting of the elements of the memoization table, with the generator being the function chain that adds elements to the table. He annotates the type of the elements to note that they must be correct (this annotation is administrative, just making it easy to describe what it is that we want to prove), and then does a proof over the addition operation that this correctness is preserved (the inductive step!).
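A hedged sketch of that "administrative annotation" (the names `spec`, `Entry`, and `extend` are mine, not the author's, and Fibonacci stands in for the article's problem): each table entry's type carries the statement that it matches the naive recursive spec, and adding an entry is exactly the inductive step.

```lean
-- The naive recursive definition acts as the specification.
def spec : Nat → Nat
  | 0 => 0
  | 1 => 1
  | n + 2 => spec n + spec (n + 1)

-- An entry for index i is a value bundled with a proof it equals spec i.
def Entry (i : Nat) : Type := { v : Nat // v = spec i }

-- Extending the table from the two previous entries: the proof obligation
-- is precisely "correctness is preserved by the addition operation".
def extend (n : Nat) (a : Entry n) (b : Entry (n + 1)) : Entry (n + 2) :=
  ⟨a.val + b.val, by simp [spec, a.property, b.property]⟩
```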
Doing the proof inside the algorithm (i.e. doing inline induction over the algorithm recursion structure) has the advantage that the branching structure doesn't have to be duplicated as in an external proof like the one I did.
In my proof I didn't struggle so much with induction, but much more with basic Lean stuff, such as not getting lost in the number of variables, dealing with r.fst/r.snd vs r = (fst, snd), and the confusing difference between .get? k and [k]?.
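For what it's worth, the r.fst/r.snd vs (fst, snd) friction can usually be resolved by destructuring; a small sketch:

```lean
-- Structure eta: a pair is definitionally the pair of its projections.
example (r : Nat × Nat) : r = (r.fst, r.snd) := rfl

-- Destructuring with `obtain` replaces r by explicit components in the goal,
-- so later lemmas can match syntactically.
example (r : Nat × Nat) : r.fst + r.snd = r.snd + r.fst := by
  obtain ⟨a, b⟩ := r
  exact Nat.add_comm a b
```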
Nice job. My attempt at the initial strong induction proof was a long time ago, so I don't remember the details. It definitely followed a similar structure to yours (but this was before `omega`, I'm pretty sure). Can't quite remember where I got stuck, but your proof is good. Thanks!
https://lean-lang.org/doc/reference/latest/Basic-Types/Subty...
A crucial difference between type theory (as it's known in Lean) and set theory is that an inhabitant/element has exactly one type.
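Concretely (a minimal sketch): the Nat 3 and the element of the subtype built from 3 are different terms of different types; the latter is a value bundled with a proof.

```lean
-- Two distinct terms: a bare Nat, and a value-proof pair in the subtype.
#check (3 : Nat)
#check (⟨3, by decide⟩ : {n : Nat // n > 0})
```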
https://bertrandmeyer.com/2015/07/06/new-paper-theory-of-pro...
> For an arbitrary n, compute the table full of values and their proofs, and just pull out the nth proof
if you thought harder about it you'd realize what you're suggesting is impossible
1. build a table tab[n]
2. check that for every i, tab[i] == maxDollars_spec[i]
If you take the latter approach I proposed (bottom-up), there is nothing whose totality needs checking.
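A sketch of the bottom-up formulation being described (Fibonacci-style names stand in for maxDollars, whose definition isn't shown here): running it and comparing entries against the spec only ever covers the finitely many indices you actually compute.

```lean
-- Build the table bottom-up; each entry depends only on earlier entries.
def table (n : Nat) : Array Nat := Id.run do
  let mut tab : Array Nat := #[0, 1]
  for _ in [0:n] do
    tab := tab.push (tab[tab.size - 2]! + tab[tab.size - 1]!)
  return tab

-- Evaluation checks a finite prefix, never the unbounded ∀ n claim:
#eval table 10
```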
Well, you at least need dependent types just to state the theorem, which eliminates nearly all other languages.
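Even just writing the statement down is a dependent type: a proposition quantifying over all runtime values n. A sketch with placeholder declarations (the real definitions are in the article; `axiom` here just stands in for them):

```lean
-- Placeholders standing in for the article's two definitions.
axiom maxDollars      : Nat → Nat
axiom maxDollars_spec : Nat → Nat

-- The statement is a Π-type over n: for each value n, a proposition about it.
#check ∀ n : Nat, maxDollars n = maxDollars_spec n
```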
Tuples don't really indicate what I can expect from the members.
https://live.lean-lang.org/#codez=JYWwDg9gTgLgBAZRgEwHQBECGN...