gugagore · 2 months ago
FYI the use of "subtype" here does not, as far as I know, have much connection to the concept in class-based object oriented programming languages.

https://lean-lang.org/doc/reference/latest/Basic-Types/Subty...

A crucial difference between type theory (as it's known in Lean) and set theory is that an inhabitant/element is of exactly one type.
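
For a concrete toy example (not from the linked docs): a subtype packages a value of the base type together with a proof of the predicate, and you project the two pieces back out explicitly:

    abbrev EvenNat := { n : Nat // n % 2 = 0 }

    -- a value of the subtype is a pair: the underlying Nat plus the proof
    def four : EvenNat := ⟨4, by decide⟩

    #eval four.val        -- 4
    #check four.property  -- the proof that four.val % 2 = 0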

ImprobableTruth · 2 months ago
Caveat: Coercions exist in Lean, so subtypes actually can be used like the supertype, similar to other languages. This is done essentially by inserting an implicit cast wherever such a usage is encountered.
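
Rough sketch of the mechanism (toy names; recent Lean versions already provide such an instance for subtypes, so writing it by hand is just for illustration):

    abbrev Pos := { n : Nat // 0 < n }

    -- spelling the coercion out by hand just to show what gets inserted
    instance : Coe Pos Nat := ⟨Subtype.val⟩

    def double (n : Nat) : Nat := 2 * n

    def three : Pos := ⟨3, by decide⟩

    #eval double three  -- the cast Pos → Nat is inserted implicitly, prints 6
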
codethief · 2 months ago
I'm not quite following. According to the OP and the docs you linked, a subtype is defined by a base type and a predicate. In other words: You can view it as a subset of the set of elements of the base type. That's pretty much the standard definition of a subtype.

Object-oriented programming languages are not that different: The types induced by classes can easily be viewed as sets. A child class is a specialized version of its parent class, hence a subtype/subset thereof if you define all the sets by declaring `instanceof` to be their predicate function.

nickpsecurity · 2 months ago
B. Meyer made an attempt to formulate many concepts in programming using simple set theory. It might help in discussions like this. I say "might" since I'm not mathematically inclined enough to know for sure.

https://bertrandmeyer.com/2015/07/06/new-paper-theory-of-pro...

SkiFire13 · 2 months ago
> You can view it as a subset of the set of elements of the base type.

Technically speaking, the elements of the supertype are all distinct from the elements of the subtype and vice versa. Neither is a subset of the other, which is why it's improper to consider one a subtype of the other.
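
A toy Lean sketch of that point: an element of the subtype is a value/proof pair living in its own type, distinct from the plain Nat it wraps:

    abbrev EvenNat := { n : Nat // n % 2 = 0 }

    -- a ⟨value, proof⟩ pair, not literally the Nat 4
    def e : EvenNat := ⟨4, by decide⟩

    example : e.val = (4 : Nat) := rfl  -- only the projection is a Nat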

almostgotcaught · 2 months ago
This is proof by exhaustion: the "proof" just computes the entire memo table for any n and compares the values in the table with the corresponding return values from the recursive definition. You could write this same proof in absolutely any language that supports recursion (or not, if you transform to the bottom-up formulation).
lacker · 2 months ago
In Lean you don't actually have to run this for every n to verify that the algorithm is correct for every n. Correctness is proved at type-checking time, without actually running the algorithm. That's something that you can't do in a normal programming language.
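
Toy illustration (not the article's theorem): a claim about every n that the type checker accepts without evaluating anything per n:

    -- checked once by the elaborator; no n is ever plugged in and run
    theorem n_add_zero (n : Nat) : n + 0 = n := rfl
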
almostgotcaught · 2 months ago
it's explicitly stated in the article:

> For an arbitrary n, compute the table full of values and their proofs, and just pull out the nth proof

if you thought harder about it you'd realize what you're suggesting is impossible

SkiFire13 · 2 months ago
> the "proof" just computes the entire memo table for any n

No, this is what would happen _if you ran the proof_, but proofs are not meant to be run in the first place! The usual goal is proving their correctness, and for that it's enough for them to _typecheck_.

almostgotcaught · 2 months ago
it's explicitly stated in the article:

> For an arbitrary n, compute the table full of values and their proofs, and just pull out the nth proof

if you thought harder about it you'd realize what you're suggesting is impossible

hansvm · 2 months ago
That can't possibly be the case. The thing concluded was that for every n the statement holds. To do that exhaustively for _every_ n requires infinite time. Either their conclusion is incorrect, or your description of the proof is incorrect.
Quekid5 · 2 months ago
Not if that language doesn't actually check the totality of your proof and ensure that the base case holds.
almostgotcaught · 2 months ago
i don't know what you're saying - here is the proof that is described in the article:

1. build a table tab[n]

2. check that for every i, tab[i] == maxDollars_spec[i]

if you take the latter approach i proposed (bottom up) there is nothing to check the totality of.

duve02 · 2 months ago
> You could write this same proof in absolutely any language that supports recursion

Well, you at least need dependent types just to state the theorem, which eliminates nearly all other languages.
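
Just to show the shape of it (the definitions below are throwaway stand-ins, not the real ones from the post): the type of the theorem mentions the value n, which is exactly what dependent types buy you.

    def maxDollars (n : Nat) : Nat := n       -- placeholder body
    def maxDollars_spec (n : Nat) : Nat := n  -- placeholder body

    -- the statement quantifies over n and its type depends on n
    theorem maxDollars_correct : ∀ n, maxDollars n = maxDollars_spec n :=
      fun _ => rfl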

Tainnor · 2 months ago
A language without dependent types wouldn't even let you write down the statement of the theorem, so no.
CSMastermind · 2 months ago
I've been meaning to learn Lean and fascinated with the concept but syntax like:

    let rec helperMemo : Nat → HashMap Nat Nat → Nat × HashMap Nat Nat
is a big turnoff to me. I find it annoying to parse mentally. I can do it but I have to concentrate or it's easy to gloss over an important detail.

duve02 · 2 months ago
Hey, author here. This is actually not-great style on my part. Is the following better?

   let rec helperMemo (n : Nat) (map : HashMap Nat Nat) : Nat × HashMap Nat Nat
This is how it would usually be written. I will update the post accordingly.

westurner · 2 months ago
Does aliasing the types work?

  def MemoMap := HashMap Nat Nat
  def MemoResult := Nat × MemoMap

  let rec helperMemo : Nat → MemoMap → MemoResult

tossandthrow · 2 months ago
Record types would likely help a lot also.

Tuples don't really indicate what I can expect from the members.
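
Something like this, roughly (a sketch, assuming the HashMap type from the post is in scope):

    structure MemoResult where
      value : Nat
      memo  : HashMap Nat Nat

    -- helperMemo (n : Nat) (map : HashMap Nat Nat) : MemoResult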

runeblaze · 2 months ago
Tbh this is exactly how I felt in my algebraic geometry class. I still remember the fear I had when reading this from the blackboard:

    Defn. f: X → Y is flat ⇔ O_{Y,f(x)} → O_{X,x} flat ∀ x.
Then immediately I dropped that class. Turns out I like real analysis much more

tikhonj · 2 months ago
What makes it hard to parse? The lack of parentheses? The way HashMap Nat Nat is a bit verbose and not clear at a glance? Something else?
jeremyscanvic · 2 months ago
Really interesting trick!
thaumasiotes · 2 months ago
He doesn't mention it, but this is a form of proof by induction. (As you'd expect, really, for a universal statement.)

Induction is often taught as something you do with natural numbers. But it's actually something you do with sets that are defined inductively. Any time you have a set that is defined like so:

    1. x is in the set.

    2. for all y in the set, f(y) is in the set
(where x and f are constants), you can apply induction. The base case is that you show some property is true of x. The inductive step is that you show that when the property is true of y, it is necessarily true of f(y).

If you have multiple elements that you guarantee will be in the set, each of them must be dealt with as a base case, and if you have multiple rules generating new elements from existing ones, each of them must be dealt with as another inductive step.

For the case of the natural numbers, x is 0, and f is the successor function.

If you wanted to apply this model to the Fibonacci sequence, you could say that the tuple (0, 1, 1) is in the set [representing the idea "F_1 is 1"] and provide the generator f((a, b, c)) = (b, a+b, c+1). Then since (0, 1, 1) is in the set, so is (1, 1, 2) [or "F_2 is 1"], and then (1, 2, 3) ["F_3 is 2"], and so on.

(Or you could say that the two tuples (1, 1) and (2, 1) are both in the set, and provide the generator f( (i, x), (i+1, y) ) = (i+2, x+y). Now your elements are simpler, your generator is more complex, and your set has exactly the same structure as before.)
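
For what it's worth, that first encoding can be written down directly in Lean as an inductive predicate (a sketch, not the article's code), with the base element and the generator as the two constructors:

    inductive FibSet : Nat × Nat × Nat → Prop where
      | base : FibSet (0, 1, 1)
      | step : ∀ (a b c : Nat), FibSet (a, b, c) → FibSet (b, a + b, c + 1)

    -- "F_2 is 1" and "F_3 is 2", generated from the base element
    example : FibSet (1, 1, 2) := FibSet.step 0 1 1 FibSet.base
    example : FibSet (1, 2, 3) := FibSet.step 1 1 2 (FibSet.step 0 1 1 FibSet.base)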

The approach taken by the author's "improved solution" is to define a set consisting of the elements of the memoization table, with the generator being the function chain that adds elements to the table. He annotates the type of the elements to note that they must be correct (this annotation is administrative, just making it easy to describe what it is that we want to prove), and then does a proof over the addition operation that this correctness is preserved (the inductive step!).

duve02 · 2 months ago
Great breakdown of this. Thanks.
Gehinnn · 2 months ago
This would be the classical proof via strong induction, without Σ-types:

https://live.lean-lang.org/#codez=JYWwDg9gTgLgBAZRgEwHQBECGN...

Doing the proof inside the algorithm (i.e. doing inline induction over the algorithm's recursion structure) has the advantage that the branching structure doesn't have to be duplicated, as it is in an external proof like the one I did.

In my proof I didn't struggle so much with induction, but much more with basic Lean stuff, such as not getting lost in the number of variables, dealing with r.fst/r.snd vs r=(fst, snd), and the confusing difference between .get? k and [k]?.

duve02 · 2 months ago
Nice job. My attempt at the initial strong induction proof was a long time ago, so I don't remember the details. It definitely followed a similar structure to yours (but this was before `omega`, I'm pretty sure). Can't quite remember where I got stuck, but your proof is good. Thanks!
xnacly · 2 months ago
sigma types, hmmm