Readit News
ebingdom commented on Introduction to Homotopy Type Theory   arxiv.org/abs/2212.11082... · Posted by u/euler1729
hash-no-gal-fld · 3 years ago
I don't think you intended "propositionally" equal in your final sentence. Equality is data in HoTT. If you take the propositional truncation then you usually throw away too much.
ebingdom · 3 years ago
Yeah, I only meant "propositional" as opposed to judgmental equality, not in the sense of being a mere proposition.
ebingdom commented on Introduction to Homotopy Type Theory   arxiv.org/abs/2212.11082... · Posted by u/euler1729
spekcular · 3 years ago
The point of the ZFC axioms was never to write down actual formalizations of complicated proofs. It was to provide a small, parsimonious foundation for all of mathematics with a minimal number of "obvious" commitments, to give us confidence that the mathematics we're doing is consistent, and to provide a basis for metamathematical investigations. (Roughly speaking - this compresses a lot of history. Also ZFC may not be the optimal set theory for doing this, and its choice as the standard foundation is somewhat historically contingent.)

A good analogy is the idea of a Turing machine in theoretical CS. It's an idealized model for studying the theory of computation. To object that it's impractical to write a complicated program like a computer algebra system using the Turing machine formalism misses the point.

> The key idea of univalence is an axiom that says equivalence is equivalent to equality; and that if we only want equivalence as our standard, that we can substitute proofs of equivalence for proofs of equality.

I just said I don't want equivalence to be equivalent to equality!

> The main insight is that topology of diagrams determines the semantics of your logic; which helps us explore concepts like abstraction and proof simplification. (This relates to topos theory — which creeps up in CS fairly often.)

OK, so what are the concrete fruits of this? What new metamathematical statements - recognizable to an ordinary mathematician with no particular interest in topos theory or HoTT - has this led to?

ebingdom · 3 years ago
> It was to provide a small, parsimonious foundation for all of mathematics with a minimal number of "obvious" commitments, to give us confidence that the mathematics we're doing is consistent

I would argue that type theory does a better job at this than set theory. With set theory, you need to believe in two separate things: (1) the language of first-order logic (or some other logic) with its inference rules, (2) the set theory axioms. With type theory, there is only the language of lambda terms. And the rules for type theory are straightforward and intuitive for programmers, e.g., you can only call a function on an argument if the function's domain matches the type of the argument. Contrast that with set theory, where you have highly counterintuitive and seemingly arbitrary axioms like the axiom of separation.
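As a minimal sketch of that typing rule (in Haskell rather than raw lambda terms; the `inc` function is just an illustration):

```haskell
-- Function application type checks only when the function's domain
-- matches the argument's type -- the core rule of typed lambda calculi.
inc :: Int -> Int
inc n = n + 1

main :: IO ()
main = print (inc 41)  -- well-typed: the argument is an Int, as required
-- `inc True` would be rejected at compile time, since Bool is not Int
```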

ebingdom commented on Introduction to Homotopy Type Theory   arxiv.org/abs/2212.11082... · Posted by u/euler1729
spekcular · 3 years ago
I realize it's Christmas Eve, but this post tempts my inner curmudgeon.

I do not understand why homotopy type theory posts are so popular on this website. My view is that all the "philosophical" arguments in favor of it (vs. the standard set theory foundations) misunderstand the issues at play. Further, the "practical" arguments in terms of facilitating formalization are not so compelling given the HoTT people haven't actually (as far as I know) formalized much mathematics - whereas (seemingly) less ideological communities like users of Lean have made great progress.

To expand on the comment about the philosophical arguments: take for example the abstract of this article. It states:

> It is common in mathematical practice to consider equivalent objects to be the same, for example, to identify isomorphic groups. In set theory it is not possible to make this common practice formal. For example, there are as many distinct trivial groups in set theory as there are distinct singleton sets. Type theory, on the other hand, takes a more structural approach to the foundations of mathematics that accommodates the univalence axiom. This, however, requires us to rethink what it means for two objects to be equal.

It is sometimes quite useful in practice to recognize that two isomorphic objects are not literally the same. So I am skeptical of any approach that wants to blur those distinctions.

Also, more to the point: ZFC does everything we need a foundation to do extremely well, except serve as a basis for practical formalization of proofs.

ebingdom · 3 years ago
> I do not understand why homotopy type theory posts are so popular on this website.

Martin-Löf type theory (and, therefore, homotopy type theory) is like an idealized programming language that is capable of expressing both programs and proofs, such that you can prove your code correct in the same language. Hacker News is a mostly technical community that often likes to geek out on programming languages.
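As a small illustration (a Lean 4 sketch; Lean is one of several proof assistants built on dependent type theory), the same language holds both a program and a proof about it:

```lean
-- A program and a proof of its correctness, in one language.
def double (n : Nat) : Nat := n + n

theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```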

Homotopy type theory is an especially cool flavor of type theory that finally gives a satisfying answer to the question of when two types should be considered propositionally equal.
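That answer is the univalence axiom, which (stated informally) says the identity type on the universe of types is equivalent to the type of equivalences:

```latex
(A =_{\mathcal{U}} B) \;\simeq\; (A \simeq B)
```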

ebingdom commented on Typeclassopedia (2009)   wiki.haskell.org/Typeclas... · Posted by u/perihelions
bradrn · 3 years ago
Some context, for those with no Haskell experience: a ‘typeclass’ (no relation to OOP ‘classes’) is a way of abstracting over a function which can be implemented for numerous different types, very much like ‘interfaces’ in Java or C#. For instance, the ‘Eq’ typeclass provides the equality operators:

    class Eq a where
        (==) :: a -> a -> Bool
        (/=) :: a -> a -> Bool
Which can then be implemented for various types, allowing any of those types to be compared for equality using (==) and (/=). You can also write functions which are polymorphic over all types implementing a typeclass, e.g.:

    allThreeUnequal :: Eq a => a -> a -> a -> Bool
    allThreeUnequal a b c = (a /= b) && (b /= c) && (a /= c)
    -- (/=) here is provided by the ‘Eq a’ constraint
    -- in the signature
I tend to think of typeclasses in Haskell — and especially the ones listed in the Typeclassopedia — almost as allowing the formalisation of design patterns and common conventions. For instance, many languages have multiple ‘map’ functions: one mapping a function over a list, one over a promise, and so on. Haskell, by contrast, unifies those in a single ‘Functor’ typeclass, with one polymorphic function ‘fmap’. (The (in)famous ‘Monad’ typeclass is also very similar in its aims: compare C#, where precisely the same behaviour gets encoded in a mere convention declaring that a certain function should be named ‘SelectMany’.) The advantage of all this is that you can write highly generic (hence reusable) code while still retaining type safety. In fact, the effect tends to be reminiscent of duck typing, just more typesafe.
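A small sketch of that unification, using the standard Functor instances for lists and Maybe:

```haskell
-- One polymorphic fmap replaces a family of per-type map functions.
main :: IO ()
main = do
  print (fmap (+ 1) [1, 2, 3])   -- over a list:  [2,3,4]
  print (fmap (+ 1) (Just 1))    -- over a Maybe: Just 2
```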

ebingdom · 3 years ago
For people who know Rust, Haskell's type classes are like a more general version of Rust's traits. But that comparison suggests the wrong chronology; Haskell's type classes came a couple decades earlier.
ebingdom commented on Show HN: Modifying Clang for a Safer, More Explicit C++   github.com/compiler-devel... · Posted by u/compiler-devel
jesse__ · 4 years ago
> You might want to consider adopting a more modern programming style

Just because some people think a particular style is useful doesn't mean everyone does. You might want to consider checking your biases before making comments like this. I understand the (many) arguments for using `const`, and I've concluded (for myself) that it's not a useful construct. Read and internalize the rest of my previous comment for more information.

ebingdom · 4 years ago
It's not about biases. Using immutability is not just some personal preference. It's common knowledge that mutability everywhere is a bad practice. That's why there's a trend toward immutability by default in newer languages.
ebingdom commented on Separation Logic   cacm.acm.org/magazines/20... · Posted by u/haskellandchill
chriswarbo · 4 years ago
Not going to happen; Chrome is such a huge pile of code that it's pretty much guaranteed to be incorrect.

Thankfully, this sort of analysis tends to use "constructive logic"; in which case, we're told why some property isn't provable (either because we're given a counterexample proving it's incorrect, or we're told which assumptions/preconditions need to be proved first)

ebingdom · 4 years ago
> Thankfully, this sort of analysis tends to use "constructive logic"; in which case, we're told why some property isn't provable (either because we're given a counterexample proving it's incorrect, or we're told which assumptions/preconditions need to be proved first)

That's not what constructive logic means.

ebingdom commented on Separation Logic   cacm.acm.org/magazines/20... · Posted by u/haskellandchill
satisfice · 4 years ago
Wake me when they prove Chrome is correct.
ebingdom · 4 years ago
That's a ridiculously high bar to set before you recognize any progress being made.
ebingdom commented on Show HN: Modifying Clang for a Safer, More Explicit C++   github.com/compiler-devel... · Posted by u/compiler-devel
jesse__ · 4 years ago
> All basic types (excluding pointers and references) are const by default and may be marked 'mutable' to allow them to be changed after declaration

FWIW, for me, this is an anti-feature, and I would not use this language because of it. The net effect of this would be that I type "mutable" all over the place and get very little for my effort.

I've spent a significant amount of time understanding what the high-consequence programming errors that I make are, and "oops, I mutated that thing that I could have marked const" is a class of error that consumes a vanishingly small amount of my debugging time.

The errors I make that account for a large portion of my debugging time are errors related to semantics of my program. Things that, in C++, are typically only detectable at runtime, but with a better type system could be detected at compile time. The first step for this might be type annotations that specify valid values. For example, being able to annotate whether an argument is or is not allowed to be null, and having that enforced at call sites.
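Option types are one existing realization of that kind of annotation. A Haskell sketch (the `greet` function here is hypothetical, purely for illustration):

```haskell
import Data.Maybe (fromMaybe)

-- A `Maybe String` argument may be absent; a plain `String` may not.
-- The distinction is enforced at every call site by the type checker.
greet :: Maybe String -> String
greet name = "hello, " ++ fromMaybe "stranger" name

main :: IO ()
main = do
  putStrLn (greet (Just "world"))   -- hello, world
  putStrLn (greet Nothing)          -- hello, stranger
```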

(NOTE: I also don't spend a meaningful amount of time debugging accidental nullptr values, but that's a good first step towards the type annotations I _do_ want)

ebingdom · 4 years ago
> The net effect of this would be that I type "mutable" all over the place

You might want to consider adopting a more modern programming style for the benefit of your coworkers (and possibly yourself). Mutability all over the place is a nightmare, speaking as someone who currently has to work in a large codebase written like that. It's hard to predict what value a variable is going to have at any particular point in your code, since instead of only having to check where the variable is defined, you have to audit all the code between the definition and the use. For the same reason, it's hard to guarantee that your invariants are maintained, since there is a much larger surface area to check.

Deleted Comment

ebingdom commented on Phrases in computing that might need retiring   sicpers.info/2022/07/phra... · Posted by u/mpweiher
ebingdom · 4 years ago
> Reasoning About Software

What's wrong with this? I need a way to describe techniques that make it easier to...well I don't even know another way to say it. Maybe I'm biased by my experience in formal verification, a field in which it's expected that you can formally study a program's behavior.

u/ebingdom
Karma: 577 · Cake day: May 5, 2021