A good analogy is the idea of a Turing machine in theoretical CS. It's an idealized model for studying the theory of computation. To object that it's impractical to write a complicated program like a computer algebra system using the Turing machine formalism misses the point.
> The key idea of univalence is an axiom that says equivalence is equivalent to equality; and that if we only want equivalence as our standard, that we can substitute proofs of equivalence for proofs of equality.
I just said I don't want equivalence to be equivalent to equality!
> The main insight is that topology of diagrams determines the semantics of your logic; which helps us explore concepts like abstraction and proof simplification. (This relates to topos theory — which creeps up in CS fairly often.)
OK, so what are the concrete fruits of this? What new metamathematical statements - recognizable to an ordinary mathematician with no particular interest in topos theory or HoTT - has this led to?
I would argue that type theory does a better job at this than set theory. With set theory, you need to believe in two separate things: (1) the language of first-order logic (or some other logic) with its inference rules, (2) the set theory axioms. With type theory, there is only the language of lambda terms. And the rules for type theory are straightforward and intuitive for programmers, e.g., you can only call a function on an argument if the function's domain matches the type of the argument. Contrast that with set theory, where you have highly counterintuitive and seemingly arbitrary axioms like the axiom of separation.
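To make the "function's domain must match the argument's type" rule concrete, here is a minimal sketch in Haskell (the language used elsewhere in this thread); the names are made up for illustration:

```haskell
-- Application 'f x' is well-typed only when the domain of 'f'
-- matches the type of 'x' -- the rule is enforced mechanically,
-- with no separate axiom system needed.
double :: Int -> Int
double n = n * 2

main :: IO ()
main = print (double 21)  -- Int argument matches the Int domain
-- print (double "hi")    -- rejected by the compiler: String is not Int
```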
I do not understand why homotopy type theory posts are so popular on this website. My view is that all the "philosophical" arguments in favor of it (vs. the standard set theory foundations) misunderstand the issues at play. Further, the "practical" arguments in terms of facilitating formalization are not so compelling, given that the HoTT people haven't actually (as far as I know) formalized much mathematics - whereas (seemingly) less ideological communities like users of Lean have made great progress.
To expand on the comment about the philosophical arguments: take for example the abstract of this article. It states:
> It is common in mathematical practice to consider equivalent objects to be the same, for example, to identify isomorphic groups. In set theory it is not possible to make this common practice formal. For example, there are as many distinct trivial groups in set theory as there are distinct singleton sets. Type theory, on the other hand, takes a more structural approach to the foundations of mathematics that accommodates the univalence axiom. This, however, requires us to rethink what it means for two objects to be equal.
It is sometimes quite useful in practice to recognize that two isomorphic objects are not literally the same. So I am skeptical of any approach that wants to blur those distinctions.
Also, more to the point: ZFC does everything we need a foundation to do extremely well, except serve as a basis for practical formalization of proofs.
Martin-Löf type theory (and, therefore, homotopy type theory) is like an idealized programming language that is capable of expressing both programs and proofs, such that you can prove your code correct in the same language. Hacker News is a mostly technical community that often likes to geek out on programming languages.
Homotopy type theory is an especially cool flavor of type theory that finally gives a satisfying answer to the question of when two types should be considered propositionally equal.
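For reference, the univalence axiom (HoTT book, §2.10) states that for types A and B in a universe U, the canonical map idtoeqv : (A = B) -> (A ≃ B), sending refl to the identity equivalence, is itself an equivalence. In particular:

```latex
(A =_{\mathcal{U}} B) \simeq (A \simeq B)
```

This is the precise sense in which "equivalence is equivalent to equality" above.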
class Eq a where
  (==) :: a -> a -> Bool
  (/=) :: a -> a -> Bool
Which can then be implemented for various types, allowing any of those types to be compared for equality using (==) and (/=). You can also write functions which are polymorphic over all types implementing a typeclass, e.g.:

allThreeUnequal :: Eq a => a -> a -> a -> Bool
allThreeUnequal a b c = (a /= b) && (b /= c) && (a /= c)
-- (/=) here is provided by the ‘Eq a’ constraint
-- in the signature
I tend to think of typeclasses in Haskell — and especially the ones listed in the Typeclassopedia — almost as allowing the formalisation of design patterns and common conventions. For instance, many languages have multiple ‘map’ functions: one mapping a function over a list, one over a promise, and so on. Haskell, by contrast, unifies those in a single ‘Functor’ typeclass, with one polymorphic function ‘fmap’. (The (in)famous ‘Monad’ typeclass is very similar in its aims: compare C#, where precisely the same behaviour gets encoded in a mere convention declaring that a certain function should be named ‘SelectMany’.) The advantage of all this is that you can write highly generic (hence reusable) code while still retaining type safety. In fact, the effect tends to be reminiscent of duck typing, just more typesafe.

Just because some people think a particular style is useful doesn't mean everyone does. You might want to consider checking your biases before making comments like this. I understand the (many) arguments for using `const`, and I've concluded (for myself) that it's not a useful construct. Read and internalize the rest of my previous comment for more information.
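The earlier point about ‘fmap’ unifying the various map functions can be sketched concretely (a minimal runnable example):

```haskell
-- The same polymorphic 'fmap' maps a function over a list and
-- over a Maybe, because both types have Functor instances.
main :: IO ()
main = do
  print (fmap (+ 1) [1, 2, 3])  -- the list instance behaves like 'map'
  print (fmap (+ 1) (Just 41))  -- the Maybe instance maps under 'Just'
```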
Thankfully, this sort of analysis tends to use "constructive logic", in which case we're told why some property isn't provable (either because we're given a counterexample showing it's false, or we're told which assumptions/preconditions need to be proved first).
That's not what constructive logic means.
FWIW, for me, this is an anti-feature, and I would not use this language because of it. The net effect of this would be that I type "mutable" all over the place and get very little for my effort.
I've spent a significant amount of time understanding what the high-consequence programming errors that I make are, and "oops, I mutated that thing that I could have marked const" is a class of error that consumes a vanishingly small amount of my debugging time.
The errors I make that account for a large portion of my debugging time are errors related to semantics of my program. Things that, in C++, are typically only detectable at runtime, but with a better type system could be detected at compile time. The first step for this might be type annotations that specify valid values. For example, being able to annotate whether an argument is or is not allowed to be null, and having that enforced at call sites.
(NOTE: I also don't spend a meaningful amount of time debugging accidental nullptr values, but that's a good first step towards the type annotations I _do_ want)
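A sketch of that kind of annotation, in Haskell for consistency with the snippet earlier in the thread (the names here are hypothetical): ‘Maybe’ plays the role of "allowed to be absent", and the distinction is enforced at call sites by the compiler.

```haskell
import Data.Maybe (fromMaybe)

-- 'greet' requires a present value; 'greetOptional' documents, in
-- its type, that the argument may be absent, and must handle that case.
greet :: String -> String
greet name = "hello, " ++ name

greetOptional :: Maybe String -> String
greetOptional mName = greet (fromMaybe "stranger" mName)

main :: IO ()
main = do
  putStrLn (greet "world")          -- caller must supply a real String
  putStrLn (greetOptional Nothing)  -- absence is handled explicitly
  -- putStrLn (greet Nothing)       -- rejected at compile time
```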
You might want to consider adopting a more modern programming style for the benefit of your coworkers (and possibly yourself). Mutability all over the place is a nightmare, speaking as someone who currently has to work in a large codebase written like that. It's hard to predict what value a variable is going to have at any particular point in your code, since instead of only having to check where the variable is defined, you have to audit all the code between the definition and the use. For the same reason, it's hard to guarantee that your invariants are maintained, since there is a much larger surface area to check.
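A tiny Haskell illustration of the contrast (hypothetical names): bindings are immutable, so the value at any use site is fixed by the definition alone, and there is no intervening code to audit:

```haskell
-- 'total' is bound exactly once; nothing between its definition
-- and its use can change it.
total :: Int
total = sum [1, 2, 3]

main :: IO ()
main = print total
```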
What's wrong with this? I need a way to describe techniques that make it easier to...well I don't even know another way to say it. Maybe I'm biased by my experience in formal verification, a field in which it's expected that you can formally study a program's behavior.