lvh · 4 years ago
I recommend the following post, by the author of the proof, for deeper context. Especially near the end, they talk about some of the things they're trying to accomplish with it in plain(ish) English.

https://xenaproject.wordpress.com/2020/12/05/liquid-tensor-e...

deadbeef57 · 4 years ago
See also [1] for a follow-up blogpost that is less technical.

[1]: https://xenaproject.wordpress.com/2021/06/05/half-a-year-of-...

sabujp · 4 years ago

    "Proof assistants can’t read a maths textbook, they need continuous input from humans, and they can’t decide whether a mathematical statement is interesting or profound — only whether it is correct, Buzzard says. Still, computers might soon be able to point out consequences of the known facts that mathematicians had failed to notice, he adds."
we're closer to this than people realize

kevinbuzzard · 4 years ago
I agree, but I think my statement is accurate today in 2021. I would love to see funds directed towards this sort of question. The big problem is that at a high level so much is skipped over, and you still sometimes have to struggle to put undergraduate-level mathematics into Lean -- this is why undergraduate maths is such a good test case.
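
(For a taste of what "undergraduate maths in Lean" looks like, here is a standard first-course statement; it is a one-liner only because mathlib has already done the work. A sketch assuming `import Mathlib`; the lemma name is mathlib's.)

    import Mathlib

    -- "There are infinitely many primes": above any n there is a prime p.
    -- Stating results at this level is easy once the library exists;
    -- building that library is where the struggle is.
    example : ∀ n : ℕ, ∃ p, n ≤ p ∧ Nat.Prime p :=
      Nat.exists_infinite_primes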
mherrmann · 4 years ago
Very nice to see you here Kevin. We never interacted but I do still remember a lecture you gave at Imperial in '06 where you filled in for Prof. Liebeck and started with Lemma 1: "I am not Professor Liebeck" ;-) Thank you for the nice memory and your important work on / with Lean.
zozbot234 · 4 years ago
> The big problem is that at a high level so much is skipped over

This is an issue, but there's an established practice of writing formal sketches where the gaps in the proof are explicitly marked, and future tooling might bring ways to address these gaps once a full formal context is provided.
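
(A minimal Lean sketch of the idea, using core Lean only: the argument's structure is written down and the remaining gap is explicitly marked with `sorry`, which Lean tracks until it is filled.)

    -- A formal proof sketch: the top-level argument is complete,
    -- but one supporting step is left as an explicit, tracked gap.
    theorem sq_nonneg' (x : Int) : 0 ≤ x * x := by
      have key : ∀ y : Int, 0 ≤ y * y := by
        sorry -- gap: the sign analysis everyone skips on paper
      exact key x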

One issue is that Lean has little or no support for declarative proof, which is by far the most natural setting for these "proof sketches", and also brings other advantages wrt. complex proofs. (Coq has the same issue; some code was written to support declarative proofs, but it was too buggy and bitrotted, so it got removed.)

gwern · 4 years ago
It's worth noting that GPT-f already gets a big performance boost from pretraining on arXiv etc. (https://arxiv.org/pdf/2009.03393.pdf#page=7), despite those sources containing next to no Metamath or anything that looks like a raw Metamath proof -- just regular natural language & LaTeX discussing math...
astrange · 4 years ago
How well does text extraction from a PDF work? I almost never try it, but I thought the output had random spaces and other such artifacts.
amelius · 4 years ago
> we're closer to this than people realize

At least give a proper reference to what you're alluding to, please.

Also, closeness in AI has proven to be a misleading concept.

gerdesj · 4 years ago
A reference you might like to note is in a response - that kevinbuzzard bloke probably has a fair old handle on this stuff. Note how he is quoted throughout the article.

This is about some pretty creative uses of computing in maths and has bugger all to do with AI (whatever that is).

If you put enough blood, sweat and tears into codifying mathematical concepts into Lean, you can feed it a mathematical thingie and it can tell you whether that thingie is correct within its domain of knowledge. If you get an "out of cheese" error, you need to feed it more knowledge, or give up and take up tiddlywinks.

This explains Lean in terms I can understand: https://www.quantamagazine.org/building-the-mathematical-lib...

AtlasBarfed · 4 years ago
Simply having a linked graph of related concepts might show "impact diffs" between theorems.

I recall that the proof of Fermat's Last Theorem linked several normally disparate areas to get to the meat of the issue.

Simply tagging those relations to identified sub-fields of study would probably help gauge the impact of new theories, which could then be farmed out to advanced students for quick review.

GPerson · 4 years ago
Why do you say this?
kevinbuzzard · 4 years ago
I'm not quite sure what you're asking about. I'm saying that we can't yet take the Wiles and Taylor-Wiles proof of Fermat's Last Theorem, feed it into a machine, and get a Lean proof of Fermat's Last Theorem.
Haga · 4 years ago
Can't usefulness be approximated the way Google search results of old were, by connectedness to other theories?
Tainnor · 4 years ago
This is going to be an exciting area.

I spent some time previously playing with Coq. It's very powerful, but even proving the simplest undergraduate maths statements (say, about group theory) can prove very challenging. I believe part of this is that Coq uses different mathematical foundations from traditional mathematics, which mostly uses set theory (ZFC, although most people don't care about the specifics). So it can be hard or unnatural to express something like "subgroup". I don't know whether Lean fares better in that respect. Coq's documentation is also, IMHO, almost impossible to understand unless you're already deeply knowledgeable about the system.

We will probably still need a few more iterations to get more user-friendly assistants with better documentation, adequate teaching resources, etc.

deadbeef57 · 4 years ago
In my experience, the difficulty is very rarely the transition from set theory to type theory. I find this almost transparent in practice.

The issue is rather that you need to deal with edge cases that are usually swept under the rug, or that you need to spend a full page working out the details of a proof that everyone recognizes as obvious. It would be great if computers could give even more assistance with these tedious parts of formalization, and I'm very glad that people are working hard on realizing this.

chriswarbo · 4 years ago
One area I've been playing with is "theory exploration", which takes a set of definitions and produces lemmas that are 'interesting' (i.e. irreducible, via some scheme like Knuth-Bendix rewriting).

(Thanks to Curry-Howard, we can also view this as generating test suites/specifications for software libraries!)

Notable tools include Hipster, IsaScheme, IsaCoSy, QuickSpec and Speculate.
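
(To give the flavor, here is the kind of output such tools aim for, stated in Lean; illustrative only, since Hipster/IsaCoSy target Isabelle and QuickSpec/Speculate target Haskell.)

    -- Given only list append and reverse, theory exploration typically
    -- rediscovers irreducible laws like these (checked here by `simp`):
    example {α : Type} (xs : List α) : xs ++ [] = xs := by simp
    example {α : Type} (xs ys : List α) :
        (xs ++ ys).reverse = ys.reverse ++ xs.reverse := by simp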

ganzuul · 4 years ago
Kurt Gödel with a bat in a dark alleyway of obviousness haunts my dreams.
kmill · 4 years ago
Here's the definition of a subgroup in Lean's mathlib: https://leanprover-community.github.io/mathlib_docs/group_th...

Given a group G, there is a type `subgroup G` of subgroups of G, and a subgroup is a structure consisting of a set of elements of G along with some proofs that it has the identity, it's closed under multiplication, and it's closed under taking inverses. Lean has a coercion facility using typeclasses, and there is a typeclass instance that will use the subgroup's set coerced to a type when the subgroup is used in a place that expects a type. This coerced type has a group typeclass instance, so a subgroup "is" a group.
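
(In code, the shape is roughly the following -- a simplified sketch assuming `import Mathlib`, not mathlib's literal definition, which bundles more conveniences.)

    import Mathlib

    -- A carrier set plus proofs of the three closure properties
    -- described above.
    structure Subgroup' (G : Type*) [Group G] where
      carrier : Set G
      one_mem : (1 : G) ∈ carrier
      mul_mem : ∀ {a b : G}, a ∈ carrier → b ∈ carrier → a * b ∈ carrier
      inv_mem : ∀ {a : G}, a ∈ carrier → a⁻¹ ∈ carrier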

The big complexity in all of this doesn't seem to be ZFC vs type theory, but rather how do you implement a mathematician's use of synecdoche and implicit coercions. Synecdoche is the figure of speech where you refer to a thing by its parts or vice versa -- for example, "let G be a group" and then using G as if it were a set, even though a group consists of a set, a binary operation on the set, a choice of identity element, and a few axioms that must hold. Mathlib uses typeclasses to implement the synecdoche -- a type is a group if there is a typeclass instance that provides the rest of the structure. As I understand it, Coq's Mathematical Components library uses structures and unification hints instead (though I've failed to understand how it works exactly), but I have heard that they can be very fiddly to get right.
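
(The synecdoche in one line -- a sketch assuming mathlib: `G` below is a bare type, and `[Group G]` asks typeclass resolution to supply the multiplication, identity, inverses, and axioms implicitly.)

    import Mathlib

    -- "Let G be a group": G is just a type; the group structure is
    -- found by instance resolution, so a * a⁻¹ = 1 typechecks, and
    -- `simp` closes it using the group axioms.
    example (G : Type*) [Group G] (a : G) : a * a⁻¹ = 1 := by simp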

I think you'd have to find solutions to these same problems no matter the foundations. At least with types, there's enough information lying around that, for example, group structures can be automatically synthesized by the typeclass resolution system in many common situations.

raphlinus · 4 years ago
Lean's foundations are similar to Coq's. I think the ergonomics are a bit better.

Most activity in proof systems is based on type theory these days, but set-theoretical systems do exist, of which Metamath is the most mature. That said, Metamath is seriously lacking in automation, so there is an element of tedium involved. That's not because of any fundamental limitations, but I think mostly because people working in the space are more motivated to do things aligned with programming language theory. There was also a talk by John Harrison a few years ago proposing a fusion of HOL and set theory [1], but I'm not sure there's been much motion since then.

I believe a fully featured proof assistant based on set theory would be a great contribution.

[1]: http://aitp-conference.org/2018/slides/JH.pdf

robinzfc · 4 years ago
It depends on what you mean by "fully featured", but Isabelle supports ZF logic [1], and you can also do ZFC in Isabelle/HOL [2]. And of course there is Mizar [3].

[1] https://isabelle.in.tum.de/dist/library/ZF/ZF/index.html

[2] https://www.isa-afp.org/entries/ZFC_in_HOL.html

[3] http://mizar.org/

ixaxaar · 4 years ago
Hey, type hacking is hard. Agda is kinda easier for someone who knows Haskell, but hard nonetheless.
Ericson2314 · 4 years ago
You don't need subgroups, you just need injective homomorphisms.
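
(In mathlib terms, one direction of that correspondence -- a sketch assuming `import Mathlib` and mathlib's `MonoidHom.range`.)

    import Mathlib

    -- An injective homomorphism f : G →* H presents G as a subgroup
    -- of H, namely its range; injectivity makes G isomorphic to it.
    example {G H : Type*} [Group G] [Group H] (f : G →* H) : Subgroup H :=
      f.range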
Jweb_Guru · 4 years ago
That's a nice idea but it tends to fall flat when you can't actually construct an explicit function exhibiting the homomorphism in question (which does happen from time to time). A lot of equivalences in areas like group theory only hold classically.
zozbot234 · 4 years ago
ZFC was only ever developed as a proof of concept, not as a practical foundation for formal math. Structural set theories, type theories or category-based foundations are actually a lot easier to work with, and otherwise quite equivalent in power.
Tainnor · 4 years ago
This is missing the point. Modern mathematics textbooks, especially undergraduate ones, are written with set-theoretic foundations. It takes a lot of effort to recast all of mathematics into equivalent formulations. That makes it harder to get buy-in from many working mathematicians, and it also makes the subject less approachable for, say, undergraduates.
qntmfred · 4 years ago
A quick intro from Quanta Magazine to Kevin Buzzard's work on computer-assisted proof systems:

https://www.youtube.com/watch?v=HL7DEkXV_60&t=295s

contrarian_5 · 4 years ago
I love the time that we live in. People talk about there being a glut of podcasts, but I can type "Peter Scholze" into YouTube and there is a full hour-long interview with him hosted by another mathematician. You can see her channel is very new and almost certainly part of this latest wave of pandemic podcasters. It's so great to take an interest in a random person from a Nature article and be able to immediately get a visceral idea of who he is and what he's about.

https://www.youtube.com/watch?v=HYZ3reRcVi8

gigatexal · 4 years ago
For anyone frustrated that the article doesn't say which specific part of maths has the most to gain, it's here:

“The crucial point of condensed mathematics, according to Scholze and Clausen, is to redefine the concept of topology, one of the cornerstones of modern maths. A lot of the objects that mathematicians study have a topology — a type of structure that determines which of the object’s parts are close together and which aren’t. Topology provides a notion of shape, but one that is more malleable than those of familiar, school-level geometry: in topology, any transformation that does not tear an object apart is admissible. For example, any triangle is topologically equivalent to any other triangle — or even to a circle — but not to a straight line.

Topology plays a crucial part not only in geometry, but also in functional analysis, the study of functions. Functions typically ‘live’ in spaces with an infinite number of dimensions (such as wavefunctions, which are foundational to quantum mechanics). It is also important for number systems called p-adic numbers, which have an exotic, ‘fractal’ topology.”

dr_kiszonka · 4 years ago
I know nothing about topology. If you have time, could you please explain this sentence?

"For example, any triangle is topologically equivalent to any other triangle — or even to a circle — but not to a straight line."

Is it because triangles and circles are "2D" and lines aren't?

yongjik · 4 years ago
Crudely speaking, topologists treat spaces as if they're made of rubber - a mathematically perfect rubber that can be made infinitely thin or stretched to infinity. So a circle can be made from an infinitely thin circular rubber ring, and you just pinch three points and stretch, and you get your triangle, in any shape.

But you can't get a straight line - to do that you need scissors to cut the circle at one point (to be precise, remove a single point) - and then you can stretch the remainder out to infinity, and now you have your line.

QuesnayJr · 4 years ago
You can give a function that maps the points of the triangle to the points of the circle in such a way that this function is continuous. (Continuous basically means "no jumps".)

You can give this function explicitly. Let's assume that the triangle is drawn on a piece of paper with an x and y axis, such that the intersection of the axes (the origin) is inside the triangle. For each point on the triangle, draw a line from the origin to the point. Take the angle between that line and the x-axis. Use that angle to map it onto a circle.
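
(Concretely, writing T for the triangle and taking the origin inside it, one such map is radial projection:)

    f : ∂T → S¹,   f(p) = p / ‖p‖

It is continuous because ‖p‖ is never zero on the triangle, and its inverse (rescaling each direction back out to the triangle) is continuous too, which is exactly what topological equivalence asks for.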

hopfenspergerj · 4 years ago
If you remove a point from a line, it breaks into two pieces; remove a point from a triangle or circle, and it stays in one piece. Topologically equivalent objects can't disagree like that.
77pt77 · 4 years ago
No: triangles and circles are just as one-dimensional as lines.

It's because triangles are loops (closed) and straight lines aren't.

fjfaase · 4 years ago
For more details about the actual proof, see: 'Blueprint for the Liquid Tensor Experiment'

https://leanprover-community.github.io/liquid/index.html

bpcpdx · 4 years ago
This kinda gives me chills because I can't help but think of Warhammer 40k.

Essentially, our civilization progressed to the point where computers started to take more of a role in new discoveries, much like in the real world. As computers got more powerful and AI developed, scientific advancement naturally sped up. But there came a point when the science and maths became too advanced for humans to comprehend, so computers did it. Then there came a point when things were so advanced that even the scientists couldn't ask the right questions, so an AI intermediary had to be used.

Then things get weird and you have the 40k universe.

Anyway, I know I probably butchered it a little, but that's the gist of it, and I can totally see things progressing in the real world up to the point where things get weird in 40k.