Also I can't find an accompanying dataset from the link: https://imo-grand-challenge.github.io/
EDIT: Nvm, found it: https://github.com/IMO-grand-challenge/formal-encoding/blob/.... But there are only a handful of formal encodings of problems, and the last commit was from a year ago!
E.g.: stand up on two legs, walk forward, and move these 10 objects from this bin to that bin without breaking any of them. Or fold these clothes (surprisingly difficult to get a robot to do).
We can build super-advanced Chess and Go AIs that beat out the grand summation of theoretical human knowledge (with Chess theory going back hundreds of years, and Go theory going back maybe thousands!!), but we still can't move a variety of objects from one bin to another successfully and repeatably.
-----------
Another funny example: AIs (or various algorithms at least) have solved camera-based object tracking, but still basically fail at depth perception and at figuring out whether something is "still" or "moving" from sight alone.
This is a baseless myth that won't die. We are nowhere near rivaling humans in mathematical proving. Even after a human has proved a theorem, it can be an astronomical amount of work to explain the proof to a machine after the fact (i.e. to construct a machine-checkable version of the proof), even when using suites of sophisticated software tools developed solely to facilitate this process. Progress has been glacially slow since the field began in the 1950s. Deep learning has had zero impact.
We will have robots that can grasp competently way before we have machines that can rival humans in mathematical proving.
For example, one could try to better predict which learnt clauses to keep or throw away, or when to restart because the current search space looks uninteresting, using prediction models built with machine learning. See (my) blogpost here: https://www.msoos.org/2018/01/predicting-clause-usefulness/ (sorry, self-promotion, but relevant)
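For a sense of what that looks like mechanically, here is a toy sketch of ML-scored clause-database cleaning. This is not the model from the blog post: the features (clause size, LBD, recent use count) are standard in CDCL folklore, but the weights and the logistic form are invented for illustration.

```python
# Toy sketch of ML-guided clause-database cleaning (NOT the blog post's model):
# score each learnt clause from cheap features and keep only the high scorers.
# Feature choices and weights here are illustrative, not trained.
import math

def clause_score(size, lbd, recent_uses, weights=(-0.2, -0.5, 1.0)):
    """Logistic score: smaller, lower-LBD, frequently used clauses rank higher."""
    w_size, w_lbd, w_use = weights
    z = w_size * size + w_lbd * lbd + w_use * recent_uses
    return 1.0 / (1.0 + math.exp(-z))

def clean_clause_db(clauses, keep_ratio=0.5):
    """Keep the top keep_ratio fraction of learnt clauses by predicted usefulness."""
    ranked = sorted(
        clauses,
        key=lambda c: clause_score(c["size"], c["lbd"], c["uses"]),
        reverse=True,
    )
    return ranked[:max(1, int(len(ranked) * keep_ratio))]

learnt = [
    {"id": 1, "size": 3,  "lbd": 2,  "uses": 5},  # short, glue-like, busy
    {"id": 2, "size": 40, "lbd": 12, "uses": 0},  # long, high LBD, unused
    {"id": 3, "size": 8,  "lbd": 4,  "uses": 1},
    {"id": 4, "size": 25, "lbd": 9,  "uses": 0},
]
kept = clean_clause_db(learnt, keep_ratio=0.5)
print([c["id"] for c in kept])
```

The point is that the keep/throw decision becomes a prediction problem; a real solver would train the weights on logged clause-usage data rather than hard-coding them.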
Let's not forget the work that could be done on auto-configuring SAT solvers, tuning their configuration to the instance, as per the competition at: http://aclib.net/cssc2014/
Another piece of work in this domain is portfolio solvers, which pick the best-fitting SAT solver from a list of candidates, after having guessed the best one given the instance profile, e.g. priss at http://tools.computational-logic.org/content/riss.php
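The portfolio idea can be sketched in a few lines: extract cheap structural features from the CNF instance, then pick the solver whose known sweet spot is nearest. The solver names and feature profiles below are invented for the example; real portfolio systems use many more features and a trained selection model.

```python
# Toy portfolio-solver sketch: nearest-neighbor solver selection over two
# cheap instance features. Solver names and profiles are made up.
import math

# Hypothetical per-solver sweet spots: (clause/var ratio, log10 of #vars).
PROFILES = {
    "solver_A_cdcl":         (4.3, 5.0),  # large industrial-like instances
    "solver_B_lookahead":    (4.3, 2.5),  # small hard random-like instances
    "solver_C_local_search": (2.0, 3.0),  # under-constrained instances
}

def features(num_vars, num_clauses):
    """Two cheap structural features of a CNF instance."""
    return (num_clauses / num_vars, math.log10(num_vars))

def pick_solver(num_vars, num_clauses):
    """Choose the solver whose profile is closest (Euclidean) to the instance."""
    f = features(num_vars, num_clauses)
    return min(PROFILES, key=lambda name: math.dist(f, PROFILES[name]))

print(pick_solver(200, 860))              # small instance near ratio 4.3
print(pick_solver(1_000_000, 4_300_000))  # big industrial-style instance
```

In practice the win comes from the fact that SAT solvers have wildly uncorrelated runtimes across instance families, so even a mediocre selector beats any single solver on a mixed benchmark.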
I think there is some interesting low-hanging fruit in there somewhere, using regular SAT solvers and machine/deep learning, exploiting domain-specific information and know-how.
Isabelle/HOL has much better proof automation than any prover based on Curry/Howard. That's primarily because HOL is a classical logic, while Curry/Howard is constructive (yes, I know we can do classical proofs inside constructive logic, e.g. with proof irrelevance). Almost all automated provers are also classical. Bringing comparably powerful proof automation to Curry/Howard-based systems is an open research problem.
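To make the parenthetical concrete: Lean's kernel is constructive, but classical principles are available as library axioms, so classical-style reasoning lives in a fragment of the system. A minimal sketch in Lean 4 syntax:

```lean
-- Lean's kernel is constructive; excluded middle is an axiom-backed
-- library fact, not a built-in rule of the logic.
open Classical

-- Excluded middle, available once classical axioms are assumed:
example (p : Prop) : p ∨ ¬p := em p

-- Proof by contradiction, which genuinely needs classical logic:
example (p : Prop) (h : ¬¬p) : p :=
  byContradiction (fun hnp => h hnp)
```

This classical fragment is the part of Lean that HOL-style automation most naturally targets.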
That's because Isabelle is not a prover, but a 'meta-prover' used for implementing provers, e.g. Isabelle/HOL. In practice, HOL is the only logic implemented in Isabelle with enough automation to be viable, so we don't really use Isabelle's 'meta-prover' facilities.

Isabelle/HOL is essentially isomorphic to a subset of Lean when we assume classical axioms, and any automation you can write for Isabelle, you can write for that subset of Lean. It is often easy to generalize automation techniques to support the rest of Lean (i.e. dependent types) as well, though doing so may introduce extra complexity or preclude the use of an indexing data structure. See https://arxiv.org/abs/1701.04391 for an example of generalizing a procedure (congruence closure) to support dependent types with very little overhead.
It is not possible to say
Whether the gods set the Singularity
Upon us, or the Singularity
Caused the gods to be.