Readit News
dselsam commented on The New Poem-Making Machinery   newyorker.com/culture/cul... · Posted by u/the_decider
dselsam · 4 years ago
“Here is a Homer poem about the Singularity:”

It is not possible to say

Whether the gods set the Singularity

Upon us, or the Singularity

Caused the gods to be.

dselsam commented on At the International Mathematical Olympiad, computers prepare to go for the gold   quantamagazine.org/at-the... · Posted by u/theafh
dselsam · 5 years ago
One of the founders of the IMO Grand Challenge here. FYI I presented the challenge along with a preliminary roadmap at AITP 2020 last week: https://youtu.be/GtAo8wqWHHg. No way to know if gold is five years out or five hundred but I hope the challenge spurs progress in the field either way.
dselsam commented on At the International Mathematical Olympiad, computers prepare to go for the gold   quantamagazine.org/at-the... · Posted by u/theafh
iop · 5 years ago
Aren't some IMO problems pretty easy if you can throw a computer algorithm at them? (For example, geometry problems can be coordinate-bashed.)

Also I can't find an accompanying dataset from the link: https://imo-grand-challenge.github.io/

EDIT: Nvm found it: https://github.com/IMO-grand-challenge/formal-encoding/blob/.... But there are only a handful of formal encodings of problems and last commit was from a year ago!
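To make the "coordinate bash" idea above concrete, here is a minimal sketch: fix rational coordinates for a specific triangle and verify a classical geometry fact (the three medians meet at the centroid, which sits 2/3 of the way along each median) by pure arithmetic. This checks one instance, not a general theorem; a full coordinate-bash proof would carry symbolic coordinates through the same computation.

```python
from fractions import Fraction as F

# Place a triangle at exact rational coordinates.
A, B, C = (F(0), F(0)), (F(6), F(0)), (F(2), F(5))

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

centroid = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

# For each vertex V with opposite-side midpoint M, the point 2/3 of the
# way from V to M should be the centroid — i.e. the medians concur.
for V, M in [(A, midpoint(B, C)), (B, midpoint(A, C)), (C, midpoint(A, B))]:
    point = (V[0] + F(2, 3) * (M[0] - V[0]), V[1] + F(2, 3) * (M[1] - V[1]))
    assert point == centroid
```

Using `Fraction` keeps the arithmetic exact, so the check is a genuine verification for this triangle rather than a floating-point approximation.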

dselsam · 5 years ago
That repository has rotted. The preliminary roadmap is described in a recent invited talk at AITP 2020: http://grid01.ciirc.cvut.cz/~mptp/zoomaitp/aitp_sep_15_1930....
dselsam commented on How robots are grasping the art of gripping   nature.com/articles/d4158... · Posted by u/lainon
dragontamer · 8 years ago
It's always funny to realize how "easy" beating human intelligence is (chess AI, Go AI, even mathematical proofs), but how hard beating simple human behaviors is.

I.e.: stand up on two legs, walk forward, and move these ten objects from this bin to that bin without breaking any of them. Or fold these clothes (surprisingly difficult to get a robot to do).

We can build super-advanced chess and Go AIs that beat the grand summation of human theoretical knowledge (chess theory going back hundreds of years, Go theory perhaps thousands!), but we still can't move a variety of objects from one bin to another reliably and repeatably.

-----------

Another funny example: AIs (or at least various algorithms) have solved the camera object-tracking problem, but still basically fail at depth perception and at deciding from sight alone whether something is still or moving.

dselsam · 8 years ago
> It's always funny to realize how "easy" beating human intelligence is (chess AI, Go AI, even mathematical proofs), but how hard beating simple human behaviors is.

This baseless myth just won't die. We are nowhere near rivaling humans at mathematical proving. Even after a human has proved a theorem, it can be an astronomical amount of work just to explain the proof to a machine after the fact (i.e. to construct a machine-checkable version of the proof), even using suites of sophisticated software tools developed solely to facilitate this process. Progress has been glacially slow since the field began in the 1950s. Deep learning has had zero impact.

We will have robots that can grasp competently way before we have machines that can rival humans in mathematical proving.
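For concreteness, here is a sketch (in Lean 4 syntax; any proof assistant would do) of what "explaining a proof to a machine" means at the smallest scale: even a fact a human would state in one breath must be justified step by step from named lemmas.

```lean
-- A human says "obviously (n + 0) + m = m + n"; the machine accepts
-- nothing without an explicit certificate built from known lemmas.
theorem obvious (n m : Nat) : (n + 0) + m = m + n := by
  rw [Nat.add_zero]        -- rewrite n + 0 to n
  exact Nat.add_comm n m   -- commutativity closes the goal
```

Scaling this from a one-line identity to a research-level proof is where the astronomical effort comes in.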

dselsam commented on NeuroSAT: Learning a SAT Solver from Single-Bit Supervision   arxiv.org/abs/1802.03685... · Posted by u/kg9000
zero_k · 8 years ago
Wow, that's kinda interesting. I somehow cannot get rid of the feeling that we have a really nice hammer and are treating everything as a nail. There are other uses of deep/machine learning that could help SAT solvers.

For example, one could try to better predict which learnt clauses to keep or throw away, or when to restart because the search space is deemed uninteresting, using prediction models built with machine learning. See (my) blog post here: https://www.msoos.org/2018/01/predicting-clause-usefulness/ (sorry, self-promotion, but relevant)

Let's not forget the work that could be done on auto-configuring SAT solvers, tuning their configuration to the instance, as per the competition at: http://aclib.net/cssc2014/

Another line of work in this domain is portfolio solvers, which pick the best-fitting SAT solver from a list of candidates based on the instance's profile, e.g. priss at http://tools.computational-logic.org/content/riss.php

I think there is some interesting low-hanging fruit in there somewhere, using regular SAT solvers and machine/deep learning, exploiting domain-specific information and know-how.
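The clause-filtering idea above can be sketched in a few lines: score each learnt clause from features a CDCL solver already tracks (clause size, LBD, recent activity) and drop low scorers at database reduction time. The feature names are standard solver terminology; the logistic weights here are purely illustrative, not from any trained model.

```python
import math

# Hypothetical usefulness score for a learnt clause. In a real system the
# weights would come from a model trained on solver traces; these values
# are made up to illustrate the shape of the idea.
def usefulness(size, lbd, activity, w=(-0.05, -0.4, 2.0), bias=0.5):
    z = bias + w[0] * size + w[1] * lbd + w[2] * activity
    return 1.0 / (1.0 + math.exp(-z))  # logistic score in (0, 1)

learnt = [
    {"id": 1, "size": 3,  "lbd": 2,  "activity": 0.9},   # short, glue-like
    {"id": 2, "size": 40, "lbd": 15, "activity": 0.05},  # long, inactive
]
kept = [c["id"] for c in learnt
        if usefulness(c["size"], c["lbd"], c["activity"]) > 0.5]
```

The interesting research question is what features and model class actually predict future propagations/conflicts, which is what the linked blog post investigates empirically.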

dselsam · 8 years ago
Author here. We weren't shooting for low-hanging fruit, and we realize that we are still very far from contributing to the state of the art. We tried to approach this project as scientists rather than as engineers. I personally found NeuroSAT's success extremely surprising, and I think it underscores how little we understand about the capabilities and limitations of neural networks. Our approach might prove to be a dead end, but I think there is some chance we have stumbled on something powerful.
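For readers unfamiliar with the paper: NeuroSAT runs message passing over the bipartite literal-clause graph of a CNF formula. The following is a drastically simplified sketch of that iteration shape only — scalar states and fixed arithmetic updates stand in for the paper's learned vector embeddings and LSTM updates, so it illustrates the data flow, not the actual model.

```python
# Formula (x1 or x2) and (not x1 or x2); literals 0..3 = x1, ~x1, x2, ~x2.
clauses = [[0, 2], [1, 2]]
n_lits = 4
neg = lambda l: l ^ 1  # literal pairing: 2i <-> 2i+1

lit = [1.0] * n_lits              # literal states (scalars, not vectors)
for _ in range(10):               # a few rounds of message passing
    # Each clause aggregates the states of its literals.
    cl = [sum(lit[l] for l in c) for c in clauses]
    # Each literal aggregates messages from the clauses containing it,
    # plus the state of its own negation (the "flip" link in the paper).
    new = []
    for l in range(n_lits):
        msg = sum(cl[i] for i, c in enumerate(clauses) if l in c)
        new.append(0.5 * msg + 0.25 * lit[neg(l)])
    m = max(abs(v) for v in new) or 1.0
    lit = [v / m for v in new]    # normalize to keep states bounded
```

In the real model, a learned readout over the final literal states votes on the single satisfiable/unsatisfiable bit that provides the training signal.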
dselsam commented on Certigrad: bug-free machine learning on stochastic computation graphs   github.com/dselsam/certig... · Posted by u/kg9000
xgk · 9 years ago
Could this problem have been (partly) avoided by going for an LCF-based prover such as HOL or Isabelle/HOL, rather than a system based on the Curry-Howard correspondence?
dselsam · 9 years ago
I do not even know how I would have built Certigrad in Isabelle/HOL in the first place. In my first attempt to build Certigrad, I used a non-dependent type for Tensors (T : Type), and I accumulated so much technical debt that I eventually gave up and started again with a dependent type for tensors (T : Shape -> Type). This was my only major design error in the project, and development went smoothly once I made this change.
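The design shift described above can be sketched as follows (Lean 4 syntax, names illustrative — Certigrad's actual definitions differ): indexing the tensor type by its shape turns shape mismatches into type errors.

```lean
-- A shape is a list of dimensions.
def Shape : Type := List Nat

-- Model a tensor of shape s as a function from a flat index into its
-- total size; the point is only that the type depends on the shape.
def Tensor (s : Shape) : Type :=
  Fin (s.foldr (· * ·) 1) → Float

-- Elementwise addition only type-checks when the shapes agree, so a
-- shape bug is caught by the elaborator, not discovered mid-proof.
def addT {s : Shape} (x y : Tensor s) : Tensor s :=
  fun i => x i + y i
```

With the non-dependent `T : Type`, every lemma would instead need explicit shape side conditions, which is roughly the technical debt described above.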
dselsam commented on Certigrad: bug-free machine learning on stochastic computation graphs   github.com/dselsam/certig... · Posted by u/kg9000
xgk · 9 years ago
I second this.

Isabelle/HOL has much better proof automation than any prover based on the Curry-Howard correspondence. That's primarily because HOL is a classical logic, while Curry-Howard systems are constructive (yes, I know we can do classical proofs inside constructive logic, e.g. with proof irrelevance). Almost all automated provers are also classical. Getting comparably powerful proof automation in Curry/Howard-based systems is an open research problem.

That's because Isabelle is not a prover but a 'meta-prover' used for implementing provers, e.g. Isabelle/HOL. In practice, HOL is the only logic implemented in Isabelle with enough automation to be viable, so Isabelle's 'meta-prover' facilities are rarely used.

dselsam · 9 years ago
> Getting comparably powerful proof automation in Curry/Howard-based systems is an open research problem.

Isabelle/HOL is essentially isomorphic to a subset of Lean when we assume classical axioms, and any automation you can write for Isabelle, you can write for that subset of Lean. It is often easy to generalize automation techniques to support the rest of Lean (i.e. dependent types) as well, but sometimes doing so may introduce extra complexity or preclude the use of an indexing data structure. See https://arxiv.org/abs/1701.04391 for an example of generalizing a procedure (congruence closure) to support dependent types that introduces very little overhead.
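For readers unfamiliar with the procedure mentioned: congruence closure derives equalities between applications from equalities between their arguments. A minimal Lean instance of the core rule:

```lean
-- The core congruence rule: equal arguments yield equal applications.
example (f : Nat → Nat) (a b : Nat) (h : a = b) : f a = f b :=
  congrArg f h

-- With a dependent function (g : (n : Nat) → P n), the two sides g a
-- and g b live in different types P a and P b, so plain congrArg no
-- longer applies — this is the complication the linked paper addresses.
```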

u/dselsam

Karma: 276 · Cake day: June 29, 2017