Readit News
Xcelerate commented on Project to formalise a proof of Fermat’s Last Theorem in the Lean theorem prover   imperialcollegelondon.git... · Posted by u/ljlolel
timmg · 5 days ago
So, as I understand it: Fermat claimed there was an elegant proof, but the proof found later is very complex.

Is the consensus that he never had the proof (he was wrong or was joking) -- or that it's possible we just never found the one he had?

Xcelerate · 5 days ago
I feel like there’s an interesting follow-up problem which is: what’s the shortest possible proof of FLT in ZFC (or perhaps even a weaker theory like PA or EFA since it’s a Π^0_1 sentence)?

Would love to know whether (in principle, obviously) the shortest proof of FLT could actually fit in a notebook margin. Since we have an upper bound, there are only finitely many proof candidates to check to find the lower bound :)
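The "finitely many candidates" observation can be sketched as a (wildly impractical) brute-force search. Everything here is an illustrative stand-in: the alphabet is made up, and `checks` is a hypothetical placeholder for a real verified proof checker.

```python
from itertools import product

# Toy sketch: enumerate candidate proofs in order of length, so the
# first one the checker accepts is a shortest proof. The alphabet and
# checker are illustrative stand-ins, not real ZFC syntax or a kernel.
ALPHABET = "()∀∃¬→∧=∈abcxyz"

def shortest_proof(checks, upper_bound):
    for n in range(1, upper_bound + 1):
        for cand in product(ALPHABET, repeat=n):
            s = "".join(cand)
            if checks(s):  # stand-in for a verified proof kernel
                return s
    return None  # no proof within the known upper bound
```

With the known proof supplying the upper bound, this terminates in principle; the catch is that the search space is astronomically large.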

Xcelerate commented on I've seen 12 people hospitalized after losing touch with reality because of AI   twitter.com/KeithSakata/s... · Posted by u/fortran77
Xcelerate · 14 days ago
I could see this. For certain personality archetypes, there are particular topics, terms, and phrases that for whatever reason ChatGPT seems to constantly direct the dialogue flow toward: "recursive", "compression", "universal". I was interested in computability theory way before 2022, but I noticed that these (and similar) terms kept appearing far more often than I would expect due to chance alone, even in unrelated queries.

Started searching and found news articles talking about LLM-induced psychosis or forum posts about people experiencing derealization. Almost all of these articles or posts included that word: "recursive". I suspect those with certain personality disorders (STPD or ScPD) may be particularly susceptible to this phenomenon. Combine eccentric, unusual, or obsessive thinking with a tool that continually reflects and confirms what you're saying right back at you, and that's a recipe for disaster.

Xcelerate commented on Some thoughts on journals, refereeing, and the P vs NP problem   blog.computationalcomplex... · Posted by u/luu
Tainnor · 19 days ago
The effort from going to a paper proof, even one that everyone agrees is correct, to a formally verified one is still gigantic. Requiring all submissions in maths papers to come with a verified Lean proof would be a complete non-starter - most mathematicians don't even use Lean.

In many cases, the problem isn't that the statement of the theorem can't be properly expressed in Lean. It's very easy to write down the statement of FLT in Lean, for example. But for theorems whose proofs have been elusive for decades or centuries, the proofs very likely include objects in rather advanced mathematics that are not at all easy to encode in Lean. This is one of the current struggles of the FLT formalisation process, for example.
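For what it's worth, here's roughly what that looks like (a hedged sketch in Lean 4 syntax; Mathlib's actual `FermatLastTheorem` definition is phrased slightly differently):

```lean
-- Illustrative statement only; assumes Mathlib for the ℕ notation.
def FLT : Prop :=
  ∀ a b c n : ℕ, 0 < a → 0 < b → 0 < c → 2 < n →
    a ^ n + b ^ n ≠ c ^ n
```

Stating it is a few lines; encoding the machinery of the proof is the hard part.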

In any case, the problem in this case has nothing to do with formalisation, it's simply a case of a bad submission that for unclear reasons didn't get properly peer reviewed.

Xcelerate · 19 days ago
Thanks for sharing! I didn’t think about all of the intermediate steps that might be difficult to encode, but that makes sense.
Xcelerate commented on Some thoughts on journals, refereeing, and the P vs NP problem   blog.computationalcomplex... · Posted by u/luu
Xcelerate · 19 days ago
I’m not up-to-date on how advanced proof assistants have become, but are we not nearing the point where serious attempts at proving P vs NP can be automatically validated (or not) by a widely vetted collection of Lean libraries?

The P vs NP problem is expressible as the question of whether a specific Π_2 sentence is true or not (more specifically, whether a particular base theory proves the sentence or its negation). Unlike problems involving higher order set theory or analytical mathematics, I would think any claimed proof of a relatively short arithmetical sentence shouldn’t be too difficult to write up formally for Lean (although I suppose Fermat’s Last Theorem isn’t quite there either, but my understanding is we’re getting closer to having a formal version of it).
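To sketch what such a sentence looks like (illustrative shape only, not a canonical formulation, and Halts/Out/SAT are informal stand-ins for their arithmetized versions):

```latex
% "No machine decides SAT in polynomial time", i.e. P != NP:
% a \forall\exists sentence over a decidable matrix, hence \Pi_2.
\forall M \,\forall k \,\exists x \;\neg\bigl(
  \mathrm{Halts}(M, x, |x|^{k} + k) \,\wedge\,
  \mathrm{Out}(M, x) = \mathrm{SAT}(x)
\bigr)
```

The inner condition concerns a time-bounded computation, so it is decidable, which is what keeps the whole sentence at the Π_2 level.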

The impact of this would be that journals could publicly post which specific sentences they consider to represent various famous open problems, and a prerequisite to review is that all purported proofs require a formal version that automatically Lean-validates. This would nix the whole crank issue.

Then again, I may be way off base with how close we are to achieving something like I’ve described. My impression is that this goal is about 5 years away for “arithmetical” mathematics, but someone more in the know should feel free to correct me.

Xcelerate commented on The bewildering phenomenon of declining quality   english.elpais.com/cultur... · Posted by u/geox
layer8 · a month ago
One recent example is a sturdy fold-out clothes drying rack I owned. All reviews praised its quality. My unit unfortunately got damaged in a heavy storm when I left it outside by mistake. The manufacturer got bought up in the meantime, and the product is now flimsier and less stable; the metal axles have been replaced by plastic ones. And I haven’t found any comparable model on the market. I’d be willing to pay double or triple the price because of how good it was, and it wasn’t particularly inexpensive to start with.

I very much doubt that such a product can’t be manufactured sustainably in robust quality.

Xcelerate · a month ago
It’s gotten absurd. I’ll easily pay 10x the regular price of some object if I’m confident it will last a very long time and I won’t have to think about it anymore. I’ve replaced all the crappy LED bulbs in my house with Yuji Sunwave brand. I’ve not had a single bulb flicker or go out in years now, and the quality of the light is superb (i.e. more akin to what everyone used to have with incandescent bulbs). I bought a Control Freak induction cooktop in 2018. The whole family uses it far more than the cheap gas range that came with the house and is a pain to clean. Similarly, I replaced all the Food Network brand pots and pans I had in college that were chipping paint and rusting with Demeyere versions. Not a single problem since.

And to your point, I’ve probably gone through six clothes drying racks by now that all break down after a short time. I have yet to find a high-quality one.

It sounds expensive, but I suspect that in the long term, buying higher quality up front ends up cheaper in terms of time and replacement costs. I’ve debated replacing some home appliances with commercial or restaurant versions, but there are some notable tradeoffs there, unfortunately, since commercial appliances are designed for a somewhat different purpose than home use.

Of course this strategy is all well and good if you can foot the initial high cost of the products, which many people cannot on the typical family income. There’s been a lot written about how those of lower income are often taken advantage of in this way—they end up paying a higher “lifetime cost” for lower quality products and service, because the system attempts to produce the minimum viable affordable product, which then sets the bar for the “new normal”.

Xcelerate commented on All AI models might be the same   blog.jxmo.io/p/there-is-o... · Posted by u/jxmorris12
Xcelerate · a month ago
Edit: I wrote my comment a bit too early before finishing the whole article. I'll leave my comment below, but it's actually not very closely related to the topic at hand or the author's paper.

I agree with the gist of the article (which IMO is basically that universal computation is universal regardless of how you perform it), but there are two big issues that prevent this observation from helping us in a practical sense:

1. Not all models are equally efficient. We already have many methods to perform universal search (e.g., Levin's, Hutter's, and Schmidhuber's versions), but they are painfully slow despite being optimal in a narrow sense that doesn't extrapolate well to real world performance.

2. Solomonoff induction is only optimal for infinite data (i.e., it can be used to create a predictor that asymptotically dominates any other algorithmic predictor). As far as I can tell, the problem remains totally unsolved for finite data, due to the additive constant that results from the question: which universal model of computation should be applied to finite data? You can easily construct a Turing machine that is universal and perfectly reproduces the training data, yet nevertheless dramatically fails to generalize. No one has made a strong case for any specific natural prior over universal Turing machines (and if you try to define some measure to quantify the "size" of a Turing machine you realize this method starts to fail once the number of transition tables becomes large enough to start exhibiting redundancy).
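For concreteness on point 1, here's a hedged toy of Levin-style search. Real Levin search runs arbitrary programs under a universal machine; here "programs" are strings over a made-up two-op DSL, which keeps everything terminating and checkable. What it does show is the scheme where, in phase t, a program of length ℓ gets roughly 2^(t−ℓ) steps, so short programs get exponentially more time.

```python
from itertools import product

# Tiny stand-in "instruction set": a = increment, b = double.
OPS = {"a": lambda x: x + 1, "b": lambda x: x * 2}

def run(program, x, step_budget):
    """Apply ops left to right, giving up if the budget runs out."""
    for i, op in enumerate(program):
        if i >= step_budget:
            return None  # out of time for this phase
        x = OPS[op](x)
    return x

def levin_search(start, target, max_phase=12):
    # Phase t: every program of length <= t gets ~2**(t - length) steps,
    # so total work per phase is bounded and short programs get priority.
    for t in range(1, max_phase + 1):
        for length in range(1, t + 1):
            budget = 2 ** (t - length)
            for program in product(OPS, repeat=length):
                if run(program, start, budget) == target:
                    return "".join(program)
    return None

print(levin_search(1, 10))  # → 'abab' (1 → 2 → 4 → 5 → 10)
```

Optimal up to a constant factor, and that constant is exactly the problem: it hides the choice of universal machine, which is point 2 again.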

Xcelerate commented on Hypercapitalism and the AI talent wars   blog.johnluttig.com/p/hyp... · Posted by u/walterbell
Xcelerate · a month ago
I find the current VC/billionaire strategy a bit odd and suboptimal. If we consider the current search for AGI as something like a multi-armed bandit seeking to identify “valuable researchers”, the industry is way over-indexing on the exploitation side of the exploitation/exploration trade-off.

If I had billions to throw around, instead of siphoning large amounts of it to a relatively small number of people, I would instead attempt to incubate new ideas across a very large base of generally smart people across interdisciplinary backgrounds. Give anyone who shows genuine interest some amount of compute resources to test their ideas in exchange for X% of the payoff should their approach lead to some step function improvement in capability. The current “AI talent war” is very different than sports, because unlike a star tennis player, it’s not clear at all whose novel approach to machine learning is ultimately going to pay off the most.
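To make the bandit framing concrete, here's a toy epsilon-greedy sketch. Everything in it is invented for illustration, the arm payoffs especially; it just shows how epsilon = 0 (pure exploitation) locks onto whichever arm looked best early.

```python
import random

# Hedged toy: the explore/exploit trade-off as an epsilon-greedy bandit.
# Arms stand in for "research bets"; all payoff numbers are made up.
def epsilon_greedy(arms, epsilon, rounds, rng):
    estimates = [0.0] * len(arms)   # running mean payoff per arm
    counts = [0] * len(arms)
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:                  # explore: random arm
            i = rng.randrange(len(arms))
        else:                                       # exploit: best estimate
            i = max(range(len(arms)), key=lambda j: estimates[j])
        reward = arms[i](rng)
        counts[i] += 1
        estimates[i] += (reward - estimates[i]) / counts[i]
        total += reward
    return total

star_hire = lambda r: r.gauss(1.0, 0.5)   # known quantity, decent mean
long_shot = lambda r: r.gauss(0.2, 3.0)   # noisy, occasionally huge
# With epsilon = 0 the strategy never samples the long shot again after
# a bad first draw -- the over-indexing on exploitation described above.
```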

Xcelerate commented on BusyBeaver(6) Is Quite Large   scottaaronson.blog/?p=897... · Posted by u/bdr
throwaway81523 · 2 months ago
> It boggles my mind that we ever thought a small amount of text that fits comfortably on a napkin (the axioms of ZFC) would ever be “good enough” to capture the arithmetic truths or approximate those aspects of physical reality that are primarily relevant to the endeavors of humanity.

ZFC is way overpowered for that. https://mathoverflow.net/questions/39452/status-of-harvey-fr...

Xcelerate · 2 months ago
I don’t understand your post. You’re linking to a discussion about the same conjecture I mentioned in another comment 11 hours prior to your comment. Did you mean to link something else?
Xcelerate commented on BusyBeaver(6) Is Quite Large   scottaaronson.blog/?p=897... · Posted by u/bdr
hyperpape · 2 months ago
This ignores the fact that it is not so easy to find natural interesting statements that are independent of ZFC.

Statements that are independent of ZFC are a dime a dozen when doing foundations of mathematics, but they're not so common in many other areas of math. Harvey Friedman has done interesting work on finding "natural" statements that are independent of ZFC, but there's dispute about how natural they are. https://mathoverflow.net/questions/1924/what-are-some-reason...

In fact, it turns out that a huge amount of mathematics does not even require set theory, it is just a habit for mathematicians to work in set theory. https://en.wikipedia.org/wiki/Reverse_mathematics.

Xcelerate · 2 months ago
Yeah, I’m quite familiar with Friedman’s work. I mentioned him and his Grand Conjecture in another comment.

> This ignores the fact that it is not so easy to find natural interesting statements that are independent of ZFC.

I’m not ignoring this fact—just observing that the sheer difficulty of the task seems to have encouraged mathematicians to pursue other areas of work beside foundational topics, which is a bit unfortunate in my opinion.

Xcelerate commented on BusyBeaver(6) Is Quite Large   scottaaronson.blog/?p=897... · Posted by u/bdr
azan_ · 2 months ago
Well, obviously we could simply take every true sentence of Peano arithmetic as an axiom to obtain a consistent and complete system. But if we think in that spirit, then almost every mathematician in the world is working on finding a better set of axioms (because every proof would either give us a new axiom or show that something should not be included as an axiom), right?
Xcelerate · 2 months ago
> obviously we could simply take every true sentence of Peano arithmetic as an axiom to obtain a consistent and complete system

If you’re talking about every true sentence in the language of PA, then not all such sentences are derivable via the theory of PA. If you are talking about the theorems of PA, then these are missing an infinite number of true statements in the language of PA.
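In symbols (a sketch, with Thm(PA) for the theorems of PA and Th(ℕ) for true arithmetic):

```latex
% PA's theorems are a proper subset of true arithmetic; by Gödel's
% second incompleteness theorem, Con(PA) witnesses the gap.
\mathrm{Thm}(\mathsf{PA}) \subsetneq \mathrm{Th}(\mathbb{N}),
\qquad
\mathrm{Con}(\mathsf{PA}) \in \mathrm{Th}(\mathbb{N}) \setminus \mathrm{Thm}(\mathsf{PA})
```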

Harvey Friedman’s “grand conjecture” is that virtually every theorem that working mathematicians actually publish can already be proved in Elementary Function Arithmetic (much weaker than PA in fact). So the majority of mathematicians are not pushing the boundaries of the existing foundational theories of mathematics, although there is certainly plenty of activity regardless.

u/Xcelerate

Karma: 11262 · Cake day: June 8, 2012