tsimionescu · a year ago
This is an interesting case of a conjecture holding true for small objects but breaking down for huge ones, without that size being built into the statement in any way.

Sometimes we tend to have this intuition that if a rule applies to all small numbers, then it must apply to all numbers, that there can't be some huge number after which it breaks down (unless of course it explicitly includes that huge number, such as the rule "all numbers are smaller than a billion billion billion").

This is such a powerful intuition, even though it's obviously wrong, that rules that break it are even seen as a problem. For example, there is the so-called "hierarchy problem" in physics, which states something like "there is no reason why the weak force is so much stronger than gravity". As if being 2 times as strong were somehow fundamentally different from being 10^24 times as strong. And this has ended up being called a major problem with the standard model, even though it's completely normal from a mathematical perspective.

throwawayiionqz · a year ago
Many examples of conjectures true for all n, until a very large one is eventually found : https://mathoverflow.net/questions/15444/examples-of-eventua...
ants_everywhere · a year ago
One way to think of this is that smaller numbers are more constrained in a sense. As numbers get bigger more possibilities arise.

For example, in lower dimensions you get accidental isomorphisms [0]. These exist essentially because the defining equations are more constrained in lower dimensions.

A more tongue-in-cheek intuition is the interesting numbers theorem, which says essentially that every integer is interesting. One of the things that can make a number interesting is that there's a property for which it's the first counterexample.

[0] https://en.wikipedia.org/wiki/Exceptional_isomorphism

snarkconjecture · a year ago
There are two kinds of naturalness principle in physics, sometimes called "technical naturalness" and "Dirac naturalness" respectively.

Dirac naturalness is as you describe: skepticism towards extremely large numbers, end of story. It has the flaw you (and every other person who's ever heard it) point out.

Technical (or 't Hooft) naturalness is different, and specific to quantum field theory.

To cut a long story short, the "effective", observable parameters of the Standard Model, such as the mass of the electron, are really the sums of enormous numbers of contributions from different processes happening in quantum superposition. (Keywords: Feynman diagram, renormalization, effective field theory.) The underlying, "bare" parameters each end up affecting many different observables. You can think of this as a big machine with N knobs and N dials, but where each dial is sensitive to each knob in a complicated way.

Technical naturalness states: the sum of the contributions to e.g. the Higgs boson mass should not be many orders of magnitude smaller than each individual contribution, without good reason.

The Higgs mass that we observe is not technically natural. As far as we can tell, thousands of different effects due to unrelated processes are all cancelling out to dozens of decimal places, for no reason anyone can discern. There's a dial at 0.000000.., and turning any knob by a tiny bit would put it at 3 or -2 or something.
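
A toy numerical picture of that fine-tuning (entirely made-up numbers, not actual Standard Model contributions): a few huge terms that happen to cancel to many digits, where nudging any one of them slightly destroys the cancellation.

  # Entirely made-up contributions of order 1e16 that cancel down to ~2.5e5,
  # i.e. roughly ten digits of cancellation.
  contributions = [ 1.234567890123e16,
                   -0.987654321098e16,
                   -0.246913569000e16]

  total = sum(contributions)
  largest = max(abs(c) for c in contributions)
  print(total)            # tiny compared with any single term
  print(total / largest)  # the "fine-tuning" ratio, about 2e-11

  # Turn one "knob" by just 0.1% and the cancellation is gone:
  perturbed = contributions[:]
  perturbed[0] *= 1.001
  print(sum(perturbed))   # now of order 1e13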

There are still critiques to be made here. Maybe the "underlying" parameters aren't really the only fundamental ones, and somehow the effective ones are also fundamental? Maybe there's some reason things cancel out, which we just haven't done the right math to discover? Maybe there's new physics beyond the SM (as we know there eventually has to be)?

But overall it's a situation that, imo, demands an explanation beyond "eh, sometimes numbers are big". If you want to say that physical calculations "explain" anything -- if, for example, you think electromagnetism and thermodynamics can "explain" the approximate light output of a 100W bulb -- then you should care about this.

solidsnack9000 · a year ago
It is helpful that the post links to the Wikipedia page: https://en.wikipedia.org/wiki/Bunkbed_conjecture

Reading that and then rereading the post, it all made a whole lot more sense: why the conjecture is intuitively appealing and why the computational approach doesn't readily result in a proof.

yread · a year ago
I don't know. To me it sounds like it's obvious the conjecture doesn't hold. If you have a path in the upper bunk that gets broken, you are screwed; but if that path is broken, you have the option to switch to a path in the lower bunk at ANY moment. So you have an (n+1)/n higher chance of a break but n-1 ways to avoid it.
supernewton · a year ago
The conjecture is true for all small graphs that they tried, so if it's "obviously" false to you then something went wrong with your intuition somewhere.
winwang · a year ago
Intuitively (subjectively) why this argument doesn't work: if you switch to the path in the lower bunk, you can also switch to the upper bunk "at any moment", regardless of whether a path exists on the lower bunk.

In this example, a1 is connected to both c1 and c2:

  a1    b1 -- c1
  |     |     |
  a2 -- b2 -- c2
If, instead of deleting edges, we assigned a distance d = 1/p, then you'd expect (and probably easily prove) that the distance to the target vertex is at most the distance to its partner in the other bunk. The fact that this intuitive "problem translation" doesn't actually translate to probabilities is quite surprising (to me)!
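
For what it's worth, this sort of claim is easy to sanity-check numerically. Below is a minimal Monte Carlo sketch in Python (treating the drawn graph as the starting graph, giving every edge the same survival probability p, and letting the posts a1-a2, b1-b2, c1-c2 be deleted like any other edge; some formulations keep the posts fixed). It just estimates P(a1 <-> c1) and P(a1 <-> c2) so the two can be compared:

  import random
  from collections import deque

  # Edges of the graph drawn above, treated here as the starting graph
  # (a toy example, not the published counterexample). The posts a1-a2,
  # b1-b2, c1-c2 are treated as ordinary deletable edges.
  EDGES = [("b1", "c1"),                              # upper bunk
           ("a2", "b2"), ("b2", "c2"),                # lower bunk
           ("a1", "a2"), ("b1", "b2"), ("c1", "c2")]  # posts

  def connected(edges, source, target):
      # Breadth-first search over the surviving edges only.
      adjacency = {}
      for u, v in edges:
          adjacency.setdefault(u, []).append(v)
          adjacency.setdefault(v, []).append(u)
      seen, queue = {source}, deque([source])
      while queue:
          node = queue.popleft()
          if node == target:
              return True
          for neighbour in adjacency.get(node, []):
              if neighbour not in seen:
                  seen.add(neighbour)
                  queue.append(neighbour)
      return False

  def estimate(p, target, trials=200_000, rng=random.Random(0)):
      # Keep each edge independently with probability p, then check
      # whether a1 can still reach the target vertex.
      hits = 0
      for _ in range(trials):
          survivors = [e for e in EDGES if rng.random() < p]
          hits += connected(survivors, "a1", target)
      return hits / trials

  print("P(a1 <-> c1) ~", estimate(0.5, "c1"))
  print("P(a1 <-> c2) ~", estimate(0.5, "c2"))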

tech_ken · a year ago
I'm pretty sure you're sampling the "bedposts" as well; the upper and lower subgraphs aren't just connected at every node. I understood the rough (and incorrect) intuition to be something like:

P[nodes in upper bunk connected] > P[nodes in lower bunk connected] * P[sufficient "bedpost" edges survive connecting upper and lower]

Since

P[nodes in upper bunk connected] = P[nodes in lower bunk connected]

And by definition

P[sufficient "bedpost" edges survive connecting upper and lower]<1

scottmsul · a year ago
> if you have a path in the upper bunk that gets broken you are screwed

Counter-factuals like this don't apply when talking about average probabilities. If you cross over, it's an identical graph with identical probabilities. idk, to me it seems really counter-intuitive that the opposite bunk's node would be easier to get to than the current bunk's node.

knodi123 · a year ago
I still don't get it. So you make this graph, with a random number on each edge. Then you duplicate that graph, stack them like a bunkbed, and connect every lower vertex to the one above it. Then you delete all edges with a number below some threshold. Wouldn't the upper and lower graphs still be identical? Since you had the same edge weights in both? So the route from any upper node X to any upper node Y should be 1 less than the route length between X and the point beneath Y.

What did I miss?

cochne · a year ago
There is no thresholding; you delete edges randomly based on the probability assigned to each.
TheRealPomax · a year ago
You missed the part where the conjecture is about subgraphs: after forming the "bunk bed", we start randomly deleting edges based on edge probability, so there may no longer be a direct path from x to y in what was originally the upper graph.
auggierose · a year ago
Yeah, the introductory sentence in the post doesn't really do a good job explaining what the bunkbed conjecture is. If I was interested in it, I would have to look it up somewhere else.
andrewflnr · a year ago
While I wouldn't accept a probabilistic "proof" of a theorem like this, it does seem clear to me that those results are important for directing the focus of research, especially in cases where they go against intuition. Given that most of the math community was barking up the wrong tree, even if these guys only had the probabilistic result, surely that would eventually help someone find the right proof? That's at least worth publishing as a letter or something, right?

Ed: reflecting on my first sentence, I guess I tend to think of proofs as being at least as important a product of math as the raw truth of a statement. A fact is a fact, but a proof represents (some level of) understanding, and that's the good stuff. Experiments are potentially good science, but usually not math.

bubblyworld · a year ago
Can you elaborate? It's possible to prove probabilistic results about discrete objects rigorously - in fact this is a reasonably common technique (going back to work by Paul Erdos, and probably further). It can give you concrete information about the existence or non-existence of these objects (if something exists with non-zero probability in a finite set there has to be at least one example). It's not a "less valid" way to prove things, just a particular choice of approach. Sometimes it makes things easier, sometimes it doesn't.
ilya_m · a year ago
We are talking about different kinds of "probabilistic" proofs. Igor Pak's blog post and the root message in this thread refer to statements that are validated only probabilistically. The probabilistic method, by Erdos and others, is as rigorous as any proof. "Probabilities" in this method are just a convenient apparatus for counting objects.

(As an aside, there's an important distinction with statements that are believed to be true based on empirical evidence, such as the hardness of factoring or the Riemann hypothesis. If the bunkbed conjecture were refuted based on Monte Carlo simulations, it'd be possible to reduce the level of uncertainty arbitrarily low by throwing more resources at running more simulations.)
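
A quick sketch of that last point in Python (the "true" probability here is made up, purely for illustration): the standard error of a Monte Carlo estimate shrinks like 1/sqrt(N), so more simulations buy arbitrarily small, though never zero, uncertainty:

  import math
  import random

  def monte_carlo(true_q, n_samples, rng):
      # Estimate an unknown probability true_q by sampling; the usual
      # binomial standard error decays like 1/sqrt(n_samples).
      hits = sum(rng.random() < true_q for _ in range(n_samples))
      estimate = hits / n_samples
      std_err = math.sqrt(estimate * (1 - estimate) / n_samples)
      return estimate, std_err

  rng = random.Random(0)
  for n in (10**3, 10**5, 10**7):
      print(n, monte_carlo(0.3, n, rng))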

samatman · a year ago
What you're describing is different from the distinction the parent is making, which is between providing strong probabilistic evidence that a conjecture is true and proving it true through a rigorous formal proof.

Often there are three useful states: none, some, and all. You describe a rigorous proof-of-some, because a proven probability of not-zero guarantees a counterexample exists. It's still news when or if the counterexample is found, but given the standard of rigorous proof is met, it does exist. Of course another option is to construct a counterexample, as was done here.

The case discussed in the Fine Article was rigorously demonstrating a 99.99% chance that the probability is not zero. That's informative, but it isn't proof. A proof that the probability is not zero would also be a proof, even absent the counterexample which we now know (rather than merely expect on good grounds) exists. But that isn't the case being discussed here.

feoren · a year ago
> if something exists with non-zero probability in a finite set there has to be at least one example

I think you mean an infinite set here. If I have a set containing one single coin, and I flip every coin in my set, both heads and tails have a non-zero probability to exist within my flipped set. But there's certainly not an example of both heads and tails in my single-coin set. Indeed for any finite set of coins, there's some chance that either heads or tails never shows up.

However, if I have an infinite set of coins and I flip them all, certainly both heads and tails must exist (although I wouldn't be surprised if even this depends on the axiom of choice or something.)

dataflow · a year ago
> This is completely obvious, of course!

Could someone explain why the conjecture seemed obviously true? I have no background with it, but just reading the description here, I was caught off guard by the sentence. What made it seem obvious?

jprete · a year ago
The upper and lower bunk graphs are symmetrical in their probabilities, so it would intuitively seem like connecting to the same bunk's point would be easier/more likely than connecting to the other bunk's point. The first would require zero (or an even number of) crossings, the latter would require one (or an odd number of) crossings. Every crossing would seem to only decrease the odds of the path existing, or at best leave it the same.

To me it smells like a trap, though. This feels like exactly the kind of problem where some algorithmically chosen infinite graph would break the conjecture, possibly through some kind of NP-hardness-proof-style transformation of the problem into another form, and who knows, maybe the Axiom of Choice gets involved too (but probably not based on the headline!).

joelignaatius · a year ago
I may be missing something. If there's an edge u1 to v1 and, in the symmetrical graph, an edge u2 to v2, then any post has to be between u1 and u2 for any u. Which means traversing a post would essentially be adding an edge to the graph (no matter the probability). It seems intuitively obvious (which is stated here as incorrect) that going through a post would be more expensive.
sweezyjeezy · a year ago
The (disproven) conjecture is for finite graphs.
cashew22 · a year ago
Here is my reasoning: start with two vertices u and v in the first floor and the vertex corresponding to v in the second floor (call it w). u and v are connected if, after deleting some edges randomly, there is still a path from u to v, similarly for u and w. But for any fixed path from u to w, you can get a shorter path from u to v by just going across between the floors one less time. Because a shorter path is more likely to survive than a longer one (by a factor of p), it makes sense that overall, the probability of some path from u to v surviving is larger than some path from u to w surviving. But, that reasoning breaks down, in part because when you drop the last crossing edge, there are many ways of adding it back in, so each path from u to v might correspond to many paths from u to w.
ted537 · a year ago
I also know nothing but here's something missing from the blog post:

"A random subgraph of the bunkbed graph is then formed by independently deleting each edge based on the assigned probability."

So the (apparently incorrect) intuition is that an (upper<->lower) connection requires at least one extra edge, so it runs a greater risk of being disconnected by the random edge removal. Therefore, a same-level connection is more likely to remain after the removal.
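
In code, that quoted step is just an independent coin flip per edge; a hypothetical sketch (reading the assigned number as the edge's survival probability; the opposite convention, reading it as the deletion probability, works the same way):

  import random

  def random_subgraph(edge_probabilities, rng=random.Random()):
      # edge_probabilities maps each edge to the probability that it
      # SURVIVES; every edge is kept or deleted independently.
      return [edge for edge, p in edge_probabilities.items()
              if rng.random() < p]

  example = {("u1", "v1"): 0.7, ("u2", "v2"): 0.7, ("u1", "u2"): 1.0}
  print(random_subgraph(example))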

stonemetal12 · a year ago
Intuitively the longer a path is the more likely it is to be broken when you start randomly removing edges, and before you remove edges the shortest path from U1 to V2 is the shortest path from U1 to V1 plus a post. Therefore it seems like the path from U1 to V2 is more likely to be broken than the path from U1 to V1.
keepamovin · a year ago
Why is it "obviously true" (that lower probabilities of connection between than on levels)?

Because if it weren't true, that would imply that vertices on different levels (as reached, presumably, by a canonical DFS tree / spanning tree) were closer (in terms of paths) than vertices on the same level, which would mean that those vertices should "more likely" have been on the same level (however that is measured).

TL;DR - 'obviously true' because there's a level between them so (on average) of course!

User3456335 · a year ago
The paper seems to have a definition where bed posts are never deleted, i.e. they are all assigned probability 1 in which case the conjecture is obviously true.

The counterexample seems to rely on correlations between edge deletions which makes no sense because the deletions should be independent (in the definition I'm seeing on Wikipedia).

I could be wrong here because I haven't read it in detail but on first sight, it looks like there are some serious issues with mathematical rigour here.

JohnKemeny · a year ago
You can probably replace any transversal vertex with a large clique (depending only on p) such that, with high probability, the clique remains connected and at least one post remains.

(I.e., you can "model" undeletability by adding sufficiently many twin posts.)

Ps., do you mean "in which case the conjecture is obviously false"?

User3456335 · a year ago
Surely, if the poles can't be deleted then which node is chosen from a bunkbed will not affect the connectedness probability, which would make it impossible to find a counterexample, right? What am I missing here?

PS Depends on what conjecture we refer to here.

Deleted Comment

ykonstant · a year ago
That does seem extremely counter-intuitive; can the counterexample be adapted to provide a sequence of counterexamples with an unbounded number of edges?
r0uv3n · a year ago
I guess you can just add some extra vertices and edges and give all the extra edges probability 1 of being deleted.
dudeinjapan · a year ago
I already learned this from Step Brothers: https://www.youtube.com/watch?v=ulwUkaKjgY0