JoshTriplett · 4 years ago
This doesn't seem like an actual solution to the problem as normally stated. The point of Newcomb's problem is that if someone makes decisions on the basis of accurate prediction of your choices, you should commit to making the choice that gives you the maximum payout.

In other words, the problem as stated is that if you choose both boxes there will be nothing in the opaque box, and if you choose one box there will be a large payout in the opaque box.

Redefining the problem to say "they've already made the decision and filled the box or boxes, so your choice won't change the outcome" is sidestepping the point of the problem.
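
To make the payoff structure Josh describes concrete, here is a minimal sketch assuming the commonly quoted amounts ($1,000 in the transparent box, $1,000,000 in the opaque box) and a predictor whose prediction always matches your choice; neither figure is stated in this thread, so treat them as placeholders.

```python
# Toy sketch of the standard Newcomb payoff structure (assumed amounts).

def payout(choice, prediction):
    """Return the player's winnings given their choice and the prediction."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    visible = 1_000
    return opaque + (visible if choice == "two-box" else 0)

# If the prediction always matches the choice (the premise described above):
print(payout("one-box", "one-box"))   # 1,000,000
print(payout("two-box", "two-box"))   # 1,000
```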

remontoire · 4 years ago
This is definitely not an actual solution. It's a first order solution to a non-first order problem.
georgewsinger · 4 years ago
If you enjoyed this essay, you might also enjoy the author's fuller book Paradox Lost: Logical Solutions to Ten Puzzles of Philosophy, which covers 9 additional famous logical puzzles and offers solutions to them: https://www.amazon.com/Paradox-Lost-Logical-Solutions-Philos...

I personally wasn't convinced by every paradox solution offered in this book, but a few of the solutions were truly stunning, and left me with the feeling that some famous philosophical paradox had been completely and unambiguously resolved (esp. in the first chapter).

(I love shilling for Michael Huemer [author of this book/blog] because his philosophical books & essays have been extremely influential/clarifying in my own thinking).

drdec · 4 years ago
What we learned from that article is that Michael Huemer will only be winning $1000 if he is ever lucky enough to play this game.

The key to one-boxing is to commit to it early and don't waver when the time comes.

vlovich123 · 4 years ago
The thought experiment part aside, isn’t the premise a logical contradiction? You may choose to flip a coin about your choice and therefore the machine cannot possibly have a 90% accurate prediction of your guess.
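
A quick way to see the worry: if the choice is delegated to a fair coin flipped after the prediction is locked in, no predictor can beat chance on that player, whatever it knows about them. A rough simulation, assuming a fair coin and a prediction fixed in advance:

```python
# Illustration of the coin-flip objection: against a coin-flipping player,
# any prediction made before the flip is right only ~50% of the time.
import random

random.seed(0)
trials = 100_000
prediction = "one-box"  # any fixed prediction; a randomized one does no better
hits = sum(1 for _ in range(trials)
           if random.choice(["one-box", "two-box"]) == prediction)
print(hits / trials)  # ~0.5, well short of the claimed 90%
```
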
josephcsible · 4 years ago
In some variants of the problem, if the machine predicts you'll choose to delegate your choice to something random, it leaves the box empty just as it would if it predicts you'll choose to open both.
jerf · 4 years ago
Newcomb's paradox obscures the same basic reason you can't have a Halting-Detection TM, only it covers it over with fuzzy terms and human complications.

If you recast the paradox as "There exists a machine scientists made that will determine whether any other intelligence will choose the box or not choose the box. You are that other intelligence. Do you choose the box or not choose the box?" then you can see that the same principle holds; if you incorporate the logic of the predictor into your own intelligence, you can twist the original machine's logic back onto itself in exactly the same way the Halting problem does.
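
A hedged sketch of that twist-back, treating the predictor as just another program the agent can call; `predict` is a hypothetical stand-in for whatever the scanning machine computes, not anything from the original thought experiment:

```python
def contrarian_agent(predict):
    """An agent that runs the predictor on itself and does the opposite."""
    if predict(contrarian_agent) == "one-box":
        return "two-box"
    return "one-box"

# Whatever predict(contrarian_agent) returns, the agent's actual choice is
# the other option, so no total predictor can be correct about this agent --
# the same self-reference used to rule out a halting detector.
```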

A subtlety to note about the halting problem is that while it is phrased in terms of that particular machine that can twist back on itself, it is itself a generalized proof of impossibility, and via Rice's theorem and lots of other work over the years it extends to the impossibility of all sorts of other machines as well. The proof simply provides one machine that unambiguously cannot be created; it is not limited to that one machine.

Similarly, while a human brain that encompasses the same logic as the box-prediction machine may have technical problems in the fuzzy real-world land, the fact that such a brain would be fundamentally unpredictable means that the prediction machine can't exist.

If you try to fix up the thought experiment by limiting the size of the predictor and the predictee in various ways, I suspect it isn't that hard to show that the predictor must be exponentially more complicated than the predictee in order to function, and something "exponentially larger" than the human brain doesn't really fit in the universe. And then if you try to escape by allowing arbitrarily large mathematical systems, you're back where I described above. If you try to bound how much larger the predictor has to be than the predictee, you are going to be encountering some really serious mathematical problems doing so (of the "busy beaver" mathematical sequence sorts).

Given that the problem intrinsically encompasses a delay between the scan and the decision, I can simply take an ad-hoc hash function of the experiences of the last day and now the predictor needs to have also had information of my entire last day as well, and a simulation sufficiently detailed to have predicted that, too. Even if it can predict the hash function I would use (itself no guarantee since that is also conditionalized on the intervening day, potentially), it can't predict the input going into it.

I think most people intuitively sense that the predictor can't really exist; I think people's intuitions are correct. It would require the predictor to have physically impossible amounts of additional resources vs. the predictee if you try to embed it in real space and time, and if you put it in math space it's nothing more and nothing less than the halting problem restated.

pas · 4 years ago
For anyone interested in a slightly different aspect of this:

A MIRI talk mentioned a hypothetical scenario in which an AI gets duplicated. In that case the AI has very high certainty about the state of its own duplicate. Nate also argues that many everyday problems humans face are Newcomb-like. (And of course he talks about an AI that knows that humans know its source code.)

https://www.youtube.com/watch?v=3Sn0stHiNh4 (from 12 min 30 sec)

https://intelligence.org/2017/04/07/decisions-are-for-making...

daxfohl · 4 years ago
At 100% maybe, but at 90% it's possible that some algorithm that took, say, IQ, net income, and Myers-Briggs type into account could hit 90%.

FeepingCreature · 4 years ago
The machine predicts that you will not choose to flip a coin. :)
lmm · 4 years ago
Well if I do then clearly it didn't.
vehemenz · 4 years ago
The original scenario as imagined by Nozick implies the probability is much higher than 90%, something close to 1. And you can't "outsmart" the machine. This actually makes a big difference when trying to "solve" the paradox, as this author attempts to do.

Also, why you should 1 box: https://www.youtube.com/watch?v=Tzi3PCi0WBU

planetsprite · 4 years ago
The solution is to misunderstand Newcomb's paradox and make yourself think it's best to take 1 box, decide to take 1 box no matter what, have the machine evaluate your future decision, then read this article, change your mind, and take 2 boxes.
FeepingCreature · 4 years ago
Reading this article is also a decision the machine can evaluate.
daxfohl · 4 years ago
It depends. Need to know the algorithm.

If it's just that 90% of people choose one, then it can choose that one and be right 90% of the time without even taking your brain scan into account. Or maybe 90% of males go one way and 90% of females go the other way, well that's an easy thing. Or maybe nobody ever chooses one when it's raining, and there's a 10% chance of rain tomorrow. Or, maybe it does a perfect simulation and knows exactly what you'll choose, but flips it 10% of the time just for fun.

Any of those would have different ways of optimizing, so it doesn't make sense to speculate without knowing exactly how it works.
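
For instance, under the usual (assumed) $1,000 / $1,000,000 payoffs, a base-rate predictor whose guess is independent of your choice makes two-boxing dominant, while a noisy-simulation predictor whose guess tracks your choice makes one-boxing win in expectation. A small sketch:

```python
def payout(choice, prediction):
    """Player's winnings under the standard (assumed) Newcomb amounts."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    return opaque + (1_000 if choice == "two-box" else 0)

# Base-rate predictor: its guess is fixed independently of your choice,
# so whatever it guessed, two-boxing pays $1,000 more.
for guess in ("one-box", "two-box"):
    print(guess, payout("one-box", guess), payout("two-box", guess))

# Noisy simulation: it predicts your actual choice but flips it 10% of the
# time, so the prediction is correlated with the choice and one-boxing wins.
ev_one = 0.9 * payout("one-box", "one-box") + 0.1 * payout("one-box", "two-box")
ev_two = 0.9 * payout("two-box", "two-box") + 0.1 * payout("two-box", "one-box")
print(ev_one, ev_two)  # 900000.0 vs 101000.0
```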

daxfohl · 4 years ago
One interesting thing: any scenario I devise where choosing one box wins also implies a lack of free choice. IDK if this can be proved, but it seems to be the case.
sega_sai · 4 years ago
I don't agree with this calculation. Under the assumptions that 1) you think about what to do, then 2) you decide which decision to make, then 3) the computer learns about it and provides it to the organizers with 90% precision, I am pretty sure the correct answer is the one given as the 'first answer' in the article (assuming what we care about is the expectation). And I think that even if our strategy is probabilistic between box B and boxes A+B, choosing box B with 100% probability is still the best strategy.
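
For what it's worth, one way to check that claim, assuming the usual $1,000 / $1,000,000 payoffs and a predictor that matches your realized choice 90% of the time (both assumptions; the article's exact numbers aren't quoted here): the expected value is linear in the mixing probability, so the best mixed strategy is the pure one-boxing strategy.

```python
def ev(p_one_box):
    """Expected winnings of a strategy that one-boxes with probability p."""
    ev_one = 0.9 * 1_000_000 + 0.1 * 0        # predictor right / wrong about a one-boxer
    ev_two = 0.9 * 1_000 + 0.1 * 1_001_000    # predictor right / wrong about a two-boxer
    return p_one_box * ev_one + (1 - p_one_box) * ev_two

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, ev(p))   # rises linearly from 101000 to 900000, so p = 1 is best
```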