==> decision/newcomb.s <==
This is "Newcomb's Paradox".

You are presented with two boxes: one certainly contains $1000 and the
other might contain $1 million. You can take either one box or both.
You cannot change what is in the boxes. Therefore, to maximize your
gain you should take both boxes.

However, it might be argued that you can change the probability that
the $1 million is there. Since there is no way to change whether the
million is in the box or not, what does it mean to say that you can
change the probability that the million is in the box? It means that
your choice is correlated with the state of the box.

Events which proceed from a common cause are correlated. Your mental
states lead to your choice and, very probably, to the state of the box.
Therefore your choice and the state of the box are highly correlated.
In this sense, your choice changes the "probability" that the money is
in the box. However, since your choice cannot change the state of the
box, this correlation is irrelevant.
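The common-cause claim can be illustrated with a small simulation (a
sketch of my own, not from the original argument; the 0.5 and 0.9
probabilities are arbitrary assumptions): a latent "disposition"
influences both the predictor's guess, and hence the box contents, and
the agent's eventual choice. The two end up correlated even though the
choice never alters the box.

```python
import random

random.seed(1)
trials = 100_000
# [number of trials with that choice, number of those with the million]
million_given_one = [0, 0]
million_given_both = [0, 0]

for _ in range(trials):
    # Latent mental state: a one-boxing disposition, 50% of the time.
    disposition = random.random() < 0.5
    # Predictor and agent each follow the disposition 90% of the time,
    # independently of one another.
    predicts_one = disposition if random.random() < 0.9 else not disposition
    chooses_one = disposition if random.random() < 0.9 else not disposition
    # The box is filled (or not) before the choice, based only on the
    # prediction -- the choice itself never touches it.
    million = predicts_one
    if chooses_one:
        million_given_one[0] += 1
        million_given_one[1] += million
    else:
        million_given_both[0] += 1
        million_given_both[1] += million

p_one = million_given_one[1] / million_given_one[0]
p_both = million_given_both[1] / million_given_both[0]
print(p_one, p_both)   # roughly 0.82 vs 0.18: correlated, never caused
```

With these assumed numbers the million is found behind one-boxing about
82% of the time and behind two-boxing about 18%, purely through the
shared dependence on the disposition.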
The following argument might be made: your expected gain if you take
both boxes is (nearly) $1000, whereas your expected gain if you take
one box is (nearly) $1 million, therefore you should take one box.
However, this argument is fallacious. In order to compute the
expected gain, one would use the formulas:

    E(take one)  = $0 * P(predict take both | take one) +
                   $1,000,000 * P(predict take one | take one)
    E(take both) = $1,000 * P(predict take both | take both) +
                   $1,001,000 * P(predict take one | take both)

While you are given that P(do X | predict X) is high, it is not given
that P(predict X | do X) is high. Indeed, specifying that P(predict X
| do X) is high would be equivalent to specifying that the being could
use magic (or reverse causality) to fill the boxes. Therefore, the
expected gain from either action cannot be determined from the
information given.
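The dependence on the unspecified reverse conditionals can be made
concrete with a short sketch (my own illustration; the probability
values passed in are arbitrary assumptions, not given in the problem):

```python
def expected_gains(p1, p2):
    """Evaluate the two expected-gain formulas above, where
    p1 = P(predict take one | take one) and
    p2 = P(predict take one | take both).
    Neither value is fixed by the problem statement."""
    e_one = 0 * (1 - p1) + 1_000_000 * p1
    e_both = 1_000 * (1 - p2) + 1_001_000 * p2
    return e_one, e_both

# If the prediction tracks the actual choice (magic/reverse causality),
# one-boxing looks far better:
print(expected_gains(0.99, 0.01))

# If the prediction is independent of the choice (p1 == p2),
# two-boxing comes out ahead by exactly $1,000 for any shared value:
print(expected_gains(0.5, 0.5))
```

The second case holds for any p1 == p2: the $1,000 advantage of
two-boxing cancels out of the difference, which is why the answer
hinges entirely on conditionals the problem never supplies.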