## Musings on Newcomb’s problem

Newcomb’s paradox posits a game with a transparent box containing \$1 and an opaque box containing either \$0 or \$1,000. A Player is offered the choice between taking only the opaque box or both boxes. Before the Player makes this choice, a Predictor has attempted to predict it. If the prediction is that only the opaque box will be chosen, the Predictor puts \$1,000 into it; otherwise, the Predictor puts in \$0. The Predictor has neither a time machine nor some gift for backward causation. It is assumed, however, that the Predictor is rather reliable, though not necessarily infallible. Should the Player choose the opaque box only, or both boxes?

A natural starting point is to specify a pay-off matrix, or utility function, for the Player. Letting ‘1’ and ‘2’ be shorthand for “the opaque box only” and “both boxes”, respectively, the pay-off matrix can be written as below.

| Predicted choice | Actual choice | Reward |
|---|---|---|
| 1 | 1 | \$1,000 |
| 1 | 2 | \$1,001 |
| 2 | 1 | \$0 |
| 2 | 2 | \$1 |

At first sight, one may think that the next step is to specify a probability distribution and compute the expected utility conditional on the Player’s actual choice. This requires some careful thought, however, because in standard decision theory there are no probabilities associated with one’s own actions. From the Player’s perspective, a decision-theoretic treatment assumes that the actual choice is simply a variable under the Player’s control, not a random variable that is correlated with previous events. But it is part of the problem formulation that the Predictor is reliable, though perhaps not infallible. Therefore the Player’s actual choice must be strongly correlated in some way with the Predictor’s prediction. One possible conclusion at this point is that the problem formulation is simply contradictory or impossible. But let’s be open-minded enough to accept the problem formulation.
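The pay-off matrix can be encoded as a small lookup table. The following Python sketch (the name `reward` and the 1/2 encoding are my own) also records the two tensions that drive the paradox:

```python
# Pay-off matrix keyed by (predicted choice, actual choice);
# 1 = "the opaque box only", 2 = "both boxes". Amounts in dollars.
reward = {
    (1, 1): 1000,  # opaque box filled, Player takes only it
    (1, 2): 1001,  # opaque box filled, Player takes both
    (2, 1): 0,     # opaque box empty, Player takes only it
    (2, 2): 1,     # opaque box empty, Player takes both
}

# For each fixed prediction, taking both boxes gains an extra $1...
assert reward[(1, 2)] > reward[(1, 1)] and reward[(2, 2)] > reward[(2, 1)]
# ...but a prediction of one-boxing is worth $999 over two-boxing outcomes.
assert reward[(1, 1)] - reward[(2, 2)] == 999
```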
How, then, can one incorporate the stipulated correlation between the predicted choice and the actual choice into an analysis of the Player’s optimal decision strategy? Because exotic scenarios like time travel and backward causation are not considered, the most natural approach is to postulate a common cause behind the Predictor’s prediction and the Player’s actual choice. The Predictor might, for instance, be a psychologist or a close friend with extensive knowledge of the Player’s personality traits. This suggests a probability distribution that conforms to the following Bayesian network.

*Figure: Bayesian network for the Newcomb problem. The common cause $C$ feeds into both the prediction $P$ and the actual choice $A$; $P$ determines the box content $B$, and $A$ and $B$ together determine the reward $R$.*

The arrows in this graph define the statistical dependencies between the variables. On a sufficiently pragmatic understanding of causation, they also represent causal relationships between the variables. Mathematically, the Bayesian network expresses the claim that the joint probability distribution factorizes as

$\mathcal{P}(R,A,B,P,C) = \mathcal{P}(R|A,B) \mathcal{P}(A|C) \mathcal{P}(B|P) \mathcal{P}(P|C) \mathcal{P}(C)$.

Because the content of the opaque box is deterministically fixed by the prediction, the variable $B$ may be eliminated from the probability space to give

$\mathcal{P}(R,A,P,C) = \mathcal{P}(R|A,P) \mathcal{P}(A|C) \mathcal{P}(P|C) \mathcal{P}(C)$.

In order to specify the probabilities, let’s for simplicity suppose that the Player’s choice is the outcome of a two-step process. First, the opaque box only ($C=1$) is chosen with probability $1-g$ and both boxes ($C=2$) with probability $g$. Of special interest are the cases $g=0$ (a Player trustful of the Predictor’s abilities) and $g=1$ (a greedy, distrustful Player). Second, this choice may be overturned, due to the Player’s impulsivity, with probability $1-s$. Let’s further say that the Predictor’s prediction is the result of a similar two-step process.
The Predictor has knowledge about the common cause and the outcome of the first step of the Player’s decision procedure. But the knowledge of the common cause is slightly corrupted by noise. Thus, the prediction is set to the common cause $C$ with probability $a$, and to the opposite of the common cause with probability $1-a$. The resulting joint distribution is tabulated below.

| Common cause ($C$) | Predicted choice ($P$) | Actual choice ($A$) | Reward ($R$) | Probability $\mathbb{P}$ |
|---|---|---|---|---|
| 1 | 1 | 1 | \$1,000 | $(1-g)as$ |
| 1 | 1 | 2 | \$1,001 | $(1-g)a(1-s)$ |
| 1 | 2 | 1 | \$0 | $(1-g)(1-a)s$ |
| 1 | 2 | 2 | \$1 | $(1-g)(1-a)(1-s)$ |
| 2 | 1 | 1 | \$1,000 | $g(1-a)(1-s)$ |
| 2 | 1 | 2 | \$1,001 | $g(1-a)s$ |
| 2 | 2 | 1 | \$0 | $ga(1-s)$ |
| 2 | 2 | 2 | \$1 | $gas$ |
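The eight rows of this table can be generated by enumerating the two-step process directly. A minimal Python sketch (the function name and encoding are my own):

```python
from itertools import product

def joint(g, a, s):
    """P(C, P, A) for the two-step model: C = 2 with probability g,
    P = C with probability a, A = C with probability s."""
    dist = {}
    for C, P, A in product([1, 2], repeat=3):
        prob = (g if C == 2 else 1 - g)    # first step of the Player's choice
        prob *= (a if P == C else 1 - a)   # noisy prediction of the common cause
        prob *= (s if A == C else 1 - s)   # impulsivity may overturn the choice
        dist[(C, P, A)] = prob
    return dist

d = joint(g=0.5, a=0.99, s=0.999)
assert abs(sum(d.values()) - 1) < 1e-12  # probabilities sum to one
```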

The Predictor’s prediction accuracy is seen to be $\mathbb{P}(P=A) = as+(1-a)(1-s)$. According to the problem formulation, this accuracy is supposed to be high, which constrains the permissible parameter values to $a,s\approx 1$. Expectation values for different model parameters are given in the table below.
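As a quick check on this formula: $P$ and $A$ agree exactly when both follow the common cause or both deviate from it, which gives the two terms. A one-function Python sketch (the name is my own):

```python
def accuracy(a, s):
    # P(P = A): either both P and A follow the common cause C (a*s),
    # or both deviate from it ((1-a)*(1-s)).
    return a * s + (1 - a) * (1 - s)

# For a = 0.99, s = 0.999 the Predictor is right about 98.9% of the time.
assert abs(accuracy(0.99, 0.999) - 0.98902) < 1e-9
```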

| $g$ | $a$ | $s$ | $E[R]$ | $E[R \mid A=1]$ | $E[R \mid A=2]$ |
|---|---|---|---|---|---|
| 0 | 0.99 | 0.999 | 990.001 | 990.000 | 991.000 |
| 0.5 | 0.99 | 0.999 | 500.500 | 989.020 | 11.980 |
| 1 | 0.99 | 0.999 | 10.999 | 10.000 | 11.000 |
| 0 | 0.99 | 0.9 | 990.100 | 990.000 | 991.000 |
| 0.5 | 0.99 | 0.9 | 500.500 | 892.000 | 109.000 |
| 1 | 0.99 | 0.9 | 10.900 | 10.000 | 11.000 |
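These entries can be reproduced by enumerating the joint distribution. The sketch below (Python, with my own naming) computes $E[R]$, $E[R \mid A=1]$ and $E[R \mid A=2]$:

```python
from itertools import product

REWARD = {(1, 1): 1000, (1, 2): 1001, (2, 1): 0, (2, 2): 1}  # keyed by (P, A)

def expectations(g, a, s):
    """Return (E[R], E[R|A=1], E[R|A=2]) by enumerating P(C, P, A)."""
    er = {1: 0.0, 2: 0.0}  # sum of R * prob, split by the value of A
    pa = {1: 0.0, 2: 0.0}  # marginal P(A)
    for C, P, A in product([1, 2], repeat=3):
        prob = ((g if C == 2 else 1 - g)
                * (a if P == C else 1 - a)
                * (s if A == C else 1 - s))
        er[A] += REWARD[(P, A)] * prob
        pa[A] += prob
    return er[1] + er[2], er[1] / pa[1], er[2] / pa[2]

E, E1, E2 = expectations(g=0, a=0.99, s=0.999)
# E ≈ 990.001, E1 ≈ 990.000, E2 ≈ 991.000 (first row of the table)
```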

It is not entirely clear to me which quantity is actually the most relevant as a measure of the success of the Player’s strategy. My tentative conclusion is that it is most reasonable to think of the model parameters $g$ and $s$ as representing quantities that the Player can choose, and the outcome $A=1$ or $A=2$ as a random variable beyond the Player’s direct control. This may seem a bit odd, since $A$ represents the Player’s choice. However, this is mostly a feature of the representation rather than a substantive point. A Player who deterministically chooses between the opaque box only and both boxes is represented by $g=0, s=1$ (always choose the opaque box) or $g=1, s=1$ (always choose both boxes). In addition, there is the possibility of a randomized strategy, in which the Player determines the probabilities $g$ and $s$ rather than the outcome. Thus, for a deterministic Player the model is equivalent to treating $A$ as under the Player’s direct control, while for a Player employing a randomized strategy it is in any case necessary to introduce a probability distribution over $A$.

On the above grounds, I take the unconditional expectation value $E[R]$ to be the relevant quantity. Some simple algebra shows that, in general,

$E[R] = (1-g)(1-s) + gs + 1000((1-g)a + g(1-a))$.
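The algebra behind this closed form uses the identity $R = 1000\cdot[P=1] + 1\cdot[A=2]$, which can be read off the pay-off matrix. A Python sketch (my own naming) checks the closed form against brute-force enumeration over a grid of parameter values:

```python
from itertools import product

def e_r_enumerated(g, a, s):
    """E[R] by summing over the eight atoms of the joint distribution."""
    total = 0.0
    for C, P, A in product([1, 2], repeat=3):
        prob = ((g if C == 2 else 1 - g)
                * (a if P == C else 1 - a)
                * (s if A == C else 1 - s))
        total += (1000 * (P == 1) + (A == 2)) * prob  # R = 1000·[P=1] + [A=2]
    return total

def e_r_closed(g, a, s):
    return (1 - g) * (1 - s) + g * s + 1000 * ((1 - g) * a + g * (1 - a))

for g in (0.0, 0.25, 0.5, 1.0):
    for a in (0.5, 0.9, 0.99):
        for s in (0.6, 0.999):
            assert abs(e_r_enumerated(g, a, s) - e_r_closed(g, a, s)) < 1e-9
```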

Consequently, $\partial E[R]/\partial s = 2g-1$, meaning that greater impulsivity (a smaller $s$) is beneficial if $g \leq 1/2$ and detrimental if $g \geq 1/2$. Because of the constraint on $s$ from the assumption that the Predictor is reliable, $s$ cannot be varied much, and it is more interesting to study the dependence on $g$. One has $\partial E[R]/\partial g = 2s-1 + 1000(1-2a)$. Again, the assumption that the Predictor is reliable means that $a \approx 1$, so the term $1000(1-2a) \approx -1000$ dominates $2s-1 \in [-1,1]$ and $\partial E[R]/\partial g < 0$. It follows that a greater bias towards choosing the opaque box only is beneficial.
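The two partial derivatives can be confirmed numerically from the closed form for $E[R]$. A Python sketch using central finite differences (since $E[R]$ is linear in each parameter separately, the differences are exact up to rounding; all names are my own):

```python
def e_r(g, a, s):
    # Closed form derived above.
    return (1 - g) * (1 - s) + g * s + 1000 * ((1 - g) * a + g * (1 - a))

def d_ds(g, a, s, h=1e-6):
    return (e_r(g, a, s + h) - e_r(g, a, s - h)) / (2 * h)

def d_dg(g, a, s, h=1e-6):
    return (e_r(g + h, a, s) - e_r(g - h, a, s)) / (2 * h)

a, s = 0.99, 0.999
assert abs(d_ds(0.3, a, s) - (2 * 0.3 - 1)) < 1e-5            # dE/ds = 2g - 1
assert abs(d_dg(0.3, a, s) - (2 * s - 1 + 1000 * (1 - 2 * a))) < 1e-4
assert d_dg(0.3, a, s) < 0  # reliable Predictor => lowering g raises E[R]
```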

### Conclusion (tentative)

Newcomb’s problem should not be analyzed as a standard decision-theoretic problem. Rather, it requires the introduction of a probability distribution that describes how the Player’s choice is correlated, via a common cause, with the Predictor’s prediction. The relevant quantity for assessing the Player’s strategy is the unconditional expectation value $E[R]$, and the model parameters $g$ and $s$ specify which strategy is employed. In all relevant parameter ranges the optimum is $g=0$, meaning that the Player should deterministically intend to choose the opaque box only; $s$ should then be as small as is permissible given the reliability assumption stipulated in the problem formulation.