## Sunday, 12 February 2017

### Chance Neutrality and the Swamping Problem for Reliabilism

Reliabilism about justified belief comes in two varieties: process reliabilism and indicator reliabilism. According to process reliabilism, a belief is justified if it is formed by a process that is likely to produce truths; according to indicator reliabilism, a belief is justified if it is likely to be true given the ground on which the belief is based. Both are natural accounts of justification for a veritist, who holds that the sole fundamental source of epistemic value for a belief is its truth.

Against veritists who are reliabilists, opponents raise the Swamping Problem. This begins with the observation that we prefer a justified true belief to an unjustified true belief; we ascribe greater value to the former than to the latter; we would prefer to have the former over the latter. But, if reliabilism is true, this means that we prefer a belief that is true and had a high chance of being true over a belief that is true and had a low chance of being true. For a veritist, this means that we prefer a belief that has maximal epistemic value and had a high chance of having maximal epistemic value over a belief that has maximal epistemic value and had a low chance of having maximal epistemic value. And this is irrational, or so the objection goes. It is only rational to value a high chance of maximal utility when the actual utility is not known; once the actual utility is known, this 'swamps' any consideration of the chance of that utility. For instance, suppose I find a lottery ticket on the street; I know that it comes either from a 10-ticket lottery or from a 100-ticket lottery; both lotteries pay out the same amount to the holder of the winning ticket; and I know the outcome of neither lottery. Then it is rational for me to hope that the ticket I hold belongs to the smaller lottery, since that would maximise my chance of winning and thus maximise the expected utility of the ticket. But once I know that the lottery ticket I found is the winning ticket, it is irrational to prefer that it came from the smaller lottery --- my knowledge that it's the winner 'swamps' the information about how likely it was to be the winner. This is known variously as the Swamping Problem or the Value Problem for reliabilism about justification (Zagzebski 2003, Kvanvig 2003).
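The lottery example can be put in simple expected-utility terms. Here is a minimal sketch (the £100 prize and the function names are invented for illustration) of how learning the outcome swamps the chance information:

```python
# Illustrative sketch of 'swamping': the value of a found lottery ticket
# before and after learning that it is the winner. The prize amount and
# function names are invented for this example.

PRIZE = 100.0  # assumed common payout for both lotteries


def expected_value(n_tickets, prize=PRIZE):
    """Expected value of one ticket in a fair n-ticket lottery."""
    return prize / n_tickets


def value_given_winner(n_tickets, prize=PRIZE):
    """Value of the ticket once I know it is the winner."""
    return prize  # independent of n_tickets: the chance is swamped


# Before the outcome is known, the 10-ticket lottery is preferable:
assert expected_value(10) > expected_value(100)

# Once I know the ticket won, its value is the same either way:
assert value_given_winner(10) == value_given_winner(100)
```

Before the outcome is known, the ticket's expected value depends on the size of the lottery; afterwards, it does not, which is exactly the swamping intuition.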

The central assumption of the swamping problem is a principle that, in a different context, H. Orri Stefánsson and Richard Bradley call Chance Neutrality (Stefánsson & Bradley 2015). They state it precisely within the framework of Richard Jeffrey's decision theory (Jeffrey 1983). In that framework, we have a desirability function $V$ and a credence function $c$, both of which are defined on an algebra of propositions $\mathcal{F}$. $V(A)$ measures how strongly our agent desires $A$, or how greatly she values it. $c(A)$ measures how strongly she believes $A$, or her credence in $A$. The central principle of the decision theory is this:

Desirability  If the propositions $A_1$, $\ldots$, $A_n$ form a partition of the proposition $X$, then $$V(X) = \sum^n_{i=1} c(A_i | X) V(A_i)$$
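The Desirability axiom can be sketched computationally: the value of $X$ is the credence-weighted average of the values of the cells of any partition of $X$. The propositions, credences, and desirabilities below are invented for illustration:

```python
# A minimal sketch of Jeffrey's Desirability axiom: V(X) is the
# credence-weighted average of V over any partition of X.
# All numbers here are invented examples.

def desirability(cells):
    """cells: list of (c(A_i | X), V(A_i)) pairs, one per cell A_i
    of a partition of X. Returns V(X) by the Desirability axiom."""
    conditional_credences = [c for c, _ in cells]
    # The conditional credences over a partition of X must sum to 1.
    assert abs(sum(conditional_credences) - 1.0) < 1e-9
    return sum(c * v for c, v in cells)


# Example: X partitioned into two cells valued at 10 and 2,
# with conditional credences 0.25 and 0.75.
V_X = desirability([(0.25, 10.0), (0.75, 2.0)])
print(V_X)  # 0.25*10 + 0.75*2 = 4.0
```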

Now, suppose the algebra on which $V$ and $c$ are defined includes some propositions that concern the objective probabilities of other propositions in the algebra.  Then:

Chance Neutrality  Suppose $X$ is in the partition $X_1$, $\ldots$, $X_n$. And suppose $0 \leq \alpha_1, \ldots, \alpha_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$. Then $$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = V(X)$$

That is, information about the outcome of the chance process that picks between $X_1$, $\ldots$, $X_n$ 'swamps' information about the chance process in our evaluation, which is recorded in $V$. A simple consequence of this: if $0 \leq \alpha_1, \alpha'_1, \ldots, \alpha_n, \alpha'_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$ and $\sum^n_{i=1} \alpha'_i = 1$, then

$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) =$
$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha'_i$})$

Now consider the particular case of this that is used in the Swamping Problem. I believe $X$ on the basis of ground $g$. I assign greater value to $X$ being true and justified than I do to $X$ being true and unjustified. That is, given the reliabilist's account of justification, if $\alpha$ is a probability that lies above the threshold for justification and $\alpha'$ is a probability that lies below that threshold --- for the veritist, who assigns epistemic value $R$ to a true belief and $-W$ to a false one, the threshold is $\frac{W}{R+W}$, so $\alpha' < \frac{W}{R+W} < \alpha$ --- then

$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha'$}) <$
$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$

And of course this violates Chance Neutrality.

Thus, the Swamping Problem stands or falls with the status of Chance Neutrality. Is it a requirement of rationality? Stefánsson and Bradley argue that it is not (Section 3, Stefánsson & Bradley 2015). They show that, in the presence of the Principal Principle, Chance Neutrality entails a principle called Linearity; and they claim that Linearity is not a requirement of rationality. If it is permissible to violate Linearity, then it cannot be a requirement to satisfy a principle that entails it. So Chance Neutrality is not a requirement of rationality.

In this context, the Principal Principle runs as follows:

Principal Principle $$c(X_i\ |\ \bigwedge^n_{k=1} \mbox{Objective probability of $X_k$ is $\alpha_k$}) = \alpha_i$$

That is, an agent's credence in $X_i$, conditional on information that gives the objective probability of $X_i$ and other members of a partition to which it belongs, should be equal to the objective probability of $X_i$. And Linearity is the following principle:

Linearity $$V(\bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = \sum^n_{i=1} \alpha_i V(X_i)$$

That is, an agent should value a lottery at the expected value of its outcome. Now, as is well known, real agents often violate Linearity (Buchak 2013). The most famous violations are known as the Allais preferences (Allais 1953). Suppose there are 100 tickets numbered 1 to 100. One ticket will be drawn and you will be given a prize depending on which option you have chosen from $L_1$, $\ldots$, $L_4$:
• $L_1$: if ticket 1-89, £1m; if ticket 90-99, £1m; if ticket 100, £1m.
• $L_2$: if ticket 1-89, £1m; if ticket 90-99, £5m; if ticket 100, £0m.
• $L_3$: if ticket 1-89, £0m; if ticket 90-99, £1m; if ticket 100, £1m.
• $L_4$: if ticket 1-89, £0m; if ticket 90-99, £5m; if ticket 100, £0m.
I know that each ticket has an equal chance of winning --- thus, by the Principal Principle, $c(\mbox{Ticket $n$ wins}) = \frac{1}{100}$. Now, it turns out that many people have preferences recorded in the following desirability function $V$: $$V(L_1) > V(L_2) \mbox{ and } V(L_3) < V(L_4)$$

When there is an option that guarantees them a high payout (£1m), they prefer it over an option with a 1% chance of nothing (£0), even if that option also provides a 10% chance of a much greater payout (£5m). On the other hand, when there is no guarantee of a high payout, they prefer the chance of the much greater payout (£5m), even if it comes with a slightly greater chance of nothing (£0). The problem is that there is no way to assign values to $V(£0\mathrm{m})$, $V(£1\mathrm{m})$, and $V(£5\mathrm{m})$ so that $V$ satisfies Linearity and also these inequalities. Suppose, for a reductio, that there is. By Linearity,
$$V(L_1) = 0.89V(£1\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$
$$V(L_2) = 0.89V(£1\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$
Then, since $V(L_1) > V(L_2)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) > 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$ But also by Linearity, $$V(L_3) = 0.89V(£0\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$
$$V(L_4) = 0.89V(£0\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$
Then, since $V(L_3) < V(L_4)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) < 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$
And this gives a contradiction. In general, an agent violates Linearity whenever she has any risk-averse or risk-seeking preferences.
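The reductio above can be checked numerically. The sketch below (illustrative; the sampled values are arbitrary) computes the Linearity values of $L_1$--$L_4$ and confirms that $V(L_1) - V(L_2)$ and $V(L_3) - V(L_4)$ are always equal, since the 0.89-weighted first column cancels within each difference, so the Allais pattern is impossible under Linearity:

```python
# Numerical check that no assignment of V(£0), V(£1m), V(£5m) satisfies
# Linearity together with the Allais preferences V(L1) > V(L2), V(L3) < V(L4).
# The sampled values are arbitrary; this is an illustration, not a proof.

import random


def linear_values(v0, v1, v5):
    """Desirabilities of L1..L4 computed via Linearity from the
    values v0, v1, v5 of £0m, £1m, £5m respectively."""
    L1 = 0.89 * v1 + 0.10 * v1 + 0.01 * v1
    L2 = 0.89 * v1 + 0.10 * v5 + 0.01 * v0
    L3 = 0.89 * v0 + 0.10 * v1 + 0.01 * v1
    L4 = 0.89 * v0 + 0.10 * v5 + 0.01 * v0
    return L1, L2, L3, L4


random.seed(0)
for _ in range(100_000):
    v0, v1, v5 = (random.uniform(-100, 100) for _ in range(3))
    L1, L2, L3, L4 = linear_values(v0, v1, v5)
    # The 0.89-weighted term cancels within each difference, so
    # L1 - L2 = L3 - L4 = 0.11*v1 - 0.10*v5 - 0.01*v0 always:
    assert abs((L1 - L2) - (L3 - L4)) < 1e-9
    # Hence the Allais pattern L1 > L2 together with L3 < L4 never holds
    # (up to floating-point tolerance):
    assert not (L1 - L2 > 1e-9 and L3 - L4 < -1e-9)
```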

Stefánsson and Bradley show that, in the presence of the Principal Principle, Chance Neutrality entails Linearity; and they argue that there are rational violations of Linearity (such as the Allais preferences); so they conclude that there are rational violations of Chance Neutrality. So far, so good for the reliabilist: the Swamping Problem assumes that Chance Neutrality is a requirement of rationality; and we have seen that it is not. However, reliabilism is not out of the woods yet. After all, the veritist's version of reliabilism in fact assumes Linearity! It says that a belief is justified if it is likely to be true. And it says this because a belief that is likely to be true has high expected epistemic value on the veritist's account of epistemic value. And so it connects justification to epistemic value by taking the value of a belief to be its expected epistemic value --- that is, it assumes Linearity. Thus, if the only rational violations of Chance Neutrality are also rational violations of Linearity, then the Swamping Problem is revived. In particular, if Linearity entails Chance Neutrality, then reliabilism cannot solve the Swamping Problem.

Fortunately, even in the presence of the Principal Principle, Linearity does not entail Chance Neutrality. Together, the Principal Principle and Desirability entail:

$V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) =$

$\alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) +$

$(1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$

And Linearity entails:

$V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) = \alpha V(X) + (1-\alpha) V(\overline{X})$

So
$\alpha V(X) + (1-\alpha) V(\overline{X}) =$

$\alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) +$

$(1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$

And, whatever the values of $V(X)$ and $V(\overline{X})$, there are values of $$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ and $$V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$
such that the above equation holds. Thus, it is at least possible to adhere to Linearity, yet violate Chance Neutrality. Of course, this does not show that the agent who adheres to Linearity but violates Chance Neutrality is rational. But, now that the intuitive appeal of Chance Neutrality is undermined, the burden is on those who raise the Swamping Problem to explain why such cases are irrational.
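One such witness can be constructed explicitly. In the sketch below (all numbers invented for illustration), we fix $\alpha$, $V(X)$, and $V(\overline{X})$, choose $V(X\ \&\ C) \neq V(X)$ in violation of Chance Neutrality, and solve the equation above for $V(\overline{X}\ \&\ C)$, where $C$ abbreviates the chance proposition:

```python
# Illustrative witness (numbers invented) that the Linearity-derived
# equation can hold while Chance Neutrality fails. C abbreviates
# "Objective probability of X given I have g is alpha".

alpha = 0.8
V_X, V_notX = 1.0, 0.0    # e.g. veritist values for a true / false belief
V_X_and_C = 0.9           # chosen != V_X: a violation of Chance Neutrality

# Solve alpha*V(X & C) + (1-alpha)*V(notX & C) = alpha*V(X) + (1-alpha)*V(notX)
# for V(notX & C):
V_notX_and_C = (alpha * V_X + (1 - alpha) * V_notX
                - alpha * V_X_and_C) / (1 - alpha)

lhs = alpha * V_X_and_C + (1 - alpha) * V_notX_and_C
rhs = alpha * V_X + (1 - alpha) * V_notX
assert abs(lhs - rhs) < 1e-9   # the equation required by Linearity holds
assert V_X_and_C != V_X        # yet Chance Neutrality is violated
```

Here $V(\overline{X}\ \&\ C)$ comes out at 0.4, and the agent's desirabilities satisfy the Linearity constraint while still distinguishing $V(X\ \&\ C)$ from $V(X)$.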

## References

• Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école américaine. Econometrica, 21(4), 503–546.
• Buchak, L. (2013). Risk and Rationality. Oxford: Oxford University Press.
• Jeffrey, R. C. (1983). The Logic of Decision (2nd ed.). Chicago: University of Chicago Press.
• Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
• Stefánsson, H. O., & Bradley, R. (2015). How Valuable Are Chances? Philosophy of Science, 82, 602–625.
• Zagzebski, L. (2003). The search for the source of the epistemic good. Metaphilosophy, 34(1–2), 12–28.