Sunday, 2 July 2017

Three Postdoctoral Fellowships at the MCMP (LMU Munich)

The Munich Center for Mathematical Philosophy (MCMP) seeks applications for three 3-year postdoctoral fellowships starting on October 1, 2017. (A later starting date is possible.) We are especially interested in candidates who work in the field of mathematical philosophy with a focus on philosophical logic (broadly construed, including philosophy and foundations of mathematics, semantics, formal philosophy of language, inductive logic and foundations of probability, and more).

Candidates who have not finished their PhD at the time of the application deadline have to provide evidence that they will have their PhD in hand at the time the fellowship starts. Applications (including a cover letter that addresses, among other things, one's academic background, research interests, and the proposed starting date; a CV; a list of publications; a sample of written work of no more than 5000 words; and a description of a planned research project of about 1000 words) should be sent by email (in one PDF document) to office.leitgeb@lrz.uni-muenchen.de by August 15, 2017. Hard copy applications are not accepted. Additionally, two confidential letters of reference addressing the applicant's qualifications for academic research should be sent to the same email address directly by the referees.

The MCMP hosts a vibrant research community of faculty, postdoctoral fellows, doctoral fellows, master's students, and visiting fellows. It organizes at least two weekly colloquia and a weekly internal work-in-progress seminar, as well as various other activities such as workshops, conferences, summer schools, and reading groups. The successful candidates will partake in the MCMP's academic activities and enjoy its administrative facilities and support. The official language at the MCMP is English, and fluency in German is not mandatory.

We especially encourage female scholars to apply. The LMU in general, and the MCMP in particular, endeavor to raise the percentage of women among their academic personnel. Furthermore, given equal qualification, preference will be given to candidates with disabilities.

The fellowships are remunerated at €1,853 per month (paid out without deductions for tax and social security). The MCMP can also support fellows with expenses for professional travel.

For further information, please contact Prof. Hannes Leitgeb (H.Leitgeb@lmu.de).


 

Three Doctoral Fellowships at the MCMP (LMU Munich)

The Munich Center for Mathematical Philosophy (MCMP) seeks applications for three 3-year doctoral fellowships starting on October 1, 2017. (A later starting date is possible.) We are especially interested in candidates who work in the field of mathematical philosophy with a focus on philosophical logic (broadly construed, including philosophy and foundations of mathematics, semantics, formal philosophy of language, inductive logic and foundations of probability, and more).

Candidates who have not finished their MA at the time of the application deadline have to provide evidence that they will have their MA in hand at the time the fellowship starts. Applications (including a cover letter that addresses, among other things, one's academic background, research interests, and the proposed starting date; a CV; a list of publications, if applicable; a sample of written work of no more than 3000 words; and a description of the planned PhD project of about 2000 words) should be sent by email (in one PDF document) to office.leitgeb@lrz.uni-muenchen.de by August 15, 2017. Hard copy applications are not accepted. Additionally, one confidential letter of reference addressing the applicant's qualifications for academic research should be sent to the same email address directly by the referee.

The MCMP hosts a vibrant research community of faculty, postdoctoral fellows, doctoral fellows, master's students, and visiting fellows. It organizes at least two weekly colloquia and a weekly internal work-in-progress seminar, as well as various other activities such as workshops, conferences, summer schools, and reading groups. The successful candidates will partake in the MCMP's academic activities and enjoy its administrative facilities and support. The official language at the MCMP is English, and fluency in German is not mandatory.

We especially encourage female scholars to apply. The LMU in general, and the MCMP in particular, endeavor to raise the percentage of women among their academic personnel. Furthermore, given equal qualification, preference will be given to candidates with disabilities.

The fellowships are remunerated at €1,468 per month (paid out without deductions for tax and social security). The MCMP can also support fellows with expenses for professional travel.

For further information, please contact Prof. Hannes Leitgeb (H.Leitgeb@lmu.de).


Tuesday, 16 May 2017

The Wisdom of the Crowds: generalizing the Diversity Prediction Theorem

I've just been reading Aidan Lyon's fascinating paper, Collective Wisdom. In it, he mentions a result known as the Diversity Prediction Theorem, which is sometimes taken to explain why crowds are wiser, on average, than the individuals who compose them. The theorem was originally proved by Anders Krogh and Jesper Vedelsby, but it has entered the literature on social epistemology through the work of Scott E. Page. In this post, I'll generalize this result.

The Diversity Prediction Theorem concerns a situation in which a number of different individuals estimate a particular quantity -- in the original example, it is the weight of an ox at a local fair. Take the crowd's estimate of the quantity to be the average of the individual estimates. Then the theorem shows that the distance from the crowd's estimate to the true value never exceeds the average distance from the individual estimates to the true value; and, moreover, the difference between the two is always given by the average distance from the individual estimates to the crowd's estimate (which you might think of as the variance of the individual estimates).

Let's make this precise. Suppose you have a group of $n$ individuals. They each provide an estimate for a real-valued quantity. The $i^\mathrm{th}$ individual gives the prediction $q_i$. The true value of this quantity is $\tau$. And we measure the distance from one estimate of a quantity to another, or to the true value of that quantity, using squared error. Then:
  • The crowd's prediction of the quantity is $c = \frac{1}{n}\sum^n_{i=1} q_i$.
  • The crowd's distance from the true quantity is $\mathrm{SqE}(c) = (c-\tau)^2$.
  • The $i^\mathrm{th}$ individual's distance from the true quantity is $\mathrm{SqE}(q_i) = (q_i-\tau)^2$.
  • The average individual distance from the true quantity is $\frac{1}{n} \sum^n_{i=1} \mathrm{SqE}(q_i) = \frac{1}{n} \sum^n_{i=1} (q_i - \tau)^2$.
  • The average individual distance from the crowd's estimate is $v = \frac{1}{n}\sum^n_{i=1} (q_i - c)^2$.
Given this, we have:

Diversity Prediction Theorem $$\mathrm{SqE}(c) = \frac{1}{n} \sum^n_{i=1} \mathrm{SqE}(q_i) - v$$
The theorem is easy enough to prove. You essentially just follow the algebra. However, following through the proof, you might be forgiven for thinking that the result says more about some quirk of squared error as a measure of distance than about the wisdom of crowds. And of course squared error is just one way of measuring the distance from an estimate of a quantity to the true value of that quantity, or from one estimate of a quantity to another. There are other such distance measures. So the question arises: Does the Diversity Prediction Theorem hold if we replace squared error with one of these alternative measures of distance? In particular, it is natural to take any of the so-called Bregman divergences $\mathfrak{d}$ to be a legitimate measure of distance from one estimate to another. I won't say much about Bregman divergences here, except to give their formal definition. To learn about their properties, have a look here and here. They were introduced by Bregman as a natural generalization of squared error.

Definition (Bregman divergence) A function $\mathfrak{d} : [0, \infty) \times [0, \infty) \rightarrow [0, \infty]$ is a Bregman divergence if there is a continuously differentiable, strictly convex function $\varphi : [0, \infty) \rightarrow [0, \infty)$ such that $$\mathfrak{d}(x, y) = \varphi(x) - \varphi(y) - \varphi'(y)(x-y)$$
Squared error is itself one of the Bregman divergences. It is the one generated by $\varphi(x) = x^2$. But there are many others, each generated by a different function $\varphi$.
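To see this, note that for $\varphi(x) = x^2$ we have $\varphi'(y) = 2y$, and so $$\mathfrak{d}(x, y) = x^2 - y^2 - 2y(x - y) = x^2 - 2xy + y^2 = (x - y)^2$$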

Now, suppose we measure distance between estimates using a Bregman divergence $\mathfrak{d}$. Then:
  • The crowd's prediction of the quantity is $c = \frac{1}{n}\sum^n_{i=1} q_i$.
  • The crowd's distance from the true quantity is $\mathrm{E}(c) = \mathfrak{d}(c, \tau)$.
  • The $i^\mathrm{th}$ individual's distance from the true quantity is $\mathrm{E}(q_i) = \mathfrak{d}(q_i, \tau)$.
  • The average individual distance from the true quantity is $\frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) = \frac{1}{n} \sum^n_{i=1} \mathfrak{d}(q_i, \tau)$.
  • The average individual distance from the crowd's estimate is $v = \frac{1}{n}\sum^n_{i=1} \mathfrak{d}(q_i, c)$.
Given this, we have:

Generalized Diversity Prediction Theorem $$\mathrm{E}(c) = \frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) - v$$
Proof.
\begin{eqnarray*}
& & \frac{1}{n} \sum^n_{i=1} \mathrm{E}(q_i) - v \\
& = & \frac{1}{n} \sum^n_{i=1} [ \mathfrak{d}(q_i, \tau) - \mathfrak{d}(q_i, c)] \\
& = & \frac{1}{n} \sum^n_{i=1} [\varphi(q_i) - \varphi(\tau) - \varphi'(\tau)(q_i - \tau)] - [\varphi(q_i) - \varphi(c) - \varphi'(c)(q_i - c)] \\
& = & \frac{1}{n} \sum^n_{i=1} [\varphi(q_i)- \varphi(\tau) - \varphi'(\tau)(q_i - \tau) - \varphi(q_i)+ \varphi(c) + \varphi'(c)(q_i - c)] \\
& = & - \varphi(\tau) - \varphi'(\tau)((\frac{1}{n} \sum^n_{i=1} q_i) - \tau) + \varphi(c) + \varphi'(c)((\frac{1}{n} \sum^n_{i=1} q_i) - c) \\
& = & - \varphi(\tau) - \varphi'(\tau)(c - \tau) + \varphi(c) + \varphi'(c)(c - c) \\
& = & \varphi(c) - \varphi(\tau) - \varphi'(\tau)(c - \tau) \\
& = &   \mathfrak{d}(c, \tau) \\
& = & \mathrm{E}(c)
\end{eqnarray*}
as required.
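
The identity is also easy to check numerically. Here is a minimal sketch in Python (my own illustration, assuming numpy; the estimates and true value are made up), using the Bregman divergences generated by $\varphi(x) = x^2$ (squared error) and $\varphi(x) = x\,\mathrm{log}\,x$ (a generalized KL divergence):

import numpy as np

def bregman(phi, dphi):
    # the Bregman divergence generated by phi (dphi is phi's derivative)
    return lambda x, y: phi(x) - phi(y) - dphi(y) * (x - y)

sq  = bregman(lambda x: x**2,         lambda x: 2*x)            # squared error
gkl = bregman(lambda x: x*np.log(x),  lambda x: np.log(x) + 1)  # generalized KL

rng = np.random.default_rng(1)
q   = rng.uniform(0.1, 2.0, 7)   # the individual estimates
tau = 0.9                        # the true value
c   = q.mean()                   # the crowd's estimate

for d in (sq, gkl):
    lhs = d(c, tau)                          # the crowd's distance from the truth
    rhs = d(q, tau).mean() - d(q, c).mean()  # average individual distance minus v
    print(np.isclose(lhs, rhs))              # True both times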

Thursday, 11 May 2017

Reasoning Club Conference 2017


The Fifth Reasoning Club Conference will take place at the Center for Logic, Language, and Cognition in Turin on May 18-19, 2017.

The Reasoning Club is a network of institutes, centres, departments, and groups addressing research topics connected to reasoning, inference, and methodology broadly construed. It issues the monthly gazette The Reasoner. (Earlier editions of the meeting were held in Brussels, Pisa, Kent, and Manchester.)



PROGRAM


THURSDAY, MAY 18

Palazzo Badini
via Verdi 10, Torino
Sala Lauree di Psicologia (ground floor)


9:00 | welcome and coffee

9:30 | greetings
           presentation of the new editorship of The Reasoner
           (Hykel HOSNI, Milan)


Morning session – chair: Gustavo CEVOLANI (IMT Lucca)


10:00 | invited talk

Branden FITELSON (Northeastern University, Boston)

Two approaches to belief revision

In this paper, we compare and contrast two methods for the qualitative revision of (viz., full) beliefs. The first (Bayesian) method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent's posterior credences after conditionalization. The second (Logical) method is the orthodox AGM approach to belief revision. Our primary aim will be to characterize the ways in which these two approaches can disagree with each other — especially in the special case where the agent's belief set is deductively cogent.

(joint work with Ted Shear and Jonathan Weisberg)


11:00 | Ted SHEAR (Queensland) and John QUIGGIN (Queensland)
 
A modal logic for reasonable belief


11:45 | Nina POTH (Edinburgh) and Peter BRÖSSEL (Bochum)

Bayesian inferences and conceptual spaces: Solving the complex-first paradox


12:30 | lunch break


Afternoon session I – chair: Peter BRÖSSEL (Bochum)


13:30 | invited talk

Katya TENTORI (University of Trento)

Judging forecasting accuracy 
How human intuitions can help improve formal models

Most of the scoring rules that have been discussed and defended in the literature are not ordinally equivalent, with the consequence that, after the very same outcome has materialized, a forecast X can be evaluated as more accurate than Y according to one model but less accurate according to another. A question that naturally arises is therefore which of these models better captures people’s intuitive assessment of forecasting accuracy. To answer this question, we developed a new experimental paradigm for eliciting ordinal judgments of accuracy concerning pairs of forecasts for which various combinations of associations/dissociations between the Quadratic, Logarithmic, and Spherical scoring rules are obtained. We found that, overall, the Logarithmic model is the best predictor of people’s accuracy judgments, but also that there are cases in which these judgments — although they are normatively sound — systematically depart from what is expected by all the models. These results represent an empirical evaluation of the descriptive adequacy of the three most popular scoring rules and offer insights for the development of new formal models that might favour a more natural elicitation of truthful and informative beliefs from human forecasters.

(joint work with Vincenzo Crupi and Andrea Passerini)


14:15 | Catharine SAINT-CROIX (Michigan)

Immodesty and evaluative uncertainty


15:15 | Michael SCHIPPERS (Oldenburg), Jakob KOSCHOLKE (Hamburg)

Against relative overlap measures of coherence


16:00 | coffee break


Afternoon session II – chair: Paolo MAFFEZIOLI (Torino)


16:30 | Simon HEWITT (Leeds)

Frege's theorem in plural logic


17:15 | Lorenzo ROSSI (Salzburg) and Julien MURZI (Salzburg)

Generalized Revenge


 
FRIDAY, MAY 19

Campus Luigi Einaudi
Lungo Dora Siena 100/A
Sala Lauree Rossa
building D1 (ground floor)


9:00 | welcome and coffee


Morning session – chair: Jan SPRENGER (Tilburg)


9:30 | invited talk

Paul EGRÉ (Institut Jean Nicod, Paris)

Logical consequence and ordinary reasoning

The notion of logical consequence has been approached from a variety of angles. Tarski famously proposed a semantic characterization (in terms of truth-preservation), but also a structural characterization (in terms of axiomatic properties including reflexivity, transitivity, monotonicity, and other features). In recent work, E. Chemla, B. Spector and I have proposed a characterization of a wider class of consequence relations than Tarskian relations, which we call "respectable" (Journal of Logic and Computation, forthcoming). The class also includes non-reflexive and nontransitive relations, which can be motivated in relation to ordinary reasoning (such as reasoning with vague predicates, see Zardini 2008, Cobreros et al. 2012, or reasoning with presuppositions, see Strawson 1952, von Fintel 1998, Sharvit 2016). Chemla et al.'s characterization is partly structural, and partly semantic, however. In this talk I will present further advances toward a purely structural characterization of such respectable consequence relations. I will discuss the significance of this research program toward bringing logic closer to ordinary reasoning.

(joint work with Emmanuel Chemla and Benjamin Spector)


10:30 | Niels SKOVGAARD-OLSEN (Freiburg)

Conditionals and multiple norm conflicts


11:15 | Luis ROSA (Munich)

Knowledge grounded on pure reasoning


12:00 | lunch break


Afternoon session I – chair: Steven HALES (Bloomsburg)


13:30 | invited talk

Leah HENDERSON (University of Groningen)

The unity of explanatory virtues

Scientific theory choice is often characterised as an Inference to the Best Explanation (IBE) in which a number of distinct explanatory virtues are combined and traded off against one another. Furthermore, the epistemic significance of each explanatory virtue is often seen as highly case-specific. But are there really so many dimensions to theory choice? By considering how IBE may be situated in a Bayesian framework, I propose a more unified picture of the virtues in scientific theory choice.


14:30 | Benjamin EVA (Munich) and Reuben STERN (Munich)

Causal explanatory power


15:15 | coffee break


Afternoon session II – chair: Jakob KOSCHOLKE (Hamburg)


16:00 | Barbara OSIMANI (Munich)

Bias, random error, and the variety of evidence thesis


16:45 | Felipe ROMERO (Tilburg) and Jan SPRENGER (Tilburg)

Scientific self-correction: The Bayesian way



ORGANIZING COMMITTEE

Gustavo Cevolani (Torino)
Vincenzo Crupi (Torino)
Jason Konek (Kent)
Paolo Maffezioli (Torino)



For any queries please contact Vincenzo Crupi (vincenzo.crupi@unito.it) or Jason Konek (jpkonek@ksu.edu).


Saturday, 8 April 2017

Formal Truth Theories workshop, Warsaw (Sep. 28-30)

Cezary Cieslinski and his team are organizing a workshop on formal theories of truth in Warsaw, to take place 28-30 September 2017. Invited speakers include Dora Achourioti, Ali Enayat, Kentaro Fujimoto, Volker Halbach, Graham Leigh, and Albert Visser. The submission deadline is May 15. More details here.

Sunday, 19 March 2017

Aggregating incoherent credences: the case of geometric pooling

In the last few posts (here and here), I've been exploring how we should extend the probabilistic aggregation method of linear pooling so that it applies to groups that contain incoherent individuals (which is, let's be honest, just about all groups). And our answer has been this: there are three methods -- linear-pool-then-fix, fix-then-linear-pool, and fix-and-linear-pool-together -- and they agree with one another just in case you fix incoherent credences by taking the nearest coherent credences as measured by squared Euclidean distance. In this post, I ask how we should extend the probabilistic aggregation method of geometric pooling.

As before, I'll just consider the simplest case, where we have two individuals, Adila and Benoit, and they have credence functions -- $c_A$ and $c_B$, respectively -- that are defined for a proposition $X$ and its negation $\overline{X}$. Suppose $c_A$ and $c_B$ are coherent. Then geometric pooling says:

Geometric pooling The aggregation of $c_A$ and $c_B$ is $c$, where
  • $c(X) = \frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$
  • $c(\overline{X}) = \frac{c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$
for some $0 \leq \alpha \leq 1$.

Now, in the case of linear pooling, if $c_A$ or $c_B$ is incoherent, then it is most likely that any linear pool of them is also incoherent. However, in the case of geometric pooling, this is not the case. Linear pooling requires us to take a weighted arithmetic average of the credences we are aggregating. If those credences are coherent, so is their weighted arithmetic average. Thus, if you are considering only coherent credences, there is no need to normalize the weighted arithmetic average after taking it to ensure coherence. By contrast, even if the credences we are aggregating are coherent, their weighted geometric average is typically not. Thus, geometric pooling requires that we first take the weighted geometric average of the credences we are pooling and then normalize the result, to ensure that the result is coherent. But this trick works whether or not the original credences are coherent. Thus, we need do nothing more to geometric pooling in order to apply it to incoherent agents.

Nonetheless, questions still arise. What we have shown is that, if we first geometrically pool our two incoherent agents, then the result is in fact coherent and so we don't need to undertake the further step of fixing up the credences to make them coherent. But what if we first choose to fix up our two incoherent agents so that they are coherent, and then geometrically pool them? Does this give the same answer as if we just pooled the incoherent agents? And, similarly, what if we decide to fix and pool together?

Interestingly, the results are exactly the reverse of the results in the case of linear pooling. In that case, if we fix up incoherent credences by taking the coherent credences that minimize squared Euclidean distance, then all three methods agree, whereas if we fix them up by taking the coherent credences that minimize generalized Kullback-Leibler divergence, then sometimes all three methods disagree. In the case of geometric pooling, it is the opposite. Fixing up using generalized KL divergence makes all three methods agree -- that is, pool, fix-then-pool, and fix-and-pool-together all give the same result when we use GKL to measure distance. But fixing up using squared Euclidean distance leads to three separate methods that sometimes all disagree. That is, GKL is the natural distance measure to accompany geometric pooling, while SED is the natural measure to accompany linear pooling.
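
Here is a minimal numerical sketch of the first of these claims in Python (my own illustration; the credences and weight are made up). Fixing by generalized KL divergence amounts to normalizing a credence function (see the proof sketch in the 10 March post below); since the normalizing constants cancel in the geometric pool, fix-then-pool and pool-then-fix deliver the same credences:

def gkl_fix(p, q):
    # nearest coherent credences under generalized KL: normalize (10 March post below)
    return (p / (p + q), q / (p + q))

def geo_pool(cA, cB, alpha):
    # weighted geometric average of the credences in X and in not-X, then normalize
    x  = cA[0]**alpha * cB[0]**(1 - alpha)
    nx = cA[1]**alpha * cB[1]**(1 - alpha)
    return (x / (x + nx), nx / (x + nx))

cA = (0.7, 0.5)   # incoherent: Adila's credences in X and its negation sum to 1.2
cB = (0.2, 0.5)   # incoherent: Benoit's sum to 0.7
alpha = 0.3

print(geo_pool(cA, cB, alpha))                       # pool (pooling itself does the fixing)
print(geo_pool(gkl_fix(*cA), gkl_fix(*cB), alpha))   # fix-then-pool: the same numbers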

Friday, 17 March 2017

A little more on aggregating incoherent credences

Last week, I wrote about a problem that arises if you wish to aggregate the credal judgments of a group of agents when one or more of those agents has incoherent credences. I focussed on the case of two agents, Adila and Benoit, who have credence functions $c_A$ and $c_B$, respectively. $c_A$ and $c_B$ are defined over just two propositions, $X$ and its negation $\overline{X}$.

I noted that there are two natural ways to aggregate $c_A$ and $c_B$ for someone who adheres to Probabilism, the principle that says that credences should be coherent. You might first fix up Adila's and Benoit's credences so that they are coherent, and then aggregate them using linear pooling -- let's call that fix-then-pool. Or you might aggregate Adila's and Benoit's credences using linear pooling, and then fix up the pooled credences so that they are coherent -- let's call that pool-then-fix. And I noted that, for some natural ways of fixing up incoherent credences, fix-then-pool gives a different result from pool-then-fix. This, I claimed, creates a dilemma for the person doing the aggregating, since there seems to be no principled reason to favour either method.

How do we fix up incoherent credences? Well, a natural idea is to find the coherent credences that are closest to them and adopt those in their place. This obviously requires a measure of distance between two credence functions. In last week's post, I considered two:

Squared Euclidean Distance (SED) For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$

Generalized Kullback-Leibler Divergence (GKL) For two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$

If we use $SED$ when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $SED(c^*, c)$ is minimal -- then fix-then-pool gives the same results as pool-then-fix.

If we use GKL when we are fixing incoherent credences -- that is, if we fix an incoherent credence function $c$ by adopting the coherent credence function $c^*$ for which $GKL(c^*, c)$ is minimal -- then fix-then-pool gives different results from pool-then-fix.

Since last week's post, I've been reading this paper by Joel Predd, Daniel Osherson, Sanjeev Kulkarni, and Vincent Poor. They suggest that we pool and fix incoherent credences in one go using a method called the Coherent Aggregation Principle (CAP), formulated in this paper by Daniel Osherson and Moshe Vardi. In its original version, CAP says that we should aggregate Adila's and Benoit's credences by taking the coherent credence function $c$ such that the sum of the distance of $c$ from $c_A$ and the distance of $c$ from $c_B$ is minimized. That is,

CAP Given a measure of distance $D$ between credence functions, we should pick the coherent credence function $c$ that minimizes $D(c, c_A) + D(c, c_B)$.

As they note, if we take $SED$ to be our measure of distance, then this method generalizes the aggregation procedure on coherent credences that just takes straight averages of credences. That is, CAP entails unweighted linear pooling:

Unweighted Linear Pooling If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\frac{1}{2} c_A + \frac{1}{2}c_B$$

We can generalize this result a little by taking a weighted sum of the distances, rather than the straight sum.

Weighted CAP Given a measure of distance $D$ between credence functions, and given $0 \leq \alpha \leq 1$, we should pick the coherent credence function $c$ that minimizes $\alpha D(c, c_A) + (1-\alpha)D(c, c_B)$.

If we take $SED$ to measure the distance between credence functions, then this method generalizes linear pooling. That is, Weighted CAP entails linear pooling:

Linear Pooling If $c_A$ and $c_B$ are coherent, then the aggregation of $c_A$ and $c_B$ is $$\alpha c_A + (1-\alpha)c_B$$ for some $0 \leq \alpha \leq 1$.

What's more, when distance is measured by $SED$, Weighted CAP agrees with fix-then-pool and with pool-then-fix (providing the fixing is done using $SED$ as well). Thus, when we use $SED$, all of the methods for aggregating incoherent credences that we've considered agree. In particular, they all recommend the following credence in $X$: $$\frac{1}{2} + \frac{\alpha(c_A(X)-c_A(\overline{X})) + (1-\alpha)(c_B(X)  - c_B(\overline{X}))}{2}$$

However, the story is not nearly so neat and tidy if we measure the distance between two credence functions using $GKL$. Here's the credence in $X$ recommended by fix-then-pool:$$\alpha \frac{c_A(X)}{c_A(X) + c_A(\overline{X})} + (1-\alpha)\frac{c_B(X)}{c_B(X) + c_B(\overline{X})}$$ Here's the credence in $X$ recommended by pool-then-fix: $$\frac{\alpha c_A(X) + (1-\alpha)c_B(X)}{\alpha (c_A(X) + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X}))}$$ And here's the credence in $X$ recommended by Weighted CAP: $$\frac{c_A(X)^\alpha c_B(X)^{1-\alpha}}{c_A(X)^\alpha c_B(X)^{1-\alpha} + c_A(\overline{X})^\alpha c_B(\overline{X})^{1-\alpha}}$$ For many values of $\alpha$, $c_A(X)$, $c_A(\overline{X})$, $c_B(X)$, and $c_B(\overline{X})$, these will give three distinct results.
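
To see the divergence concretely, here is a quick sketch in Python (my own illustration; the credences and weight are made up), computing the three recommended credences in $X$ from the formulas just given:

alpha = 0.3
aX, anX = 0.7, 0.5   # Adila's credences in X and its negation (incoherent)
bX, bnX = 0.2, 0.5   # Benoit's credences (incoherent)

fix_then_pool = alpha * aX / (aX + anX) + (1 - alpha) * bX / (bX + bnX)
pool_then_fix = (alpha * aX + (1 - alpha) * bX) / (alpha * (aX + anX) + (1 - alpha) * (bX + bnX))
weighted_cap  = (aX**alpha * bX**(1 - alpha)) / (aX**alpha * bX**(1 - alpha) + anX**alpha * bnX**(1 - alpha))

print(fix_then_pool, pool_then_fix, weighted_cap)   # roughly 0.375, 0.412, and 0.368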


Friday, 10 March 2017

A dilemma for judgment aggregation

Let's suppose that Adila and Benoit are both experts, and suppose that we are interested in gleaning from their opinions about a certain proposition $X$ and its negation $\overline{X}$ a judgment of our own about $X$ and $\overline{X}$. Adila has credence function $c_A$, while Benoit has credence function $c_B$. One standard way to derive our own credence function on the basis of this information is to take a linear pool or weighted average of Adila's and Benoit's credence functions. That is, we assign a weight to Adila ($\alpha$) and a weight to Benoit ($1-\alpha$) and we take the linear combination of their credence functions with these weights to be our credence function. So my credence in $X$ will be $\alpha c_A(X) + (1-\alpha) c_B(X)$, while my credence in $\overline{X}$ will be $\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})$.

But now suppose that either Adila or Benoit or both are probabilistically incoherent -- that is, either $c_A(X) + c_A(\overline{X}) \neq 1$ or $c_B(X) + c_B(\overline{X}) \neq 1$ or both. Then, it may well be that the linear pool of their credence functions is also probabilistically incoherent. That is,

$(\alpha c_A(X) + (1-\alpha) c_B(X)) + (\alpha c_A(\overline{X}) + (1-\alpha)c_B(\overline{X})) = $

$\alpha (c_A(X)  + c_A(\overline{X})) + (1-\alpha)(c_B(X) + c_B(\overline{X})) \neq 1$

But, as an adherent of Probabilism, I want my credences to be probabilistically coherent. So, what should I do?

A natural suggestion is this: take the aggregated credences in $X$ and $\overline{X}$, and then take the closest pair of credences that are probabilistically coherent. Let's call that process the coherentization of the incoherent credences. Of course, to carry out this process, we need a measure of distance between any two credence functions. Luckily, that's easy to come by. Suppose you are an adherent of Probabilism because you are persuaded by the so-called accuracy dominance arguments for that norm. According to these arguments, we measure the accuracy of a credence function by measuring its proximity to the ideal credence function, which we take to be the credence function that assigns credence 1 to all truths and credence 0 to all falsehoods. That is, we generate a measure of the accuracy of a credence function from a measure of the distance between two credence functions. Let's call that distance measure $D$. In the accuracy-first literature, there are reasons for taking $D$ to be a so-called Bregman divergence. Given such a measure $D$, we might be tempted to say that, if Adila and/or Benoit are incoherent and our linear pool of their credences is incoherent, we should not adopt that linear pool as our credence function, since it violates Probabilism, but rather we should find the nearest coherent credence function to the incoherent linear pool, relative to $D$, and adopt that. That is, we should adopt credence function $c$ such that $D(c, \alpha c_A + (1-\alpha)c_B)$ is minimal. So, we should first take the linear pool of Adila's and Benoit's credences; and then we should make them coherent.

But this raises the question: why not first make Adila's and Benoit's credences coherent, and then take the linear pool of the resulting credence functions? Do these two procedures give the same result? That is, in the jargon of algebra, does linear pooling commute with our procedure for making incoherent credences coherent? Does linear pooling commute with coherentization? If so, there is no problem. But if not, our judgment aggregation method faces a dilemma: in which order should the procedures be performed -- aggregate, then make coherent; or make coherent, then aggregate?

It turns out that whether or not the two commute depends on the distance measure in question. First, suppose we use the so-called squared Euclidean distance measure. That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$SED(c, c') = \sum^n_{i=1} (c(X_i) - c'(X_i))^2$$ In particular, if $c$, $c'$ are defined on $X$, $\overline{X}$, then the distance from $c$ to $c'$ is $$(c(X) -c'(X))^2 + (c(\overline{X})-c'(\overline{X}))^2$$ And note that this generates the quadratic scoring rule, which is strictly proper:
  • $\mathfrak{q}(1, x) = (1-x)^2$
  • $\mathfrak{q}(0, x) = x^2$
Then, in this case, linear pooling commutes with our procedure for making incoherent credences coherent. Given a credence function $c$, let $c^*$ be the closest coherent credence function to $c$ relative to $SED$. Then:

Theorem 1 For all $\alpha$, $c_A$, $c_B$, $$\alpha c^*_A + (1-\alpha)c^*_B = (\alpha c_A + (1-\alpha)c_B)^*$$

Second, suppose we use the generalized Kullback-Leibler divergence to measure the distance between credence functions. That is, for two credence functions $c$, $c'$ defined on a set of propositions $X_1$, $\ldots$, $X_n$,$$GKL(c, c') = \sum^n_{i=1} c(X_i) \mathrm{log}\frac{c(X_i)}{c'(X_i)} - \sum^n_{i=1} c(X_i) + \sum^n_{i=1} c'(X_i)$$ Thus, for $c$, $c'$ defined on $X$, $\overline{X}$, the distance from $c$ to $c'$ is $$c(X)\mathrm{log}\frac{c(X)}{c'(X)} + c(\overline{X})\mathrm{log}\frac{c(\overline{X})}{c'(\overline{X})} - c(X) - c(\overline{X}) + c'(X) + c'(\overline{X})$$ And note that this generates the following scoring rule, which is strictly proper:
  • $\mathfrak{b}(1, x) = \mathrm{log}(\frac{1}{x}) - 1 + x$
  • $\mathfrak{b}(0, x) = x$
Then, in this case, linear pooling does not commute with our procedure for making incoherent credences coherent. Given a credence function $c$, let $c^+$ be the closest coherent credence function to $c$ relative to $GKL$. Then:

Theorem 2 For many $\alpha$, $c_A$, $c_B$, $$\alpha c^+_A + (1-\alpha)c^+_B \neq (\alpha c_A + (1-\alpha)c_B)^+$$

Proofs of Theorems 1 and 2. With the following two key facts in hand, the results are straightforward. If $c$ is defined on $X$, $\overline{X}$:
  • $c^*(X) = \frac{1}{2} + \frac{c(X)-c(\overline{X})}{2}$, $c^*(\overline{X}) = \frac{1}{2} - \frac{c(X) - c(\overline{X})}{2}$.
  • $c^+(X) = \frac{c(X)}{c(X) + c(\overline{X})}$, $c^+(\overline{X}) = \frac{c(\overline{X})}{c(X) + c(\overline{X})}$.

Thus, Theorem 1 tells us that, if you measure distance using SED, then no dilemma arises: you can aggregate and then make coherent, or you can make coherent and then aggregate -- they will have the same outcome. However, Theorem 2 tells us that, if you measure distance using GKL, then a dilemma does arise: aggregating and then making coherent gives a different outcome from making coherent and then aggregating.
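
Both theorems are easy to illustrate numerically using the two key facts above. Here is a minimal sketch in Python (my own illustration; the credences and weight are made up):

def sed_fix(p, q):
    # nearest coherent pair under SED (key fact 1)
    d = (p - q) / 2
    return (0.5 + d, 0.5 - d)

def gkl_fix(p, q):
    # nearest coherent pair under GKL (key fact 2): normalize
    return (p / (p + q), q / (p + q))

def pool(cA, cB, alpha):
    return tuple(alpha * a + (1 - alpha) * b for a, b in zip(cA, cB))

cA, cB, alpha = (0.7, 0.5), (0.2, 0.5), 0.3

# SED: coherentize-then-pool equals pool-then-coherentize (Theorem 1)
print(pool(sed_fix(*cA), sed_fix(*cB), alpha), sed_fix(*pool(cA, cB, alpha)))
# GKL: the two orders come apart (Theorem 2)
print(pool(gkl_fix(*cA), gkl_fix(*cB), alpha), gkl_fix(*pool(cA, cB, alpha)))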

Perhaps this is an argument against GKL and in favour of SED? You might think, of course, that the problem arises here only because SED is somehow naturally paired with linear pooling, while GKL might be naturally paired with some other method of aggregation such that that method of aggregation commutes with coherentization relative to GKL. That may be so. But bear in mind that there is a very general argument in favour of linear pooling that applies whichever distance measure you use: it says that if you do not aggregate a set of probabilistic credence functions using linear pooling then there is some linear pool that each of those credence functions expects to be more accurate than your aggregation. So I think this response won't work.

Wednesday, 1 March 2017

More on the Swamping Problem for Reliabilism

In a previous post, I floated the possibility that we might use recent work in decision theory by Orri Stefánsson and Richard Bradley to solve the so-called Swamping Problem for veritism. In this post, I'll show that, in fact, this putative solution can't work.

According to the Swamping Problem, I value beliefs that are both justified and true more than I value beliefs that are true but unjustified; and, we might suppose, I value beliefs that are justified but false more than I value beliefs that are both unjustified and false. In other words, I care about the truth or falsity of my beliefs; but I also care about their justification. Now, suppose we take the view, which I defend in this earlier post, that a belief in a proposition is more justified the higher the objective probability of that proposition given the grounds for that belief. Thus, for instance, if I base my belief that there was a firecrest in front of me until a few seconds ago on the fact that I saw a flash of orange as the bird flew off, then my belief is more justified the higher the objective probability that it was a firecrest given that I saw a flash of orange. And, whether or not there really was a firecrest in front of me, the value of my belief increases as the objective probability that there was one, given that I saw a flash of orange, increases.

Let's translate this into Stefánsson and Bradley's version of Richard Jeffrey's decision theory. Here are the components:
  • a Boolean algebra $F$
  • a desirability function $V$, defined on $F$
  • a credence function $c$, defined on $F$
The fundamental assumption of Jeffrey's framework is this:

Desirability For any partition $X_1$, $\ldots$, $X_n$ of the proposition $X$, $$V(X) = \sum^n_{i=1} c(X_i | X)V(X\ \&\ X_i)$$ And, further, we assume Lewis' Principal Principle, where $C^x_X$ is the proposition that says that $X$ has objective probability $x$:

Principal Principle $$c(X_j | \bigwedge^n_{i=1} C^{x_i}_{X_i}) = x_j$$ Now, suppose I believe proposition $X$. Then, from what we said above, we can extract the following:
  1. $V(X\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$
  2. $V(\overline{X}\ \&\ C^x_X)$ is a monotone increasing and non-constant function of $x$, for $0 \leq x \leq 1$
  3. $V(X\ \&\ C^x_X) > V(\overline{X}\ \&\ C^x_X)$, for $0 \leq x \leq 1$.
Given this, the Swamping Problem usually proceeds by identifying a problem with (1) and (2) as follows. It begins by claiming that the principle that Stefánsson and Bradley, in another context, call Chance Neutrality is indeed a requirement of rationality:

Chance Neutrality $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j)$$ Or, equivalently:

Chance Neutrality$^*$ $$V(X_j\ \&\ \bigwedge^n_{i=1} C^{x_i}_{X_i}) = V(X_j\ \&\ \bigwedge^n_{i=1} C^{x'_i}_{X_i})$$ This says that the truth of $X_j$ swamps the chance of $X_j$ in determining the value of an outcome. With the truth of $X_j$ fixed, its chance of being true becomes irrelevant.

The Swamping Problem then continues by noting that, if (1) or (2) is true, then my desirability function violates Chance Neutrality. Therefore, it concludes, I am irrational.

However, as Stefánsson and Bradley show, Chance Neutrality is not a requirement of rationality. To do this, they consider a further putative principle, which they call Linearity:

Linearity $$V(\bigwedge^n_{i=1} C^{x_i}_{X_i}) = \sum^n_{i=1} x_iV(X_i)$$ Now, Stefánsson and Bradley show

Theorem Suppose Desirability and the Principal Principle. Then Chance Neutrality entails Linearity.

They then argue that, since Linearity is not a rational requirement, neither can Chance Neutrality be -- since the Principal Principle is a rational requirement, if Chance Neutrality were too, then Linearity would be; and Linearity is not because it is violated in cases of rational preference, such as in the Allais paradox.

Thus, the Swamping Problem in its original form fails. It relies on Chance Neutrality, but Chance Neutrality is not a requirement of rationality. Of course, if we could prove a sort of converse of Stefánsson and Bradley's result, and show that, in the presence of the Principal Principle, Linearity entails Chance Neutrality, then we could show that a value function satisfying (1) is irrational. But we can't prove that converse.

Nonetheless, there is still a problem. For we can show that, in the presence of Desirability and the Principal Principle, Linearity entails that there is no desirability function $V$ that satisfies (1). Of course, given that Linearity is not a requirement of rationality, this does not tell us very much at the moment. But it does when we realise that, while Linearity is not required by rationality, veritists who accept the reliabilist account of justification given above typically do have a desirability function that satisfies Linearity. After all, they value a justified belief because it is reliable -- that is, it has high objective expected epistemic value. That is, they value a belief at its expected epistemic value, which is precisely what Linearity says.

Theorem Suppose $X$ is a proposition in $F$. And suppose $V$ satisfies Desirability, Principal Principle, and Linearity. Then it is not possible that the following are all satisfied: 
  • (Monotonicity) $V(X\ \&\ C^x_X)$ and $V(\overline{X}\ \&\ C^x_X)$ are both monotone increasing and non-constant functions of $x$ on $(0, 1)$;
  • (Betweenness) There is $0 < x < 1$ such that $V(X) < V(X\ \&\ C^x_X)$.

Proof. We suppose Desirability, Principal Principle, and Linearity throughout. We proceed by reductio. We make the following abbreviations:
  • $f(x) = V(X\ \&\ C^x_X)$
  • $g(x) = V(\overline{X}\ \&\ C^x_X)$
  • $F = V(X)$
  • $G = V(\overline{X})$
By assumption, we have:
  • (1f) $f$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);
  • (1g) $g$ is a monotone increasing and non-constant function on $(0, 1)$ (by Monotonicity);
  • (2) There is $0 < x < 1$ such that $F < f(x)$ (by Betweenness).
By Desirability, we have $$V(C^x_X) = c(X | C^x_X)V(X\ \&\ C^x_X) + c(\overline{X} | C^x_X) V(\overline{X}\ \&\ C^x_X)$$ By this and the Principal Principle, we have $$V(C^x_X)= x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ So $V(C^x_X) = xf(x) + (1-x)g(x)$. By Linearity, we have $$V(C^x_X) = x V(X) + (1-x)V(\overline{X})$$ So $V(C^x_X) = xF + (1-x)G$. Thus, for all $0 \leq x \leq 1$, $$x V(X) + (1-x)V(\overline{X}) = x V(X\ \&\ C^x_X) + (1 - x)V(\overline{X}\ \&\ C^x_X)$$ That is,
  • (3) $xF + (1-x)G = xf(x) + (1-x)g(x)$
Now, by (3), we have $$g(x) = \frac{x}{1-x}(F - f(x)) + G$$ for $0 \leq x < 1$. Now, by (1f) and (2), there are $0 < x < y < 1$ such that $F < f(x) \leq f(y)$. Thus, $F - f(y) \leq F - f(x) < 0$. And so, since $0 < \frac{x}{1-x} < \frac{y}{1-y}$, $$\frac{y}{1-y}(F-f(y)) + G < \frac{x}{1-x}(F-f(x)) + G$$ And thus $g(y) < g(x)$. But this contradicts (1g). Thus, there can be no such pair of functions $f$, $g$. Thus, there can be no such $V$, as required. $\Box$
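
A toy numerical illustration of the final step (my own example, not part of the original argument): pick any increasing $f$ with $F < f(x)$ somewhere on $(0, 1)$, and the $g$ forced by (3) comes out decreasing.

F, G = 0.0, 1.0
f = lambda x: 0.2 + x                          # monotone increasing, with F < f(x) on (0, 1)
g = lambda x: (x / (1 - x)) * (F - f(x)) + G   # the g forced by equation (3)
print(g(0.5), g(0.8))                          # 0.3 and -3.0: g decreases, contradicting (1g)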




Sunday, 12 February 2017

Chance Neutrality and the Swamping Problem for Reliabilism

Reliabilism about justified belief comes in two varieties: process reliabilism and indicator reliabilism. According to process reliabilism, a belief is justified if it is formed by a process that is likely to produce truths; according to indicator reliabilism, a belief is justified if it is likely to be true given the ground on which it is based. Both are natural accounts of justification for a veritist, who holds that the sole fundamental source of epistemic value for a belief is its truth.

Against veritists who are reliabilists, opponents raise the Swamping Problem. This begins with the observation that we prefer a justified true belief to an unjustified true belief; we ascribe greater value to the former than to the latter; we would prefer to have the former over the latter. But, if reliabilism is true, this means that we prefer a belief that is true and had a high chance of being true over a belief that is true and had a low chance of being true. For a veritist, this means that we prefer a belief that has maximal epistemic value and had a high chance of having maximal epistemic value over a belief that has maximal epistemic value and had a low chance of having maximal epistemic value. And this is irrational, or so the objection goes. It is only rational to value a high chance of maximal utility when the actual utility is not known; once the actual utility is known, this 'swamps' any consideration of the chance of that utility. For instance, suppose I find a lottery ticket on the street; I know that it comes either from a 10-ticket lottery or from a 100-ticket lottery; both lotteries pay out the same amount to the holder of the winning ticket; and I know the outcome of neither lottery. Then it is rational for me to hope that the ticket I hold belongs to the smaller lottery, since that would maximise my chance of winning and thus maximise the expected utility of the ticket. But once I know that the lottery ticket I found is the winning ticket, it is irrational to prefer that it came from the smaller lottery --- my knowledge that it's the winner 'swamps' the information about how likely it was to be the winner. This is known variously as the Swamping Problem or the Value Problem for reliabilism about justification (Zagzebski 2003, Kvanvig 2003).

The central assumption of the swamping problem is a principle that, in a different context, H. Orri Stefánsson and Richard Bradley call Chance Neutrality (Stefánsson & Bradley 2015). They state it precisely within the framework of Richard Jeffrey's decision theory (Jeffrey 1983). In that framework, we have a desirability function $V$ and a credence function $c$, both of which are defined on an algebra of propositions $\mathcal{F}$. $V(A)$ measures how strongly our agent desires $A$, or how greatly she values it. $c(A)$ measures how strongly she believes $A$, or her credence in $A$. The central principle of the decision theory is this:

Desirability  If the propositions $A_1$, $\ldots$, $A_n$ form a partition of the proposition $X$, then $$V(X) = \sum^n_{i=1} c(A_i | X) V(A_i)$$

Now, suppose the algebra on which $V$ and $c$ are defined includes some propositions that concern the objective probabilities of other propositions in the algebra.  Then:

Chance Neutrality  Suppose $X$ is in the partition $X_1$, $\ldots$, $X_n$. And suppose $0 \leq \alpha_1, \ldots, \alpha_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$. Then $$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = V(X)$$

That is, information about the outcome of the chance process that picks between $X_1$, $\ldots$, $X_n$ `swamps' information about the chance process in our evaluation, which is recorded in $V$. A simple consequence of this: if $0 \leq \alpha_1, \alpha'_1, \ldots, \alpha_n, \alpha'_n \leq 1$ and $\sum^n_{i=1} \alpha_i = 1$ and $\sum^n_{i=1} \alpha'_i = 1$, then

$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = $
$V(X\ \&\ \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha'_i$})$

Now consider the particular case of this that is used in the Swamping Problem. I believe $X$ on the basis of ground $g$. I assign greater value to $X$ being true and justified than I do to $X$ being true and unjustified. That is, given the reliabilist's account of justification, if $\alpha$ is a probability that lies above the threshold for justification and $\alpha'$ is a probability that lies below that threshold --- for the veritist, $\alpha' < \frac{W}{R+W} < \alpha$, where $R$ is the epistemic value of a true belief and $-W$ the epistemic value of a false one --- then

$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha'$}) <$
$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$

And of course this violates Chance Neutrality.

Thus, the Swamping Problem stands or falls with the status of Chance Neutrality. Is it a requirement of rationality? Stefánsson and Bradley argue that it is not (Section 3, Stefánsson & Bradley 2015). They show that, in the presence of the Principal Principle, Chance Neutrality entails a principle called Linearity; and they claim that Linearity is not a requirement of rationality. If it is permissible to violate Linearity, then it cannot be a requirement to satisfy a principle that entails it. So Chance Neutrality is not a requirement of rationality.

In this context, the Principal Principle runs as follows:

Principal Principle $$c(X_j | \bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = \alpha_j$$

That is, an agent's credence in $X_j$, conditional on information that gives the objective probability of $X_j$ and the other members of a partition to which it belongs, should be equal to the objective probability of $X_j$. And Linearity is the following principle:

Linearity $$V(\bigwedge^n_{i=1} \mbox{Objective probability of $X_i$ is $\alpha_i$}) = \sum^n_{i=1} \alpha_iV(X_i)$$

That is, an agent should value a lottery at the expected value of its outcome. Now, as is well known, real agents often violate Linearity (Buchak 2013). The most famous violations are known as the Allais preferences (Allais 1953). Suppose there are 100 tickets numbered 1 to 100. One ticket will be drawn and you will be given a prize depending on which option you have chosen from $L_1$, $\ldots$, $L_4$:
  • $L_1$: if ticket 1-89, £1m; if ticket 90-99, £1m; if ticket 100, £1m.
  • $L_2$: if ticket 1-89, £1m; if ticket 90-99, £5m; if ticket 100, £0m
  • $L_3$: if ticket 1-89, £0m; if ticket 90-99, £1m; if ticket 100, £1m
  • $L_4$: if ticket 1-89, £0m; if ticket 90-99, £5m; if ticket 100, £0m
I know that each ticket has an equal chance of winning --- thus, by the Principal Principle, $c(\mbox{Ticket $n$ wins}) = \frac{1}{100}$. Now, it turns out that many people have preferences recorded in the following desirability function $V$: $$V(L_1) > V(L_2) \mbox{ and } V(L_3) < V(L_4)$$

When there is an option that guarantees them a high payout (£1m), they prefer that over something with a 1% chance of nothing (£0), even if it also provides a 10% chance of a much greater payout (£5m). On the other hand, when there is no guarantee of a high payout, they prefer the chance of the much greater payout (£5m), even if there is also a slightly greater chance of nothing (£0). The problem is that there is no way to assign values to $V(£0)$, $V(£1m)$, and $V(£5m)$ so that $V$ satisfies Linearity and also these inequalities. Suppose, for a reductio, that there is. By Linearity,
$$V(L_1) = 0.89V(£1\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$
$$V(L_2) = 0.89V(£1\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m}) $$
Then, since $V(L_1) > V(L_2)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) > 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$ But also by Linearity, $$V(L_3) = 0.89V(£0\mathrm{m}) + 0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m})$$
$$V(L_4) = 0.89V(£0\mathrm{m}) + 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$
Then, since $V(L_3) < V(L_4)$, we have: $$0.1 V(£1\mathrm{m}) + 0.01 V(£1\mathrm{m}) < 0.1 V(£5\mathrm{m}) + 0.01 V(£0\mathrm{m})$$
And this gives a contradiction. In general, an agent violates Linearity when she has any risk-averse or risk-seeking preferences.

Stefánsson and Bradley show that, in the presence of the Principal Principle, Chance Neutrality entails Linearity; and they argue that there are rational violations of Linearity (such as the Allais preferences); so they conclude that there are rational violations of Chance Neutrality. So far, so good for the reliabilist: the Swamping Problem assumes that Chance Neutrality is a requirement of rationality; and we have seen that it is not. However, reliabilism is not out of the woods yet. After all, the veritist's version of reliabilism in fact assumes Linearity! They say that a belief is justified if it is likely to be true. And they say this because a belief that is likely to be true has high expected epistemic value on the veritist's account of epistemic value. And so they connect justification to epistemic value by taking the value of a belief to be its expected epistemic value --- that is, they assume Linearity. Thus, if the only rational violations of Chance Neutrality are also rational violations of Linearity, then the Swamping Problem is revived. In particular, if Linearity entails Chance Neutrality, then reliabilism cannot solve the Swamping Problem.

Fortunately, even in the presence of the Principal Principle, Linearity does not entail Chance Neutrality. Together, the Principal Principle and Desirability entail:

$V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) =$

$\alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) + $

$(1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$

And Linearity entails:

 $V(\mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) = \alpha V(X) + (1-\alpha) V(\overline{X})$

So
$\alpha V(X) + (1-\alpha) V(\overline{X}) =$

$\alpha V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) + $

$(1-\alpha) V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$

And, whatever the values of $V(X)$ and $V(\overline{X})$, there are values of $$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$ and $$V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$})$$
such that the above equation holds. Thus, it is at least possible to adhere to Linearity, yet violate Chance Neutrality. Of course, this does not show that the agent who adheres to Linearity but violates Chance Neutrality is rational. But, now that the intuitive appeal of Chance Neutrality is undermined, the burden is on those who raise the Swamping Problem to explain why such cases are irrational.
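
To make the existence claim concrete, here is one explicit family of such values (a construction of my own, for illustration): for any constant $k \neq 0$, set

$$V(X\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) = V(X) + (1-\alpha)k$$

$$V(\overline{X}\ \&\ \mbox{Objective probability of $X$ given I have $g$ is $\alpha$}) = V(\overline{X}) - \alpha k$$

Then $\alpha(V(X) + (1-\alpha)k) + (1-\alpha)(V(\overline{X}) - \alpha k) = \alpha V(X) + (1-\alpha)V(\overline{X})$, so the equation above holds, while Chance Neutrality fails for every $0 \leq \alpha < 1$.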

References


  • Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école américaine. Econometrica, 21(4), 503–546.
  • Buchak, L. (2013). Risk and Rationality. Oxford: Oxford University Press.
  • Jeffrey, R. (1983). The Logic of Decision (2nd ed.). Chicago: University of Chicago Press.
  • Kvanvig, J. (2003). The Value of Knowledge and the Pursuit of Understanding. Cambridge: Cambridge University Press.
  • Stefánsson, H. O., & Bradley, R. (2015). How Valuable Are Chances? Philosophy of Science, 82, 602–625.
  • Zagzebski, L. (2003). The search for the source of the epistemic good. Metaphilosophy, 34(1–2), 12–28.

Monday, 6 February 2017

What is justified credence?

Aafira and Halim are both 90% confident that it will be sunny tomorrow. Aafira bases her credence on her observation of the weather today and her past experience of the weather on days that follow days like today -- around nine out of ten of them have been sunny. Halim bases his credence on wishful thinking -- he's arranged a garden party for tomorrow and he desperately wants the weather to be pleasant. Aafira, it seems, is justified in her credence, while Halim is not. Just as one of your full or categorical beliefs might be justified if it is based on visual perception under good conditions, or on memories of recent important events, or on testimony from experts, so might one of your credences be; and just as one of your full beliefs might be unjustified if it is based on wishful thinking, or biased stereotypical associations, or testimony from ideologically driven news outlets, so might your credences be. In this post, I'm looking for an account of justified credence -- in particular, I seek necessary and sufficient conditions for a credence to be justified. Our account will be reliabilist.

Reliabilism about justified beliefs comes in two varieties: process reliabilism and indicator reliabilism. Roughly, process reliabilism says that a belief is justified if it is formed by a reliable process, while indicator reliabilism says that a belief is justified if it is based on a ground that renders it likely. Reliabilism about justified credence also comes in two varieties; indeed, it comes in the same two varieties. And, indeed, of the two existing proposals, Jeff Dunn's is a version of process reliabilism (paper) while Weng Hong Tang offers a version of indicator reliabilism (paper). As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts connects justification to a source of epistemic value.  We will call this the Connection Problem.

I begin by describing Dunn's process reliabilism and Tang's indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn's reliabilism and Tang's.

Reliabilism and Dunn on reliable credence


Let us begin with Dunn's process reliabilism for justified credences. Now, to be clear, Dunn takes himself only to be providing an account of reliability for credence-forming processes. He doesn't necessarily endorse the other two conjuncts of reliabilism, which say that a credence is justified if it is reliable, and that a credence is reliable if formed by a reliable process. Instead, Dunn speculates that perhaps being reliably formed is but one of the epistemic virtues, and he wonders whether all of the epistemic virtues are required for justification. Nonetheless, I will consider a version of reliabilism for justified credences that is based on Dunn's account of reliable credence. For reasons that will become clear, I will call this the calibrationist version of process reliabilism for justified credence. Dunn rejects it based on what I will call below the Graining Problem. As we will see, I think we can answer that objection.

For Dunn, a credence-forming process is perfectly reliable if it is well calibrated. Here's what it means for a process $\rho$ to be well calibrated:
  • First, we construct a set of all and only the outputs of the process $\rho$ in the actual world and in nearby counterfactual scenarios. An output of $\rho$ consists of a credence $x$ in a proposition $X$ at a particular time $t$ in a particular possible world $w$ -- so we represent it by the tuple $(x, X, w, t)$. If $w$ is a nearby world and $t$ a nearby time, we call $(x, X, w, t)$ a nearby output. Let $O_\rho$ be the set of nearby outputs -- that is, the set of tuples $(x, X, w, t)$, where $w$ is a nearby world, $t$ is a nearby time, and $\rho$ assigns credence $x$ to proposition $X$ in world $w$ at time $t$.
  • Second, we say that the truth-ratio of $\rho$ for credence $x$ is the proportion of nearby outputs $(x, X, w, t)$ in $O_\rho$ such that $X$ is true at $w$ and $t$.
  • Finally, we say that $\rho$ is well calibrated (or nearly so) if, for each credence $x$ that $\rho$ assigns, $x$ is equal to (or approximately equal to) the truth-ratio of $\rho$ for $x$.
For instance, suppose a process only ever assigns credence 0.6 or 0.7. And suppose that 60% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true, and 70% of the time it assigns 0.7 it assigns it to a true proposition. Then that process is well calibrated. If, on the other hand, 59% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true, while 71% of the time it assigns 0.7 it assigns it to a true proposition, then that process is not well calibrated, but it is nearly well calibrated. But if 23% of the time that it assigns 0.6 in the actual world or a nearby world it assigns it to a proposition that is true, while 95% of the time it assigns 0.7 it assigns it to a true proposition, then that process is not even nearly well calibrated.
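
For concreteness, here is a minimal computational sketch of the definition just given (the code and its toy data are mine, not Dunn's; I represent each nearby output $(x, X, w, t)$ simply by its credence together with the truth-value of its proposition):

    # A sketch of Dunn-style calibration. Each nearby output (x, X, w, t) of a
    # process is represented by the pair (x, truth-value of X at w and t).
    from collections import defaultdict

    def truth_ratios(outputs):
        """Map each credence x the process assigns to its truth-ratio:
        the proportion of outputs with credence x whose proposition is true."""
        counts = defaultdict(lambda: [0, 0])  # credence -> [number true, number total]
        for x, is_true in outputs:
            counts[x][0] += int(is_true)
            counts[x][1] += 1
        return {x: true / total for x, (true, total) in counts.items()}

    def nearly_well_calibrated(outputs, tolerance=0.02):
        """True iff every credence the process assigns is within `tolerance`
        of its truth-ratio; tolerance=0 demands exact calibration."""
        return all(abs(x - r) <= tolerance for x, r in truth_ratios(outputs).items())

    # The examples from the text: a process that only ever assigns 0.6 and 0.7.
    well = [(0.6, True)] * 60 + [(0.6, False)] * 40 + [(0.7, True)] * 70 + [(0.7, False)] * 30
    badly = [(0.6, True)] * 23 + [(0.6, False)] * 77 + [(0.7, True)] * 95 + [(0.7, False)] * 5
    print(nearly_well_calibrated(well))   # True
    print(nearly_well_calibrated(badly))  # False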

This, then, is Dunn's calibrationist account of the reliability of a credence-forming process. Any version of reliabilism about justified credences that is based on it requires two further ingredients. First, we must use the account to say when an individual credence is reliable; second, we must add the claim that a credence is justified iff it is reliable. Both of these moves create problems. We will address them below. But first it will be useful to present Tang's version of indicator reliabilism for justified credence. It will provide an important clue that helps us solve one of the problems that Dunn's account faces. And, having it in hand, it will be easier to see how these two accounts end up coinciding.

Tang's indicator reliabilism for justified credence


According to indicator reliabilism for justified belief, a belief is justified if the ground on which it is based is a good indicator of the truth of that belief. Thus, beliefs formed on the basis of visual experiences tend to be justified because the fact that the agent had the visual experience in question makes it likely that the belief they based on it is true. Wishful thinking, on the other hand, usually does not give rise to justified belief because the fact that an agent hopes that a particular proposition will be true -- which in this case is the ground of their belief -- does not make it likely that the proposition is true.

Tang seeks to extend this account of justified belief to the case of credence. Here is his first attempt at an account:

Tang's Indicator Reliabilism for Justified Credence (first pass)  A credence of $x$ in $X$ by an agent $S$ is justified iff
(TIC1-$\alpha$) $S$ has ground $g$;
(TIC2-$\alpha$) the credence $x$ in $X$ by $S$ is based on ground $g$;
(TIC3-$\alpha$) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- we write this $P(X | \mbox{$S$ has $g$}) \approx x$.

Thus, just as an agent's full belief in a proposition is justified if its ground makes the objective probability of that proposition close to 1, a credence $x$ in a proposition is justified if its ground makes the objective probability of that proposition close to $x$. There is a substantial problem here in identifying exactly which notion of objective probability Tang wishes to appeal to. But we will leave that aside for the moment, other than to say that he conceives of it along the lines of hypothetical frequentism -- that is, the objective probability of $X$ given $Y$ is the hypothetical frequency with which propositions like $X$ are true when propositions like $Y$ are true.

However, as Tang notes, as stated, his version of indicator reliabilism faces a problem. Suppose I am presented with an empty urn. I watch as it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black. I shake the urn vigorously and extract a ball. It's number 73 and it's white. I look at its colour and the numeral printed on it. I have a visual experience of a white ball with '73' on it. On the basis of my visual experience of the numeral alone, I assign credence 0.5 to the proposition that ball 73 is white. According to Tang's first version of indicator reliabilism for justified credence, my credence is justified. My ground is the visual experience of the number on the ball; I have that ground; I base my credence on that ground; and the objective probability that ball 73 is white given that I have a visual experience of the numeral '73' printed on it is 50% -- after all, half the balls are white. Of course, the problem is that I have not used my total evidence -- or, in the language of grounds, I have not based my belief on my most inclusive ground. I had the visual experience of the numeral on the ball as a ground; but I also had the visual experience of the numeral on the ball and the colour of the ball as a ground. The resulting credence is unjustified because the objective probability that ball 73 is white given I have the more inclusive ground is not 0.5 -- it is close to 1, since my visual system is so reliable. This leads Tang to amend his account of justified credence as follows:

Tang's Indicator Reliabilism for Justified Credence  A credence of $x$ in $X$ by an agent $S$ is justified iff
(TIC1) $S$ has ground $g$;
(TIC2) the credence $x$ in $X$ by $S$ is based on ground $g$;
(TIC3) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;
(TIC4) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.

This, then, is Tang's version of indicator reliabilism for justified credences.
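
To see the amended account in action, return to the urn. Let $g$ be my visual experience of the numeral alone, and let $g'$ be my more inclusive visual experience of the numeral and the colour together. My credence of 0.5 that ball 73 is white, based on $g$, satisfies (TIC1)-(TIC3), since

$$P(\mbox{ball 73 is white} | \mbox{$S$ has $g$}) = 0.5 \approx 0.5.$$

But it fails (TIC4), since I also have the more inclusive ground $g'$, and

$$P(\mbox{ball 73 is white} | \mbox{$S$ has $g'$}) \approx 1 \not\approx 0.5.$$

So the amended account delivers the intuitive verdict: the credence based on the less inclusive ground is unjustified.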

Same mountain, different routes


Thus, we have now seen Dunn's process reliabilism and Tang's indicator reliabilism for justified credences. Is either correct? If so, which? In one sense, both are correct; in another, neither is. Less mysteriously: as we will see in this section, Dunn's process reliabilism and Tang's indicator reliabilism are extensionally equivalent -- that is, the same credences are justified on both. What's more, as we will see in the final section, both are extensionally equivalent to the correct account of justified credence, which is thus a version of both process and indicator reliabilism. However, while they get the extension right, they do so for the wrong reasons. A justified credence is not justified because it is formed by a well calibrated process; and it is not justified because it matches the objective chance given its grounds. Thus, Dunn and Tang delimit the correct extension, but they use the wrong intension. In the final section of this post, I will offer what I take to be the correct intension. But first, let's see why it is that the routes that Dunn and Tang take lead them both to the top of the same mountain.

We begin with Dunn's calibrationist account of the reliability of a credence-forming process. As we noted above, any version of reliabilism about justified credences that is based on this account requires two further ingredients. First, we must use the calibrationist account of reliable credence-forming processes to say when an individual credence is reliable. The natural answer: when it is formed by a reliable credence-forming process. But then we must be able to identify, for a given credence, the process of which it is an output. The problem is that, for any credence, there are a great many processes of which it might be the output. I have a visual experience of a piece of red cloth on my desk, and I form a high credence that there is a piece of red cloth on my desk. Is this credence the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience? Or is it the output of a process that assigns a high credence that there is a piece of red cloth on my desk whenever I have that visual experience and the lighting conditions in my office are good, while it assigns a middling credence that there is a piece of red cloth on my desk whenever I have that visual experience and the lighting conditions in my office are bad? It is easy to see why this matters: the first process is poorly calibrated, and thus unreliable on Dunn's account; the second process is better calibrated, and thus more reliable on Dunn's account. This is the so-called Generality Problem, and it is a challenge that faces any version of reliabilism. I will offer a version of Juan Comesaña's solution to this problem below -- as we will see, that solution also clears the way for a natural solution to the Graining Problem, which we consider next.

Dunn provides an account of when a credence-forming process is reliable. And, once we have a solution to the Generality Problem, we can use that to say when a credence is reliable -- it is reliable when formed by a reliable credence-forming process. Finally, to complete the version of process reliabilism about justified credence that we are basing on Dunn's account, we just need the claim that a credence is justified iff it is reliable. But this too faces a problem, which we call the Graining Problem. As we did above, suppose I am presented with an empty urn. I watch as it is filled with 100 balls, numbered 1 to 100, half of which are white, and half of which are black. I shake the urn vigorously and extract a ball. I look at its colour and the numeral printed on it. I have two processes at my disposal. Process 1 takes my visual experience of the numeral only, say '$n$', and assigns credence 0.5 to the proposition that ball $n$ is white. Process 2 takes my visual experience of the numeral, '$n$', and my visual experience of the colour of the ball, and assigns credence 1 to the proposition that ball $n$ is white if my visual experience is of a white ball, and assigns credence 1 to the proposition that ball $n$ is black if my visual experience is of a black ball. Note that both processes are well calibrated (or nearly so, if we allow that my visual system is very slightly fallible). But we would usually judge the credence formed by the second to be better justified than the credence formed by the first. Indeed, we would typically say that a Process 1 credence is unjustified, while a Process 2 credence is justified. Thus, being formed by a well calibrated or nearly well calibrated process is not sufficient for justification. And, if reliability is calibration, then reliability is not justification and reliabilism fails. It is this problem that leads Dunn to reject reliabilism about justified credence. However, as we will see below, I think he is a little hasty.
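
Before moving on, here is a small simulation of the urn case (a sketch under my own modelling choices: I track the single proposition 'the ball drawn is white', and I use the quadratic -- Brier -- measure of accuracy as one concrete example of the accuracy measures discussed later in this post). It illustrates the Graining Problem numerically: both processes come out well calibrated, yet they differ sharply in how accurate their credences are.

    # Urn with half white balls. Process 1 sees only the numeral and assigns 0.5
    # to 'the ball drawn is white'; Process 2 also sees the colour and assigns 1 or 0.
    import random
    random.seed(0)

    draws = [random.random() < 0.5 for _ in range(10000)]  # True = white ball

    process1 = [(0.5, white) for white in draws]
    process2 = [(1.0, white) if white else (0.0, white) for white in draws]

    def truth_ratio(outputs, x):
        hits = [is_true for cred, is_true in outputs if cred == x]
        return sum(hits) / len(hits)

    # Both processes are (nearly) well calibrated...
    print(truth_ratio(process1, 0.5))                              # ~0.5
    print(truth_ratio(process2, 1.0), truth_ratio(process2, 0.0))  # ~1.0, ~0.0

    # ...but they differ sharply in accuracy (Brier: -(1-x)^2 if true, -x^2 if false).
    def mean_accuracy(outputs):
        return sum(-(1 - x) ** 2 if t else -(x ** 2) for x, t in outputs) / len(outputs)

    print(mean_accuracy(process1))  # -0.25: middling accuracy
    print(mean_accuracy(process2))  #  0.0 : perfect accuracy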

Let us consider the Generality Problem first. To this problem, Juan Comesaña offers the following solution (paper). Every account of doxastic justification -- that is, every account of when a given doxastic attitude of a particular agent is justified for that agent -- must recognize that two agents may have the same doxastic attitude and the same evidence while the doxastic attitude of one is justified and the doxastic attitude of the other is not, because their doxastic attitudes are not based on the same evidence. The first might base her belief on the total evidence, for instance, whilst the second ignores that evidence and bases his belief purely on wishful thinking. Thus, Comesaña claims, every theory of justification needs a notion of the grounds or the basis of a doxastic attitude. But, once we have that, a solution to the Generality Problem is very close. Comesaña spells out the solution for process reliabilism about full beliefs:

Well-Founded Process Reliabilism for Justified Full Beliefs  A belief that $X$ by an agent $S$ is justified iff
(WPB1) $S$ has ground $g$;
(WPB2) the belief that $X$ by $S$ is based on ground $g$;
(WPB3) the process producing a belief that $X$ based on ground $g$ is a reliable process.

This is easily adapted to the credal case:

Well-Founded Process Reliabilism for Justified Credences  A credence of $x$ in $X$ by an agent $S$ is justified iff
(WPC1) $S$ has ground $g$;
(WPC2) the credence $x$ in $X$ by $S$ is based on ground $g$;
(WPC3) the process producing a credence of $x$ in $X$ based on ground $g$ is a reliable process.

Let us now try to apply Comesaña's solution to the Generality Problem to help Dunn's calibrationist reliabilism about justified credences. Recall: according to Dunn, a process $\rho$ is reliable if it is well calibrated (or nearly so). Consider the process producing a credence of $x$ in $X$ based on ground $g$ -- for convenience, we'll write it $\rho^g_{X,x}$. There is only one credence that it assigns, namely $x$. So it is well calibrated if the truth-ratio of $\rho^g_{X,x}$ for $x$ is equal to $x$. Now, $O_{\rho^g_{X,x}}$ is the set of tuples $(x, X, w, t)$ where $w$ is a nearby world and $t$ a nearby time at which $\rho^g_{X,x}$ assigns credence $x$ to proposition $X$. But, by the definition of $\rho^g_{X,x}$, those are the nearby worlds and nearby times at which the agent has the ground $g$. Thus, the truth-ratio of $\rho^g_{X,x}$ for $x$ is the proportion of those nearby worlds and times at which the agent has the ground $g$ at which $X$ is true. And that, it seems to me, is something like the objective probability of $X$ conditional on the agent having ground $g$, at least given a hypothetical frequentist account of objective probability of the sort that Tang favours. As above, we denote the objective probability of $X$ conditional on the agent $S$ having ground $g$ as follows: $P(X | \mbox{$S$ has $g$})$. Thus, $P(X | \mbox{$S$ has $g$})$ is the truth-ratio of $\rho^g_{X,x}$ for $x$. And thus, a credence $x$ in $X$ based on ground $g$ is reliable iff $x$ is close to $P(X | \mbox{$S$ has $g$})$. That is,

Well-Founded Calibrationist Process Reliabilism for Justified Credences (first attempt) A credence of $x$ in $X$ by an agent $S$ is justified iff
(WCPC1) $S$ has ground $g$;
(WCPC2) the credence $x$ in $X$ by $S$ is based on ground $g$;
(WCPC3) the process producing a credence of $x$ in $X$ based on ground $g$ is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$.
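
To set out the argument of the previous paragraph formally: write $N_g$ for the set of nearby world-time pairs $(w, t)$ at which $S$ has ground $g$ (the notation $N_g$ is mine, introduced just for this display). Then

$$\mbox{truth-ratio of $\rho^g_{X,x}$ for $x$} = \frac{|\{(w, t) \in N_g : \mbox{$X$ is true at $w$ and $t$}\}|}{|N_g|} \approx P(X | \mbox{$S$ has $g$}),$$

where the final approximation relies on the hypothetical frequentist reading of objective probability that Tang favours.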

But now compare Well-Founded Calibrationist Process Reliabilism, based on Dunn's account of reliable processes and Comesaña's solution to the Generality Problem, with Tang's first attempt at Indicator Reliabilism. Consider the necessary and sufficient conditions that each imposes for justification: TIC1-$\alpha$ = WCPC1; TIC2-$\alpha$ = WCPC2; TIC3-$\alpha$ = WCPC3. Thus, these are the same account. However, as we saw above, Tang's first attempt to formulate indicator reliabilism for justified credence fails because it counts as justified a credence that is not based on an agent's total evidence; and we also saw that, once the Generality Problem is solved for Dunn's calibrationist process reliabilism, it faces a similar problem, namely, the Graining Problem. Tang amends his version of indicator reliabilism by adding the fourth condition, TIC4. Might we amend Dunn's calibrationist process reliabilism in a similar way?

Well-Founded Calibrationist Process Reliabilism for Justified Credences  A credence of $x$ in $X$ by an agent $S$ is justified iff
(WCPC1) $S$ has ground $g$;
(WCPC2) the credence $x$ in $X$ by $S$ is based on ground $g$;
(WCPC3) the process producing a credence of $x$ in $X$ based on ground $g$ is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;
(WCPC4) there is no more inclusive ground $g'$ and credence $x' \not \approx x$, such that the process producing a credence of $x'$ in $X$ based on ground $g'$ is a (nearly) well calibrated process -- that is, $P(X | \mbox{$S$ has $g'$}) \approx x'$.

Since TIC4 is equivalent to WCPC4, this final version of process reliabilism for justified credences is equivalent to Tang's final version of his indicator reliabilism for justified credences. Thus, Dunn and Tang have reached the top of the same mountain, albeit by different routes.
  

The third route up the mountain


Once we have addressed certain problems with the calibrationist version of process reliabilism for justified credence, we see that it agrees with the current best version of indicator reliabilism. This gives us a little hope that both have hit upon the correct account of justification. In the end, I will conclude that both have indeed hit upon the correct extension of the concept of justified credence. But they have done so for the wrong reasons, for they have not hit upon the correct intension.

There are two sorts of route you might take when pursuing an account of justification for a given sort of doxastic attitude, such as a credence or a full belief. You might look to intuitions concerning particular cases and try to discern a set of necessary and sufficient conditions that sort these cases in the same way that your intuitions do; or, you might begin with an account of epistemic value, assume that justification must be linked in some natural way to the promotion of epistemic value, and then provide an account of justification that vindicates that assumption. Dunn and Tang have each taken a route of the first sort; I will follow a route of the second sort.

I will adopt the veritist's account of epistemic value. That is, I take accuracy to be the sole fundamental source of epistemic value for a credence, where a credence in a true proposition is more accurate the higher it is; a credence in a false proposition is more accurate the lower it is. Given this account of epistemic value, what is the natural account of justification? Well, at first sight, there are two: one is process reliabilist; the other is indicator reliabilist. But, in a twist that should come as little surprise given the conclusions of the previous section, it will turn out that these two accounts coincide, and indeed coincide with the final versions of Dunn's and Tang's accounts that we reached above. Thus, I too will reach the top of the same mountain, but by yet another route.

Epistemic value version of indicator reliabilism


In the case of full beliefs, indicator reliabilism says this: a belief in $X$ by $S$ on the basis of grounds $g$ is justified iff the objective probability of $X$ given that $S$ has grounds $g$ is high -- that is, close to 1. Tang generalises this to the case of credence, but I think he generalises in the wrong direction; that is, he takes the wrong feature to be salient and uses that to formulate his indicator reliabilism for justified credence. He takes the general form of indicator reliabilism to be something like this: a doxastic attitude $s$ towards $X$ by $S$ on the basis of grounds $g$ is justified iff the attitude $s$ 'matches' the objective probability of $X$ given that $S$ has grounds $g$. And he takes the categorical attitude of belief in $X$ to 'match' high objective probability of $X$, and credence $x$ in $X$ to 'match' objective probability of $x$ that $X$. The problem with this account is that it leaves mysterious why justification is valuable. Unless we say that matching objective probabilities is somehow epistemically valuable in itself, it isn't clear why we should want to have justified doxastic attitudes in this sense.

I contend instead that the general form of indicator reliabilism is this:

Indicator reliabilism for justified doxastic attitude (epistemic value version)  Doxastic attitude $s$ towards proposition $X$ by agent $S$ is justified iff
(EIA1) $S$ has $g$;
(EIA2) attitude $s$ towards $X$ by $S$ is based on $g$;
(EIA3) if $g' \subseteq g$ is a ground that $S$ has, then for every doxastic attitude $s'$ of the same sort as $s$, the expected epistemic value of attitude $s'$ towards $X$ given that $S$ has $g'$ is at most (or not much above) the expected epistemic value of attitude $s$ towards $X$ given that $S$ has $g'$.

Thus, attitude $s$ towards $X$ by $S$ is justified if $s$ is based on a ground $g$ that $S$ has, and $s$ is the attitude towards $X$ with the highest (or nearly the highest) expected epistemic value relative to the most inclusive grounds that $S$ has.

Let's consider this in the full belief case. We have:

Indicator reliabilism for justified belief (epistemic value version)  A belief in proposition $X$ by agent $S$ is justified iff
(EIB1) $S$ has $g$;
(EIB2) the belief in $X$ by $S$ is based on $g$;
(EIB3) if $g' \subseteq g$ is a ground that $S$ has, then
  1. the expected epistemic value of disbelief in $X$, given that $S$ has $g'$, is at most (or not much above) the expected epistemic value of belief in $X$, given that $S$ has $g'$;
  2. the expected epistemic value of suspension in $X$, given that $S$ has $g'$, is at most (or not much above) the expected epistemic value of belief in $X$, given that $S$ has $g'$.

To complete this, we need only an account of epistemic value. Here, the veritist's account of epistemic value runs as follows. There are three categorical doxastic attitudes towards a given proposition: belief, disbelief, and suspension of judgment. If the proposition is true, belief has greatest epistemic value, then suspension of judgment, then disbelief. If it is false, the order is reversed. It is natural to say that a belief in a truth and a disbelief in a falsehood have the same high epistemic value -- following Kenny Easwaran (paper), we denote this $R$ (for `getting it Right'), and assume $R > 0$. And it is natural to say that a disbelief in a truth and a belief in a falsehood have the same low epistemic value -- again following Easwaran, we denote this $-W$ (for `getting it Wrong'), and assume $W > 0$. And finally it is natural to say that suspension of judgment in a truth has the same epistemic value as suspension of judgment in a falsehood, and both have epistemic value 0. We assume that $W > R$, just as Easwaran does. Now, suppose proposition $X$ has objective probability $p$. Then the expected epistemic value of each categorical doxastic attitude towards $X$ is given below:
  • Expected epistemic value of belief in $X$ = $p\cdot R + (1-p)\cdot(-W)$.
  • Expected epistemic value of suspension in $X$ = $p\cdot 0 + (1-p)\cdot 0 = 0$.
  • Expected epistemic value of disbelief in $X$ = $p\cdot (-W) + (1-p)\cdot R$.
Thus, belief in $X$ has greatest epistemic value amongst the possible categorical doxastic attitudes to $X$ if $p > \frac{W}{R+W}$;  disbelief in $X$ has greatest epistemic value if $p < \frac{R}{R+W}$; and suspension in $X$ has greatest value if $\frac{R}{R+W} < p < \frac{W}{R+W}$ (at $p = \frac{W}{R+W}$, belief ties with suspension; at $p = \frac{R}{R+W}$, disbelief ties with suspension). With this in hand, we have the following version of indicator reliabilism for justified beliefs:

Indicator reliabilism for justified belief (veritist version)  A belief in $X$ by agent $S$ is justified iff
(EIB1$^*$) $S$ has $g$;
(EIB2$^*$) the belief in $X$ by $S$ is based on $g$;
(EIB3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) greater than $\frac{W}{R+W}$;
(EIB4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) greater than $\frac{W}{R+W}$.

And of course this is simply a more explicit version of standard indicator reliabilism for justified belief. It is more explicit because it gives a particular threshold above which the objective probability of $X$ given that $S$ has $g$ counts as 'high', and above which (or not much below which) the belief in $X$ by $S$ counts as justified -- that threshold is $\frac{W}{R+W}$.
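
For the record, here is the short derivation of that threshold from the expected epistemic values listed above -- just algebra on the assumptions already in play. Belief does at least as well as suspension in expectation just in case

$$p\cdot R + (1-p)\cdot (-W) \geq 0 \iff p\cdot (R+W) \geq W \iff p \geq \frac{W}{R+W}.$$

The same rearrangement with disbelief in place of belief shows that disbelief does at least as well as suspension just in case $p \leq \frac{R}{R+W}$.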

Note that this epistemic value version of indicator reliabilism for justified doxastic states also gives a straightforward account of when a suspension of judgment is justified. Simply replace (EIB3$^*$) and (EIB4$^*$) with:

(EIS3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) between $\frac{R}{R+W}$ and $\frac{W}{R+W}$;
(EIS4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) between $\frac{R}{R+W}$ and $\frac{W}{R+W}$.

And it tells us when a disbelief is justified. This time, replace (EIB3$^*$) and (EIB4$^*$) with:

(EID3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) less than $\frac{R}{R+W}$;
(EID4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) less than $\frac{R}{R+W}$.
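
A quick numerical illustration may help -- the particular values of $R$ and $W$ here are mine, chosen only so that $W > R$. If $R = 1$ and $W = 3$, then

$$\frac{W}{R+W} = \frac{3}{4} \qquad \mbox{and} \qquad \frac{R}{R+W} = \frac{1}{4}.$$

So, on this assignment, belief in $X$ is justified only if the objective probability of $X$ given one's most inclusive ground is (nearly) above 0.75, disbelief only if it is (nearly) below 0.25, and suspension only if it lies (nearly) between the two.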

Next, let's turn to indicator reliabilism for justified credence. Here's the epistemic value version:

Indicator reliabilism for justified credence (epistemic value version) A credence of $x$ in proposition $X$ by agent $S$ is justified iff
(EIC1) $S$ has $g$;
(EIC2) credence $x$ in $X$ by $S$ is based on $g$;
(EIC3) if $g' \subseteq g$ is a ground that $S$ has, then for every credence $x'$, the expected epistemic value of credence $x'$ in $X$ given that $S$ has $g'$ is at most (or not much above) the expected epistemic value of credence $x$ in $X$ given that $S$ has $g'$.

Again, to complete this, we need an account of epistemic value for credences. As noted above, the veritist holds that the sole fundamental source of epistemic value for credences is their accuracy. There is a lot to be said about different potential measures of the accuracy of a credence -- see, for instance, Jim Joyce's 2009 paper 'Accuracy and Coherence', chapters 3 & 4 of my 2016 book Accuracy and the Laws of Credence, or Ben Levinstein's forthcoming paper 'A Pragmatist's Guide to Epistemic Utility'. But here I will say only this: we assume that those measures are continuous and strictly proper. That is, we assume: (i) the accuracy of a credence is a continuous function of that credence; and (ii) relative to any probability $x$ in a proposition $X$, credence $x$ in $X$ has strictly greater expected accuracy than any other credence $x' \neq x$ in $X$. These two assumptions are widespread in the literature on accuracy-first epistemology, and they are required for many of the central arguments in that area. Given veritism and the continuity and strict propriety of the accuracy measures, (EIC3) is provably equivalent to the conjunction of:

(EIC3$^*$) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;
(EIC4$^*$) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.

But of course EIC3$^*$ = TIC3 and EIC4$^*$ = TIC4 from above. Thus, the veritist version of indicator reliabilism for justified credences is equivalent to Tang's indicator reliabilism, and thus to the calibrationist version of process reliabilism.
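
Since that equivalence turns entirely on strict propriety, here is a minimal numerical check of the key fact, using the Brier score -- one standard example of a continuous, strictly proper accuracy measure. (The check is mine, not Tang's.) Relative to probability $p$, the credence with the highest expected accuracy is $p$ itself:

    # Brier accuracy of credence x in X: -(1-x)^2 if X is true, -x^2 if X is false.
    def brier_accuracy(x, is_true):
        return -(1 - x) ** 2 if is_true else -(x ** 2)

    def expected_accuracy(x, p):
        # Expected accuracy of credence x when X has objective probability p.
        return p * brier_accuracy(x, True) + (1 - p) * brier_accuracy(x, False)

    p = 0.7
    grid = [i / 1000 for i in range(1001)]
    print(max(grid, key=lambda x: expected_accuracy(x, p)))  # 0.7: maximized at x = p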

Epistemic value version of process reliabilism


Next, let's turn to process reliabilism. How might we give an epistemic value version of that? The mistake made by the calibrationist version of process reliabilism is of the same sort as the mistake made by Tang in his formulation of indicator reliabilism -- both generalise from the case of full beliefs in the wrong way by mistaking an accidental feature for the salient feature. For the calibrationist, a full belief is justified if it is formed by a reliable process, and a process is reliable if a high proportion of the beliefs it produces are true. Now, notice that there is a sense in which such a process is calibrated: a belief is associated with a high degree of confidence, and that matches, at least approximately, the high truth-ratio of the process. In fact, we want to say that such a process is belief-reliable. For it is possible for a process to be reliable in its formation of beliefs, but not in its formation of disbeliefs. So a process is disbelief-reliable if a high proportion of the disbeliefs it produces are false. And we might say that a process is suspension-reliable if a middling proportion of the suspensions it forms are true and a middling proportion are false. In each case, corresponding to each sort of categorical doxastic attitude $s$, there is a fitting proportion $x$ such that a process is $s$-reliable if $x$ is (approximately) the proportion of truths amongst the propositions to which it assigns $s$. Applying this in the credal case gives us the calibrationist version of process reliabilism that we have already met -- a credence $x$ in $X$ is justified if it is formed by a process whose truth-ratio for a given credence is equal to that credence. However, being the product of a belief-reliable process is not the feature of a belief in virtue of which it is justified. Rather, a belief is justified if it is the product of a process that has high expected epistemic value.

Process reliabilism for justified doxastic attitude (epistemic value version)  Doxastic attitude $s$ towards proposition $X$ by agent $S$ is justified iff
(EPA1-$\beta$) $s$ is produced by a process $\rho$;
(EPA2-$\beta$) If $\rho'$ is a process that is available to $S$, then the expected epistemic value of $\rho'$ is at most (or not much more than) the expected epistemic value of $\rho$.

That is, a doxastic attitude is justified for an agent if it is the output of a process that maximizes or nearly maximizes expected epistemic value amongst all processes that are available to her. To complete this account, we must say which processes count as available to an agent. To answer this, recall Comesaña's solution to the Generality Problem. On this solution, the only processes that interest us have the form: the process of producing doxastic attitude $s$ towards $X$ on the basis of ground $g$. Clearly, a process of this form is available to an agent exactly when the agent has ground $g$. This gives:

Process reliabilism for justified doxastic attitudes (epistemic value version)  Attitude $s$ towards proposition $X$ by $S$ is justified iff
(EPA1-$\alpha$) $s$ is produced by process $\rho^g_{s, X}$;
(EPA2-$\alpha$) If $g' \subseteq g$ is a ground that $S$ has, then for every doxastic attitude $s'$, the expected epistemic value of process $\rho^{g'}_{s', X}$ is at most (or not much more than) the expected epistemic value of process $\rho^{g}_{s, X}$.

Thus, in the case of full beliefs, we have:

Process reliabilism for justified belief (epistemic value version)  A belief in proposition $X$ by agent $S$ is justified iff
(EPB1) Belief in $X$ is produced by process $\rho^g_{\mathrm{bel}, X}$;
(EPB2) if $g' \subseteq g$ is a ground that $S$ has, then
  1. the expected epistemic value of process $\rho^{g'}_{\mathrm{dis}, X}$ is at most (or not much more than) the expected epistemic value of process $\rho^g_{\mathrm{bel}, X}$;
  2. the expected epistemic value of process $\rho^{g'}_{\mathrm{sus}, X}$ is at most (or not much more than) the expected epistemic value of process $\rho^g_{\mathrm{bel}, X}$.

And it is easy to see that (EPB1) = (EIB1) + (EIB2), since belief in $X$ is produced by process $\rho^g_{\mathrm{bel}, X}$ iff $S$ has ground $g$ and a belief in $X$ by $S$ is based on $g$. Also, (EPB2) is equivalent to (EIB3). Thus, as for the epistemic version of indicator reliabilism, we get:

Process reliabilism for justified belief (veritist version)  A belief in $X$ by agent $S$ is justified iff
(EPB1$^*$) $S$ has $g$;
(EPB2$^*$) the belief in $X$ by $S$ is based on $g$;
(EPB3$^*$) the objective probability of $X$ given that $S$ has $g$ is (nearly) greater than $\frac{W}{R+W}$;
(EPB4$^*$) there is no more inclusive ground $g'$ such that (a) $S$ has $g'$ and (b) the objective probability of $X$ given that $S$ has $g'$ is not (nearly) greater than $\frac{W}{R+W}$.

Next, consider how the epistemic value version of process reliabilism applies to credences.

Process reliabilism for justified credence (epistemic value version)  A credence of $x$ in proposition $X$ by agent $S$ is justified iff
(EPC1) the credence of $x$ in $X$ is produced by process $\rho^g_{x, X}$;
(EPC2) if $g' \subseteq g$ is a ground that $S$ has and $x'$ is a credence, then the expected epistemic value of process $\rho^{g'}_{x', X}$ is at most (or not much more than) the expected epistemic value of process $\rho^g_{x, X}$.

As before, we see that (EPC1) is equivalent to (EIC1) + (EIC2). And, provided the measure of accuracy is strictly proper and continuous, we get that (EPC2) is equivalent to (EIC3). So, once again, we arrive at the same summit. The routes taken by Tang, Dunn, and the epistemic value versions of process and indicator reliabilism all lead to the same spot, namely, the following account of justified credence:

Reliabilism for justified credence (epistemic value version)  A credence of $x$ in proposition $X$ by agent $S$ is justified iff
(ERC1) $S$ has $g$;
(ERC2) credence $x$ in $X$ by $S$ is based on $g$;
(ERC3) the objective probability of $X$ given that the agent has ground $g$ approximates or equals $x$ -- that is, $P(X | \mbox{$S$ has $g$}) \approx x$;
(ERC4) there is no more inclusive ground $g'$ such that (i) $S$ has $g'$ and (ii) the objective probability of $X$ given that the agent has ground $g'$ does not equal or approximate $x$ -- that is, $P(X | \mbox{$S$ has $g'$}) \not \approx x$.
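
Finally, note that this account returns the verdicts with which we began, at least on natural assumptions about the case. Aafira's ground is her observation of today's weather together with her past experience of days like today, and on the hypothetical frequentist reading,

$$P(\mbox{sunny tomorrow} | \mbox{Aafira has her ground}) \approx 0.9,$$

so (ERC3) is satisfied; and, provided she has no more inclusive ground relative to which the objective probability of sunshine differs markedly from 0.9, (ERC4) is satisfied too, and her credence is justified. Halim's ground is his desire that tomorrow be pleasant, and there is no reason to expect the objective probability of sunshine, given that he has that desire, to be anywhere near 0.9 -- so (ERC3) fails, and his credence is unjustified.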