

Updating for Externalists

J. Dmitri Gallow†

The character I’ll call ‘the internalist’ holds that your evidence can never fail to tell you what your total evidence is. If your total evidence is e, then you must have the evidence that your total evidence is e. The character I’ll call ‘the externalist’ denies this (§1). An update is a strategy for revising your degrees of belief, or credences, in response to the outcome of an experiment. The internalist has their update: upon learning e, adopt your pre-experimental credences conditional on e. This is the rule of conditionalization. Salow (forthcoming) teaches that if the externalist adopts conditionalization, then they will be capable of engaging in deliberate self-delusion—designing experiments which are guaranteed to raise their credence that ϕ as high as they like, independent of whether ϕ is true or false (§3). This is not rational inquiry, and no sensible epistemology will call it such. The externalist should reject conditionalization. So the externalist is in need of an update. I have one to offer (§4).

1 Internalism and Externalism

In general, to be an internalist is to think that some condition is within our epistemic reach. To be an externalist is to deny this. Given some condition c, an internalist claims that, if you satisfy c, then you have access to the fact that you satisfy c. And an externalist believes that you can satisfy condition c without having access to the fact that you satisfy c. Different conditions and different kinds of access yield different breeds of internalism and externalism. Let the condition be possessing the total evidence e. Say that you have access to a fact iff it is entailed by your evidence. Then, the internalist says: whenever e is your total evidence, your evidence entails that e is your total evidence. The externalist: on the contrary, sometimes e can be your total evidence without you having the evidence that e is your total evidence.

Internalism
For all propositions e, and all times t, necessarily, if e is your total

Draft of July 12, 2018; Word count: 10,599. Comments appreciated: [email protected]

† For helpful conversations and feedback on this material, I am indebted to Adam Bjorndahl, Catrin Campbell-Moore, Jeremy Goodman, Stephen Mackereth, Pablo Zendejas Medina, Jim Pryor, Teddy Seidenfeld, and Robert Steel. Thanks also to attendees of the Updating and Experience conference at Ruhr University, Bochum and participants in the Formal Epistemology Seminar at Carnegie Mellon University.


evidence at t, then at t you have the evidence that e is your total evidence at t.

□ (Tt e → Et Tt e)

Externalism
For some proposition e, and some time t, possibly, you have the total time t evidence e without having the evidence at t that e is your total time t evidence.

◊ (Tt e ∧¬Et Tt e )

‘Tt e’ denotes the proposition that e is your total evidence at t. ‘Et e’, that e is entailed by your evidence at t.

The plausibility of internalism and externalism will depend in part upon our conception of evidence. My primary focus here will be on the way that these theses interact with the principle of conditionalization (to be introduced below). Conditionalization presupposes that your total evidence will be the strongest proposition about which certainty has been rationalized. For this reason, throughout, I will understand ‘Et e’ as saying at least that, at t, it has become rational for you to be certain that e. And I will understand ‘Tt e’ as saying at least that, at t, it has become rational for you to be certain that e, and it has not become rational for you to be certain about anything stronger than e. Beware: others use ‘evidence’ differently.1 Insofar as others utilize a conception of evidence on which your total evidence can be e without it being rational for you to be certain that e, or one on which you can be certain of ϕ even when ϕ is not entailed by your total evidence, the theses they dub ‘internalism’ and ‘externalism’ may differ from the ones I am discussing.

1.1 Internalism and Externalism with Kripke Frames

We may provide a standard Kripke semantics for the operators Et and Tt. On that semantics, Et is a familiar necessity modal—⌜Et e⌝ is true at w iff ⌜e⌝ is true at all worlds to which w bears a time t accessibility relation, Rt. Tt is less familiar, but its semantics are simple enough: ⌜Tt e⌝ is true at w iff ⌜e⌝ is true at all and only worlds to which w bears Rt. (Going forward, I suppress time indices.) If we assume that evidence is factive, then the internalist’s thesis, Te → ETe, is equivalent to the S5, or ‘negative access’, principle ¬Ee → E¬Ee. S5 says that the lack of evidence is always evidence itself. It is equivalent to the conjunction of the S4, or ‘positive access’, principle Ee → EEe and the Brouwer (‘B’) principle ¬E¬Ee → e. S4 says that the possession of evidence is always evidence itself. B says that any evidence you might have, for all your evidence has to say, is true.
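These truth clauses are easy to realize in a toy model. Here is a minimal Python sketch—the frame and all names are illustrative, not drawn from the text—in which worlds are labels, the accessibility relation R maps each world to the set of worlds it accesses, and propositions are sets of worlds:

```python
# A toy Kripke frame. Worlds are labels; R maps each world to the set of
# worlds it accesses; propositions are sets of worlds.

def E(R, w, p):
    """'Ep' is true at w iff p holds at every world accessible from w."""
    return R[w] <= p

def T(R, w, p):
    """'Tp' is true at w iff p holds at all and only the worlds accessible from w."""
    return R[w] == p

# A frame on which the internalist thesis Te -> ETe fails. At w1 and w2
# your total evidence is e1; at w3 and w4 it is e2.
e1 = {"w1", "w2", "w3"}
e2 = {"w2", "w3", "w4"}
R = {"w1": e1, "w2": e1, "w3": e2, "w4": e2}

Te1 = {w for w in R if T(R, w, e1)}   # worlds where e1 is your total evidence

print(T(R, "w1", e1))   # True: e1 is your total evidence at w1...
print(E(R, "w1", Te1))  # False: ...but your evidence there does not entail Te1
```

In this frame, e1 is your total evidence at w1, yet your evidence at w1 does not entail Te1—so the frame is externalist in the sense just defined.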

Williamson teaches that cases of perceptual illusion counterexample B. In the bad case, you look at a white wall illuminated with red lighting. In the good case, you look at a red wall illuminated with white lighting. In the bad case, it is false that the wall is red, but your evidence does not rule out that you are in the

1 In particular, Gallow (2014), Hild (1998b,a) and Schoenfield (forthcoming) (see §4).


Figure 1: A distant and brief glimpse at the unmarked clock (1a) provides the evidence that the clock hand is positioned within some interval of values [u, l] (1b); and you’ve learned that, if the clock hand is positioned at θ degrees from 12 o’clock, then the glimpse does not provide the evidence that the clock hand is no more than θ degrees from 12 o’clock (1c). These assumptions contradict the ‘positive access’ principle for evidence, Eϕ → EEϕ.

good case. In the good case, you have the evidence that the wall is red. So, in the bad case, your evidence does not rule out that you have the evidence that the wall is red. Even so, the wall is not red. So ¬E¬Er ∧ ¬r, where ‘r’ is that the wall is red. So B is false. The internalist will deny that in the good case you have the evidence that the wall is red. Rather, in both the good and the bad case, you merely have the evidence that the wall appears red, and/or that you believe the wall is red. B is salvaged—though skepticism looms.

Williamson similarly teaches that cases in which our perceptual knowledge is inexact counterexample the ‘positive access’ principle S4. Off in the distance, you catch a glimpse of an unmarked clock (see figure 1a). Your vision is good enough for you to get the evidence that the hand is on the right-hand side of the clock. And though you likely learn something stronger still, you don’t learn the precise location of the clock hand.2 At most, you learn that the clock hand is located in some interval (see figure 1b). Grant also that, if the clock hand is located at some position l, then you won’t learn that the clock hand is located within some interval that has l as an endpoint (see figure 1c). Grant not only that this is true, but that you’ve learned it.

These assumptions contradict S4.3 For the following three claims are inconsistent (I use ‘θ’ for the position of the clock hand).

A1) The strongest proposition you learn about the position of the clock hand is that it lies in some (non-trivial) interval—call it ‘[u, l]’ (‘u’ for upper and ‘l’ for lower).

A2) You have the evidence that, if the clock hand is located at l , then your

2 Throughout, I will use ‘learn that e’ to mean ‘acquire the evidence e’.

3 We must also take on board the following assumptions, all of which are presupposed by the usual possible worlds semantics: 1) the evidence operator E satisfies the K-axiom; i.e., E(ϕ→ψ) → Eϕ→Eψ; and 2) if ϕ is evidence and ψ is logically equivalent to ϕ, then ψ is evidence as well.


evidence doesn’t tell you that it is located no further than l .

E[θ = l → ¬E(θ ≤ l)]

A3) The possession of evidence is always evidence itself.

Eϕ→EEϕ

By contraposition on (A2),

(1) E[E(θ ≤ l) → θ ≠ l]

Assuming that the evidence operator E satisfies the K -axiom, (1) entails (2).

(2) EE(θ ≤ l) → E(θ ≠ l)

(A1) entails (3).

(3) E(θ ≤ l)

From (3) and (A3), we have

(4) EE(θ ≤ l)

From (4) and (2),

(5) E(θ ≠ l)

(5) contradicts (A1), which assured us that the strongest thing you learned about the position of the clock hand was that it was within the interval [u, l]. Since this does not entail θ ≠ l, (A1) tells us that you cannot have learned it.4

So (A1), (A2), and (A3) are inconsistent. Williamson thinks that (A3), the positive access principle, is the least plausible of the three; but others, like Salow (forthcoming) and Stalnaker (2009), choose instead to reject (A2) and retain the positive access principle.5
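The failure of positive access in the clock case can be exhibited in a discretized toy model. The Python sketch below—with twelve hand positions and a one-step margin of error, both simplifying assumptions of mine rather than the text’s—displays a world at which you have the evidence e but lack the evidence that you have it:

```python
# A discretized stand-in for the clock: twelve hand positions, and a glimpse
# that narrows the position down to within one step of the truth.
positions = range(12)

def R(w):
    """Worlds compatible with your evidence when the hand is at w."""
    return {(w - 1) % 12, w, (w + 1) % 12}

def E(w, p):
    """p is entailed by your evidence at w."""
    return R(w) <= p

e = R(3)                                 # at w = 3 your total evidence is {2, 3, 4}
Ee = {w for w in positions if E(w, e)}   # where you have the evidence e: only {3}

print(E(3, e))    # True: you have the evidence e...
print(E(3, Ee))   # False: ...but not the evidence that you have it
```

At every other position compatible with e, your glimpse leaves open worlds outside e, so Ee is strictly stronger than e—and your evidence at w = 3 does not entail it.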

4 The reader may be wondering whether this contradiction may be avoided by exchanging (A1)’s closed interval [u, l] for an open one (u, l)—call the resulting claim ‘(A1∗)’. (A1∗) will be inconsistent with (A3) and the following principle, for any choice of ε > 0, no matter how small—the reasoning is exactly the same as in the body, mutatis mutandis:

E[θ = l − ε → ¬E(θ < l)]

5 Stalnaker (2009) does not explicitly discuss the positive access principle for evidence; his focus is S4 for rational belief.


1.2 Internalism and Externalism with Experiments

(A1) and (A2) are enough to counterexample S4, but we could make do with less. Say that the strongest thing you stand to learn is some proposition about the position of the clock hand. Go on to say that you may learn that it lies in the interval I1 (and no more); and likewise, you may learn that it lies in the interval I2 (and no more). Add that I1 and I2 overlap, and you will have contradicted S4. For if you are in a position to learn one of two overlapping evidence propositions (and no more), then internalism’s S5 principle is false. If one potential piece of evidence entails the other, then B is false. Else, S4 is false.

Call a set of propositions E = {e1, e2, . . . , eN} an experiment. Intuitively, E is the set of propositions you might come to learn. Say you are conducting the experiment E at time t iff the following is true of you.

B1) Your time t total evidence might be e1.

B2) Your time t total evidence might be e2.

...

BN ) Your time t total evidence might be eN .

BN+1) It must be that: either your time t total evidence is e1, or your time t total evidence is e2, or . . . , or your time t total evidence is eN.

This is a broad notion of ‘conducting an experiment’. All it takes to conduct an experiment in this sense is for there to be a set of propositions you might come to learn at t. Opening a drawer to find pens, checking the front page of the New York Times, and looking at your wristwatch all count as conducting an experiment in this sense.6

By definition, if you conduct the experiment E = {e1, e2, . . . , eN} at time t, {Tt e1, Tt e2, . . . , Tt eN} is a partition.7 The definition leaves open whether E itself is a partition. Consider figure 2. You will either learn e1 (and no more) or e2 (and no more), and e1 and e2 are consistent. Even though {Tt e1, Tt e2} forms a partition, {e1, e2} does not. The internalist insists that, necessarily, any experiment you conduct is a partition. This is equivalent to their thesis. The following are all equivalent,

C1) For all e and all t , necessarily, Tt e →Et Tt e .

C2) For all e and all t , necessarily, ¬Et e →Et¬Et e .

C3) For all e and all t , necessarily, both Et e →Et Et e and ¬Et¬Et e → e .

C4) For all t, necessarily, if you conduct the experiment E at t, then E is a partition.

6 I borrow terminology from Greaves & Wallace (2006).

7 For our purposes, a partition is a set of mutually exclusive and jointly exhaustive propositions—a set of propositions exactly one of which must be true (read the ‘must’ as epistemic).


Figure 2: There are four epistemically possible worlds, {w1, w2, w3, w4}. You conduct the experiment {e1, e2} at t, where e1 = {w1, w2, w3}, e2 = {w2, w3, w4}, Tt e1 = {w1, w2}, and Tt e2 = {w3, w4}. Though {Tt e1, Tt e2} is a partition, {e1, e2} is not.
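The caption’s claim can be verified mechanically. A small Python sketch of the figure-2 frame (the helper name is mine):

```python
# The frame of figure 2, with propositions as sets of worlds.
worlds = {"w1", "w2", "w3", "w4"}
e1 = {"w1", "w2", "w3"}
e2 = {"w2", "w3", "w4"}
Te1 = {"w1", "w2"}   # worlds where e1 is your total evidence
Te2 = {"w3", "w4"}   # worlds where e2 is your total evidence

def is_partition(cells, space):
    """Mutually exclusive and jointly exhaustive over `space`."""
    exhaustive = set().union(*cells) == space
    exclusive = all(a.isdisjoint(b) for i, a in enumerate(cells)
                    for b in cells[i + 1:])
    return exhaustive and exclusive

print(is_partition([Te1, Te2], worlds))  # True
print(is_partition([e1, e2], worlds))    # False: e1 and e2 overlap at w2, w3
```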

The internalist accepts (C1–C4), while the externalist accepts their negations, (D1–D4).

D1) For some e and some t , possibly Tt e ∧¬Et Tt e .

D2) For some e and some t , possibly ¬Et e ∧¬Et¬Et e .

D3) For some e and some t, possibly either Et e ∧ ¬Et Et e or ¬Et¬Et e ∧ ¬e.

D4) For some t, possibly you conduct a non-partitional experiment E at t.

It follows from internalism that, in any experiment you might conduct, if you might learn e and no more, then e could not be true without you learning it—the propositions e and Te must be true at the same possibilities.

There are two ways of understanding the internalist thesis, corresponding to two different readings of the ‘necessarily’s in (C1–C4). We could read them as either metaphysical or epistemic necessity modals. On the metaphysical reading, internalism says that it is metaphysically impossible for you to ever possess total evidence e without possessing the evidence that e is your total evidence. On the epistemic reading, internalism says that you will always be able to rule out, in advance, that you will acquire total evidence e without also acquiring the evidence that e is your total evidence. The corresponding readings of the ‘possibly’s in (D1–D4) give us epistemic and metaphysical flavors of externalism.

Both forms of internalism and externalism are interesting, but I will confine attention to the epistemic versions of the views. My topic is updating strategies, and your update strategy should depend upon what is epistemically possible for you. Suppose—perhaps per impossibile—that, while it is metaphysically possible for Te to be true without you learning it, this is not epistemically possible. Then you could not rationally plan for the contingency in which you learn e without also learning Te. From your benighted perspective, this contingency is impossible. And it’s not rational to plan for the impossible.

Our disputants are therefore debating about the propriety of certain prospective epistemic states. Is it rationally permissible for you to foresee possibilities in which you get the total evidence e without getting the evidence Te? The internalist says: no, definitely not. The externalist: yes, perhaps.


1.3 Introspective Evidence

The internalist places restrictions on the kinds of evidence you can foresee yourself receiving. These restrictions can appear unduly strong. Consider the following case.

sneak peek
You and Bonnie are playing a game involving three cups and a ball. While your back is turned, Bonnie places the ball under one of the cups and shuffles them around. When she’s done, you will attempt to guess which cup hides the ball. If you guess correctly, you win; if not, Bonnie wins. You’re certain that there’s no funny business, so that the ball is either beneath cup 1, cup 2, or cup 3 (call these propositions ‘1’, ‘2’, and ‘3’, respectively). At t, while the cups are being shuffled, your accomplice is going to attempt to distract Bonnie, and you’re going to sneak a peek under the cup closest to you at t, if you can. However, you don’t know which cup will be closest to you at t, and therefore, you don’t know which cup you’ll try to look under. Nor do you know whether you’ll be successful.

In sneak peek, it is reasonable to think that 1, 2, and 3 are each equally likely. Prima facie, you might acquire any of the following total evidence propositions:

E1) Nothing at all (⊤), which you will learn if you aren’t able to sneak a peek at t.

E2) The ball is not beneath cup 1 (¬1), which you will learn if you look beneath cup 1 at t and don’t see the ball.

E3) The ball is not beneath cup 2 (¬2), which you will learn if you look beneath cup 2 at t and don’t see the ball.

E4) The ball is not beneath cup 3 (¬3), which you will learn if you look beneath cup 3 at t and don’t see the ball.

E5) The ball is beneath cup 1 (1), which you will learn if you look beneath cup 1 at t and see a ball.

E6) The ball is beneath cup 2 (2), which you will learn if you look beneath cup 2 at t and see a ball.

E7) The ball is beneath cup 3 (3), which you will learn if you look beneath cup 3 at t and see a ball.

If that’s right, then the experiment you are conducting at t does not form a partition. Since it is clearly possible to conduct this experiment, doesn’t sneak peek suffice to establish externalism?

It does not. The internalist should grant that each of (E1–E7) could be the strongest thing you learn about the location of the ball. However, if you are rational, you will be certain to receive more evidence than this. For instance, you will


be certain to also receive the evidence of how your credences have changed in response to this evidence about the location of the ball. Call evidence of this kind ‘introspective evidence’. Even though the consistent propositions ¬1 and ¬2 both could be the strongest propositions you learn about the location of the ball, neither ¬1 nor ¬2 could be the strongest proposition you learn full stop. If you learn that the ball isn’t under cup 1, and you are rational, then you will become certain that ¬1, and you will, moreover, learn that you have become certain that ¬1. If you learn that the ball isn’t under cup 2, and you are rational, then you must also learn that you’ve become certain that the ball isn’t under cup 2. And you are certain in advance that you won’t both become certain that the ball isn’t under cup 1 and that the ball isn’t under cup 2. Once your introspective evidence is taken into account, your experiment will form a partition after all.8

Our externalist wished to model your (pre-experimental) credal state in sneak peek with just three possibilities, 1, 2, and 3, each of which you took to be equally likely:

 1    2    3
1/3  1/3  1/3

This representation of your pre-experimental credal state gives our externalist all they need to know in order to say how your credences about the location of the ball should change after the experiment. At least, it does so if we suppose the principle of conditionalization. As I will understand it here, conditionalization supposes that, prior to an experiment E = {e1, e2, . . . , eN}, you should have a strategy for updating your credences in response to each potential evidence proposition e ∈ E. For each e ∈ E, use ‘Ce’ for the credence function you plan to adopt upon learning e and no more. Conditionalization says: Ce should be your pre-experimental credence function, C, conditioned on the total evidence e, C(− | e).9

conditionalization
If you conduct the experiment E with the pre-experimental credence function C, then, for each e ∈ E, your strategy for responding to total evidence e, Ce, should be to condition C on e. That is, for all propositions ϕ,10

(condi) Ce (ϕ)!= C (ϕ | e )

8 Introspective evidence is not the only kind of evidence an internalist could appeal to in order to justify their claim that, for all e, necessarily, Te → ETe. Still, this breed of internalism will be my focus here.

9 I will be taking for granted throughout that a rational credence function will be a probability. I’ll also be making the simplifying assumptions throughout that a) the number of propositions over which C is defined is finite; and b) C assigns positive credence to every proposition compatible with your evidence.

10 I place an exclamation mark over the equals sign to indicate that the equality holds with normative, and not descriptive, force. condi claims not that your strategy for responding to e will be to condition on e, but rather that it should be.


By condi, if you were to get the evidence that the ball is not under cup 1, you should transition to this new credal state:

 1    2    3
 0   1/2  1/2

And, if you were to get the evidence that the ball is not under cup 2, you should transition to this new credal state:

 1    2    3
1/2   0   1/2
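These two transitions can be computed directly. A minimal Python sketch of condi for this example, representing each evidence proposition as the set of cups it leaves open (the function and variable names are mine):

```python
from fractions import Fraction

# Pre-experimental credences over the ball's location (cups 1, 2, 3).
C = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}

def conditionalize(C, e):
    """Return C(- | e): zero out locations outside e and renormalize."""
    total = sum(p for w, p in C.items() if w in e)
    return {w: (p / total if w in e else Fraction(0)) for w, p in C.items()}

not_1 = {2, 3}  # the evidence that the ball is not beneath cup 1
not_2 = {1, 3}  # the evidence that the ball is not beneath cup 2

# Learning ¬1 sends you to (0, 1/2, 1/2); learning ¬2 to (1/2, 0, 1/2).
C_not1 = conditionalize(C, not_1)
C_not2 = conditionalize(C, not_2)
```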

While this representation serves the needs of our externalist perfectly well, it is not complete. For, if you are in the experiment E = {1, 2, 3, ¬1, ¬2, ¬3, ⊤}, then exactly one of the propositions in E will be your total evidence. For each e ∈ E, there is some credence function, Ce, you plan to adopt post-experiment if your total evidence is e. Let Ue (for update) be the proposition that you adopt the credence function Ce. A complete representation of your credal state should include these propositions as well. I assume you are certain to update to at least one of the credence functions Ce.11 For each e ≠ e∗, Ce will be distinct from Ce∗, so you will not adopt more than one of these credence functions.12 So {Ue1, Ue2, . . . , UeN} will form a partition. So there should be some possible world for each non-zero cell in the 3 × 7 grid in figure 3a. (I assume away possibilities at which Ue but not e—rational certainty is factive, and you have chosen your plans accordingly. In the figure, I suppose that you are equally likely to update to each Ce.)

Introspective propositions like these are crucial for our internalist. According to them, if you learn that the ball is under cup 1, then you must additionally learn that you have updated to the credence function C1, U1. By condi, if you stand to learn U1, then you must plan to become certain of it. The same goes for every other proposition about the position of the ball. So, while our externalist may think that your experiment is the non-partitional {1, 2, 3, ¬1, ¬2, ¬3, ⊤}, our internalist will insist that it is instead the partitional {U1, U2, U3, U¬1, U¬2, U¬3, U⊤}.

Suppose you are, actually, in the very center cell of the grid in figure 3a—fourth row, second column. The ball is under cup 2, you check cup 1, and find it empty. If your total evidence is just that the ball is not under cup 1—if you do not additionally learn that you’ve updated to C¬1—then conditioning on your total evidence will take you to the post-experimental credence distribution shown in figure 3b. On the other hand, if you additionally acquire the introspective evidence that you’ve updated to C¬1, then conditioning on this total evidence will take you to the post-experimental credence distribution shown in figure 3c.

Sneak peek is an overly simplistic example. Most externalists will think that you acquire some introspective evidence in cases like sneak peek. The cases in

11 That is: I assume that you have the ability to bind yourself to your plans well enough that you foresee no possibility of your future self not even trying to stick to the plan. I will continue to assume this throughout.

12 For all e, e is your total evidence iff e is the strongest proposition about which certainty has become rational. So Ce should be certain that e and no more, and Ce∗ should be certain that e∗ and no more. Since e ≠ e∗, Ce should not be equal to Ce∗.


(a)
       1     2     3
U1   6/42    0     0
U2     0   6/42    0
U3     0     0   6/42
U¬1    0   3/42  3/42
U¬2  3/42    0   3/42
U¬3  3/42  3/42    0
U⊤   2/42  2/42  2/42
      1/3   1/3   1/3

(b)
       1     2     3
U1     0     0     0
U2     0   6/28    0
U3     0     0   6/28
U¬1    0   3/28  3/28
U¬2    0     0   3/28
U¬3    0   3/28    0
U⊤     0   2/28  2/28
       0    1/2   1/2

(c)
       1     2     3
U1     0     0     0
U2     0     0     0
U3     0     0     0
U¬1    0    1/2   1/2
U¬2    0     0     0
U¬3    0     0     0
U⊤     0     0     0
       0    1/2   1/2

Figure 3: Figure 3b shows the result of conditioning the pre-experimental credence distribution from figure 3a on ¬1. Figure 3c shows the result of conditioning the pre-experimental credence distribution from figure 3a on U¬1.
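Both conditionings can be checked against the grid in figure 3a. A Python sketch (the cell encoding and helper names are mine), confirming that conditioning on ¬1 and conditioning on U¬1 agree in their verdicts about the ball’s location:

```python
from fractions import Fraction

F = Fraction
# Pre-experimental credences over (update, location) pairs, as in figure 3a.
grid = {
    ("U1", 1): F(6, 42),  ("U2", 2): F(6, 42),  ("U3", 3): F(6, 42),
    ("U¬1", 2): F(3, 42), ("U¬1", 3): F(3, 42),
    ("U¬2", 1): F(3, 42), ("U¬2", 3): F(3, 42),
    ("U¬3", 1): F(3, 42), ("U¬3", 2): F(3, 42),
    ("U⊤", 1): F(2, 42),  ("U⊤", 2): F(2, 42),  ("U⊤", 3): F(2, 42),
}

def condition(grid, test):
    """Keep the cells satisfying `test` and renormalize."""
    total = sum(p for cell, p in grid.items() if test(cell))
    return {cell: p / total for cell, p in grid.items() if test(cell)}

def location_marginal(grid):
    """Credences about the ball's location, summing out the update."""
    marg = {}
    for (_, loc), p in grid.items():
        marg[loc] = marg.get(loc, F(0)) + p
    return marg

on_not1 = condition(grid, lambda c: c[1] != 1)        # figure 3b
on_U_not1 = condition(grid, lambda c: c[0] == "U¬1")  # figure 3c
# Both leave you with credence 1/2 in each of cups 2 and 3.
```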

which they think we lack introspective access and conduct non-partitional experiments will be more complicated and more psychologically plausible. My goal is not to argue for externalism, but rather to develop it, so I’ll continue to focus on simple examples like sneak peek. The lessons we learn there will carry over to more realistic cases.

The distribution in figure 3a has an interesting property: when it comes to your opinions about the position of the ball, conditioning on the true introspective proposition describing your update gives exactly the same result as conditioning just on your total evidence about the position of the ball. That is, those introspective propositions do not give you any information about the position of the ball beyond the information provided by your non-introspective evidence. If that’s so, then let’s say that your experiment is introspectively neutral.

introspective neutrality
An experiment E = {(U)e1, (U)e2, . . . , (U)eN}, where each e ∈ E is a proposition about the partition P, is introspectively neutral iff your pre-experimental credences are such that, for all e and all propositions ϕ about the partition P:

C (ϕ |Ue ) = C (ϕ | e )


(I write ‘(U)ei’ to include both the possibility that the experiment is of the form Eint = {Ue1, Ue2, . . . , UeN}, as the internalist has it, or Eext = {e1, e2, . . . , eN}, as the externalist may think. The definition of introspective neutrality is intended to characterize both internalist and externalist experiments. The important point is not the form your experiment takes, but rather whether conditioning on your total evidence about the partition P13 gives the same result as conditioning on the true proposition describing your post-experimental credences.)

Introspective neutrality is a nice feature of an experiment. For the internalist, it means that, if you’re updating your degrees of belief about the partition P in accordance with condi, you don’t have to worry about the import of your introspective evidence. Not all experiments are introspectively neutral. For instance, consider the following variation of sneak peek: without knowing anything about the location of the ball, you guess that it is under cup 2. Then, Bonnie decides to show you that the ball is not under the first cup, and gives you a chance to change your guess. You’ve watched Bonnie play this game many times before, and you know that she only reveals an empty cup and gives people a chance to change their guess when they’ve guessed correctly.14 So we can represent your credal state—after making your guess but before learning that the ball is not under the first cup—like this:

       1     2     3
U¬1    0    1/6    0
U¬3    0    1/6    0
U⊤    2/6    0    2/6
      1/3   1/3   1/3

This experiment will not be introspectively neutral. For conditioning on U¬1 will leave you certain that the ball is under cup 2, while conditioning on ¬1 will leave you thinking that the ball is as likely to be under cup 2 as it is to be under cup 3.
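This failure of introspective neutrality is easy to confirm numerically. A Python sketch of the grid above (the cell encoding and helper names are mine):

```python
from fractions import Fraction

F = Fraction
# The credal state above: rows are update propositions, columns the ball's
# location, cells your pre-experimental credences.
grid = {
    ("U¬1", 2): F(1, 6),
    ("U¬3", 2): F(1, 6),
    ("U⊤", 1): F(2, 6), ("U⊤", 3): F(2, 6),
}

def cred(test):
    """Total credence in the cells satisfying `test`."""
    return sum(p for cell, p in grid.items() if test(cell))

# C(2 | U¬1): Bonnie only reveals a cup when you have guessed right.
c2_given_U = cred(lambda c: c == ("U¬1", 2)) / cred(lambda c: c[0] == "U¬1")
# C(2 | ¬1): the bare information that the ball isn't under cup 1.
c2_given_not1 = cred(lambda c: c[1] == 2) / cred(lambda c: c[1] != 1)

print(c2_given_U)     # 1
print(c2_given_not1)  # 1/2
```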

Introspective neutrality is not guaranteed, but in a very general class of cases, it is within our reach. Given any credal distribution over the non-introspective propositions 1, 2, and 3, there will be some way of apportioning credence to the non-zero cells of the 3 × 7 grid in figure 3a such that the experiment Eext = {1, 2, 3, ¬1, ¬2, ¬3, ⊤} (or the experiment Eint = {U1, U2, U3, U¬1, U¬2, U¬3, U⊤}) is introspectively neutral. And the same goes not just for the partition P = {1, 2, 3} and the experiment Eext (Eint), but any other partition and any other

13 ϕ is your total evidence about the partition P iff ϕ is the strongest disjunction of cells of P which is entailed by your total evidence. In terms of a Kripke frame, ϕ is your total evidence about P at a world w iff ϕ is the union of those cells pi ∈ P such that pi ∩ {x | wRt x} ≠ ∅.

14 This was a strategy adopted by Monty Hall, the host of the game show Let’s Make a Deal, when actually playing the famous ‘Monty Hall problem’ with those who he believed would choose to switch—see Tierney (1991).


Updating for Externalists 12 of 30

          1      2      3
U¬1       0     1/6    2/6
U¬3      2/6    1/6     0

         1/3    1/3    1/3

Figure 4: Your pre-experimental credal state before conducting the experiment E = {¬1, ¬3}.

experiment in which you could learn anything at all about that partition. For an explanation of why this is so, see the proof of Proposition 1 in the appendix.

2 Externalism, Conditionalization, and Reflection

In this section, I will argue that the externalist cannot accept both conditionalization and the recommendations of van Fraassen (1984, 1995)’s principle of reflection in a particular kind of experiment. Moreover, the externalist cannot plausibly reject the recommendations of reflection in this experiment. I will conclude, then, that the externalist should reject conditionalization. This conclusion will be further substantiated in §3 with Salow’s observation that conditionalization allows the externalist to engage in deliberate self-delusion.

Suppose that externalism is correct, and you are conducting the experiment E = {¬1, ¬3}, where {1, 2, 3} is a partition and each cell is equally likely. Perhaps you’re trying to find the ball in Bonnie’s cups, you’ve guessed that it’s under cup 2, and you know that Bonnie will reveal some empty cup you haven’t guessed, either cup 1 or cup 3.15 Then, you’ll find yourself in the pre-experimental credal state shown in figure 4. Suppose you are a conditionalizer, so that you are certain that you will either condition on ¬1 or ¬3. Notice: if you condition on ¬1, your credence that 2 will rise to 1/2, since C(2 | ¬1) = 1/2. And if you condition on ¬3, your credence that 2 will rise to 1/2, since C(2 | ¬3) = 1/2. So, prior to conducting the experiment, you are certain that your credence that 2 will rise to 1/2. Why wait? Cut to the chase: go ahead and adopt a credence of 1/2 in 2 before looking. If you are a conditionalizer, then you should cut to the chase. But cutting to the chase is inconsistent with being a conditionalizer. For, if your credence remains at 1/2 after learning ¬1, you will not have updated by conditioning on ¬1. No matter your credence in 2, C(2 | ¬1) > C(2).16 If you are a conditionalizer, then you shouldn’t be. So you shouldn’t be.
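The arithmetic behind this argument can be verified mechanically. A minimal sketch, with the joint credences read off figure 4 (the world encoding is my own):

```python
from fractions import Fraction as F

# Joint credences over (update, ball-location) cells from figure 4; zero cells omitted.
C = {('U-1', 2): F(1, 6), ('U-1', 3): F(2, 6),
     ('U-3', 1): F(2, 6), ('U-3', 2): F(1, 6)}

def P(pred):
    return sum(p for w, p in C.items() if pred(w))

def cond(phi, e):
    """C(phi | e), with events given as predicates on worlds."""
    return P(lambda w: phi(w) and e(w)) / P(e)

ball2 = lambda w: w[1] == 2
print(P(ball2))                          # prior credence that 2: 1/3
print(cond(ball2, lambda w: w[1] != 1))  # C(2 | ¬1) = 1/2
print(cond(ball2, lambda w: w[1] != 3))  # C(2 | ¬3) = 1/2
```

Either way the experiment comes out, conditioning pushes the credence that 2 from 1/3 up to 1/2.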

Cutting to the chase is recommended by van Fraassen’s principle of reflection. As I’ll understand it here, reflection says that you should defer to your post-experimental self.17 You defer to your post-experimental self just in case, for

15 Cf. the ‘Monty Hall’ problem from Selvin (1975), and the ‘three prisoners paradox’ in Gardner (1961).

16 Recall from footnote 9: I assume that C(ϕ) = 0 iff ϕ is inconsistent with your evidence.
17 van Fraassen’s original principle enjoins you to defer to your future self, for all future times. Because I restrict it to apply to experiments in which you have plans for updating, the principle I call ‘reflection’ escapes many of the counterexamples to van Fraassen’s principle (see Briggs


every proposition ϕ, your current credence that ϕ is your best estimate of the credence in ϕ which you will have post-experiment. Since C is a probability, your best estimates are given by your expectations. Suppose that your experiment is E = {e1, e2, . . . , eN}, and you are certain that you will adopt one of the post-experimental credence functions in {Ce1, Ce2, . . . , CeN}. As before, let Ue be the proposition that, post-experiment, you update to the credence function Ce. Then, reflection says:

reflection
Your pre-experimental credence that ϕ should be equal to your expectation of the credence that ϕ you will have post-experiment,

(reflection)    C(ϕ) = Σ_{e∈E} Ce(ϕ) · C(Ue)

If you are a conditionalizer, then you violate reflection in the experiment E = {¬1, ¬3}. You plan to raise your credence that 2 to 1/2 no matter what, so you expect your post-experimental credence that 2 to be 1/2, yet your pre-experimental credence that 2 is 1/3.
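The violation can be computed directly from figure 4’s credences. A sketch (the world encoding is mine, and each Ce is taken to be C(· | e), as condi prescribes):

```python
from fractions import Fraction as F

# Joint credences over (update, ball-location) cells from figure 4; zero cells omitted.
C = {('U-1', 2): F(1, 6), ('U-1', 3): F(2, 6),
     ('U-3', 1): F(2, 6), ('U-3', 2): F(1, 6)}

def P(pred):
    return sum(p for w, p in C.items() if pred(w))

def cond(phi, e):
    return P(lambda w: phi(w) and e(w)) / P(e)

ball2 = lambda w: w[1] == 2
# The reflection sum: over e in {¬1, ¬3}, add Ce(2) · C(Ue).
expectation = sum(cond(ball2, lambda w, r=r: w[1] != r)
                  * P(lambda w, r=r: w[0] == f'U-{r}')
                  for r in (1, 3))
print(expectation, P(ball2))   # 1/2 vs. 1/3: reflection is violated
```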

Perhaps your post-experimental self is not worthy of deference. In externalist experiments, you will foresee possibilities in which your post-experimental credence is not the rational one to adopt, given your evidence (I return to this feature of externalism in §4; see also Elga, 2013). And an irrational future version of yourself is not one deserving of deference. You should instead defer to your rational future self: that is, for each ϕ, your pre-experimental credence that ϕ should be your best estimate of the credence that ϕ which it would be rational for your future self to adopt, given their evidence. Call this principle rational reflection.18 The credence function Ce is the rational one to adopt iff e is your total evidence, Te. So rational reflection says:

rational reflection
Your pre-experimental credence that ϕ should be equal to your expectation of your rational post-experimental credence that ϕ,

(rat-ref)    C(ϕ) = Σ_{e∈E} Ce(ϕ) · C(Te)

If you are certain to update to Ce iff e is your total evidence, C(Ue ↔ Te) = 1, say that you are immodest; else, say you are modest.19 If you are immodest, then reflection and rat-ref will agree. However, if you are modest, then reflection

(2009) for a nice taxonomy of those counterexamples). Those which remain involve credences de se. Credence de se will force us to reject or qualify reflection and conditionalization both. Still, I’ll ignore credence de se for the nonce.

18 See Christensen (2010). The principle discussed by Christensen is a purely synchronic principle requiring deference to your currently rational credences. Here, I generalize the principle to cover your future rational credences.

19 Beware: this terminology is idiosyncratic. Others will call you ‘immodest’ iff you are less than certain that your current degrees of belief are rational.


and rat-ref may come apart. In general, in non-partitional experiments, you will modestly foresee possibilities of error. So, in general, in externalist experiments, reflection and rat-ref will come apart.

However, if condi is correct, then, when it comes to your credence that 2 in the experiment E = {¬1, ¬3}, both reflection and rat-ref speak with a single voice. Pre-experiment, you know that you will either receive the total evidence ¬1 or ¬3. If your total evidence is ¬1, then condi counsels you to raise your credence in 2 to 1/2. And condi says exactly the same thing if your total evidence is ¬3. So, prior to conducting the experiment, you are certain that there is a fully rational agent (viz., the rational version of your future self) who accepts your precise epistemic standards, who has strictly more evidence than you do, and whose credence that 2 is 1/2. Why wait? Cut to the chase: given what you know, you are in a position to rationally reason your way to adopting a credence of 1/2 in the proposition 2 before looking. If you should be a conditionalizer, then you should cut to the chase, but cutting to the chase is inconsistent with being a conditionalizer. No matter your credence that 2, C(2 | ¬1) > C(2). If you should be a conditionalizer, then you shouldn’t be. So you shouldn’t be.

What does this show? We might think that it shows us that the following claims are inconsistent:20

F1) Externalism

F2) Conditionalization

F3) (Rational) reflection

But in fact these claims are not, on their own, inconsistent. There is a wide class of non-partitioning experiments in which the conditionalizer may satisfy reflection. (And likewise for rat-ref.) Take the prior credence distribution shown in figure 3a, and suppose you will conduct the non-partitioning experiment E = {1, 2, 3, ¬1, ¬2, ¬3, ⊤}. An exercise for the reader: pick any proposition ϕ (any disjunction of non-zero cells) from the 3 × 7 grid in figure 3a, and then calculate both C(ϕ) and Σ_{e∈E} C(ϕ | e) · C(Ue). You will find that they are equal, no matter which of the 2^12 = 4,096 possible propositions you pick. And there was nothing particularly special about the distribution in figure 3a. Any other experiment which is introspectively neutral has the same property. That is, if your experiment is introspectively neutral, and you update with condi, then you will satisfy reflection. To understand why this is so, consult Corollary 1 in the appendix.21 (Likewise: re-label the rows in figure 3a so as to replace ‘Ue’ with ‘Te’, and you have an externalist experiment which satisfies rat-ref. Say that an experiment E = {e1, e2, . . . , eN}, where each ei is about the partition P, is

20 I believe that the first to explicitly note this inconsistency was Hild (1998a,b).
21 Some readers will think that this follows trivially from the definition of introspective neutrality and the law of total probability. Not so. The definition of introspective neutrality only claims that C(ϕ | Ue) = C(ϕ | e) for propositions ϕ about the partition P (that is, propositions which are disjunctions of cells of P). The principle reflection applies to all propositions; not just those about the partition P.


evidentially neutral iff your pre-experimental credences are such that, for all e ∈ E and all propositions ϕ about P, C(ϕ | Te) = C(ϕ | e). If your experiment is evidentially neutral, the recommendations of condi will abide by rat-ref. To understand why, consult Corollary 2 in the appendix.)

The additional assumption needed to get a contradiction out of (F1), (F2), and (F3) is that you may conduct an experiment like the one shown in figure 4. What is it about this experiment that leads to a contradiction between (F1), (F2), and (F3)? As a first pass characterization: this is an experiment in which some proposition may be ruled out, but another proposition (in this case, 2) is guaranteed in advance to not be ruled out. It is for this reason that, in our experiment, 2’s credence may go up but definitely won’t go down. Of course, the trivial proposition ⊤ is also guaranteed to not be ruled out in any experiment. The difference between ⊤ and 2 is that 2, unlike ⊤, is an atom of the experiment E.

Given the experiment E = {e1, e2, . . . , eN}, an atom of E is a non-empty conjunction of (negations of) the propositions in E. If you draw a Venn diagram containing a circle for each potential piece of evidence in E, the atoms of E are the smallest inscribed regions. By way of illustration, in our experiment E = {¬1, ¬3}, ¬1 ∩ ¬3 = 2 is an atom, as are ¬¬1 ∩ ¬3 = 1 and ¬1 ∩ ¬¬3 = 3. And these are the only atoms of this experiment, since all other conjunctions of (negations of) propositions in E are empty.
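The atoms of a small experiment can be enumerated mechanically. A minimal sketch, representing propositions as sets of worlds (the encoding is mine):

```python
from itertools import product

worlds = frozenset({1, 2, 3})
E = [frozenset({2, 3}),   # ¬1: the ball is not under cup 1
     frozenset({1, 2})]   # ¬3: the ball is not under cup 3

def atoms(E, worlds):
    """Non-empty conjunctions of each proposition in E or its negation."""
    result = set()
    for signs in product([True, False], repeat=len(E)):
        cell = worlds
        for e, keep in zip(E, signs):
            cell = cell & (e if keep else worlds - e)
        if cell:
            result.add(cell)
    return result

print(sorted(sorted(a) for a in atoms(E, worlds)))   # [[1], [2], [3]]
```

As the text says, the three singletons 1, 2, and 3 are the only atoms of E = {¬1, ¬3}.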

There is some atom of the experiment which is guaranteed to not be ruled out, no matter what, iff the intersection of all propositions in E has positive credence, C(∩E) > 0. If, in addition, some proposition with positive credence may be ruled out (i.e., C(∩E) < 1), then, by condi, the proposition ∩E is guaranteed to have its (rational) credence not go down, and its (rational) credence may go up. So your expectation of your future (rational) credence that ∩E will be higher than your current credence that ∩E, and you will violate reflection (rat-ref). So, if there is any experiment in which 0 < C(∩E) < 1, then (F1), (F2), and (F3) are inconsistent.22

An externalist could deny the assumption that you may find yourself in experiments like these, but this would strain credibility. Suppose you find yourself in an experiment in which you might learn anything about the location of the ball, and then a trustworthy confidant informs you that Bonnie definitely won’t show you the ball, and she definitely won’t reveal what’s under cup 2. You should then be certain that the strongest thing you’ll learn about the position of the ball is either ¬1 or ¬3; and it’s unclear how this new information you’ve acquired would make any difference with respect to whether you will additionally gain introspective evidence about how your credences have changed.

So the externalist faces a choice: they can either deny condi or they can deny both reflection and rat-ref. Elga (2007, 2013) affords an argument that they should reject rat-ref.23

22 This is a sufficient, but not necessary, condition for getting a contradiction out of (F1), (F2), and (F3). Even if ∩E = ∅, E could lead to a contradiction between (F1), (F2), and (F3). Some, but not all, introspectively non-neutral experiments like this will contradict (F1), (F2), and (F3).
23 Elga only argues for a synchronic principle saying how your current credence that ϕ should


As I made the case for cutting to the chase, I said: prior to performing the experiment, you are certain that your future rational self will know all that you know and more besides. It was on this basis that I recommended deferring to their opinion about whether 2. Perhaps this claim was in error. Perhaps there is something you now know that your future rational self does not. For you now know that your future rational self is rational. In particular, you know that they have conditioned on their total evidence. However, if your total evidence ends up being ¬1, then your rational future self will give positive credence to their total evidence being ¬3. And, if their total evidence is ¬3, then they are irrationally confident in 3. Similarly, if your total evidence ends up being ¬3, then your rational future self will give non-zero credence to their total evidence being ¬1. And, if their total evidence is ¬1, then they are irrationally confident in 1. So your future rational self will not be certain that they are rational, though you are now certain that they will be. In general, you should not defer to agents when you are certain of matters about which they are ignorant; rather, you should defer to them only after apprising them of the information you have that they lack. So, you should only defer to the opinion of your future rational self after informing them of what their total evidence is. Once they have this extra information, both of your future rational selves will give the proposition 2 credence 1/3, since C(2 | T¬1) = C(2 | T¬3) = 1/3.24 In general, the externalist should defer to their rational post-experimental self in the manner prescribed by ‘new rational reflection’.25

new rational reflection
Your pre-experimental credence that ϕ should be equal to your expectation of your rational post-experimental credence that ϕ, once your rational post-experimental self has been told what its total evidence is.

(new rat-ref)    C(ϕ) = Σ_{e∈E} Ce(ϕ | Te) · C(Te)

Holding fixed the factivity of evidence, new rat-ref follows from condi. Since externalism and condi are consistent, externalism, condi, and new rat-ref are consistent as well.
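That entailment can be spot-checked in the experiment E = {¬1, ¬3}. In the sketch below (my own encoding), worlds record the ball’s position and which cup gets revealed, so Te is settled by the revealed cup; the new rat-ref sum then reproduces the prior exactly:

```python
from fractions import Fraction as F

# Worlds are (ball location, cup Bonnie reveals); your total evidence
# is ¬(revealed cup). If the ball is under cup 2, she reveals 1 or 3 at random.
C = {(2, 1): F(1, 6), (2, 3): F(1, 6),
     (3, 1): F(1, 3),   # ball under 3: she can only reveal cup 1
     (1, 3): F(1, 3)}   # ball under 1: she can only reveal cup 3

def P(pred):
    return sum(p for w, p in C.items() if pred(w))

ball2 = lambda w: w[0] == 2
lhs = P(ball2)                       # C(2) = 1/3
rhs = F(0)
for r in (1, 3):                     # Te: the total evidence is ¬r
    Te = lambda w, r=r: w[1] == r
    eTe = lambda w, r=r: w[0] != r and w[1] == r   # e ∩ Te (= Te, by factivity)
    rhs += P(lambda w: ball2(w) and eTe(w)) / P(eTe) * P(Te)   # Ce(2 | Te)·C(Te)
print(lhs, rhs)   # 1/3 1/3: new rat-ref is satisfied
```

With each Ce taken to be C(· | e), conditioning Ce further on Te collapses to conditioning C on Te itself, which is why the sum lands back on the prior 1/3.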

This diagnosis of the conflict between (F1), (F2), and (F3) is clever but not ultimately persuasive. The foregoing considerations do nothing to blunt the argument for cutting to the chase. Even supposing that, before carrying out the experiment E = {¬1, ¬3}, you know something that your future rational self does not (viz., what their total evidence is), this knowledge of yours is irrelevant with respect to the question of what degree of belief in the proposition 2 is rational. It’s true that, if your evidence is ¬1, then your post-experiment self will think they may be overly confident in 3, and if your evidence is ¬3, then they’ll

relate to your current credences about which credence that ϕ is rational. Here, I generalize his discussion, perhaps in ways he would not endorse.

24 See Hall (1994)’s distinction between a database expert and an analyst expert, and Elga’s distinction between an expert and a guru.

25 Again, the principle Elga calls ‘new rational reflection’ is synchronic.


think they may be overly confident in 1. However, in either case, they will be certain that their credence that 2 is rational. For they can, post-experiment, run precisely the same argument that you are able to run pre-experiment. They can say to themselves: “Either my total evidence was ¬1 or it was ¬3. If it was ¬1, then it’s rational for me to have credence 1/2 that 2. If it was ¬3, then it’s rational for me to have credence 1/2 that 2. So, either way, it’s rational for me to have credence 1/2 that 2.” Post-experiment, no matter what you learn, you will be rationally certain that 1/2 is the rational credence to have in the proposition 2. It’s hard to see why the fact that your future rational self is uncertain about whether their credence in some other proposition is rational gives any reason to not adopt their credence in the proposition 2, which is certainly rational.

In the second place, it’s simply not the case that, pre-experiment, you know something that your rational post-experimental self does not. It’s true that your future rational self does not have the evidence of what their total evidence is, but neither does your pre-experimental self. Likewise, it’s true that your future rational self doesn’t know whether a post-experimental credence of 1/2 in 1 is rational; but neither does your pre-experimental self. Your post-experimental rational self has all the evidence your pre-experimental self has, and more besides. And they share your precise epistemic standards. So there is no reason why you should not regard their opinion as better informed than your own. Assuming condi, they are certain to have a rational degree of belief of 1/2 that 2. So, assuming condi, you too should have a credence of 1/2 that 2.

That’s not an endorsement of reflection or rat-ref. Nor is it a criticism of new rat-ref. It is simply an endorsement of reflection and rat-ref’s advice to the conditionalizer performing the experiment E = {¬1, ¬3}: don’t wait; cut to the chase. The advice is sound, but it is inconsistent with condi. If the externalist is or should be a conditionalizer, they shouldn’t be. So they shouldn’t be. For all we’ve said so far, perhaps the externalist should also reject reflection and rat-ref. But, at a minimum, they should reject condi. (To lay my cards on the table: the update I offer the externalist in §4 entails reflection, and contradicts both rat-ref and new rat-ref in conditions of modesty.)

3 Reflection and Biased Inquiry

Externalists face a choice between the principles of reflection and conditionalization. They cannot plausibly endorse both. I argued that, whatever externalists should think of reflection generally, they should accept its recommendations in one particular experiment; and, in that experiment, its recommendations conflict with conditionalization. So externalists should reject conditionalization. There is additionally reason to accept the principle of reflection in full generality.

Salow (forthcoming) teaches that reflection is more than a principle of expert deference. Whether or not your post-experimental self is worthy of epistemic deference, the principle of reflection has important work to do in preventing rational agents from engaging in deliberate self-delusion. If your update strategy


violates reflection, you will expect to raise your credence in some proposition, irrespective of whether or not that proposition is true.

To borrow Salow’s example: let ‘p’ be the proposition that you are popular (or that you’re not, whichever you’d prefer to believe). Suppose that your rational degree of belief in p is 1/3 (though the precise value won’t matter). Suppose that it’s possible for you to conduct the experiment E = {¬1, ¬3} from figure 4, and assume condi. Then, here’s a recipe for raising your credence that p no matter what. First, tell a confidant who knows the truth about p to place the ball under cup 2 iff p. If ¬p, then they should flip a coin to decide between cups 1 and 3. Tell them to reveal to you an empty cup, but not under any circumstance to reveal what’s under cup 2. condi counsels you to raise your credence that 2 from 1/3 to 1/2, no matter what. Since you are certain that p ↔ 2, your credence that p will likewise rise from 1/3 to 1/2. And there’s no reason this experiment need be conducted only once. Run through the whole exercise again, and plan to raise your credence that p to 2/3, no matter what; and (why not?) again, raising it to 4/5 no matter what; and again, raising it to 8/9; and so on and so forth. If it is within your power to design experiments like E = {¬1, ¬3}, and if it is rational to strategize with condi, then it is rational to plan to become as confident in the proposition that you’re popular as you wish. Assuming it is rational to follow through on a rationally-formed plan, your future rational self will be as confident that p as you wish them to be.
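Each round of this recipe conditions away half of your ¬p credence (the revealed cup eliminates one of the two ¬p possibilities), sending a credence c that p to c / (c + (1−c)/2) = 2c/(1+c). The sequence Salow describes falls out immediately (a quick check):

```python
from fractions import Fraction as F

# Iterate the per-round update 2c/(1+c), starting from the prior 1/3.
# The result does not depend on whether p is true: that is the self-delusion.
c = F(1, 3)
for _ in range(4):
    c = 2 * c / (1 + c)
    print(c)   # 1/2, then 2/3, then 4/5, then 8/9
```

Iterating drives the credence that p as close to 1 as you like.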

Let us be frank: this is not rational inquiry. This is self-delusion, no more. And no sensible epistemology will deem rational the person who plans to walk away from this series of experiments nearly certain that they are popular. This is non-negotiable. So let us lay it down as a principle.

no self-delusion
A rational agent may not design an experiment and strategize to become more confident in some proposition, no matter the experiment’s outcome.

No self-delusion prohibits an extreme variety of biased inquiry: inquiry which is guaranteed to leave you more confident of some proposition. The reasons we have to prohibit this kind of biased inquiry carry over to inquiries which we merely expect to leave us more confident in some proposition. Say that an update strategy is biased in favor of a proposition ϕ iff, when enacting that strategy, your expectation of your post-experimental credence that ϕ is greater than your pre-experimental credence that ϕ. Similarly, an update strategy is biased against ϕ iff your expectation of your post-experimental credence that ϕ is less than your pre-experimental credence that ϕ. Thus, an update strategy is unbiased iff, for all ϕ,

    C(ϕ) = Σ_{e∈E} Ce(ϕ) · C(Ue)

Given this understanding of when an update strategy is biased, Salow endorses:

no biased inquiry
A rational agent will not have a biased update strategy.


Salow’s insight is that the principle of reflection is equivalent to no biased inquiry. To strategize to update in defiance of reflection is to bias your inquiry. Biasing your inquiry is irrational, so reflection is rationally required.

If E = {e1, e2, . . . , eN} is your experiment, then, by the definition of ‘experiment’, {Te1, Te2, . . . , TeN} must form a partition. By factivity, necessarily, Te → e. The internalist says that E must be a partition. If Te → e for each e ∈ E and both E and {Te1, Te2, . . . , TeN} are partitions, then they must be one and the same partition, and it must be that, necessarily, Te ↔ e. Suppose further that rationality requires immodesty. Then, reflection follows from condi. So the immodest, internalist conditionalizer will always abide by no biased inquiry.

Granting that experiments like {¬1, ¬3} from figure 4 are possible, the following are inconsistent:

G1) Externalism

G2) Conditionalization

G3) No self-delusion

No self-delusion is non-negotiable. A rational agent cannot structure their inquiry so as to become arbitrarily confident in a proposition, no matter what. Salow concludes that externalism is false. Perhaps that is the correct lesson to draw. However, I believe that a plausible version of externalism is left standing. This is a version of externalism which accepts no self-delusion by denying conditionalization. In the following section, I will provide the externalist with an alternative to conditionalization. This alternative will always satisfy the principle of reflection, so it will never permit biased inquiry.

4 Updating for Externalists

I offer a new update strategy to the externalist; but mine is not the only externalist update on the market. Gallow (2014) offers an update custom-tailored to handle cases in which the Brouwer principle is violated: cases in which e is false, though your evidence doesn’t rule out that you have the evidence e. Gallow understands these as cases in which your evidence is theory-dependent. There are two hypotheses: that Sabeen has slipped you a psychotropic drug, d, and that she hasn’t, ¬d. The drug renders you incapable of properly categorizing flavor experiences. Without the drug, you are able to recognize how things taste to you. With the drug, your beliefs about how things taste to you correlate not at all with the way they actually taste to you. You bite into the pear. Gallow says: what your evidence is depends upon which background theory is true. If ¬d, then your evidence is that the pear tastes sweet, s. If d, you have no evidence at all. In this case, Gallow suggests representing the input to your update with the set of ordered pairs, {⟨¬d, s⟩, ⟨d, ⊤⟩}. More generally, in cases of theory-dependent evidence, you will have an input {⟨ti, ei⟩}i, with the interpretation that for each i, if ti is true, then your evidence is ei. Then, Gallow endorses holistic conditionalization.


holistic conditionalization
If C is your pre-experimental credence function and {⟨ti, ei⟩}i is the input acquired in the experiment, then your rational post-experimental credence function, C+, is such that, for every proposition ϕ,

(hcondi)    C+(ϕ) = Σi C(ϕ | ti ∩ ei) · C(ti)
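As a sketch of how the rule operates, here is hcondi applied to the drug case, with a uniform prior over the four (theory, taste) worlds; the prior and the world encoding are my own illustrative choices:

```python
from fractions import Fraction as F

# Illustrative uniform prior over (theory, taste) worlds; the numbers are mine.
C = {('d', 's'): F(1, 4), ('d', '~s'): F(1, 4),
     ('~d', 's'): F(1, 4), ('~d', '~s'): F(1, 4)}

def P(pred):
    return sum(p for w, p in C.items() if pred(w))

def cond(phi, e):
    return P(lambda w: phi(w) and e(w)) / P(e)

def hcondi(inputs, phi):
    """C+(phi) = sum over i of C(phi | ti & ei) * C(ti)."""
    return sum(cond(phi, lambda w, t=t, e=e: t(w) and e(w)) * P(t)
               for t, e in inputs)

drugged = lambda w: w[0] == 'd'
not_drugged = lambda w: w[0] == '~d'
sweet = lambda w: w[1] == 's'
top = lambda w: True

inputs = [(not_drugged, sweet), (drugged, top)]   # the input {<¬d, s>, <d, ⊤>}
print(hcondi(inputs, sweet))        # 3/4
print(hcondi(inputs, not_drugged))  # 1/2: theory credences held fixed
```

Note that C+(¬d) = C(¬d) = 1/2: as discussed below, the rule holds your credence in the background theories fixed.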

On our use of ‘evidence’, ‘your evidence is e’ entails that certainty in e has been rationalized. Gallow’s rule tells you to be less than certain in your evidence. So his use of ‘evidence’ is not our own. Still, we can translate from his idiolect to ours. When Gallow says that s is your total evidence if the background theory ¬d is true and ⊤ is your total evidence if d, he says that ¬d → s is the strongest proposition about which certainty is rationalized. Thus, when Gallow says that experience has provided the input {⟨¬d, s⟩, ⟨d, ⊤⟩}, we may hear: ‘your total evidence is ¬d → s’. More generally, when Gallow says that experience has provided the input {⟨ti, ei⟩}i, we may hear: ‘your total evidence is ∩i (ti → ei)’.

hcondi correctly handles some externalist experiments, but it cannot handle all. Experiments like the one from figure 4 do not plausibly involve any theory-dependent evidence. In such cases, hcondi reduces to condi. So hcondi leads to rational self-delusion in these cases. hcondi has other problems as well. For instance, it holds fixed your credence in the various background theories, ti. But suppose you know in advance that the chance of the pear tasting sour is one in a million, while the chance of Sabeen slipping you the drug is one half. Then you should plan to be very confident you’ve been slipped the drug if the pear tastes sour. Gallow recognizes this problem, and suggests a rule for updating your credence in background theories, but the rule is rather complicated and Gallow no longer endorses it.26

Hild (1998a,b) and Schoenfield (forthcoming) both endorse an update strategy which we may call ‘evidential conditionalization’. According to this strategy, you should update by conditioning, not on your total evidence, but rather on the proposition that it is your total evidence.

evidential conditionalization
If you conduct the experiment E, with the pre-experimental credence function C, then, for each e ∈ E, your strategy for responding to total evidence e, Ce, should be to condition C on the proposition that e is your total evidence. That is, for all propositions ϕ,

(evcondi)    Ce(ϕ) = C(ϕ | Te)

where ‘Te’ is the proposition that e is your total evidence.

Hild and Schoenfield’s use of ‘evidence’ also differs from our own. They each think that e can be your total evidence even when the proposition that e is your

26 Personal communication.


total evidence is stronger than e itself. On our use of ‘total evidence’, if Te is true, and Te is stronger than e, then (by definition!) it cannot be rational to become certain that Te. Yet evcondi tells you to become certain that e is your total evidence even then. So, in evcondi, ‘total evidence’ cannot mean ‘strongest proposition about which certainty has been rationalized’. When Hild or Schoenfield says ‘your total evidence is e’, I will write ‘Te’, to distinguish their terminology from our own. As Hild and Schoenfield use ‘total evidence’, evcondi is an update strategy for externalists. However, as we have chosen to use the term, evcondi entails both internalism and conditionalization.

Proof. Your experiment contains all propositions which may be your total evidence. If Hild and Schoenfield say {e1, e2, . . . , eN} is your experiment, then {Te1, Te2, . . . , TeN} forms a partition. If Hild and Schoenfield call ei your total evidence, then evcondi says it is rational for you to become certain that Tei, and nothing stronger. So we call Tei your total evidence. So we say that {Te1, Te2, . . . , TeN} is your experiment. So what we call your experiment forms a partition, which is equivalent to internalism. evcondi says to conditionalize on what we call your total evidence. So evcondi entails conditionalization.

So whatever merit evcondi may otherwise have, it is not an externalist update, as we are using that term.

Think of it like this: experience may teach that e.27 If so, say that e is experiential evidence for you. If experience teaches that e and no more, then say that e is your total experiential evidence, and write ‘Te’. If experience may teach each ei ∈ {e1, e2, . . . , eN} (and no more), then say that {e1, e2, . . . , eN} is your experiential experiment. If {e1, e2, . . . , eN} is your experiential experiment, then the possibilities are partitioned by {Te1, Te2, . . . , TeN}. Hild and Schoenfield’s ‘evidence’ is our ‘experiential evidence’. Their ‘total evidence’ is our ‘total experiential evidence’.

In response to an experience teaching that e, you plan to adopt a new credence function—call it ‘Ce’—and let ‘Ue’ be the proposition that you have updated to Ce. Take a simple case: experience will either teach e1 (and no more) or e2 (and no more). The propositions e1 and e2 are consistent, so {e1, e2} does not partition the possibilities. Even so, {Te1, Te2} is a partition. Suppose you are immodest—for each i, you are certain that Uei ↔ Tei. Then the partitions {Ue1, Ue2} and {Te1, Te2} align; they draw the same distinction. In this special case, your response to an experience teaching ei should be certainty in Tei. So your total evidence (experiential and non-experiential alike) will be Te1 iff Te1 is true, and Te2 iff Te2 is true. So the partitions {TTe1, TTe2} and {Te1, Te2} will likewise align. In this special case, your experiment will form a partition, whether or not your experiential experiment does. (See figure 5a.)

Hild and Schoenfield think your experiential experiment need not form a partition, though they insist that your experiment always will. This is a thesis

27 ‘Experience teaches that’ is a broad and ecumenical notion. I assume very little about it. But I do assume that it is factive—if experience teaches that e, then e is true.

Page 22: Updating for Externalists - pitt.edupitt.edu/~jdg83/publication/pdfs/ufe.pdf · (2009) for a nice taxonomy of those counterexamples). Those which remain involve credences dese. Credence

Updating for Externalists 22 of 30

worth calling ‘externalism’, though it is not the thesis we have called ‘externalism’.28 Call the former thesis ‘lowercase externalism’—the latter, ‘uppercase externalism’. evcondi is a rule for lowercase externalists. It applies whenever your experiment forms a partition, whether or not your experiential experiment does.

Schoenfield argues for evcondi as follows. Represent an update strategy, σ, with a function from possible worlds to post-experimental credences. If σ(w) = C, then C is the post-experimental credence function σ says to adopt in the possibility w. Choose some measure of the accuracy of a credence function at a world, A(C, w)—but choose so that A is strictly proper.29 An update strategy is good to the extent that you expect its outputs to be accurate. Thus, the ideal strategy is the one which maps each world w to the omniscient credence function at w—the one which gives credence 1 to propositions true at w and credence 0 to propositions false at w. Though ideal, this strategy is not available to you. You are only able to update in response to your experience—so the input to your strategy should be not a possible world, but rather the state-of-affairs Te. Thus, we should require that, for all w, w∗ ∈ Te, σ(w) = σ(w∗). If we understand an update strategy being available in terms of this constraint, then the available strategy which maximizes expected accuracy is evcondi.30 You should select an available update strategy which maximizes expected accuracy. So you should strategize to update with evcondi.31

I do not disagree with Hild or Schoenfield. Their focus is narrower than mine. Their rule is for lowercase externalists who are not uppercase externalists. Neither evcondi nor Schoenfield's accuracy argument in its favor applies when you are modest. Take a simple case: experience will either teach that e1 (and no more) or e2 (and no more). The propositions e1 and e2 are consistent, so even though {Te1, Te2} partitions the possibilities, {e1, e2} does not. If experience teaches e1, you plan to update to Ce1, and if experience teaches e2, you plan to update to Ce2—but you are modest. You foresee possibilities in which you update to Ce2 even though experience teaches e1; and you foresee possibilities in which you update to Ce1 even though experience teaches e2. (See figure 5b.)

A parable: you stand in a room with a television screen and two buttons.

Outside of the room, your credences are displayed. The screen will either show that e1 or it will show that e2. If you see that e1, you will push button 1. If you see that e2, you will push button 2. Each button corresponds to a different update to your credences outside. Normally, pressing button 1 will trigger the first update, while pressing button 2 will trigger the second. Sometimes, the buttons misfire. Sometimes, you push button 1 and update 2 is triggered. Sometimes, you push

28 This externalist thesis is contested—Lewis (1996, 1999) holds that your experiential experiment will always be a partition. (I assume that, for Lewis, experience teaches that e iff e describes your experience in full detail.)

29 A is strictly proper iff every probability function expects itself to have a strictly higher A-value than any other credence function. See Oddie (1997), Gibbard (2008), Predd et al. (2009), Joyce (2009), and Pettigrew (2012) for more on strict propriety.

30 This follows from Corollary 2 in Greaves & Wallace (2006).

31 Hild (1998b) offers a diachronic Dutch-book argument for evcondi. Like Schoenfield's accuracy argument, it presupposes that you are immodest.


Figure 5: e1 = {w1, w2, w3} and e2 = {w2, w3, w4}. In w1 and w2, your total experiential evidence is e1. In w3 and w4, your total experiential evidence is e2. In figure 5a, you correctly update to Ce1, Ue1, in both w1 and w2, and you correctly update to Ce2, Ue2, in w3 and w4. In figure 5b, in contrast, you incorrectly update to Ce2 in w2, and you incorrectly update to Ce1 in w3.

button 2 and update 1 is triggered. While you get to see what's on the screen, you don't get to see your updated credences. Once the buttons are pressed, you are not in a position to know what your credences are. Consider figure 5b. Think of w1 as the possibility in which the screen shows that e1, you push button 1, and update 1 is triggered. Think of w2 as the possibility in which the screen shows that e1, you push button 1, but the button misfires, so that update 2 is triggered. Think of w3 as the possibility in which the screen shows that e2, you push button 2, but the button misfires and update 1 is triggered. And think of w4 as the possibility in which the screen shows that e2, you push button 2, and update 2 is triggered. The television screen is your experience. The buttons are your plans for update. You are modest; your plans may go awry, in which case your credences will not change as planned. You lack introspective access, wherefore these errors may not be detected.

Which update should each button trigger? Answer #1: button 1 should trigger an update to certainty that the television shows that e1, and button 2 should trigger an update to certainty that the television shows that e2. So long as the buttons work properly (so long as you respond rationally) you will end up being certain of a truth. Of course, if the buttons misfire (if you respond irrationally) then you will end up being certain of a falsehood. But this shows only that the buttons should not misfire (you should not be irrational). Answer #2: button 1 should trigger an update to certainty that update 1 has taken place, Ue1, and button 2 should trigger an update to certainty that update 2 has taken place, Ue2. The buttons may misfire, but even so, the resulting credences will end up being accurate. The screen shows that e1, you push button 1, but by fluke, update 2 is triggered. You end up certain that update 2 was triggered, contrary to plan. But, lo and behold, update 2 was triggered—the error has undone itself! Answer #3: Answers #1 and #2 are both right—button 1 should trigger an update to certainty in both Te1 and Ue1. And button 2 should trigger an update to certainty in both Te2 and Ue2. That's because you should be certain, before pushing, that Uei ↔ Tei—i.e., you should be immodest. Updating to Ce1 when Te2, or to Ce2 when Te1, is irrational. Rationality requires certainty that you are and will remain rational. So rationality requires immodesty. To the extent that you are modest, you are irrational. Answer #4: button 1 should only trigger an update to certainty that e1, and button 2 should only trigger an update to certainty that e2. Suppose the screen shows that e1 and you push button 1. Your only reasons to think update 1 was triggered are statistical, so it is irrational to be certain that update 1 has been triggered. If button 1 mandates certainty that Te1, you open yourself up to the possibility of being certain of a falsehood, should the button malfunction. Opening yourself up to this possibility is irrational. So certainty in either Te1 or Ue1 is irrational.

To each of these answers corresponds an accuracy-maximization argument like Schoenfield's. In this framework, which answer we endorse depends upon which strategies we say are available. As before, let σ be a function from worlds to post-experimental credences. Answer #1 says that σ is an available strategy iff it takes, as input, which button is pressed. Button i is pressed iff Tei. So σ's output should not vary so long as Tei is held constant. So σ is available iff, for all w, w∗ ∈ Tei, σ(w) = σ(w∗). Then, the available σ which maximizes expected accuracy will be such that σ(w) = C(− | Tei), for all w ∈ Tei (this is just evcondi). Answer #1 is vindicated. Answer #2 says that σ is an available strategy iff its outputs correspond to an update. So σ's outputs should not vary as Uei is held constant—for all w, w∗ ∈ Uei, σ(w) = σ(w∗). Then, the available strategy which maximizes expected accuracy will be such that σ(w) = C(− | Uei), for all w ∈ Uei. Answer #2 is vindicated. Answer #3 agrees with both of the constraints on availability given by answers #1 and #2, for it sees them as equivalent. Answer #3 forbids you from being modest and foreseeing any possibility of error. It says that rationality requires your prior doxastic state to be as in figure 5a. In that case, the available strategy which maximizes expected accuracy will be evcondi.

The uppercase externalist should be unhappy with each of these construals of what it is for a strategy to be available.32 Answer #3 is flatly inconsistent with their position. Answer #1 says that σ is available iff which button you push determines σ's outputs. Which button to press is the object of choice, but it is not itself the update. But σ is meant to correspond to a choice of update. Answer #2 allows σ to represent an update, but the update it represents is, like the omniscient update, not an object of choice for you. Your choice is which button to press, and not which update will actually be triggered. So the uppercase externalist will want a different account of which strategies are available.

In w1 and w4, the buttons do not malfunction. In those possibilities, the update is under your control. In w2 and w3, the button malfunctions. In those possibilities, which update is triggered is not under your control. Nonetheless, something still is under your control: it is under your control which update will be triggered if the button malfunctions, and which update will be triggered if it does not. In w2 and w3, then, your choice is to update to Ce1 with probability C(Ue1 | e1 ∩ e2) and to Ce2 with probability C(Ue2 | e1 ∩ e2). Though no pure strategy is available to you, this mixed strategy is. The externalist should

32 Cf. Steel (forthcoming).


say: a strategy σ is available iff there are two credences Ce1 and Ce2 such that σ(w1) = Ce1, σ(w4) = Ce2, and

σ(w2) = σ(w3) = Ce1 · C(Ue1 | e1 ∩ e2) + Ce2 · C(Ue2 | e1 ∩ e2)

Just to distinguish this sense of availability from the others, if a strategy is available in this sense, say that it is an actionable strategy. Suppose we measure accuracy with the quadratic measure.33 Then, in our simple case, the actionable strategy which maximizes expected accuracy is this:

Ce1(ϕ) = C(ϕ | ¬e2) · C(¬e2 | Ue1) + C(ϕ | e1 ∩ e2) · C(e1 ∩ e2 | Ue1)

Ce2(ϕ) = C(ϕ | ¬e1) · C(¬e1 | Ue2) + C(ϕ | e1 ∩ e2) · C(e1 ∩ e2 | Ue2)

Recall the notion of the atoms of an experiment. Given the experiment E = {e1, e2, . . . , eN}, an atom of E is a non-empty conjunction of (negations of) the propositions e ∈ E. In our simple experiment E = {e1, e2}, e1 ∩ ¬e2 = ¬e2 is an atom, as are ¬e1 ∩ e2 = ¬e1 and e1 ∩ e2. And these are the only atoms of this experiment, since all other conjunctions of (negations of) the propositions in E are empty. Denote the set of all atoms of E with ‘A[E]’. In general, what is under your control is how to update when you take your evidence to be e—though you may get it wrong. You may take e to be your total evidence when in fact your total evidence is e∗ instead. For each atom a ∈ A[E], a will entail some of the evidence propositions ei ∈ E. For each of these ei's, there is a world w ∈ a at which you take your evidence to be ei. Given that you are in a, the probability you will take your evidence to be ei is C(Uei | a). For each a and each w ∈ a, your choice at w is to adopt each Ce with probability C(Ue | a). So the externalist should say that, when performing an experiment E, a strategy σ is actionable iff, for each a ∈ A[E] and each w ∈ a,

σ(w) = ∑e∈E Ce · C(Ue | a)

Suppose you conduct an experiment E, which may or may not be a partition, you measure accuracy with the quadratic measure, and you wish to select an actionable strategy which maximizes expected accuracy. Then you should select the strategy I will call ‘externalist conditionalization’. This strategy says: if you get the total evidence e, change your credence in each atom a ∈ A[E] to be your prior credence in a conditional on Ue, and, for each atom a, leave alone your credences in all propositions conditional on a.

externalist conditionalization
If you conduct the experiment E with the pre-experimental credence

33 The quadratic measure, Q, says that the accuracy of a credence function C at world w is

Q(C, w) = −∑ϕ (νw(ϕ) − C(ϕ))²

where νw(ϕ) is the truth-value of the proposition ϕ at world w.


function C, then, for each e ∈ E, your strategy for responding to the total evidence e, Ce, should be to change your credence in each a ∈ A[E] to your pre-experimental credence in a given Ue, and leave alone your credence in all propositions conditional on a. That is, for each ϕ,

(excondi) Ce(ϕ) = ∑a∈A[E] C(ϕ | a) · C(a | Ue)

(Proposition 2 in the appendix shows that excondi is the actionable strategy which maximizes expected quadratic accuracy. It is also the actionable strategy which maximizes expected logarithmic accuracy.34 Though I don't have a proof, I conjecture that it is the actionable strategy which maximizes expected accuracy given any strictly proper accuracy measure.)

Note that excondi entails reflection.

Proof.

∑e∈E Ce(ϕ) · C(Ue) = ∑e∈E ∑a∈A[E] C(ϕ | a) · C(a | Ue) · C(Ue)
= ∑a∈A[E] C(ϕ | a) · ∑e∈E C(a | Ue) · C(Ue)
= ∑a∈A[E] C(ϕ | a) · C(a)
= C(ϕ)

So excondi will never permit self-delusion; it will always abide by no biased inquiry.

Consider the non-partitional experiment {¬1, ¬3}, given the pre-experimental credal state shown on the left-hand-side of figure 6. The result of updating with excondi on the total evidence ¬1 is shown on the right-hand-side of figure 6. Ex-conditioning on ¬1 leaves you just as confident about the position of the ball as you would have been after conditioning upon T¬1 or U¬1. Unlike conditioning on T¬1, updating with excondi leaves you less than certain about what your total evidence is. Post-experiment, you save 1/6th of your credence for the possibility that your total evidence was ¬3, and not ¬1. And unlike conditioning on U¬1, updating with excondi leaves you less than certain about what your post-experimental credences are. Post-experiment, you save 1/6th of your credence for the possibility that you are certain of ¬3.

34 The logarithmic measure, L, says that the accuracy of a credence function C at a world w is

L(C, w) = ∑ϕ ln[ |(1 − νw(ϕ)) − C(ϕ)| ]

where νw(ϕ) is the truth-value of the proposition ϕ at world w.


Left-hand side (prior):
            1      2      3
U¬1∩T¬1     0    1/12   4/12
U¬1∩T¬3     0    1/12    0
U¬3∩T¬1     0    1/12    0
U¬3∩T¬3   4/12   1/12    0
           1/3    1/3    1/3

Right-hand side (after updating on ¬1):
            1      2      3
U¬1∩T¬1     0    1/12   8/12
U¬1∩T¬3     0    1/12    0
U¬3∩T¬1     0    1/12    0
U¬3∩T¬3     0    1/12    0
            0     1/3    2/3

Figure 6: Given the prior credal state shown on the left, the credal state shown on the right is the result of updating with excondi on the total evidence ¬1.
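The figure 6 numbers can be checked mechanically. Below is a minimal Python sketch of excondi using exact arithmetic; the encoding of worlds as triples (ball position, update performed, total evidence), with 'n1' and 'n3' standing for ¬1 and ¬3, is my own and not the paper's notation.

```python
from fractions import Fraction as F

# The experiment {¬1, ¬3}: each proposition is the set of ball positions
# it leaves open. 'n1' encodes ¬1 = {2, 3}; 'n3' encodes ¬3 = {1, 2}.
E = {'n1': {2, 3}, 'n3': {1, 2}}

# Worlds are triples (position, update performed, total evidence).
# Prior credences, read off the left-hand table of figure 6 (in twelfths).
prior = {}
for (u, t), row in [(('n1', 'n1'), [0, 1, 4]),
                    (('n1', 'n3'), [0, 1, 0]),
                    (('n3', 'n1'), [0, 1, 0]),
                    (('n3', 'n3'), [4, 1, 0])]:
    for pos, n in zip((1, 2, 3), row):
        prior[(pos, u, t)] = F(n, 12)

def atom(pos):
    # An atom of the experiment is fixed by which e in E are true at pos;
    # here the atoms turn out to be the singleton positions.
    return frozenset(e for e, s in E.items() if pos in s)

def excondi(prior, e):
    """Shift credence in each atom a to C(a | Ue); within each atom,
    leave all conditional credences untouched."""
    c_Ue = sum(p for (_, u, _), p in prior.items() if u == e)
    post = {}
    for w, p in prior.items():
        a = atom(w[0])
        c_a = sum(q for w2, q in prior.items() if atom(w2[0]) == a)
        c_a_Ue = sum(q for w2, q in prior.items()
                     if atom(w2[0]) == a and w2[1] == e)
        post[w] = (p / c_a) * (c_a_Ue / c_Ue)
    return post

post = excondi(prior, 'n1')

# Position marginals match the right-hand table of figure 6.
pos_marginal = [sum(p for w, p in post.items() if w[0] == i) for i in (1, 2, 3)]
print(*pos_marginal)  # 0 1/3 2/3

# You keep 1/6 credence that your total evidence was really ¬3, and 1/6
# credence that you in fact updated to C¬3.
print(sum(p for w, p in post.items() if w[2] == 'n3'))  # 1/6
print(sum(p for w, p in post.items() if w[1] == 'n3'))  # 1/6

# Reflection: your prior expectation of your posterior credence that the
# ball is at 3 equals your prior credence that it is.
expected = sum(sum(p for w, p in prior.items() if w[1] == e) *   # C(Ue)
               sum(p for w, p in excondi(prior, e).items() if w[0] == 3)
               for e in E)
print(expected)  # 1/3
```

The same function, pointed at any finite experiment and prior, implements the general rule; only the encoding of worlds and atoms is specific to this example.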

In some special cases, excondi reduces to condi. Suppose that your experiment E forms a partition. Then the atoms of your experiment are just the potential evidence propositions themselves, A[E] = E. Suppose further that you are immodest—for each e ∈ E, C(Ue ↔ Te) = 1. Since E is a partition, C(Te ↔ e) = 1. So C(Ue ↔ e) = 1. Thus, C(e∗ | Ue) = 1 if e∗ = e and 0 if e∗ ≠ e. So

Ce(ϕ) = ∑a∈A[E] C(ϕ | a) · C(a | Ue)
= ∑e∗∈E C(ϕ | e∗) · C(e∗ | Ue)
= C(ϕ | e)

Even if your experiment is non-partitional or you are modest, so long as your experiment is introspectively neutral, updating with excondi will be equivalent to updating with condi. (To understand why, see the proof of Proposition 3 in the appendix.)
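As a sanity check on this reduction, here is a toy computation under assumptions of my own (a two-cell partition with prior weights 2/5 and 3/5, and an immodest prior that puts no credence on mismatch worlds): excondi and condi return the same posterior.

```python
from fractions import Fraction as F

# A partitional experiment over worlds 1 and 2: e1 = {1}, e2 = {2}.
E = {'e1': {1}, 'e2': {2}}

# An immodest prior: all credence sits on worlds where the update you
# perform matches your evidence.  Worlds are (cell, update performed).
prior = {(1, 'e1'): F(2, 5), (2, 'e2'): F(3, 5),
         (1, 'e2'): F(0), (2, 'e1'): F(0)}

def excondi(prior, e):
    # With a partitional experiment, the atoms are just the cells.
    c_Ue = sum(p for (_, u), p in prior.items() if u == e)
    post = {}
    for (x, u), p in prior.items():
        c_a = sum(q for (x2, _), q in prior.items() if x2 == x)
        c_a_Ue = sum(q for (x2, u2), q in prior.items()
                     if x2 == x and u2 == e)
        post[(x, u)] = (p / c_a) * (c_a_Ue / c_Ue)
    return post

def condi(prior, e):
    # Ordinary conditionalization on the proposition e.
    c_e = sum(p for (x, _), p in prior.items() if x in E[e])
    return {w: (p / c_e if w[0] in E[e] else F(0))
            for w, p in prior.items()}

print(excondi(prior, 'e1') == condi(prior, 'e1'))  # True
print(excondi(prior, 'e2') == condi(prior, 'e2'))  # True
```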

In other special cases, excondi reduces to Gallow's hcondi. Assume your initial credence that you've not been slipped the drug, ¬d, is 4/5, your credence that you have, d, is 1/5, you think the pear is as likely to taste sweet to you, s, as not, ¬s, and you think that whether the pear tastes sweet to you is independent of whether you've been slipped the drug. You will either learn that ¬d → s (and no more) or that ¬d → ¬s (and no more). Your experiment is {¬d → s, ¬d → ¬s}. If you've not been slipped the drug, then you will learn ¬d → s and update to C¬d→s iff the pear tastes sweet to you, and you will learn ¬d → ¬s and update to C¬d→¬s iff the pear does not taste sweet to you. On the other hand, if you've been slipped the drug, then you're just as likely to update to C¬d→s as C¬d→¬s. Then, your pre-experimental credal state is shown in the top of figure 7. The result of updating on ¬d → s with excondi is shown on the bottom of figure 7. This is exactly the result of updating on {⟨¬d, s⟩, ⟨d, ⊤⟩} with hcondi.

More generally, suppose that you conduct the ‘theory-dependent’ experiment E = {t → e1, t → e2, . . . , t → eN}, where {e1, e2, . . . , eN} is a partition, and for each ei, C(t | U(t → ei)) = C(t). Then the atoms of your experiment will be A[E] = {t ∩ e1, t ∩ e2, . . . , t ∩ eN, ¬t}. Suppose you learn t → ei. In Gallow's idiolect, you have acquired the input {⟨t, ei⟩, ⟨¬t, ⊤⟩}. Ex-conditioning on


            s∩¬d   ¬s∩¬d   s∩d    ¬s∩d
U(¬d→s)     8/20     0     1/20   1/20
U(¬d→¬s)     0      8/20   1/20   1/20
            4/10    4/10   1/10   1/10

      ↓ ¬d → s

            s∩¬d   ¬s∩¬d   s∩d    ¬s∩d
U(¬d→s)    16/20     0     1/20   1/20
U(¬d→¬s)     0       0     1/20   1/20
            8/10     0     1/10   1/10

Figure 7: Given the prior credal state shown on top, the credal state shown on the bottom is the result both of updating with hcondi on {⟨¬d, s⟩, ⟨d, ⊤⟩}, and of updating with excondi on ¬d → s.

t → ei yields

Ct→ei(ϕ) = ∑a∈A[E] C(ϕ | a) · C(a | U(t → ei))
= C(ϕ | t ∩ ei) · C(t ∩ ei | U(t → ei)) + C(ϕ | ¬t) · C(¬t | U(t → ei))
= C(ϕ | t ∩ ei) · C(t | U(t → ei)) + C(ϕ | ¬t) · C(¬t | U(t → ei))
= C(ϕ | t ∩ ei) · C(t) + C(ϕ | ¬t) · C(¬t)

which is the result of updating on the input {⟨t, ei⟩, ⟨¬t, ⊤⟩} with hcondi.

I assumed that C(t | U(t → ei)) = C(t). This is required for excondi to agree with hcondi, but won't hold in general. This is for the good, since the fact that hcondi always holds fixed your credence in the background theory t was a problem with that update rule. It is not a problem excondi shares. If C(t | U(t → ei)) > C(t), then learning t → ei will confirm the background theory t. If C(t | U(t → ei)) < C(t), learning t → ei will disconfirm the background theory t. If the chance of the pear tasting sour is one in a million, while Sabeen is as likely as not to have slipped you the drug, then C(d | U(¬d → ¬s)) ≫ C(d), so excondi will set C¬d→¬s(d) much higher than C(d).
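The figure 7 agreement between excondi and hcondi can likewise be verified numerically. In the sketch below, the Boolean world encoding and the labels 'nds'/'ndns' for the updates U(¬d→s) and U(¬d→¬s) are mine; the conditionals are read materially, so ¬d → s is the proposition d ∪ s.

```python
from fractions import Fraction as F

# Worlds: ((s, d), update performed), with s, d Booleans.
# Prior from the top table of figure 7, in twentieths.
prior = {}
for u, row in [('nds',  {(True, False): 8, (False, False): 0,
                         (True, True): 1, (False, True): 1}),
               ('ndns', {(True, False): 0, (False, False): 8,
                         (True, True): 1, (False, True): 1})]:
    for sd, n in row.items():
        prior[(sd, u)] = F(n, 20)

# ¬d → s (material conditional) holds at (s, d) iff d or s; likewise ¬d → ¬s.
E = {'nds': lambda s, d: d or s, 'ndns': lambda s, d: d or not s}

def atom(sd):
    # Atoms of this experiment: ¬d∩s, ¬d∩¬s, and d (where both hold).
    return frozenset(e for e, f in E.items() if f(*sd))

def excondi(prior, e):
    c_Ue = sum(p for (_, u), p in prior.items() if u == e)
    post = {}
    for (sd, u), p in prior.items():
        a = atom(sd)
        c_a = sum(q for (sd2, _), q in prior.items() if atom(sd2) == a)
        c_a_Ue = sum(q for (sd2, u2), q in prior.items()
                     if atom(sd2) == a and u2 == e)
        post[(sd, u)] = (p / c_a) * (c_a_Ue / c_Ue)
    return post

post = excondi(prior, 'nds')
# Bottom table of figure 7: 16/20 on s∩¬d∩U(¬d→s); 1/20 on each d-cell.
print(post[((True, False), 'nds')])   # 4/5
print(post[((True, True), 'ndns')])   # 1/20
# Your credence in d is unchanged, as agreement with hcondi requires.
print(sum(p for ((s, d), u), p in post.items() if d))  # 1/5
```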

Consider a simplified version of Williamson's clock. You know in advance that the clock hand will point at one of 4 positions. Call them ‘1’, ‘2’, ‘3’, and ‘4’. If it is at 2, then your total evidence will be that it is somewhere between 1 and 3—or, equivalently, that it's not at 4. (See figure 8a.) And the same is true for positions 1, 3, and 4. No matter where the hand is, the most you'll learn is that it's not in the position opposite. (We've simplified past the point of psychological plausibility, but what we learn from the n = 4 case will carry over to arbitrarily high n, so it's a harmless abstraction.) If the hand is at position 2, so that your total evidence is that it isn't at position 4, you think it 4/5ths likely that you'll correctly adopt C¬4, but you foresee some possibility of error, so you save 1/5th of your credence for your issuing response C¬1 or C¬3 instead, each with equal probability. And the same is true not just for position 2, but for positions 1, 3, and


[Figure 8a shows the clock face, with the hand pointing at one of the positions 1–4.]

(b)
           1      2      3      4
T¬3∩U¬2   1/40    0      0      0
T¬3∩U¬3   8/40    0      0      0
T¬3∩U¬4   1/40    0      0      0
T¬4∩U¬3    0     1/40    0      0
T¬4∩U¬4    0     8/40    0      0
T¬4∩U¬1    0     1/40    0      0
T¬1∩U¬4    0      0     1/40    0
T¬1∩U¬1    0      0     8/40    0
T¬1∩U¬2    0      0     1/40    0
T¬2∩U¬1    0      0      0     1/40
T¬2∩U¬2    0      0      0     8/40
T¬2∩U¬3    0      0      0     1/40
          1/4    1/4    1/4    1/4

(c)
           1      2      3      4
T¬3∩U¬2   1/100   0      0      0
T¬3∩U¬3   8/100   0      0      0
T¬3∩U¬4   1/100   0      0      0
T¬4∩U¬3    0     8/100   0      0
T¬4∩U¬4    0    64/100   0      0
T¬4∩U¬1    0     8/100   0      0
T¬1∩U¬4    0      0     1/100   0
T¬1∩U¬1    0      0     8/100   0
T¬1∩U¬2    0      0     1/100   0
T¬2∩U¬1    0      0      0      0
T¬2∩U¬2    0      0      0      0
T¬2∩U¬3    0      0      0      0
          1/10   8/10   1/10    0

Figure 8: If the clock hand is at position 2, then your total evidence will be that it is somewhere between 1 and 3, T¬4 (figure 8a). Given the prior credal state in figure 8b, the result of updating on ¬4 with excondi is shown in figure 8c.


4 as well. If you antecedently think that the clock hand is equally likely to be at any of the positions, your prior credal state is shown in figure 8b.

The result of updating on ¬4 with excondi is shown in figure 8c. After ex-conditioning on ¬4, you will think that the hand is most likely at 2 (80%), though you'll save some credence for it being at 1 or 3 (10% each). While you will think that your total evidence was most likely ¬4 (80%), you will also think it might have been either ¬3 or ¬1 (10% each). And you will think that, most likely, you have ex-conditioned on ¬4 (66%), though you may instead have ex-conditioned on ¬3 or ¬1 (16% each), and you'll even put aside some credence (2%) for the possibility that you've ex-conditioned on ¬2.
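The clock numbers can be verified the same way; the encoding below (positions 1–4, update labels given by the position they rule out, reliability 8/10, uniform prior) is my own rendering of figure 8b.

```python
from fractions import Fraction as F

POS = (1, 2, 3, 4)
opposite = {1: 3, 2: 4, 3: 1, 4: 2}

# Worlds: (hand position, update performed), where update e means "you
# adopted C¬e".  At position p your total evidence is ¬opposite(p); you
# respond correctly with credence 8/10 and issue each of the two adjacent
# responses with credence 1/10 (figure 8b, uniform prior over positions).
prior = {}
for p in POS:
    correct = opposite[p]
    for e in POS:
        if e == p:
            continue  # at position p you never respond with C¬p
        prior[(p, e)] = F(1, 4) * (F(8, 10) if e == correct else F(1, 10))

def excondi(prior, e):
    # Atoms of the experiment {¬1, ¬2, ¬3, ¬4} are the singleton positions.
    c_Ue = sum(q for (_, u), q in prior.items() if u == e)
    post = {}
    for (p, u), q in prior.items():
        c_a = sum(r for (p2, _), r in prior.items() if p2 == p)
        c_a_Ue = sum(r for (p2, u2), r in prior.items()
                     if p2 == p and u2 == e)
        post[(p, u)] = (q / c_a) * (c_a_Ue / c_Ue)
    return post

post = excondi(prior, 4)  # ex-condition on the total evidence ¬4

pos = {p: sum(q for (p2, _), q in post.items() if p2 == p) for p in POS}
print(pos[2], pos[1], pos[3], pos[4])  # 4/5 1/10 1/10 0

# Credence in the four U¬i, i.e. 66%, 16%, 16%, and 2%, as in the text.
upd = {e: sum(q for (_, u), q in post.items() if u == e) for e in POS}
print(upd[4], upd[3], upd[1], upd[2])  # 33/50 4/25 4/25 1/50
```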


A Technicalities

Throughout, it will help to have the following notational convention: given a partition P = {p1, . . . , pN}, I'll write ‘e D P’ just in case there are some cells of P, p1, p2, . . . , pM, such that e = ∪Mi=1 pi—that is, e D P iff e is a (non-empty) union of cells in the partition P.

Proposition 1. Given any finite N-cell partition P = {p1, p2, . . . , pN} and any probability distribution µ over {e | e D P}, there is some credence distribution, call it ‘Cµ’, over all propositions ϕ D {Ue ∩ p | e D P, p ∈ P}, such that, for every e D P, Cµ(e) = µ(e), and such that the experiment E = {(U)e | e D P} is introspectively neutral—that is, for each ϕ, e D P,

Cµ(ϕ | Ue) = Cµ(ϕ | e)

The proposition says that, if you've got an experiment in which your total evidence might be (that you've updated upon) anything at all about the partition P, then there will always be some credence function for which this experiment is introspectively neutral and which agrees with any pre-experimental opinions about P you might have.

Proof. A bit of notation: given a cell p of the partition P, I'll use ‘{↑p}’ for the set of all propositions about P which p entails, or, equivalently, the set of all disjunctions of cells of P which include p as a disjunct. (Given the lattice of propositions about P, {↑p} is the principal ultrafilter of p.) We will use µ, defined over the potential evidence propositions e D P, to define a credence function Cµ over all propositions ϕ D {Ue ∩ p | e D P, p ∈ P}. For each proposition e D P, set

Cµ(Ue) def= µ(e) / ∑eiDP µ(ei)

and for each cell p ∈ P, set

Cµ(p | Ue) def= µ(p | e)

This suffices to determine the credence of every ϕ D {Ue ∩ p | e D P, p ∈ P}.

This suffices to determine the credence of every ϕD {Ue ∩ p | e D P, p ∈ P }.We now must show two things: firstly, that the credence function Cµ, so defined,

agrees with µ about all cells of the partition P ; that is, for each p ∈ P , Cµ(p) = µ(p).Secondly, we must show that, relative to Cµ, the experiment is introspectively neutral—that is, that for each potential evidence proposition, e , and each proposition about P ,ϕ,

(in) Cµ(ϕ |Ue ) = Cµ(ϕ | e )To do so, we’ll use the following lemma:

Lemma 1. For any p ∈ P,

#{↑p} = ∑eDP µ(e)

That is: the cardinality of the principal ultrafilter for any cell of the partition P is equal to the sum of the probability of each potential evidence proposition.

Proof. Note that the cardinality of {↑p} is the same for every p ∈ P. And since, for each e D P, µ(e) = ∑i: pi∩e≠∅ µ(pi),

∑eDP µ(e) = ∑Ni=1 µ(pi) · #{↑pi}
= #{↑p} · ∑Ni=1 µ(pi)
= #{↑p}

We now show that the credence function Cµ agrees with µ about all cells of the partition P. For each cell p ∈ P,

Cµ(p) = ∑eDP Cµ(p | Ue) · Cµ(Ue)
= ∑eDP µ(p | e) · Cµ(Ue)
= ∑e∈{↑p} µ(p | e) · Cµ(Ue) + ∑e∉{↑p} µ(p | e) · Cµ(Ue)    [the second sum is 0, since µ(p | e) = 0 when e ∉ {↑p}]
= ∑e∈{↑p} µ(p | e) · µ(e) / ∑ejDP µ(ej)
= ∑e∈{↑p} µ(p ∩ e) / ∑ejDP µ(ej)
= ∑e∈{↑p} µ(p) / ∑ejDP µ(ej)
= µ(p) · #{↑p} / ∑ejDP µ(ej)

Then, from Lemma 1, it follows that Cµ(p) = µ(p). Moreover, since both µ and Cµ are probability functions (µ by stipulation; and this is easily confirmed for Cµ), it follows that, for any ϕ D P, Cµ(ϕ) = µ(ϕ).

We finally show that, relative to Cµ, the experiment E = {(U)e | e D P} is introspectively neutral. Recall that, by definition, for any p ∈ P and any e D P, Cµ(p | Ue) = µ(p | e). Therefore, for any ϕ D P,

Cµ(ϕ | Ue) = ∑p∈P: p⊆ϕ Cµ(p | Ue)
= ∑p∈P: p⊆ϕ µ(p | e)
= µ(ϕ | e)
= Cµ(ϕ | e)

(The final equality follows because we have already shown that Cµ(ϕ) = µ(ϕ), for all ϕ D P.)
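Proposition 1's construction can be spot-checked computationally. The three-cell partition and the weights below are arbitrary choices of mine; the definition of Cµ follows the proof.

```python
from fractions import Fraction as F
from itertools import combinations

# A three-cell partition and an arbitrary probability over its cells.
P = ('p1', 'p2', 'p3')
mu_cell = {'p1': F(1, 2), 'p2': F(1, 3), 'p3': F(1, 6)}

# The propositions about P ("e D P"): non-empty unions of cells.
props = [frozenset(c) for r in range(1, len(P) + 1)
         for c in combinations(P, r)]
mu = {e: sum(mu_cell[p] for p in e) for e in props}
total = sum(mu.values())  # = #{↑p} for any p, by Lemma 1

# The construction from the proof: Cµ(Ue) = µ(e)/Σµ(ei), and
# Cµ(p | Ue) = µ(p | e).  Build the joint credence over the cells Ue ∩ p.
C = {(e, p): (mu[e] / total) * (mu_cell[p] / mu[e] if p in e else F(0))
     for e in props for p in P}

def C_of(pred):
    # Credence of the event picked out by pred(e, p).
    return sum(v for (e, p), v in C.items() if pred(e, p))

# Cµ agrees with µ on every proposition about P ...
assert all(C_of(lambda e2, p2, e=e: p2 in e) == mu[e] for e in props)

# ... and the experiment is introspectively neutral:
# Cµ(ϕ | Ue) = Cµ(ϕ | e) for every ϕ, e D P.
for e in props:
    for phi in props:
        lhs = (C_of(lambda e2, p2: e2 == e and p2 in phi)
               / C_of(lambda e2, p2: e2 == e))
        rhs = (C_of(lambda e2, p2: p2 in e and p2 in phi)
               / C_of(lambda e2, p2: p2 in e))
        assert lhs == rhs
print("Proposition 1 checked for N = 3")
```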

Proposition 2. Among actionable strategies, excondi maximizes expected quadratic accuracy.

Proof. If σ is actionable, then for each e ∈ E, there is a contingency plan Ce, and, for each a ∈ A[E] and each w ∈ a, σ(w) = ∑e∈E C(Ue | a) · Ce. For each proposition ϕ, the quadratic accuracy of an actionable strategy σ at a world w ∈ a is given by

Q(σ, ϕ, w) = −∑e∈E C(Ue | a) · (νw(ϕ) − Ce(ϕ))²

(where νw(ϕ) is the truth-value of the proposition ϕ at world w). Then, for each proposition ϕ, the expected quadratic accuracy of an actionable strategy σ is

Q(σ, ϕ) = −∑a∈A[E] ∑w∈a C(w) · ∑e∈E C(Ue | a) · (νw(ϕ) − Ce(ϕ))²

or

Q(σ, ϕ) = −∑e∈E [ ∑a∈A[E] C(Ue | a) · ∑w∈ϕ∩a C(w) · (1 − Ce(ϕ))² + ∑a∈A[E] C(Ue | a) · ∑w∈¬ϕ∩a C(w) · Ce(ϕ)² ]

Pick an e ∈ E and take the first partial derivative of Q(σ, ϕ) with respect to Ce(ϕ). Set it to zero, and solve for Ce(ϕ). We get:

∑a∈A[E] C(Ue | a) · ∑w∈ϕ∩a C(w) · (1 − Ce(ϕ)) = ∑a∈A[E] C(Ue | a) · ∑w∈¬ϕ∩a C(w) · Ce(ϕ)

(1 − Ce(ϕ)) · ∑a∈A[E] C(Ue | a) · ∑w∈ϕ∩a C(w) = Ce(ϕ) · ∑a∈A[E] C(Ue | a) · ∑w∈¬ϕ∩a C(w)

(1 − Ce(ϕ)) · ∑a∈A[E] C(Ue | a) · C(ϕ ∩ a) = Ce(ϕ) · ∑a∈A[E] C(Ue | a) · C(¬ϕ ∩ a)

∑a∈A[E] C(Ue | a) · C(ϕ ∩ a) = Ce(ϕ) · ∑a∈A[E] C(Ue | a) · C(a)

∑a∈A[E] C(Ue | a) · C(ϕ ∩ a) = Ce(ϕ) · C(Ue)

∑a∈A[E] (C(Ue ∩ a) / C(a)) · C(ϕ ∩ a) = Ce(ϕ) · C(Ue)

∑a∈A[E] (C(Ue ∩ a) / C(Ue)) · (C(ϕ ∩ a) / C(a)) = Ce(ϕ)

So, for each e ∈ E,

Ce(ϕ) = ∑a∈A[E] C(ϕ | a) · C(a | Ue)

is a critical point. As the reader may verify for themselves, the Hessian matrix for the function Q(σ, ϕ) is negative definite, so this critical point is the local maximum in (0, 1)^N.

Proposition 3. If your experiment is introspectively neutral, then updating with excondi is equivalent to updating with condi.

Proof. First note that, if the experiment is introspectively neutral, then, for all atoms of the experiment a, C(a | Ue) = C(a | e). Therefore, for all propositions ϕ,

Ce(ϕ) = ∑a∈A[E] C(ϕ | a) · C(a | e)

Moreover, since the a's range over atoms of the experiment, every a will either entail e, in which case a = a ∩ e, or it will entail ¬e, in which case C(a | e) = 0. So the above is equivalent to

Ce(ϕ) = ∑a∈A[E], a∩e≠∅ C(ϕ | a ∩ e) · C(a | e)
= ∑a∈A[E], a∩e≠∅ (C(ϕ ∩ a ∩ e) / C(a ∩ e)) · (C(a ∩ e) / C(e))
= ∑a∈A[E], a∩e≠∅ C(ϕ ∩ a | e)
= C(ϕ | e)

Corollary 1. If your experiment is introspectively neutral, then condi suffices for reflection.

Proof. If your experiment is introspectively neutral, then updating with condi is equivalent to updating with excondi (from Proposition 3). And updating with excondi suffices for reflection (see §4).

Corollary 2. If your experiment is evidentially neutral (see page 15), then condi suffices for rat-ref.

Proof. Consider the update

Ce(ϕ) = ∑a∈A[E] C(ϕ | a) · C(a | Te)

If your experiment is evidentially neutral, then, for all atoms of the experiment a, C(a | Te) = C(a | e). Mutatis mutandis, the proof from Proposition 3 then establishes that Ce(ϕ) = C(ϕ | e). So, if your experiment is evidentially neutral, this update is equivalent to condi. Mutatis mutandis, the proof that excondi entails reflection from §4 establishes that this update entails rat-ref.

References

Briggs, Rachael. 2009. “Distorted Reflection.” Philosophical Review, vol. 118 (1): 59–85. [13]

Christensen, David. 2010. “Rational Reflection.” Philosophical Perspectives, vol. 24: 121–140. [13]

Elga, Adam. 2007. “Reflection and Disagreement.” Noûs, vol. 41 (3): 478–502. [15], [16]

—. 2013. “The puzzle of the unmarked clock and the new rational reflection principle.” Philosophical Studies, vol. 164: 127–139. [13], [15], [16]

Gallow, J. Dmitri. 2014. “How to Learn from Theory-Dependent Evidence; or Commutativity and Holism: A Solution for Conditionalizers.” The British Journal for the Philosophy of Science, vol. 65 (3): 493–519. [2], [19], [20], [27]

Gardner, Martin. 1961. The Second Scientific American Book of Mathematical Puzzles and Diversions. Simon and Schuster, New York. [12]

Gibbard, Allan. 2008. “Rational Credence and the Value of Truth.” In Oxford Studies in Epistemology, Tamar Gendler & John Hawthorne, editors, vol. 2, 143–64. Oxford University Press, Oxford. [22]


Greaves, Hilary & David Wallace. 2006. “Justifying Conditionalization: Conditionalization Maximizes Expected Epistemic Utility.” Mind, vol. 115 (495): 607–632. [5], [22]

Hall, Ned. 1994. “Correcting the Guide to Objective Chance.” Mind, vol. 103 (412): 505–517. [16]

Hild, Matthias. 1998a. “Auto-Epistemology and Updating.” Philosophical Studies, vol. 92: 321–361. [2], [14], [20]

—. 1998b. “The Coherence Argument Against Conditionalization.” Synthese, vol. 115: 229–258. [2], [14], [20], [21], [22]

Joyce, James M. 2009. “Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief.” In Degrees of Belief, F. Huber & C. Schmidt-Petri, editors, 263–97. Springer, Dordrecht. [22]

Lewis, David K. 1996. “Elusive Knowledge.” Australasian Journal of Philosophy, vol. 74 (4): 549–567. [22]

—. 1999. “Why Conditionalize?” In Papers in Metaphysics and Epistemology, vol. 2, chap. 23, 403–407. Cambridge University Press, Cambridge. [22]

Oddie, Graham. 1997. “Conditionalization, Cogency, and Cognitive Value.” British Journal for the Philosophy of Science, vol. 48: 533–41. [22]

Pettigrew, Richard. 2012. “An Improper Introduction to Epistemic Utility Theory.” In EPSA Philosophy of Science: Amsterdam 2009, Henk W. de Regt, Stephan Hartmann & Samir Okasha, editors, 287–301. Springer Netherlands, Dordrecht. doi:10.1007/978-94-007-2404-4_25. URL http://dx.doi.org/10.1007/978-94-007-2404-4_25. [22]

Predd, Joel, Robert Seiringer, Elliot H. Lieb, Daniel N. Osherson, H. Vincent Poor & Sanjeev R. Kulkarni. 2009. “Probabilistic Coherence and Proper Scoring Rules.” IEEE Transactions on Information Theory, vol. 55 (10): 4786–4792. [22]

Salow, Bernhard. forthcoming. “The Externalist’s Guide to Fishing for Compliments.” Mind. [1], [4], [12], [17], [18], [19]

Schoenfield, Miriam. forthcoming. “Conditionalization does not (in general) Maximize Expected Accuracy.” Mind. [2], [20], [21], [22], [24]

Selvin, Steve. 1975. “A Problem in Probability.” The American Statistician, vol. 29 (1): 67–71. [12]

Stalnaker, Robert C. 2009. “On Hawthorne and Magidor on Assertion, Context, and Epistemic Accessibility.” Mind, vol. 118 (470): 399–409. [4]

Steel, Robert. forthcoming. “Anticipating Failure and Avoiding It.” Philosophers’ Imprint. [24]

Tierney, John. 1991. “Behind Monty Hall’s Doors: Puzzle, Debate and Answer?” The New York Times. URL http://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-doors-puzzle-debate-and-answer.html. [11]


van Fraassen, Bas C. 1984. “Belief and the Will.” The Journal of Philosophy, vol. 81 (5): 235–256. [12]

—. 1995. “Belief and the Problem of Ulysses and the Sirens.” Philosophical Studies, vol. 77: 7–37. [12]

Williamson, Timothy. 2000. Knowledge and its Limits. Oxford University Press, Oxford. [2]

—. 2014. “Very Improbable Knowing.” Erkenntnis, vol. 79 (5): 971–999. [3], [4]