The Epistemology of Disagreement
New Essays
edited by
David Christensen
and Jennifer Lackey
Great Clarendon Street, Oxford, OX2 6DP,
United Kingdom
Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries
© in this volume the several contributors 2013
The moral rights of the authors have been asserted
First Edition published in 2013
Impression: 1
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same condition on any acquirer.
British Library Cataloguing in Publication Data
Data available
ISBN 978–0–19–969837–0
Printed by the MPG Printgroup, UK
Contents

List of Contributors

Introduction
David Christensen and Jennifer Lackey

Part I. The Debate between Conciliatory and Steadfast Theorists

A. Steadfastness

1. Disagreement Without Transparency: Some Bleak Thoughts
John Hawthorne and Amia Srinivasan

2. Disagreement and the Burdens of Judgment
Thomas Kelly

3. Disagreements, Philosophical, and Otherwise
Brian Weatherson

B. Conciliation

4. Epistemic Modesty Defended
David Christensen

5. A Defense of the (Almost) Equal Weight View
Stewart Cohen

Part II. Disagreement in Philosophy

6. Philosophical Renegades
Bryan Frances

7. Disagreement, Defeat, and Assertion
Sanford Goldberg

8. Can There Be a Discipline of Philosophy? And Can It Be Founded on Intuitions?
Ernest Sosa

Part III. New Concepts and New Problems in the Epistemology of Disagreement

9. Cognitive Disparities: Dimensions of Intellectual Diversity and the Resolution of Disagreements
Robert Audi

10. Perspectivalism and Reflective Ascent
Jonathan L. Kvanvig

11. Disagreement and Belief Dependence: Why Numbers Matter
Jennifer Lackey

Index
List of Contributors
Robert Audi, University of Notre Dame
David Christensen, Brown University
Stewart Cohen, University of Arizona and University of St Andrews
Bryan Frances, Fordham University
Sanford Goldberg, Northwestern University
John Hawthorne, University of Oxford
Thomas Kelly, Princeton University
Jonathan L. Kvanvig, Baylor University
Jennifer Lackey, Northwestern University
Ernest Sosa, Rutgers University
Amia Srinivasan, University of Oxford
Brian Weatherson, University of Michigan
Introduction
David Christensen and Jennifer Lackey

Disagreement is a familiar part of our lives. We often find ourselves faced with people who have beliefs that conflict with our own on everything from the existence of God and the morality of abortion to the location of a local restaurant. Much of the recent work in the literature on the epistemology of disagreement has centered on how much belief-revision, if any, is required in order for belief to be rational in light of this disagreement.

Some philosophers advocate positions toward what might be called the “conciliatory” (or “conformist”) end of the spectrum. On their views, many of the beliefs people hold on a wide range of disputed issues—from the controversial to the mundane—need to be either substantially revised or altogether abandoned. Other philosophers advocate positions toward what might be called the “steadfast” (or “non-conformist”) end of the spectrum. On their views, most of those holding opinions on disputed issues need not lower their confidence in the face of disagreement, unless there are non-disagreement-related reasons for doing so. Of course, this vastly oversimplifies the discussion. Most epistemologists hold that conciliatory responses are appropriate in some cases and steadfast responses in others. But there still seem to be clear differences in the overall degree of belief-revision various philosophers’ positions require.

Naturally, a given philosopher’s general placement on the conciliatory-steadfast spectrum will often be determined by her theoretical understanding of how, if ever, finding out about disagreement calls for adjusting one’s confidence on the disputed topic. There are two central factors that play theoretically important roles here.

Perhaps most obviously, the degree of belief-revision called for by an agent’s learning of the disagreement of others will depend on what the agent believes—or, perhaps better, what the agent has good reason to believe—about the epistemic credentials of those others with whom she disagrees. Two dimensions of epistemic appraisal stand out here. The first dimension concerns the other person’s familiarity with the evidence and arguments bearing on the disputed issue. Much of the literature concentrates on cases where the agent has reason to think that the other person is roughly equally well-acquainted with the relevant evidence and arguments. In fact, in cases where an agent reasonably takes there to be significant disparities between her acquaintance with relevant evidence and the other person’s acquaintance, it’s much less clear that interesting epistemological issues arise.
The second obviously important dimension of epistemic appraisal has to do with the other person’s competence at correctly evaluating evidence and arguments of the relevant sort. This dimension of assessment may address not only the other person’s general cognitive abilities, but also the likelihood that the other person’s general competences are impaired in the current instance. Here again, the literature has often concentrated on cases where the agent has good reason to believe that the other person is approximately her equal. When two people are roughly equal along both dimensions, they are said to be epistemic peers.
Another theoretical factor that figures into many discussions of disagreement is whether, and to what extent, an agent assessing the epistemic credentials of those who disagree with her must make this assessment in a way that is independent of her own reasoning on the disputed issue. Philosophers whose positions fall toward the conciliatory end of the spectrum tend to think that the agent’s assessments must be independent in this way. The idea is roughly that, insofar as disagreement of an equally informed person suggests that the agent may have misevaluated the evidence or arguments, it would be illegitimate for her to reject this possibility by relying on the very reasoning that the disagreement called into question. It has also been argued that violating independence would enable an agent to employ illegitimate bootstrapping-style reasoning for the conclusion that she was better than her apparent peer at assessing the evidence.
On the other hand, philosophers whose positions fall more toward the steadfast end of the spectrum tend to reject any such demand for independent assessment of the other’s epistemic credentials. Their idea is roughly that doing so would prevent certain agents—for instance, those who can see perfectly well what the evidence and arguments support—from using the evidence and arguments to support their belief in the very claim that the evidence and arguments actually do support.
Some of the papers in the present book enter directly into the debate between conciliatory and steadfast views. John Hawthorne and Amia Srinivasan, Thomas Kelly, and Brian Weatherson all weigh in with attacks on conciliatory views or defenses of steadfastness. Hawthorne and Srinivasan, approaching the disagreement issue from the perspective of “knowledge-first” epistemology, develop difficulties for views according to which a subject who knows that P should stop believing that P when confronted by disagreement (even by apparent epistemic superiors). They argue that no completely satisfying solution to the disagreement problem is likely to be forthcoming. Kelly rejects the conciliationist-friendly claim that an agent’s assessment of the other person’s epistemic credentials must be independent of her reasoning on the disputed issue. And Weatherson attacks a highly conciliatory view of disagreement on the grounds that it is self-undermining: it cannot coherently be believed, given the disagreement of others.
The papers by David Christensen and Stewart Cohen defend controversial aspects of conciliationist positions. Christensen argues that the sort of self-undermining that characterizes conciliatory views affects many plausible epistemic principles, and is not, in the end, a defect. Cohen defends a conciliatory view of disagreement from the charge, due mainly to Kelly, that it prevents an agent from taking correct account of the original evidence and arguments bearing on the disputed issue.
The other papers are aimed not so much at exploring the question of how much beliefs should be revised in the face of disagreement, but at developing or extending our theoretical understanding of the epistemology of disagreement in other ways. (Of course, some of these papers approach their topic from a perspective that takes a stand on the question of how much revision is required.) Three papers—by Bryan Frances, Sanford Goldberg, and Ernest Sosa—are especially concerned with a kind of disagreement that will be of particular concern to most readers of this book: disagreement about philosophy.
Frances, from the perspective of a conciliatory view, argues that disagreement with philosophical superiors serves to undermine a large number of our ordinary beliefs about the world—unless large parts of philosophy are bunk. Goldberg argues that the broad, systematic sort of disagreement we see in philosophy renders our philosophical beliefs unjustified, and that this would seem to show typical philosophical assertions are unwarranted—unless we can break the link between warranted assertion and justified belief. And Sosa writes to defend philosophical practice—in particular, the practice of forming beliefs on the basis of armchair judgments—against recent criticisms by experimental philosophers who cite apparently intractable disagreements between different philosophers’ armchair judgments.
Finally, Robert Audi, Jonathan Kvanvig, and Jennifer Lackey tackle some general
theoretical issues that bear on disagreement.
Audi explores dimensions along which agents can exhibit cognitive disparities in their attitudes toward various propositions, and applies some of the distinctions he draws to the disagreement issue. Kvanvig locates the epistemology of disagreement within a broader normative framework that is fallibilist without requiring special norms of excusability, and that makes room for rational disagreement. And Lackey argues against an assumption made by many: that when an agent is disagreeing with a number of epistemic peers, their disagreement counts for more than the disagreement of a single peer only if their beliefs are independent from one another.
The philosophers represented here include some who have contributed actively to the disagreement literature already, as well as some who are exploring the issue for the first time. With one exception (Sosa’s paper), all of the essays are new. It is our hope that this volume will help deepen and expand our understanding of some epistemic phenomena that are central to any thoughtful believer’s engagement with other believers.
PART I
The Debate between Conciliatory and Steadfast Theorists
A. Steadfastness
1
Disagreement Without Transparency: Some Bleak Thoughts1
John Hawthorne and Amia Srinivasan

1 Thanks to Cian Dorr for detailed comments, as well as to audiences at the ‘Epistemology of Philosophy’ Conference at the University of Cologne, the IUC-Dubrovnik ‘Mind and World’ Summer School, the Oxford Philosophy Faculty, and the Central States Philosophical Association 2011.

1 Dashed hopes

The question at the centre of this volume is: what ought one to do, epistemically speaking, when faced with a disagreement? Faced with this question, one naturally hopes for an answer that is principled, general, and intuitively satisfying. We want to argue that this is a vain hope. Our claim is that a satisfying answer will prove elusive because of non-transparency: that there is no condition such that we are always in a position to know whether it obtains. When we take seriously that there is nothing, including our own minds, to which we have assured access, the familiar project of formulating epistemic norms is destabilized. In this paper, we will show how this plays out in the special case of disagreement. But we believe that a larger lesson can ultimately be extracted from our discussion: namely, that non-transparency threatens our hope for fully satisfying epistemic norms in general.

To explore how non-transparency limits our prospects for formulating a satisfying disagreement norm, we will put forward what we call the Knowledge Disagreement Norm (KDN). This norm falls out of a broadly knowledge-centric epistemology: that is, an epistemology that maintains that knowledge is the telos of our epistemic activity. We will then explore the ways in which KDN might be thought to be defective. In particular, we will explore the ways in which KDN fails to satisfy some common normative or evaluative intuitions. We will show, in turn, how this failure is a result of the non-transparency of knowledge: that is, the fact that one is not always in a position to know whether oneself, or someone else, knows a given proposition. We will then argue that this kind of failure is inescapable, as any plausible epistemic norm will feature a non-transparent condition. We will conclude with some tentative remarks about what this might mean for the disagreement debate in particular and the project of formulating satisfying epistemic norms more generally. Ultimately we leave it to the reader to decide how bleak our case really is.
2 Some semantic preliminaries

When addressing the question ‘What ought one to do, epistemically speaking, in the face of disagreement?’ it can be useful to reflect on the semantics of ought-claims, and, in particular, the way in which ought-claims are notoriously context-sensitive. For ‘ought’ is one of a family of modals that, while retaining a barebones logical structure across its uses, is flexible in its contribution to the meaning of sentences. For example, there is a use of ‘ought’ connected to particular desires or ends, as in: ‘The burglar ought to use the back door, since it’s unlocked.’2 There is also a use connected to legal or moral norms, as in: ‘You ought to go to jail if you murder someone.’3 And there is a use that is (arguably) connected to the evidential situation of an individual or group, as in: ‘He ought to be in London by now.’4
Even within any one of these broad categories, there is considerable scope for context-dependence. For example, the truth conditions of a deontic ought-claim will also be sensitive to which facts are held fixed in the conversational context. For example, when one says of a criminal ‘he ought to have gone to jail’, one is holding fixed the fact of the crime. By contrast, when one says ‘he ought to have never started on a life of crime’, one isn’t holding fixed the fact of the crime. Indeed it is plausible to suppose—and contemporary semantic wisdom does indeed suppose—that the semantic contribution of ‘ought’ is in general contextually sensitive to both a relevant domain of situations (the ‘modal base’), and to a relevant mode of ranking those situations (the ‘ordering source’).5 On a popular and plausible version of this account, ‘It ought to be that p’ is true relative to a domain of situations and a mode of ordering just in case there is some situation in the domain such that p holds at it and at all situations ranked equal to or higher than it. Where there are only finitely many situations, this is equivalent to the constraint that p holds at all of the best situations. Note that this toy semantics has the consequence that finite closure holds for ought-claims. If ‘It ought to be that p1’, . . . , ‘It ought to be that pn’ are true for a finite set of premises, then ‘It ought to be that q’ is true for any proposition entailed by that set.6, 7 This style of semantics is by no means sacrosanct, but it is one that will be in the background of our thinking in this paper. From this perspective, the question ‘What ought one to do when one encounters a disagreement?’ can thus be clarified by considering how the key contextual factors—ordering source and modal base—are being resolved at the context at which the question is being raised.

2 This use is sometimes called ‘bouletic’.

3 This use is typically called ‘deontic’.

4 This is the so-called ‘epistemic’ use. For further discussion see John Hawthorne, ‘The Epistemic Ought’ (in progress). The above taxonomy is not intended to be canonical or exhaustive.

5 The more technical terminology is due to Angelika Kratzer, who has done a great deal to promote the picture wherein modals like ‘can’, ‘ought’, and ‘must’ have an invariant core logical structure across their uses, and where a semantic value gets set by domain and mode of ordering. See notably Angelika Kratzer, ‘What “Must” and “Can” Must and Can Mean’, Linguistics and Philosophy 1 (1977): 337–55. See also Kai von Fintel and Sabine Iatridou, ‘Anatomy of a Modal Construction’, Linguistic Inquiry 38 (3) (2007): 445–83.
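For concreteness, the truth condition just stated can be modelled over a finite domain. The following sketch is our own illustration, not part of the semantic literature it summarizes; the names `ought`, `situations`, and `better` are ours. It represents a modal base as a list of situations and an ordering source as a numerical rank:

```python
def ought(p, modal_base, rank):
    """'It ought to be that p' is true relative to a modal base and an
    ordering just in case some situation s in the base satisfies p and
    every situation ranked equal to or higher than s satisfies p too."""
    return any(
        p(s) and all(p(t) for t in modal_base if rank(t) >= rank(s))
        for s in modal_base
    )

# With finitely many situations this reduces to: p holds at all of the
# best (maximally ranked) situations.
situations = [1, 2, 3, 4]   # situations, here just labelled by numbers
better = lambda s: s        # ordering source: bigger is better

print(ought(lambda s: s >= 3, situations, better))  # True: holds at the best situation, 4
print(ought(lambda s: s == 1, situations, better))  # False: 1 is outranked by counterexamples
```

On this finite model, finite closure is immediate: if each premise holds at all of the best situations, anything the premises jointly entail holds there as well.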
3 The Knowledge Disagreement Norm
From the perspective of a knowledge-centric epistemology—that is, an epistemology
that takes the most central goal of our epistemic activity to be knowledge—it is natural
to rank outcomes with knowledge over outcomes of withholding belief, which are in
turn ranked over outcomes of knowledge-less belief: 8
The Knowledge Disagreement Norm (KDN): In a case of disagreement about whether p, where S believes that p and H believes that not-p:

(i) S ought to trust H and believe that not-p iff were S to trust H, this would result in S’s knowing not-p,9
(ii) S ought to dismiss H and continue to believe that p iff were S to stick to her guns this would result in S’s knowing p, and
(iii) in all other cases, S ought to suspend judgment about whether p.10

6 Note that within the framework of this semantics, apparent counterexamples to finite closure will be handled by appeal to context-shift. ‘It ought to be that some murderers are never released from jail’ sounds true, while ‘It ought to be that there are some murderers’ sounds false, and yet the that-clause of the former claim entails that of the latter. The standard resolution of this problem within the framework presented here says that contexts in which one says ‘It ought to be that there are some murderers’ would almost inevitably take a modal base that includes some non-murderer worlds (rendering the claim false), but contexts in which one utters the first sentence tend to hold fixed many more facts, including that there are some murderers.

7 There will however be counterexamples to countable closure, the principle that extends the closure principle to any countable set of premises. Relevant here are infinitary normative dilemmas—that is, cases in which whatever one does of an infinite range of actions, one does something that one ought not to do. Suppose God allows one to fill out a blank cheque to a deserving charity with any natural number in pounds sterling. And suppose the relevant ranking is by size of gift; the bigger the gift, the better. One ought to write some amount down since situations in which one does this clearly outrank the situation in which one writes nothing down. But one ought not give exactly one pound since there are situations where one gives more than one pound that outrank any situation where one gives one pound. And one ought not give exactly two pounds for analogous reasons. . . . Hence anything one does is something that one ought not to do. Here we can generate a counterexample to countable closure by noting that the countable set of premises of the form ‘It ought to be that one does not give exactly N pounds’ (where N is a natural number greater or equal to 1) entails that it ought to be that one does not give some non-zero natural number of pounds.

8 We are operating with an altogether natural gloss on ‘S and H disagree about whether p’, where this requires that S believe p and H believe not-p. At least as we ordinarily use the term, it is less natural to describe a case in which S believes p and H is agnostic about p as a case in which S ‘disagrees’ with H as to whether p.

9 We might want to screen off cases in which, at the closest world in which one trusts H, the truth-value of p is different from the truth-value of p at the actual world. (In essence, this would involve restricting the modal base to worlds that match the actual world with regard to the truth-value of p.) We can do this with a slightly more complicated counterfactual: S ought to trust H that not-p iff were S to trust H, and p have the truth-value it actually does, this would result in S’s knowing not-p. Alternatively, we might want to complicate the test as follows: S ought to trust H iff (i) were S to trust H, S would come to know not-p, and (ii) were S to stick to her guns, S wouldn’t come to know p. A test case: Fred says to you ‘You believe everything I say’. You actually don’t believe this claim, and for that reason the claim isn’t true; however, if you were to believe the claim, it would be both true and known by you . . . For the sake of simplicity we ignore these interesting complications. Thanks to Cian Dorr here.
According to KDN, one should be ‘conciliatory’ in the face of disagreement—that is, give up one’s belief that p and trust one’s disagreeing interlocutor that not-p—just in case so trusting would lead one to know that not-p. Since (it is generally granted) trusting someone who knows is a method of acquiring knowledge oneself, (i) recommends that S trust H in cases where H knows that not-p.11 Being conciliatory in such cases will lead S to greater knowledge.12 According to KDN, one should be ‘dogmatic’ in the face of disagreement—that is, dismiss one’s interlocutor and continue to believe p—if one knows that p. What about disagreement cases where neither S nor H knows whether p? In such a case, KDN demands that S suspend judgment.13
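Schematically, KDN functions as a three-way decision rule. The sketch below is our own illustration, with names of our choosing; the two counterfactuals appear as boolean inputs, though, as the discussion below stresses, an agent may not be in a position to evaluate them:

```python
def kdn_verdict(trusting_yields_knowledge, steadfastness_yields_knowledge):
    """What KDN directs S to do about p, given the two counterfactuals:
    would trusting H result in S knowing not-p, and would sticking to
    her guns result in S knowing p?"""
    if trusting_yields_knowledge:
        return "trust H and believe not-p"            # clause (i)
    if steadfastness_yields_knowledge:
        return "dismiss H and continue to believe p"  # clause (ii)
    return "suspend judgment about p"                 # clause (iii)

# When neither counterfactual delivers knowledge, clause (iii) applies:
print(kdn_verdict(False, False))  # suspend judgment about p
```

The order of the first two tests is immaterial in the intended cases: since knowledge is factive, the two counterfactuals cannot both deliver knowledge of contradictory propositions when the truth-value of p is held fixed (cf. note 9).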
There are a few things worth noting from the outset about KDN. First, KDN is inspired by knowledge-centric epistemology, an epistemology that takes the telos of belief and epistemic practice more generally to be knowledge.14 One might have, by contrast, a justification-centric epistemology, according to which the telos of belief is mere justified belief, or a truth-centric epistemology, according to which the telos of belief is mere true belief. Each of these alternative views could lead to disagreement norms that are analogous to the knowledge-centric KDN. While we will not discuss these possibilities here, much of the discussion that follows applies to them as well.

10 By suspending judgment we mean that one doesn’t flat out believe the proposition or its negation. This is compatible with being very confident that a proposition is true or false; for example, one may be very confident that one is going to lose a lottery without flat out believing that one will.

11 We ignore complications having to do with cases where even though H knows p, H wouldn’t know p were S to trust H.

12 Of course, there are cases in which trusting someone who knows might not result in knowing oneself—and certain kinds of disagreement might produce such cases. We discuss a view on which disagreement (or a subset of disagreements) ‘defeats’ the knowledge-transferring power of trust shortly.

13 Tying ought-claims to counterfactuals gives contestable results in some cases. Suppose S is asked to donate some small amount of money to charity, and that it would be best (morally speaking) for S to donate the money and then continue on with her life as usual. But suppose further that, if S donated the money, she would fly into a rage and go on a killing spree. Insofar as we test for the truth of ‘S ought to do A’ by looking at the closest world where S does A, it is then true that S ought to make the donation and return to life as normal, but not true that S ought to make the donation. Thus, tying ought-claims to counterfactuals runs the risk of violating the principle that if S ought to A and B, then S ought to A. (For a possible fix see the next footnote.)

14 An attachment to a knowledge-centric epistemology need not lead us in a forced march to KDN. KDN ranks actions according to their counterfactual outcomes. But insofar as the domain of possibilities relevant to a normative claim concerning an action type involves more than the closest world where the action type is performed, there will be natural modes of ordering that violate KDN. Imagine, for example, that S currently knows p, and that if she were to continue to believe p for her current reasons, she would continue to know p; however, if S were to stick to her guns with regard to p, her basis for believing p would, as it happens, change to poor reasons, such that she would no longer know p. If the domain of worlds getting ranked includes worlds where the basis does not change, ‘S ought to stick to her guns and not change her basis’ will be true, and (recall finite closure) so will ‘S ought to stick to her guns’, even though the counterfactual ‘If S were to stick to her guns she would maintain knowledge’ is false. One can improve on KDN with the following amendment: one ought to stick to one’s guns iff there is some way (where context provides the relevant space of ways of acting) of sticking to one’s guns such that were one to stick to one’s guns in that way one would maintain knowledge (and similarly for the other clauses). Take the person who believes p for bad reasons but has a good basis available. In a context in which the modal base includes worlds where the basis is switched, the claim ‘S ought to stick to her guns’ will come out true. In contexts where the subject’s bad basis is held fixed, the ought-claim will come out false. This kind of context-dependence is exactly what one should expect. We do not dispute the need for such a refinement of KDN, and perhaps no counterfactual formulation is in the end problem-free. But as our discussion does not turn on the letter of KDN, we take the liberty of ignoring the needed refinements in the main text.
Second, KDN says nothing about how S and H should respond in cases where their disagreement is a matter of divergent credences as opposed to conflicts of all-or-nothing belief. Despite the prevailing trend in the disagreement debate, our discussion will for the most part proceed without mention of credences. We find this a natural starting point; our pre-theoretic grip on the phenomenon of disagreement tends to be in terms of conflicts of all-or-nothing belief.
Third, note that compliance with KDN will not necessarily result in overall knowledge maximization. Suppose S disagrees with H about whether Jack and Jill went up the hill together. If H were to trust S, H would come to know that Jack and Jill indeed went up the hill together, but he would also abductively come to believe a cluster of false propositions based on the (false) hypothesis that Jack and Jill are having an affair. In short, KDN is locally consequentialist with respect to the telos of knowledge. Less local consequentialisms are of course also possible, and we shall return to this issue in due course.
Fourth, KDN’s gesture towards knowledge as epistemic telos can be unpacked in various ways, corresponding to different meta-epistemological views. On one gloss, the relevant ‘ought’ is bouletic/desire-based: what makes KDN true is that the ‘ought’ is grounded in an (actual or idealized) desire of the disagreeing parties to maintain or gain knowledge about the disputed issue. On another gloss, the relevant ‘ought’ is based on a ranking implicit in certain, say, social norms, thereby rendering the ‘ought’ in KDN a kind of deontic ‘ought’. On a more robustly realist tack, one might think the ‘ought’ of KDN is tied to a valuational structure that is desire- and social norm-transcendent. We shall not attempt to adjudicate between these different views here, remaining silent for the most part on questions of meta-epistemology.
Fifth, and finally, conditions (i) and (ii) of KDN will have less bite to the extent that disagreement has the effect of automatically defeating knowledge or automatically defeating the knowledge-transferring capacity of trust. Now, no one thinks that all instances of disagreement have these defeat-effects. For, in many cases of disagreement—in particular, where only one of the two disagreeing parties is an expert, or where one party possesses more evidence than the other—it is obviously true that one can continue to know in the face of disagreement, and that one can come to know by trusting one’s disagreeing interlocutor. For example, imagine that Tim believes no one is at home; he calls Ana on her mobile from his office, and expresses this belief. Ana disagrees—because she, in fact, is at home. Obviously Ana continues to know that someone (indeed, she) is at home, and Tim can come to know this himself by trusting Ana. While disagreement does not always destroy knowledge and the knowledge-transmitting power of trust, it is a live question whether it does so in a wide range of the more
interesting cases of disagreement. A vexed question, central to the disagreement debate, is whether knowledge is defeated in certain kinds of cases involving ‘peers’.15 Many favour a view on which knowledge is defeated in cases of peer disagreement. In general, the greater the number of cases in which disagreement defeats knowledge or the knowledge-transferring capacities of trust, the more cases of disagreement will be relegated to the auspices of (iii). That is, the more disagreement defeats knowledge or the knowledge-conferring power of trust, the more KDN will recommend suspending judgment. In the following discussion, we will assume for the most part that knowledge (and the knowledge-conferring power of trust) is left undefeated by disagreement. As a result, those who are sympathetic with defeatist views will find our descriptions of some cases of disagreement to be jarring or even incoherent. In due course, we shall discuss whether and to what extent various purported limitations of KDN can be overcome by taking the phenomenon of defeat more seriously.
4 The normative inadequacy of KDN
Many will already be unhappy with KDN. In particular, one might be worried about
cases in which one is simply not in a position to know what specifi c action—being dog-
matic, conciliatory, or suspending judgment—KDN recommends. For there are possi-
ble cases of disagreement in which one knows but fails to know that one knows, 16 and
cases in which one doesn’t know but isn’t in a position to know that one doesn’t know.
(And even more obviously, cases where one’s disagreeing interlocutor knows but one
does not know that he knows—indeed any case where one’s disagreeing interlocutor
knows will fit this profile at the time of disagreement since if one knew one knew at t
one wouldn’t be disagreeing at t ). In such cases, even if one knows that one ought to
conform to KDN, one is not in a position to know what specific action one should
undertake to achieve that conformity. We might say that in such cases, one is not in a
position to know what KDN ‘demands of one’. Imagine the following situation: Sally
and Harry are disagreeing about whether p . In fact, p is true, and Sally knows this, but she
isn’t in a position to know that she knows this. 17 Harry (falsely) believes not- p , and
(as very often happens), he is not in a position to know that he doesn’t know this. Imagine further that Sally can maintain her knowledge that p by being dogmatic and that
15 Typically what is meant by ‘peer’ in the disagreement literature is opaque, in part because the relevant
notion of evidence used in glosses on peerhood is not fully spelled out. For example, if knowledge is evidence and peers by definition possess the same (relevant) evidence, then disagreements in which one person
knows and the other does not are ipso facto not peer disagreements. We shall as far as possible avoid talk of
peerhood in our discussion, at least in part because we do not want to get involved in tendentious issues
about the nature of evidence. Note that KDN is meant to apply to all instances of disagreement, including
‘peer’ disagreements if there be any.
16 Defenders of the KK principle will deny that such cases are possible. For a general argument against
the KK principle, see Timothy Williamson, Knowledge and Its Limits (Oxford: Oxford University Press,
2000) .
17 Perhaps because a belief that she knows p , while true, would not be sufficiently safe for knowledge.
disagreement without transparency: some bleak thoughts 15
Harry can come to know p by trusting Sally. 18 Since neither party is in a position to
know the facts about knowledge relevant to KDN, neither party is in a position to know
precisely what action KDN demands of him or her. 19 To be somewhat more precise, we
might say that KDN is not perfectly operationalizable , where a norm N (of the form ‘S
ought to F in circumstances G’) is perfectly operationalizable iff , whenever one knows
N and is in G, one is in a position to engage in a piece of knowledgeable practical
reasoning 20 of the form:
(1) I am in circumstances G
(2) I ought to F in G
(3) I can F by A-ing
where A is a basic (mental or physical) act type that one knows how to perform. As the
case of Sally and Harry shows, KDN is not perfectly operationalizable. This is because
one is not always in a position to know whether one knows, and not always in a position
to know whether one’s interlocutor knows. In other words, the relevant kinds of knowledge-related conditions are non-transparent, where a condition C is transparent just in case, whenever it obtains, one is in a position to know whether it obtains. 21 Since knowledge is non-transparent, KDN is not perfectly operationalizable. 22
Prima facie, KDN’s imperfect operationalizability is troubling, and seems to count
against KDN. But how should we make sense of this intuition?
As a first pass, we might worry that its imperfect operationalizability makes KDN ineligible as advice. That is, while KDN might specify some ideal ranking of outcomes, it is not the kind of thing that can be offered to actual agents as a guide to action. For an
ought-claim to be a suitable vehicle of advice (this line of thinking goes), it must use an
ordering source whose application to candidate actions can be known. If the ‘ought’
relies on a mode of ranking such that one sometimes can’t know how a candidate action
is ranked, then that use of ‘ought’ is unsuitable for an advisory role.
18 Of course there is an extended sense in which, insofar as Harry is in a position to trust Sally, he is in a
position to come to know that his belief that not- p is not a case of knowledge. For once he comes to know
p he can deduce that his former belief that not- p is not a case of knowledge. ‘In a position to know’ is normally used with a narrower ambit. And in any case Harry is not in a position to know whether KDN
demands trust in advance of engaging in trust.
19 There is also the case where Harry believes not- p and knows he doesn’t know not- p but doesn’t know
whether Sally knows not- p . Here Harry is in a position to know that KDN recommends the cessation of
belief but not in a position to know whether KDN recommends trust or instead suspension.
20 By ‘knowledgeable’ practical reasoning we mean practical reasoning that involves knowing each of the
premises involved.
21 We borrow this terminology from Williamson (2000), ch. 4. Williamson defines a condition C as luminous just in case, whenever S is in C, she is in a position to know she is in C. Let us say that a condition C is absence-luminous just in case, whenever S is not in C, she is in a position to know she is not in C; and (following Williamson) that a condition C is transparent just in case it is both luminous and absence-luminous.
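Set out schematically (our notation, not the authors’; ‘K φ’ abbreviates ‘one is in a position to know that φ’), the three notions come to:

```latex
% Schematic rendering of the definitions in n. 21 (notation ours):
% K\varphi abbreviates `one is in a position to know that \varphi'.
\begin{align*}
C \text{ is \emph{luminous}} &\iff \text{whenever } C \text{ obtains, } K(C \text{ obtains})\\
C \text{ is \emph{absence-luminous}} &\iff \text{whenever } C \text{ does not obtain, } K(C \text{ does not obtain})\\
C \text{ is \emph{transparent}} &\iff C \text{ is both luminous and absence-luminous}
\end{align*}
```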
22 Of course a little more than the transparency of knowledge would be needed for knowledge of the relevant counterfactuals. Still we take it the most obvious and central obstacle to the perfect operationalizability of KDN is the non-transparency of knowledge.
How far does this worry take us? First, note that it often depends on conversational context whether, in proffering a bit of advice, one presupposes operationalizability. Suppose you are advising Jane on the giving of an award, and you say: ‘You ought to give the
award to the person who just walked through the door.’ Uttered in a typical context, this
presupposes that Jane knows (or is at least in a position to know) who just walked
through the door. But one could also reasonably advise Jane as follows: ‘You ought to
give the award to the most deserving person. I realise that it’s often difficult to tell who
the most deserving person is.’ Here, one is recommending that the award be given to the
most deserving person, but one by no means expects the recommendation to be operationalizable in the sense above. But so long as one does not falsely presuppose operationalizability, it is far from clear that there is anything ipso facto wrong about articulating an
imperfectly operationalizable norm as advice. After all, there can be instances in which
one can’t incorporate a norm in knowledgeable practical reasoning but nonetheless has
good evidence about what a norm recommends. Suppose Hanna gives you the advice:
‘You ought to put out as many chairs as there are guests.’ You have good evidence
that there will be six guests, but you don’t know this. Hanna’s advice is hardly improper
or useless, despite your not being able to incorporate it into knowledgeable practical
reasoning.
Indeed, even if offering a norm as advice presupposed a sort of operationalizability, this
is at most a constraint on advice at a context , not in general. That is, just because there are
cases in which KDN exhibits operationalizability-failures, this does not preclude it from
ever being useful as advice; it will count as advice in those contexts, at least, when it is
operationalizable. So while it is false that whenever we know, we know we know, it is
perfectly plausible that there are plenty of disagreement cases in which we both know
and know we know. In such cases, one might well know what KDN demands of one.
(Of course one will never know KDN demands trust in a situation in which one’s interlocutor knows p and one believes not- p and where such knowledge would be transmitted by trust—though insofar as one knows one doesn’t know p one will be in a position to know that KDN entails that one ought to stop believing p .) 23
If the conditions relevant to KDN were transparent, then every (rational) attempt to
conform to KDN would be successful. But since they are non-transparent, (rational)
attempts to conform to KDN might fail. For this reason KDN can easily fail to be good
advice because trying to follow it, or exhorting others to follow it, does not guarantee
conformity with it. To what extent do these mismatches between trying and succeeding
constitute a major deficiency of KDN? 24
The first thing to say here is that it is no count against KDN in particular that trying to comply with it does not always succeed. For this fate will likely afflict any norm. This is
23 Leaving aside recherché cases, if there be such, where one knows that one knows p and also believes
not- p .
24 Or we might think that this shows us that KDN is incomplete; to borrow some terminology from
ethics, we might say that KDN is a ‘criterion of right’, but that what is (also) needed is a ‘decision procedure’
that tells agents how to decide what to do. Thanks to Andreas Mogensen.
very obvious in, say, the case of act utilitarianism, since trying to be a good utilitarian
will (likely) result in less overall happiness than might have otherwise been enjoyed—
say, by following the norms of commonsense morality. But it is also plausibly true with
relatively more transparent norms. For example, suppose we had an ordering source that
ranked actions according to how much one felt like doing them, and thus a norm that
said that one ought to do what one most felt like doing. One is generally in a position to
know what one feels like doing. But of course one can be mistaken about what one feels
like doing, and one can be mistaken about how much one feels like doing something
compared to something else. Insofar as the facts about what and how much one feels like
doing things are non-transparent, then trying to do that which one most feels like doing
will not always be a guaranteed way of doing that which in fact ranks highest (even if
one always succeeds in doing what one tries to do). Since no plausible ordering source is
transparent (we return to this theme shortly), no plausible norm is such that trying to do
what one ought to do will guarantee in fact doing what one ought to do. 25
Of course, one could still ask: what general advice should we give about responding
to disagreement, given that we want to maximize conformity with KDN over some
extended period of time? Probably, exhorting people to conform to KDN isn’t the best
way of doing this. Indeed, the answer to this question will turn on a vast number of open
empirical questions about how people respond to disagreement and advice, relevant trade-offs between agents’ storage capacity and the compliance-generation of particular bits of advice, and so on. This computation becomes even more vexed when we consider the effects that certain bits of advice will have in the long run or across a range of
possible worlds. And it’s worth remembering that the answer to this question might
turn out to be very odd indeed. Given the oddities of human psychology, it might turn
out that we ought to, say, advise people to, inter alia, drink a glass of water or take a deep
breath when confronted by disagreement. In all likelihood, the correct answer to this
question is not going to resemble the kind of norms that are put forward by epistemologists—whether it be KDN or any other candidate disagreement norm.
In sum, the fact that trying to comply with KDN will not guarantee compliance with
KDN is itself of no especial concern, for it is plausibly true of any norm. If instead we
simply want to know what general advice we should disseminate with regard to disagreement, then we have switched over to an empirical question that, while perhaps
interesting, is well beyond the purview of the standard debate about disagreement.
Worries that turn on the advisory role of norms have not amounted to much. Here is
a second (and we think, better) pass at what is worrying about KDN: it severs a natural
tie between an agent’s evaluative status—how praiseworthy or blameworthy that agent
is—and facts about what the agent ought to do. The imperfect operationalizability of
KDN generates cases in which one does what one ought to do (according to KDN) but
is intuitively blameworthy for so doing, and conversely cases in which one does what one
25 Note also that if there are no transparent conditions, then no decision procedure will be perfectly operationalizable either. (See fn. 24.)
ought not to do (according to KDN) but is intuitively blameless for so doing. Consider
the following:
Clairvoyant Maud. 26 Maud is a clairvoyant, and uses her clairvoyance to come to know that the
British prime minister is in New York, though she doesn’t know that she knows this. Her friends,
who are members of Parliament and therefore usually know the whereabouts of the prime
minister, assure her that the prime minister is in fact at 10 Downing Street. Maud, moreover,
doesn’t even believe she is clairvoyant, as she has been exposed to plenty of evidence that suggests that clairvoyance is impossible. Nonetheless, Maud dismisses her friends and continues to believe that the prime minister is in New York.
Let us stipulate that it is possible to gain knowledge through clairvoyance, and that
although Maud’s evidence that clairvoyance is impossible means that she isn’t in a position to know that she knows that the prime minister is in New York, she nonetheless
does know his location. 27 Then Maud, in being dogmatic, conforms to KDN; if she were
instead to be conciliatory in the face of the disagreement, she would lose her knowledge
that the prime minister is in New York. Nonetheless, it seems that Maud is doing something epistemically irresponsible by being dogmatic. We feel a strong intuitive pull
towards the judgment that Maud is doing what she ought not do, for she is maintaining
a belief even when she has overwhelming (albeit misleading) evidence that she isn’t
clairvoyant, and thus doesn’t know the disputed proposition. We can’t help thinking that
Maud is playing with epistemic fire, exhibiting poor habits of mind that just happen, in this rare case, to serve her well. Thus, KDN allows for instances of what we might call ‘blameworthy right-doing’: that is, cases in which S intuitively does something blameworthy, though according to KDN she does what she ought to do.
Cases of blameworthy right-doing can also be generated in instances where one trusts
a disagreeing interlocutor whom one has (misleading) reason to believe doesn’t know.
For example, imagine that one of Maud’s friends, John, despite his evidence that Maud is
not clairvoyant and thus doesn’t know that the prime minister is in New York, trusts Maud
and comes to believe that the prime minister is in New York. Here, John is doing what he ought to do—assuming that
the knowledge-transmitting capacity of trust is not defeated by the disagreement—though
it seems, intuitively, that he is blameworthy for doing so. In both of these instances of
blameworthy right-doing, the agents conform to KDN while not being in a position to
know that they are doing what they ought to do. Generally, these instances of blameworthy right-doing are instances where S conforms to KDN but where it is likely on S’s
evidence that she doesn’t so conform. Let’s call this evidential blameworthy right-doing.
26 This case is adapted from Laurence BonJour, ‘Externalist Theories of Empirical Knowledge’, Midwest
Studies in Philosophy , 5 (1980), 53-73. BonJour uses this case to argue against a simple reliabilist theory of
knowledge, on which S knows p just in case p and S uses a reliable method to believe p . BonJour concludes
from the Clairvoyant Maud case that having overwhelming but misleading evidence that one’s method isn’t
reliable defeats knowledge. In response to BonJour’s case, many externalists embrace a view on which justification/knowledge can be defeated by misleading evidence. We discuss this ‘defeatist’ view shortly.
27 In other words, her knowledge that the prime minister is in New York is undefeated by the evidence against
clairvoyance.
Arguably, there might also be instances of blameworthy right-doing where one knows
one is conforming to KDN. For example, consider the following case:
Bridge Builder . Simon is an expert bridge engineer. He is constructing a bridge to span a large
river, which thousands of commuters will cross each day. Simon has done the relevant calculations, and knows precisely how many struts are required to hold up the bridge. Simon’s colleague, Arthur, a more junior but competent engineer, disagrees with Simon’s assessment, saying
that more struts are required.
Let us stipulate that Simon not only knows how many struts are required, but also
knows that he knows this. Arthur, while almost always right himself, makes mistakes on
a few more occasions, and Simon knows this. According to KDN, Simon should dismiss Arthur and be dogmatic about the number of struts required. Indeed, Simon is in
a position to know that he should do this, since ( ex hypothesi ) he not only knows how
many struts are required, but moreover knows that he knows. Nonetheless, if Simon
were to simply dismiss Arthur, we would likely feel that this would be problematic.
Why is this?
The problem isn’t that for all Simon knows Arthur might be right, since Simon, ex
hypothesi , knows Arthur is wrong. And the problem with dismissing Arthur can’t be that
for all Simon knows, it will turn out upon consulting with Arthur that Simon doesn’t
now know after all. If, as we are supposing in the present discussion, disagreement doesn’t
defeat knowledge, then knowing one knows renders no such eventuality live. 28 What seems problematic about Simon’s dismissal of Arthur is that Simon is instilling in himself a bad habit—that is, a habit of boldly going on even in the face of disagreement, a habit that might easily lead him to disastrous consequences. Our nervousness
about Simon’s dogmatism, we would like to suggest, turns on our recognition that if
Simon were in a case where he in fact didn’t know how many struts were required, the
habit he is instilling in himself in the case where he does know might easily lead him to
act similarly dogmatically, thus building an unsafe bridge and threatening the lives of
thousands. Of course, if Simon were always in a position to know when he didn’t know,
there would be no such risk. That is, if Simon could always perfectly distinguish between
cases in which he knows and doesn’t know, the habit he is instilling in himself would be
fi ne. But since there are not unlikely eventualities in which Simon isn’t in a position to
know that he doesn’t know—again, because knowledge is non-transparent—the habit
he is instilling in himself by dismissing Arthur is problematic. Human beings are not
creatures for whom the absence of knowledge is generally luminous; as such, it is simply
not possible for humans to be dogmatic in cases where they know and not also be dogmatic in cases where they falsely believe they know. That is, unless they are so selectively
dogmatic in cases where they know they know that superfi cially similar situations do
28 A defeat-friendly approach brings its own oddities. If Simon knows that he knows, and knows that
disagreement will automatically destroy his knowledge, then Simon seems to have excellent reason not to
consult with Arthur in the first place.
not arise. 29 We might call the kind of blameworthy right-doing displayed by Simon
habitual . 30
Conversely, KDN also results in cases of ‘blameless wrongdoing’, cases in which S
intuitively does something that is not blameworthy, but where she does what (according
to KDN) she ought not to do. For example, if Clairvoyant Maud were conciliatory in
the face of the disagreement with her high-powered friends, or if Bridge Builder Simon
were to doubt himself upon learning that Arthur disagreed with him, they would be
doing something intuitively praiseworthy, or at least blameless. But according to KDN,
they would be failing to do what they ought to do.
Let us call the fact that KDN generates instances of both blameworthy right-doing
and, conversely, virtuous wrongdoing, the problem of normative divergence . There is a
clear moral analogue to this problem. Take the following case:
Grenade . A soldier is holding a grenade that is about to detonate, and he must decide to throw it
either to his left or to his right.
Let’s assume that act consequentialism is the correct moral theory (or at least, more
plausibly, that it is the correct moral theory with respect to Grenade). Then we might say
that what the soldier ought to do is to conform to the following norm:
Consequentialist Norm (CN): If S is faced with the choice of doing only either A or B, S ought
to do A if it would produce less harm than doing B, ought to do B if it would produce less harm
than doing A, and is permitted to do either if A and B would produce equal harm.
Imagine that the soldier in Grenade has misleading evidence that more harm will be
done if he throws the grenade to the right. If he throws the grenade to the right, then he
does (according to CN) what he ought not to have done, for he performed the action
that resulted in greater harm. Nonetheless, he is obviously not blameworthy for doing
what he does. This is an instance of blameless wrongdoing. Now suppose instead the
soldier throws the grenade to the left, because he wants to maximize the possible harm of
his action. In fact, his action minimizes the actual harm done; nonetheless, we certainly
don’t want to say that his action was praiseworthy . As such, the claim that (as CN entails)
the soldier ought to throw the grenade to the left does not supply the grounds for
appropriate normative evaluation of the soldier’s actions. Both KDN and CN, then, suffer from the problem of normative divergence. That is, both link ‘ought’ to an ordering
source that implies that there is no straightforward tie between what agents ought to do
and the evaluative status of their actions or their character.
This, we take it, is what is most deeply troubling about KDN: it fails to secure a naturally hoped-for tie between what agents ought to do and agents’ evaluative status. To
29 Suppose, for example, that while one often knows that one knows the result of an arithmetical sum,
one sticks to one’s guns only in cases where one has triple checked.
30 Structurally similar issues arise when it comes to double-checking one’s answer. Suppose Simon knows
that he knows but is someone who would not know that he does not know in reasonably similar calculation
situations. In that case, and especially when a lot is at stake, we might find it praiseworthy were he to suspend
his belief only to reinstate it once he has double-checked his calculations.
accommodate this worry, one might try to modify KDN in one of two ways. One might
think that, in addition to the ‘ought’ associated with KDN, we require a distinct kind of
ranking of epistemic acts (yielding a distinct but associated use of ‘ought’), a ranking
that is directly tied not to the value of various outcomes but rather to the level of praiseworthiness/blameworthiness of the agent’s action , considered as an attempt to achieve those outcomes. This modification would require two senses of ‘ought’ (an ‘objective
ought’ and a ‘subjective ought’) as it applies to disagreement cases. Let us call this the
two-state solution. However, it might be thought that the bifurcation involved in the two-state solution creates an unacceptable rift between knowledge and epistemically virtuous conduct, and in particular, that this rift drains knowledge of its normative status.
So, one might attempt to tinker with the conditions on knowledge such that the two
phenomena—conforming to KDN and doing what is epistemically praiseworthy—
line up. Note, after all, that in the most obvious cases of what we labelled ‘blameworthy
right-doing’ and ‘blameless wrongdoing’, there was a mismatch between which act
would in fact retain/produce knowledge, and which action was likely on the evidence to
retain/produce knowledge. If, for example, we allowed Maud’s evidence that she does
not know to preclude her knowing—hence treating the original description of
Clairvoyant Maud as incoherent—then those kinds of putative cases of blameworthy
right-doing would be ruled out. Let us call such a solution, on which epistemically
blameworthy behaviour is incompatible with knowledge, the defeatist solution . Indeed,
the desire to have an epistemology that captures our intuitions about epistemic blameworthiness seems to be a major motivation for standard defeatist views. Such views offer
prima facie hope of making knowledge incompatible with non-virtuous epistemic
conduct. By accepting such a view, we might hope to rescue KDN from the problem of
normative divergence. We will discuss the defeatist solution in the next section, and the
two-state solution in the one following.
5 Defeatism
How far does defeatism take us in overcoming normative divergence? Even if knowledge were incompatible with its being likely on one’s evidence that one does not know, this will not suffice to collapse an outcome-driven ‘ought’ and an evaluation-driven ‘ought’. For on any plausible account, the absence of knowledge is compatible with the
likelihood of its presence. Suppose S does not know p but has plenty of evidence that she
does, is confronted with disagreement, and sticks to her guns. In so doing, S is in violation of KDN, but intuitively blameless; this is an instance of blameless wrongdoing. At
most, then, a defeatist solution will do away with certain instances of blameworthy
right-doing (e.g. Clairvoyant Maud).
How well might a defeatist view deal with blameworthy right-doing? Recall that
the phenomenon of blameworthy right-doing divided into two sorts: evidential and
habitual . In an instance of evidential blameworthy right-doing (e.g. Clairvoyant Maud),
S conforms to KDN though it is likely on her evidence that she is in violation of it.
In an instance of habitual blameworthy right-doing (e.g. Bridge Builder), S conforms
to KDN, and it is likely on her evidence that she is conforming to it, but nonetheless
she does something epistemically blameworthy by inculcating in herself a dangerously
dogmatic habit.
Let us take habitual blameworthy right-doing first. In these cases, S knows p and
knows that she knows p , but in conforming to KDN inculcates in herself a habit that
makes it likely that she will be dogmatic in similar cases in which she does not in fact
know. According to the defeatist solution, on which blameworthy epistemic behaviour
is incompatible with knowledge, such cases are impossible. Is this a plausible response?
To get a better grip on such cases, let us take a practical analogy. Suppose two tennis
players are each in situations where they know they know they can’t reach the ball in
time. One player—call him Andi—gives up. Another, call him Raphael, chases the ball
though (unsurprisingly) fails to get to it in time. Andi might deride Raphael as a pathetic figure, someone who gives chase while knowing the chase is futile. But we can see that
because the absence of knowledge isn’t luminous, Andi risks turning himself into a
player who fails to chase the ball when he might, in fact, reach it in time. For cases will
arise sooner or later where Raphael believes he knows he won’t reach the ball, but in fact
doesn’t know this. In such cases, thanks to the habit he has inculcated in himself, Raphael will sometimes end up reaching the ball. On the other hand, Andi doesn’t chase the
ball both in cases where he knows that he won’t reach it, and in those cases in which
he falsely takes himself to know that he won’t reach it. In this way, Andi—in failing to
chase the ball even when he knows he won’t reach it—inculcates in himself an undesirable habit. While we thus might all agree that there is something untoward about Andi’s
behaviour on the court, it seems very odd to think, as a defeatist view suggests, that his
giving up in the good case costs him knowledge. Similarly, there is something untoward
about dogmatism. But it is similarly odd to think that untoward dogmatism costs
knowledge.
Consider a version of Bridge Builder in which Arthur does not disagree with Simon,
but instead voices a much milder protest. Suppose Simon calculates the number of struts
required and comes to know that he knows that twelve struts are needed. Suppose then
Arthur expresses a little epistemic apprehension: ‘Perhaps you should double check. It
would be really awful if you made a mistake.’ Simon dogmatically presses forward and
brushes off Arthur’s concerns. Here too we feel that Simon is getting into bad habits—
but it would be rather far-fetched to suppose that Arthur’s apprehension serves to defeat
Simon’s knowledge that he knows. 31 Plausibly, dangerous dogmatic habits do not generally cost one the ability to know.
31 There is a contextualist model of what is going on here with which we won’t engage: Arthur’s apprehension puts Simon in a context where the relation he now expresses by ‘knows’ is one in which he does not stand to the fact that twelve struts are required, even if he continues to stand in the relation that he previously expressed by ‘knows’ to that fact (and even, perhaps, continues to stand in the relation to the fact consisting of his standing in that relation to the fact).
Now it might be thought that there is a crucial difference between the case in which Arthur merely expresses apprehension and in which Arthur actually voices a contradictory opinion. For on some tempting probabilistic models, the fact of Arthur’s disagreement makes it no longer likely from Simon’s perspective that twelve struts are required,
thereby costing Simon his original knowledge. How so? Suppose, as seems natural, that
for S to know p , p must have a suitably high probability on S’s evidence. And suppose
that prior to hearing Arthur’s opinion, Simon’s epistemic probability in the fact that the
bridge requires twelve struts was suffi ciently high for Simon to know it. Assuming that
Simon knows that Arthur has a very good track record, the epistemic probability (for
Simon) that Arthur will agree with Simon is high. But it is easy to have the intuition that
the probability of the fact that twelve struts are required, conditional on Arthur’s disagree-
ing with Simon, is not high enough for knowledge. 32 , 33 But note that models like this
are of no use if one thinks of knowing p as suffi cient for p ’s being part of one’s body of
evidence. For insofar as Simon’s total body of evidence includes the fact that twelve struts
are required, he will hardly be able to conditionalize his way to a less than high probabil-
ity in this fact. It is worth underscoring the oddness of leaving out the relevant piece of
knowledge from Simon’s total body of evidence. As we naturally think about the case,
we take all sorts of other bits of knowledge that Simon has as suffi cient for rendering
various facts part of his evidence. If, for example, we think of Simon as knowing various
facts about Arthur’s track record, we are ipso facto inclined to count those facts as part of
what Simon has to go on. Leaving out the bridge-building facts that he knows from his
body of evidence thus might seem somewhat ad hoc. At the very least, probabilistic
considerations need not force us to accept a defeatist view on which habitual blameworthy right-doing in disagreements such as this is an incoherent phenomenon. Indeed,
from a perspective according to which knowing p is suffi cient for p ’s being part of one’s
evidence, such a defeatist view is implausible.
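For concreteness, the conditionalization at work in the tempting probabilistic model can be rendered as a toy Bayesian calculation. All of the numbers below are our own illustrative stipulations, not anything fixed by the case:

```python
from fractions import Fraction as F

# Illustrative stipulations (all numbers hypothetical): Simon's prior in
# "twelve struts are required" is high enough for knowledge, and
# reliable Arthur rarely disagrees when Simon is right.
prior_twelve = F(95, 100)         # P(twelve struts required)
p_disagree_if_right = F(5, 100)   # P(Arthur disagrees | twelve required)
p_disagree_if_wrong = F(90, 100)  # P(Arthur disagrees | not required)

p_disagree = (prior_twelve * p_disagree_if_right
              + (1 - prior_twelve) * p_disagree_if_wrong)

# Bayes' theorem: P(twelve required | Arthur disagrees)
posterior = prior_twelve * p_disagree_if_right / p_disagree
print(float(posterior))  # ≈ 0.51, plausibly too low for knowledge
```

On these stipulated numbers the posterior drops to roughly 0.51, which is the sense in which Arthur’s disagreement could cost Simon his knowledge. The point of the surrounding paragraph is that the calculation cannot even get going if the fact that twelve struts are required is itself part of the evidence conditionalized upon.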
What of the phenomenon of evidential blameworthy right-doing? In such cases, S conforms to KDN although it is likely on her evidence that she isn’t so conforming. According to a defeatist view on which blameworthy epistemic conduct is incompatible with knowledge, such cases cannot arise. To flesh out this kind of view, fans of defeat often argue that something like the following is true:

Evidence-Bridge Principle (EBP): If it is likely on S’s evidence that S doesn’t know p, then S doesn’t know p.
32 As many have noted, such models face difficulties in dealing with disagreements about mathematics and logic, since standard probabilistic models assign probability 1 to all mathematical and logical truths. This precludes disagreement having any potential to lower probabilities via conditionalization. The challenge of making good on some notion of non-idealized probabilities is not an easy one to meet. We return to this theme briefly in the next section.
33 Note that even if Simon’s track record is known to be a bit better than Arthur’s, that will not help much (vis-à-vis a setting where Arthur is considered a peer). Suppose Simon’s conditional probability of his being right conditional on a disagreement is 0.6, of Arthur being right 0.4. That would still, on this model, give disagreement a knowledge-destroying effect (assuming 0.6 is too low for knowledge). Thus, on such a model, epistemic peerhood is not what is crucial.
24 john hawthorne and amia srinivasan
EBP will not be tempting for defenders of multi-premise closure concerning knowledge. By multi-premise closure, if one knows each of a number of premises and deduces their conjunction, then one knows the conjunction. But suppose each conjunct is such that one does not know that one knows it and, indeed, that for each premise there is some non-negligible likelihood that one does not know it. Then it will be easy to flesh out the case such that it is likely on one’s evidence that one does not know the conclusion. (Except in special cases where the risks of failing to know each conjunct are highly interdependent, the risks will add up to a large risk given enough conjuncts.)
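The parenthetical point about accumulating risk is easy to make vivid numerically. The 0.05 per-premise risk below is our own illustrative figure, and independence between the risks is assumed:

```python
# Toy calculation (the 0.05 figure is ours): each of k premises carries
# an independent 0.05 risk of not being known. The probability that
# every conjunct is known -- a precondition for knowing the deduced
# conjunction -- shrinks geometrically with k.
risk_per_premise = 0.05
for k in (1, 10, 50):
    p_all_known = (1 - risk_per_premise) ** k
    print(k, round(p_all_known, 3))  # 1 -> 0.95, 10 -> 0.599, 50 -> 0.077
```

With fifty conjuncts, it is already likely on one’s evidence that at least one conjunct, and hence the conjunction, is not known, just as the text says.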
Of more general interest here is Williamson’s recent persuasive case against any such bridge principle, one that shows its incompatibility with natural ways of thinking about margins of error.34 Imagine you are looking at a pointer on a dial. Given the distance you are from the dial, the particular light conditions, and so on, there exists some margin of error n such that there is some strongest proposition p you are in a position to know of the form the pointer is plus or minus n degrees from point x, where x is the actual position of the pointer. If you were to believe, say, that the pointer is plus or minus n-1 degrees from point x, you would not in fact know this proposition. Suppose, on this particular occasion, the strongest proposition you know about the position of the pointer is the proposition p, that the pointer is within range Q. That is, for all you know, the pointer is anywhere within the range Q, a range which has position x, the actual position, at its centre. Now, note that nearly all of the positions within Q preclude knowing p. If, say, the position of the pointer were closer to the edge of Q than point x, then one’s margin for error would preclude knowing p. So it is very unlikely, relative to the propositions that you know (including p itself), that you know p.35, 36, 37
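A crude numerical rendering of the point may help. The model below is our own simplification, not Williamson’s formal treatment, and the particular margin of five degrees is stipulated:

```python
# Simplified model (ours, not Williamson's): the margin of error is n,
# so from the actual position x one knows p = "the pointer is within
# n of x", i.e. within Q = [x - n, x + n]. From a candidate position y,
# p is known only if y's own margin [y - n, y + n] lies inside Q --
# which, in this simple model, forces y = x, the exact centre.
n, x = 5.0, 0.0
Q = (x - n, x + n)

def knows_p_at(y):
    # p is known at y iff everything within y's margin of error
    # still falls inside Q
    return Q[0] <= y - n and y + n <= Q[1]

# Sample positions evenly across Q: nearly all of them preclude
# knowing p, so it is very unlikely, relative to what one knows,
# that one knows p.
samples = [Q[0] + i * (Q[1] - Q[0]) / 1000 for i in range(1001)]
frac_knowing = sum(knows_p_at(y) for y in samples) / len(samples)
print(frac_knowing)  # ≈ 0.001: only the exact centre qualifies
```

In this stripped-down model only the centre position is compatible with knowing p, which exaggerates, but does not distort, the structure of the argument: nearly all positions in Q preclude knowing p.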
The general upshot of Williamson’s argument, we take it, is the following. Defeatism
can be helpfully thought of as a view on which knowledge is what we might call a
‘minimally luminous’ state. A minimally luminous state is one such that whenever one is
in it, it is not the case that it’s unlikely on one’s evidence that one is in it. But Williamson’s
argument suggests that, given some plausible assumptions about margins of error,
knowledge is not even minimally luminous.38 If one clings onto EBP in the face of such
cases it seems that one will be forced to deny that there is any such thing as the strongest
proposition one knows, which in turn seems to generate a forced march to a sceptical
34 Timothy Williamson, ‘Improbable Knowing’ in Trent Dougherty (ed.) Evidentialism and Its Discontents (Oxford: Oxford University Press, 2011), ch. 9.
35 Note that the argument makes no particular assumptions about how much you know about the position of the pointer, beyond the anti-sceptical assumption that in such a case one knows something about the position of the pointer by visual inspection.
36 We also note in passing that the models that Williamson describes wherein one knows even though it is likely on one’s evidence that one does not know can easily be extended to describe cases where one knows even though one knows that it is likely on one’s evidence that one does not know.
37 As Williamson notes, the argument will need to be complicated a bit further to account for ‘inexactness in our knowledge of the width of the margin for error’ (‘Improbable Knowing’, 155). We shall not go through the argument that incorporates variable margins here.
38 Williamson’s argument, like his anti-luminosity argument, generalizes to all non-trivial conditions. We
focus here on knowledge so as to intersect with the debate about defeat.
conclusion that one knows nothing about the relevant subject matter. So we are faced with a choice between abandoning evidence- or justification-bridge principles on one hand, and embracing radical scepticism on the other.39 Assuming Williamson is right, then not only can one not eliminate the case of habitual blameworthy right-doing using defeat considerations; non-transparency means that one cannot eliminate the case of evidential blameworthy right-doing either.
We certainly do not take ourselves to have shown that epistemologists are misguided in positing the phenomenon of defeat.40 But we do take ourselves to have made trouble for one of the plausible motivations for defeatist views—namely, that of trying to align knowledge with epistemic virtue. One obviously cannot eliminate the myriad cases of blameless wrongdoing using defeat considerations. But of more interest is that one cannot plausibly eliminate the paradigmatic cases of blameworthy right-doing, either.41
6 The two-state solution
There is certainly some intuitive pull to the thought that, in addition to an ‘ought’ governed by outcomes, we need an ‘ought’ that is tied to praiseworthy epistemic conduct—just as it is natural to draw an analogous distinction in the moral realm between what one objectively ought to do and what one subjectively ought to do. That said, the praise-connected ‘ought’ is rather more elusive than it initially seems. In this section we explain why. Again, our explanation will turn on considerations of non-transparency.
A natural first pass on the ‘subjective’ epistemic ought will mimic its standard analogue in the moral sphere: one ought to do that which has greatest expected epistemic utility. Suppose that there exists some fixed scale of epistemic utility in the disagreement situation that is KDN-inspired.42 Then, the idea goes, the praiseworthiness of epistemic conduct is given by this scale in combination with the facts about the subject’s particular epistemic situation. Such an ‘ought’ can in principle sit perfectly well alongside an ‘ought’ whose ordering source is generated by a ranking of outcomes. The introduction of this subjective ‘ought’ is thus not an objection to KDN; instead, it is supplementary structure designed to remedy those ways in which KDN is silent. One might also argue that this subjective ‘ought’ is truer to the context in which philosophers or the folk raise
39 Similar arguments can be formulated against analogous bridge principles, for example, ones that rely
on safety or reliability as the salient notion.
40 For a general anti-defeat case see Maria Lasonen-Aarnio, Indefeasible Knowledge (DPhil dissertation, University of Oxford, 2009) and ‘Unreasonable Knowledge’, Philosophical Perspectives 24 (2010). The present discussion is significantly influenced by those writings.
41 These considerations also raise the question of whether continuing to believe when it’s likely on one’s evidence that one does not know is even intuitively blameworthy in general. After all, it is far from clear that we find the pointer-looker intuitively blameworthy for believing the strongest proposition that she knows.
42 It can be at best inspired and not determined by KDN. To fix a scale of this sort will require far more than a ranking that, say, puts knowledge on top, withheld belief second, knowledge-less true belief third, and knowledge-less false belief fourth. A full valuational structure would both rank outcomes and provide valuational distances between them.
questions about how to deal with disagreement, and as such is a better answer to the
question with which we began.
That said, the ‘subjective’ epistemic ought might fail to capture our intuitive rankings of epistemic praiseworthiness. Here are three reasons why. First, it is easy to imagine cases in which one is not in a position to know of the presence or absence of a piece of evidence: that is, cases in which one’s evidence is non-transparent. This is especially obvious if one’s evidence is that which one knows: if one knows p but is not in a position to know that one knows p, then one is not in a position to know that p is part of one’s evidence. Conversely, if one doesn’t know p but is in no position to know that one does not know p, then one is not in a position to know that p is not part of one’s evidence. This possibility will again generate instances of blameworthy right-doing and blameless wrongdoing. Suppose, for example, that Sally justifiably takes herself to know certain false propositions about Harry’s reliability; she has lots of misleading evidence that Harry is extremely reliable. Relative to everything that Sally justifiably takes herself to know, the course of action with maximum expected epistemic utility is for her to trust Harry, but relative to what Sally in fact knows, the course of action with maximum expected epistemic utility is dogmatism. Using the subjective ‘ought’—that is, the ‘ought’ tied to expected epistemic utility—we can say that Sally ought to be dogmatic. However, intuitively, if Sally were instead to trust Harry, she would be blameless for so doing. Thus, an ‘ought’ tied to expected epistemic utility will not necessarily match our intuitions about praiseworthy and blameworthy epistemic conduct.
This result is unsurprising, of course, given a conception of evidence on which knowing p is necessary and sufficient for p’s being part of one’s evidence. Knowledge is non-transparent, and non-transparency gives rise to instances of normative divergence. Thus, one might reasonably think that we are working here with the wrong conception of evidence, a conception that is unsuited to our intuitions about praiseworthy and blameworthy epistemic conduct. However, this problem plausibly cannot be avoided by switching to a different notion of evidence. For assuming that no non-trivial condition is such that either its absence or presence is luminous,43 there will always be the possibility of a mismatch between one’s evidence and what one is in a position to know about one’s evidence.
A tempting thought here is that the subjective ‘ought’ is a measure of what is best by one’s own lights. But that thought becomes less tempting once we realize that whatever our gloss on ‘our lights’, there will plausibly be cases in which agents are justifiably mistaken about their own lights. In that case the phenomenon that gave rise to blameworthy right-doing and blameless wrongdoing with respect to the ‘objective’ ought—namely,
43 For a persuasive case against the luminosity of any non-trivial condition, see Williamson (Knowledge and Its Limits), ch. 4. One can also generate less nuanced (but perhaps equally convincing) arguments for this conclusion. For example, consider the condition of being in pain, which seems to be as good a candidate as any for a luminous condition. It seems highly plausible that with sufficient psychological priming, one may inevitably mistake a pain for an itch or a sensation of intense cold.
a mismatch between the facts pertinent to what one ought to do and what one takes
those facts to be—re-emerges for the ‘by one’s lights’ ought. In short, if we introduce an
‘ought’ tied to expected epistemic utility, then the phenomena of blameworthy right-
doing and blameless wrongdoing will still arise relative to that ‘ought’, again because of
the non-transparency of evidence.
A second potential source of limitation in the expected epistemic utility model concerns mismatches between the actual probabilistic connections and what one justifiably takes them to be. Suppose, for example, that one is deciding whether to trust someone about a certain proposition that is in fact a complex theorem of classical logic. If epistemic probabilities are standard, at least to the extent that all logical truths are assigned probability 1, then the facts of the disagreement will be probabilistically irrelevant. The proposition will have probability 1 relative to all facts, and the expected epistemic utility of trusting one’s interlocutor will be calculated accordingly. It is obvious enough, then, that any such conception of probability will induce intuitively compelling cases of blameless wrongdoing and blameworthy right-doing. But it is not obvious that we can contrive a non-idealized notion of probability that will provide a more satisfying gauge of praiseworthiness and blameworthiness.44 Note that even if the operative notion of probability were subjective probability, that will not avoid the general worry, since there is no reason to expect that subjective probabilities are themselves luminous. This is especially clear if subjective probabilities are a matter of one’s dispositions over all bets, since there is no guarantee that one is in a position to know one’s own dispositions. But even if one thinks of assigning a subjective probability to a proposition as more akin to, say, feeling cold than being disposed to bet, anti-luminosity arguments for feeling cold will still apply.
Third and finally, the expected epistemic utility conception also fails to take account of the habitual considerations advanced earlier. What those considerations seem to indicate is that there are cases in which one knows that a certain action has greatest expected epistemic utility (using a KDN-inspired scale), but in which one is, nonetheless, blameworthy for doing it and blameless for not doing it. Ex hypothesi, Bridge Builder Simon knows that he knows that twelve struts are required, and hence (on very plausible assumptions) will be in a position to know that sticking to his guns has maximum expected epistemic utility (at least assuming that knowing p is sufficient for p to count as
44 Note that in general one source of instability in our practices of praising and blaming others is the extent to which we take badly formed beliefs as nevertheless providing an excuse for some course of action. Consider the case of Frank, who is deciding how to respond to a disagreement about whether-p. Frank’s disagreeing interlocutor knows not-p and Frank’s evidence provides strong support for the hypothesis that the interlocutor knows not-p. But Frank is in the grip of a bad argument whose conclusion is that the disagreement is sufficient to preclude his interlocutor’s knowing. Should we say that Frank ought to ignore the bad argument and go ahead and trust the interlocutor? Or should we say that Frank ought not to trust the interlocutor given the (misguided) argument he finds persuasive? Similar issues arise when contriving a ‘subjective’ ought for people with misguided moral convictions. See Gideon Rosen, ‘Skepticism About Moral Responsibility’, Philosophical Perspectives 18 (2004): 295–313; see also Michael J. Zimmerman, ‘Moral Responsibility and Ignorance’, Ethics 107 (1997): 410–28.
evidence).45 But as we have said, it is easy to get into a frame of mind where, despite
these facts, we think of his sticking to his guns as a worse course of action than, say,
shifting to agnosticism.
Note that habitual distinctions can even make a difference to expected epistemic utilities once we shift from a KDN-inspired scale to one that looks at the longer term. Even if one thinks of knowledge as the hallmark of epistemic success, one might grade an action such as sticking to one’s guns by long-run consequences for knowledge rather than by the very short-term consequentialism encoded by KDN. Similarly, one could have a subjective ‘ought’ that was geared to longer-term consequences. Consider a situation where S knows that she knows, but also knows that by being dogmatic now she will make it likely that she will dogmatically cling to false beliefs in the future. Here, being dogmatic will straightforwardly have maximum expected utility for S according to a KDN-inspired scale, but may well not have maximum expected utility when a longer-term scale is in play.
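The contrast between the two scales can be sketched in a toy two-period model. This is entirely our own construction; the probabilities and utilities are stipulated for illustration:

```python
# Toy two-period model (ours): dogmatism now secures knowledge this
# period but, by inculcating a bad habit, raises the chance of
# clinging to a false belief next period.
U = {"knowledge": 3, "withhold": 2, "false_belief": 0}

def short_run(action):
    # KDN-inspired, single-period grading: S knows that she knows,
    # so dogmatism yields knowledge now.
    return U["knowledge"] if action == "dogmatic" else U["withhold"]

def long_run(action, p_future_false=0.8):
    now = short_run(action)
    if action == "dogmatic":
        future = ((1 - p_future_false) * U["knowledge"]
                  + p_future_false * U["false_belief"])
    else:
        future = U["knowledge"]  # the better habit preserves future knowledge
    return now + future

print(short_run("dogmatic") > short_run("withhold"))  # True on the short-term scale
print(long_run("dogmatic") > long_run("withhold"))    # False on the longer-term scale
```

On these stipulations the rankings of the two scales come apart: dogmatism wins on the KDN-inspired scale and loses once next period’s likely false belief is counted.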
What we have seen so far is that a two-state solution that uses KDN-inspired expected
epistemic utility as the ground for the so-called subjective ‘ought’ does not do what it is
designed to do, namely generate an ‘ought’ that aligns with our intuitive judgments
about praise and blame. This is because non-transparency is inescapable: as we attempt to
index our ‘ought’ to more stably accessible or ‘subjective’ conditions, we come up against
possible cases in which agents fail to be in a position to know whether those conditions
obtain—and thus fail to be in a position where they are blameworthy for failing to do
what they ought to do, or vice versa.
7 Disagreement, despair, and beyond
We have suggested that those of us who hope for a general and intuitively satisfying answer to the question that is at the centre of the disagreement debate—namely, what we ought to do, epistemically speaking, when faced with disagreement—might be hoping in vain. There are deep structural reasons why such an answer has proven, and will continue to prove, elusive. Intuitively, we expect epistemic norms to be normatively satisfying: that is, we expect them to track our intuitions about blameworthy and praiseworthy epistemic conduct. An epistemic norm that ties what one ought to do to a non-transparent condition (e.g. knowledge) is an epistemic norm that will not satisfy this basic desideratum. To construct an epistemic norm that is normatively satisfying, then, we require an epistemic ‘ought’ that is tied to only transparent conditions; unfortunately, no such conditions plausibly exist. As such, the hope of finding a normatively satisfying answer to the disagreement question seems like a hope unlikely to be satisfied.
45 We say ‘on plausible assumptions’ because, for example, one might know that one knows p in a disagreement situation and yet conceivably not know whether, if one sticks to one’s guns, one will suddenly start believing p for bad reasons that are not one’s current basis for believing p.
Where does this leave us? One kind of reaction to all this is to despair of any cogent
treatment of non-ideal cases. For example, we might say that expected epistemic utility
works as a measure for epistemic praise and blame for creatures for whom evidence and
probabilistic connections are completely transparent.46 (Habitual wrongdoing can’t
arise for such creatures, since that phenomenon requires the non-luminosity of the
absence of knowledge.) And we might contend that for non-ideal creatures there is no
stable measure of epistemic praise and blame, and that associated ‘ought’ claims are not
ultimately coherent. Since we are squarely non-ideal creatures, we think this is despair
indeed.47
Another kind of response finds fault in the attempt to formalize praiseworthiness in the guise of KDN-inspired expected utility. One might hold that facts about one’s evidence provide some reason to do this or that, but that facts about what one takes one’s evidence to be, as well as facts about what habits a course of action inculcates, also provide reason to do this or that. And one might hope, moreover, that there exists, at least in many cases, a fact about what one has ‘all things considered’ reason to do. Less committedly, one might envisage an ordering source that directly ranks acts in terms of comparative praiseworthiness, perhaps without trying to give any sort of quasi-reductive account of what grounds these facts of relative praiseworthiness. At least one problem with these responses is that they fail to notice the ways that praise and blame reactions can be tied to various parameters that cannot plausibly be brought together into an ‘all things considered’ or ‘on balance’ judgment. Consider again Andi, who gives up chasing the ball in cases where he knows that he won’t reach it. If we focus on the local scenario, Andi’s actions seem unassailable; after all, he gives up when he knows it is futile. By contrast, Raphael’s behaviour seems a little lamentable; after all, he keeps chasing when he knows he can’t reach the ball. But if we shift to a more global outlook, we can see Andi’s course of action as problematic and blameworthy, Raphael’s as noble and praiseworthy. It is far from clear that one of these outlooks has any special authority over the other.
The preceding reflections point to what some will take to be the most promising option apart from despair: namely, to claim that even within the realm of the more ‘subjective’ ought, there is context-dependence according to the kinds of weightings of values that are operative at a context. There isn’t a single ordering source associated with evaluating the subject ‘by her own lights’. Rather, perhaps, there are a range of candidate ordering sources to which one might semantically tie ‘ought’ claims, and which one might allow to dictate one’s reactive attitudes. Note that such context-dependence will likely infect other normative terms like ‘reasonable’ and ‘rational’. From this perspective, it is an illusion to think one has fixed one’s subject matter by saying ‘I am using the “ought” of rationality here’, for there will simply be no privileged ‘ought’ of rationality. One pressing question for this approach is how it handles the question as to which of
46 Thanks to David Chalmers for pushing this point.
47 Moreover, if we require that the evidence of one’s interlocutor be completely transparent as well as one’s own, that makes the ideal case particularly far removed from ordinary disagreement. And if knowledge suffices for evidence it precludes disagreements where one of the parties knows.
the various reactive attitudes—praise, blame, and so on—one ought to have. It’s natural to think that the context in which one says ‘Andi ought to give up’ should also be a context in which it is appropriate to say ‘People ought not to condemn Andi for giving up’, and indeed is a context where one condemns the person who condemns Andi, and so on. But this kind of contextualism is one where ‘ought’ seems to lose much of its motivational power once the contextualism has been reflectively absorbed. If one overhears someone saying ‘Andi ought to be praised for giving up’, it is hard to be much moved by that when one is aware that at other contexts, some can speak truly (about the same action) by saying ‘Andi ought to be condemned for giving up’.
Finally, others will be tempted by a more expressivist reaction: ‘ought’ claims express kinds of recommendations, though there are both different styles of recommendations (the bouletic style has a different flavour from the deontic style, and within the deontic style, some are made in an advisory spirit, some in an optative spirit), and different moods in which we make them (sometimes we are looking at the longer term, sometimes the shorter term; sometimes we are driven by certain wishes and concerns, sometimes by others). How much either contextualism or expressivism really buys us over outright despair we leave to the reader to judge.
Our goal was to suggest that a natural hope—that we might settle on an intuitively satisfying and principled answer to the disagreement question—might remain unsatisfied. This becomes clear, we think, when we take seriously the non-transparency of any condition that might plausibly play a role in an epistemic norm. Since knowledge is obviously non-transparent, norms like KDN fail to live up to our normative expectations in some fairly obvious ways. But alternatives, like the claim that one ought to do what has greatest expected epistemic utility, again fail to satisfy our evaluative intuitions. We don’t mean this to be a serious recommendation of KDN as an answer to the disagreement question. Indeed, our intention has been to suggest that there seems to be no single privileged answer to the question ‘What ought we to do, epistemically speaking, when faced with a disagreement?’ This thought, bleak as it might be, easily leads to bleaker thoughts. We have not argued for the conclusion here, but it seems that non-transparency poses a more general problem for the ambition to formulate all sorts of epistemic norms.48 If so, then it is not just a stable answer to the disagreement question that will remain elusive, but also stable answers to all questions concerning what we ought to do, epistemically speaking.
48 For a general discussion of the implications of non-transparency for normative theorizing, see Amia Srinivasan’s ‘What’s in a Norm?’ (in progress).
2

Disagreement and the Burdens of Judgment

Thomas Kelly

1 Some cases
Case 1: Intrapersonal conflict
Suppose that you suddenly realize that two beliefs that you hold about some subject are inconsistent with one another. Prior to becoming aware of the conflict, you were quite confident of each. Indeed, let’s suppose that you were more or less equally confident that they were true. Now that you are aware of the conflict, how should you revise your beliefs?
A possible answer: in any case of the relevant kind, you are rationally required to abandon both beliefs until you acquire further evidence. In particular, it would be unreasonable to retain one of the two beliefs while abandoning the other.
A better view: in some cases of intrapersonal conflict, the reasonable thing to do might be to abandon both beliefs until further evidence is acquired. But in other cases, it might be perfectly reasonable to resolve the conflict by dropping one of the two beliefs and retaining the other. What would be a case of the latter kind? Paradigmatically, a case in which one of the two beliefs is well supported by your evidence but the other is not. If your total evidence strongly supported one of the two beliefs before you became aware of the conflict, then it might very well be reasonable to retain that belief even after you realize that it is inconsistent with something else that you have confidently believed up until now.
Case 2: Disagreeing with a great dead philosopher
You regularly teach a certain classic philosophical text. In the course of doing so, you have come to have a high opinion of the author’s judgment. It’s not simply that you think that he manifests genuine intellectual virtues such as creativity and imagination in the course of pursuing the questions with which he is concerned, but that his
judgment is generally quite reliable.1 Indeed, you would not claim to have better judgment about the domain than he does. Nevertheless, against this background of genuine respect, you believe that his discussion of some particular issue is uncharacteristically weak. It’s not that you think that you have uncovered a clear fallacy, or something akin to a clear fallacy, the kind of thing such that if you could only travel back in time and present it to the author, a modicum of intellectual humility would compel him to change his mind. Rather (what I take to be a much more typical case), you simply think that the author has overestimated the force of his own arguments and underestimated the force of the objections that he considers. You thus conclude that the author’s case for his conclusion is quite weak. Could this attitude be reasonable?
A possible answer: no, it couldn’t. After all, the author himself evidently took his case to be quite compelling, and you make no claim that your judgment is generally superior to his in this domain. Because of this, you should treat his judgment that his case is compelling, and your initial judgment that his case is quite weak, evenhandedly. Thus, you should be more or less agnostic about the merits of the author’s case.
A better view: it might very well be reasonable for you to be confident that the author’s case is not compelling. What would be circumstances in which this is a reasonable attitude on your part? Suppose that the author really has overestimated the extent to which his arguments tell in favor of his conclusion and underestimated the force of the objections that he considers, and that this is something that you’ve correctly picked up on. In those circumstances, you might very well be justified in having a low opinion of the author’s discussion, notwithstanding his contrary opinion.
Case 3: Disagreeing with one’s past self
Like many other philosophers, I sometimes write relatively detailed notes in the margins when reading a philosophical text. More eccentrically, I’ve also long maintained the practice of dating the cover page of every book that I read, each time that I read it. (Thus, among the not especially interesting autobiographical facts to which I currently have access is that I first read Descartes’ Meditations in March of 1991.) Because of these practices, I’m often in a position, upon rereading a philosophical text, to compare my current view of some author’s arguments with my past view of those arguments. Perhaps unsurprisingly, I often find that I agree with my past self, even when the past self in question is a relatively distant one. For example, I don’t think particularly highly of Descartes’ arguments for the existence of God now, and I’m pleased to report that it seems that my eighteen-year-old self didn’t think too highly of them either.
1 Consider the way in which one might genuinely admire the ingenuity displayed by Leibniz or Spinoza
in their metaphysical writings while nevertheless thinking that they are quite unreliable about matters of
ontology. In my own case, the attitude that I take towards Frege’s work on the epistemology of mathematics
would provide a much better example of the kind of thing that I have in mind.
disagreement and the burdens of judgment 33
But on occasion, my past self and I disagree. Of course, it’s easy enough for me to write off the opinions of my eighteen-year-old self as the misguided ruminations of a philosophical novice. On the other hand, and as much as I’d like to believe otherwise, I don’t believe that my philosophical judgment now is appreciably better than it was, say, five years ago, back when I was a lowly assistant professor. Rather, I suspect that at this point I’ve pretty much leveled off when it comes to the amount of sophistication that I bring to bear when I critically engage with a text. Consider then those cases in which I disagree with my philosophically mature self: upon re-reading some philosophical text, I find that I’m inclined to change my mind about an author’s discussion, which now seems significantly weaker (or stronger) than it did in the past.
Suppose that I go ahead and change my mind. Perhaps my awareness that I once thought differently tempers my current confidence a bit, since it reminds me of my fallibility with respect to judgments of the relevant sort. But the opinion that I settle on now is significantly closer to what I would have thought if I had simply made up my mind in complete ignorance of what I used to think, than it is to the opinion that I used to hold. Question: could this be reasonable on my part?
A possible answer: no, it would never be reasonable for me to do this. After all, I don't think that my judgment has significantly improved when it comes to this general kind of question; nor do I claim to possess any 'silver bullet' piece of evidence that I lacked then. Given this, it would be completely arbitrary to privilege the opinion of my current self over the opinion of my past self. Therefore, I should be even-handed and give equal weight to the opinion of my philosophically mature past self and the opinion that I'm now inclined to hold.
A better view: in at least some cases, it might very well be reasonable for me to change my mind significantly, and adopt an opinion that is relatively close to the view that I would hold if I ignored my past opinion entirely, and relatively far from the view taken by my past self. What would be a case of this kind? Suppose that it's not simply that it seems to my current self that my past self gave too little (or too much) weight to the author's arguments, but that it's true that my past self did this. That is, suppose it's a case in which my current self is right to think that my past self misjudged the probative force of the evidence and arguments that are presented in the text. In cases of this kind, it doesn't follow that I'm unreasonable, even if I don't treat my past opinion and the opinion that I'm inclined to hold even-handedly.
But how can I be sure that I’m getting things right now, as opposed to then? There
is no way for me to be sure of this. If I favor the assessment of my current self, but my
past assessment was more accurate, then the opinion that I end up holding will be
unreasonable (or at least, less reasonable than another opinion that I might have held
instead). But that’s simply the familiar fate of someone who misjudges his or her
evidence.
I’m interested in what to say about all of these cases, and in what others think about
them. But I put them on the table primarily as a warm-up for thinking about the case
with which I will be primarily concerned for the rest of the paper.
34 thomas kelly
Case 4: Peer disagreement
Suppose that you and I have been exposed to the same evidence and arguments that
bear on some proposition: there is no relevant consideration that is available to you
but not to me, or vice versa. For the sake of concreteness, we might picture the
following:
You and I are attentive members of a jury charged with determining whether the accused is
guilty. The prosecution, following the defense, has just rested its case.
Suppose further that neither of us has any particular reason to think that he or she enjoys some advantage over the other when it comes to assessing considerations of the relevant kind, or that he or she is more or less reliable about the relevant domain. Indeed, let's suppose that we possess significant evidence that suggests we are likely to be more or less equally reliable when it comes to questions of the relevant kind. Because we're aware of this, if we had been asked in advance of the trial which one of us is more likely to be wrong in the event of a disagreement, we would have agreed that we were equally likely to be wrong. 2 Nevertheless, despite being (apparent) peers in these respects, you and I arrive at different views about the question on the basis of our common evidence. For example, perhaps I find myself quite confident that the accused is guilty while you find yourself equally confident that he is innocent.
Suppose next that, upon learning that I think that the accused is guilty, you reduce your confidence in his innocence. However, even after you take my opinion into account, it still seems to you that on balance the evidence suggests that he is innocent. You still regard it as significantly more likely that he is innocent than that he is guilty, to the point that you can correctly be described as retaining your belief in his innocence. Question: in these circumstances, is there any possibility that this is a reasonable response on your part?
A possible answer: The Equal Weight View/Conciliationism (cf. Elga 2007; Christensen 2007a, 2011; Feldman 2003, 2006, 2007; Kornblith 2010; Bogardus 2009; Matheson 2009; Cohen this volume). No, there isn't. In any case of the relevant kind, you are rationally required to abandon your original belief and retreat to a state of agnosticism. (I'm required to do the same.) Given the relevant symmetries, you should give equal weight to my view as to yours; thus, given that initially I am confident that the accused is guilty while you are equally confident that he is not, the uniquely rational stance is for us to suspend judgment about the issue. As Richard Feldman puts it:
Consider those cases in which the reasonable thing to think is that another person, every bit as sensible, serious, and careful as oneself, has reviewed the same information as oneself and has come to a contrary conclusion to one's own . . . An honest description of the situation acknowledges its symmetry. . . . In those cases, I think, the skeptical conclusion is the reasonable one: it is not the case that both points of view are reasonable, and it is not the case that one's own point of view is somehow privileged. Rather, suspension of judgment is called for. (2006: 235) 3

2 Cf. Elga's (2007) account of what it is to treat someone as an epistemic peer in his sense.
A better answer: The Total Evidence View (Kelly 2007). Yes, there is. Whether it's reasonable for you to believe that the suspect is innocent even after learning that I think otherwise is not something that can be determined, given only the facts about the fiction provided. What are some circumstances in which your belief might be reasonable? Suppose that the original evidence with which we are presented strongly supports the view that the suspect is innocent. Your original belief is a rational response to what was then our total evidence; mine is not. (Against a general background of competence, I commit a performance error.) After you learn that I think that the accused is guilty, your total evidence has changed: it is now on the whole less supportive of the view that he is innocent than it was previously. It is thus reasonable for you to reduce your confidence to at least some degree. Still, the total evidence available to you then might very well make it more likely that the suspect is innocent than that he is guilty, to the point that it's reasonable for you to believe that he is innocent. In any case, there is certainly no guarantee that the uniquely reasonable response on your part is to retreat to a state of agnosticism between your original opinion and my original opinion, as the Conciliationist suggests.
2 Conciliationism
Conciliationism plays a central role in structuring the epistemology of disagreement literature. Some prominent contributors to the literature endorse the view, but even those who reject it often use it as the background against which to develop their preferred alternatives. If any view deserves the title of the "View to Beat," it is this one.
However, despite its prominence in the literature, and although the animating intuition seems straightforward enough, it's not clear exactly what the view actually says. Among other things, we currently lack an adequate precisification of what it means to give "equal weight" to someone else's opinion and to one's own. 4 David Christensen,
who in my estimate has done at least as much as anyone else to develop this general approach to disagreement, freely admits that there is as of yet no suitably general and determinate principle of belief revision on the table (2011: 17). For this reason, I want to be quite clear about what I will mean by 'Conciliationism'. For purposes of this paper, a view counts as Conciliationist if and only if it entails that suspending judgment is a necessary condition for being reasonable in a canonical case of peer disagreement, that is, any case that has the same structural features as Case 4 above. (In a framework employing degrees of belief, the necessary condition should be interpreted so as to require a stance of agnosticism, i.e., a degree of belief of approximately 0.5.) Notably, a Conciliationist need not hold that suspending judgment in such circumstances is also a sufficient condition for reasonableness, although some Conciliationists might very well hold that it is. 5

Elsewhere I've argued against Conciliationism and defended The Total Evidence View at some length. Here I want to attempt to get a bit deeper, in a way that builds on the insights of some other contributors to the debate. Although I am perhaps prone to bias when it comes to evaluating the health of the disagreement literature, I do think that there has been some discernible progress. It's not simply that there has been some convergence among prominent representatives of competing approaches with respect to the ultimate issues. It's also that, with respect to the major divisions that persist, there is greater clarity about what underwrites these divisions, and what it would take to resolve them. An example of the latter: in his recent critical survey of the literature (2009), Christensen identifies a principle, 'Independence', that he thinks underlies the division between those who accept Conciliationism and those who reject it:

Independence: In evaluating the epistemic credentials of another person's belief about P, in order to determine how (if at all) to modify one's own belief about P, one should do so in a way that is independent of the reasoning behind one's own initial belief about P. (2009: 758)

3 Although Feldman's early writings on the topic of disagreement provide paradigm statements of the position that I call "Conciliationism" or "The Equal Weight View," they do not accurately reflect his latest views. Indeed, on the basis of Feldman (2009) and recent conversations, I believe that there is very little (and possibly no) difference between his current views and the one that I defend under the heading "The Total Evidence View."

4 On this point, see especially Jehle and Fitelson, "What is the 'Equal Weight View'?" In several places in Kelly (2007), I slipped into interpreting "giving equal weight to your peer's opinion" as a matter of averaging your initial credence and your peer's initial credence in order to arrive at an updated credence. As a number of people have pointed out, however, this "arithmetical mean" interpretation is not plausible as a general interpretation. It works well enough in the special case with which I was primarily concerned, in which you invest a certain credence in p and I invest the same credence in not-p. (In that case, averaging the original credences leads us to converge on a revised credence of 0.5, representing perfect agnosticism, which is where proponents of the view think we should be.) However, the interpretation is not as plausible in cases where we are not centered around the midpoint in this way. For example, suppose that I perform a calculation and invest 90% confidence in the number at which I arrive. (I give some weight to the possibility that I've made a mistake.) I then learn that you arrived at the same number independently, and currently invest 80% confidence in that answer. Intuitively, learning that you arrived at the same number should make me more confident of my answer, as opposed to less confident, which would be the outcome of averaging our credences. And surely someone who recommends that I give equal weight to both of our opinions should not be understood as saying otherwise (as pointed out by Christensen 2011: 3). Thus, the arithmetical mean interpretation is not viable across the board. (I don't believe, however, that this point detracts from whatever force my objections in the aforementioned paper possess.)
5 This point is emphasized by Christensen (2011), in the course of developing a version of Conciliationism designed in part to avoid some of the objections offered in Kelly (2007). Christensen suggests that the Conciliationist should say that even if one responds to the disagreement by adopting the rationally required attitude, that attitude might still be unreasonable, if one's initial belief was unreasonable given one's initial evidence. For criticism of this way of understanding the view, see Cohen (this volume) and my "Believers as Thermometers."

A further note about terminology is in order here. Occasionally, "Conciliationism" is used in an extremely inclusive way, so that any view according to which one must give at least some (even extremely minimal) weight to the opinion of a peer in a canonical case of peer disagreement counts as a species of Conciliationism (see, e.g., Elga 2010). On this inclusive usage, the Total Evidence View, no less than the Equal Weight View, counts as a species of Conciliationism. As the above makes clear, however, I will use the term in a much less inclusive way, following the practice of Christensen's (2009) helpful survey of the literature.
According to Christensen, the dispute between Conciliationists and Non-conciliationists is explained by the fact that the former accept, while the latter reject, a principle of this sort. Whether that diagnosis is ultimately correct, it is certainly true that prominent Conciliationists frequently endorse principles of this sort in the course of arguing for their view. 6 For my part, although I think that both Independence and Conciliationism are false, if I changed my mind about the former I would immediately change my mind about the latter. More generally, I think that once one accepts Independence, Conciliationism is more or less irresistible. Thus, I'd like to consider this principle at some length and with some care.
3 Independence
Notice that Independence is a relatively general epistemic principle, one which says nothing about the case of peer disagreement in particular. According to Christensen,

Conciliationism will result from combining this sort of principle with the thought that, to the extent that one's dispute-independent evaluation gives one strong reason to think that the other person is equally likely to have evaluated the evidence correctly, one should (in the case where one is quite confident that p, and the other person is equally confident that not-p) suspend belief (or adopt a credence close to .5) in p. (2009: 758–9)
But why should we accept anything like this principle? Here is Christensen, one more time:

The motivation behind the principle is obvious: it's intended to prevent blatantly question-begging dismissals of the evidence provided by the disagreement of the others. It attempts to capture what would be wrong with a P-believer saying, for example, "Well, so and so disagrees with me about P. But since P is true, she's wrong about P. So however reliable she may generally be, I needn't take her disagreement about P as any reason at all to change my belief."

There is clearly something worrisome about this sort of response to the disagreement of others. Used as a general tactic, it would seem to allow a non-expert to dismiss even the disagreement of large numbers of those he took to be experts in the field. (2011: 2)
In several places, Christensen employs his own paradigm example of a disagreement between peers, "The Ordinary Restaurant Case," in order to illustrate how we should apply Independence. 7 In the Ordinary Restaurant Case, you and I independently calculate our shares of the dinner tab (we've agreed to divide the check evenly among everyone who was at dinner). We know, on the basis of substantial track record evidence, that we're more or less equally competent when it comes to performing this general kind of calculation (in our long history of dining together, we almost always come up with the same number, but on those occasions when we've come up with different numbers, each of us has turned out to be the one who was correct approximately half the time). On this occasion, you arrive at the number $43 while I arrive at the number $45. A widely shared intuition is that, upon discovering this, both of us (including the person who in fact reasoned impeccably, assuming that one of us did) should become much less confident of his or her original answer, and that indeed, each of us should divide our credence between the two answers more or less equally.

Notice how, applied to this case, Independence yields the widely shared intuition. For Independence instructs each of us to set aside the reasoning which led us to our original answer in evaluating the "epistemic credentials" of the other person's belief; once we've done this, we're left with the knowledge that the other person is, in general, more or less equally reliable when it comes to this kind of calculation. And this in turn suggests that each of us should treat the two original opinions even-handedly in arriving at a new view. (At least, until we perform the calculation again, or consult a calculator.) Elsewhere, Christensen writes of the need to "bracket" the reasons and evidence on the basis of which one reaches one's original answer, once one becomes aware of the disagreement. 8 For notice that, if in fact you reasoned impeccably in arriving at your original answer, then the facts from which you reasoned (that the total bill is n dollars; that m people have agreed to divide the check evenly, etc.) literally entail the correct answer. So if such facts are among the evidence you have to go on in evaluating my belief, then they would seem to provide a basis for discounting my opinion entirely. But according to Independence, you should set aside such facts when evaluating my belief.

6 Christensen himself endorses Independence, and, as he notes, extremely similar principles have been explicitly endorsed by other Conciliationists, e.g., Elga (2007) and Kornblith (2010). Independence is also endorsed in passing by Cohen (this volume).

7 The Ordinary Restaurant Case was first introduced in Christensen (2007a). It also appears in Christensen (2009, 2010, 2011).
What should we make of Independence? First, a couple of preliminary remarks. How should we understand talk of "evaluating the epistemic credentials of another person's belief about p"? An obvious first thought is that such evaluation is a matter of judging the epistemic status of the person's belief: for example, making a judgment about how reasonable that belief is. But on reflection, it's clear that "evaluating the epistemic credentials of another person's belief" will have to include considerably more than mere judgments of reasonableness, given the role that such evaluation is supposed to play in guiding revision of one's own beliefs. For on anyone's view, the mere fact that one evaluates someone else's opinion as perfectly reasonable completely leaves open how much weight (if any) one should give to that opinion. For example, suppose that my loathing of the butler leads me to frame him for some crime that he didn't commit. Suppose further that I execute my plan impeccably: due to my efforts, the authorities and many members of the general public come to possess large quantities of misleading evidence, all of which suggests that the butler committed the crime. On the basis of this evidence, you become extremely confident that the butler did it. When I subsequently meet you and note with satisfaction how confident you are of the butler's guilt, I might very well judge that your belief is perfectly reasonable. Nevertheless, in these circumstances, the mere fact that you reasonably believe that the butler committed the crime is no reason at all for me to be even slightly less confident of my belief that he did not commit the crime. On the other hand, it's also true, on anyone's view, that when I encounter a person who I take to be both better informed than I am about some question and perfectly reasonable in believing as she does, the fact that she believes as she does gives me a reason to revise my view in her direction. 9

8 On "bracketing," see especially his (2010).
The moral: given that "evaluating the epistemic credentials of another's belief that p" is supposed to play a role in potentially guiding one's own belief revision, such evaluation will have to go considerably beyond judgments about whether the other person is reasonable in believing as she does. Such evaluation will also require judgments about the quality of her evidence and how well informed she is. This point will be important for us later on; for now, I want to flag it and move on.
A second preliminary point concerns a worry about the way in which Christensen formulates Independence. For there is an aspect of that formulation that threatens to severely limit its applicability (or at least, an aspect that makes it unclear how the principle should be applied in a significant range of cases). Independence requires that, when one evaluates another's belief that p, one bracket "the reasoning behind one's initial belief about p." Talk of "the reasoning behind one's initial belief" is easiest to understand in cases that closely resemble Christensen's favorite example, the Ordinary Restaurant Case. In that case, there really is some identifiable, relatively discrete piece of reasoning that leads one to a particular belief. But many beliefs, including many of the beliefs that philosophers are ultimately concerned with in the disagreement literature (e.g., the kinds of extremely controversial beliefs that people hold about history, politics, religion, and philosophy) are often not easily understood as the output of some discrete process of reasoning.
Consider, for example, two different bases for atheism. Undoubtedly, some atheists
believe as they do because of some relatively discrete piece of reasoning. (We might
think here of someone who arrives at the view that there is no such being as God by
reasoning from some relatively small set of premises, at least one of which refers to the
existence of evil in the world.) But alternatively, one might disbelieve in God for the
following reason: given everything else that one takes to be true about reality, one judges
that it’s extremely improbable that any such being exists. On Christensen’s formulation,
it’s much easier to see how Independence applies to the former sort of atheist than to
the latter. (Presumably, in evaluating someone else’s belief about God, the atheist is not
supposed to bracket everything that she takes to be true about reality, even if the reason
she invests extremely low credence in the proposition that God exists is the fact that that
proposition has extremely low probability given everything else she believes.) Perhaps it
still makes sense to talk about “the reasoning behind the belief that God does not exist”
in the second case as well as the first. But if so, we should not underestimate the difference between the two cases. 10

9 If I know that she's both better informed than I am and perfectly reasonable in believing as she does, shouldn't I simply adopt her opinion as my own? Not necessarily, for her being better informed than I am is consistent with my having some relevant evidence that she lacks, and this can make it reasonable for me not to simply adopt her opinion.
Having raised this issue, I will for the most part ignore it in what follows. But I do want to insist on a statement of Independence that has clear application to a case in which one's initial belief is based on one's assessment of a given body of evidence or information. For this purpose, I propose the following, which I take to be completely in the spirit of Christensen's original statement:

Independence*: In evaluating the epistemic credentials of another person's belief about P, in order to determine how (if at all) to modify one's own belief about P, one should do so in a way that is independent of one's assessment of those considerations that led one to initially believe as one does about P.

Although I still have some worries of detail about this formulation, I'll waive these in what follows.
The first substantive point that we should note about Independence* (or for that matter, Independence) is that it is an extremely strong principle. Suppose that I possess a great deal of evidence 11 that bears on the question of whether the Holocaust occurred; I look it over and judge correctly that this body of evidence strongly confirms that the Holocaust occurred; on the basis of that assessment, I invest a correspondingly high amount of credence in the proposition. I then encounter a Holocaust denier. (For purposes of the example, let's imagine that this person is quite reliable when it comes to matters that are unrelated to the Holocaust.) In evaluating the epistemic credentials of his belief that the Holocaust never occurred, Independence* would have me bracket my assessment of all of those considerations which led me to believe that the Holocaust did occur. An obvious question is whether, once I do this, I'll still have enough left to go on to offer an evaluation of the Holocaust denier's belief. A second question is why, even if I do have enough left to go on to arrive at an evaluation, we should think that the evaluation that I come up with under those conditions is worth anything.
Suppose that the person in question is grossly ignorant of certain historical facts, historical facts which make it overwhelmingly likely that the Holocaust occurred. Indeed, perhaps the evidence that the Holocaust denier possesses is sufficiently impoverished and misleading (the misleading testimony provided by parents whom he had a default entitlement to trust; the propaganda to which he has been subjected, etc.) that his belief that the Holocaust never occurred is a perfectly reasonable thing for him to think, both objectively and by my own lights. His problem is not irrationality but ignorance. One might have thought that his gross ignorance is certainly something that I should take into account in evaluating the epistemic credentials of his belief. (Recall that, for reasons given above, evaluating a person's belief in the sense relevant to Independence* must go beyond merely making a judgment about the epistemic status of his belief given his evidence.) However, there seems to be a problem with my doing this. Suppose that it turns out that (as is plausible enough) the historical facts of which he is ignorant are the very same facts on which I base my own belief that the Holocaust occurred. In that case, in evaluating his belief, I should bracket my own assessment of these considerations. That is, I should set aside my own judgment that these considerations strongly support the view that the Holocaust occurred. But the problem then is this: my judgment that the Holocaust denier is grossly ignorant when it comes to matters relating to the Holocaust is not at all independent of my assessment that the relevant considerations strongly confirm the occurrence of the Holocaust. That is, if I set aside my assessment that these facts strongly confirm the occurrence of the Holocaust, then I would no longer take someone's ignorance of them to be a handicap in judging whether the Holocaust occurred. After all, there are ever so many facts ignorance of which I take to be no handicap at all when it comes to judging whether the Holocaust occurred. It is only because I judge that these facts confirm that the Holocaust occurred, that I take ignorance of them to be at all relevant to "the epistemic credentials" of someone's belief about the Holocaust.

10 Christensen himself is well aware of this issue: see his (2011: 18), where he credits Jennifer Lackey for emphasizing its importance.

11 Here, we can think of my evidence as a body of information, or a collection of facts, to which I have cognitive access.
It will be helpful to lay out more explicitly the reasoning whose legitimacy is in question. In concluding that the Holocaust denier's belief about whether the Holocaust occurred is lacking in epistemic credentials, I reason as follows:

(1) F1 . . . Fn are true.
(2) Given F1 . . . Fn, it is extremely likely that the Holocaust occurred.
(3) The Holocaust denier is ignorant of F1 . . . Fn.
(4) Therefore, the Holocaust denier is ignorant of facts that make it extremely likely that the Holocaust occurred.
(5) Therefore, the Holocaust denier's opinion about whether the Holocaust occurred is untrustworthy/lacking in epistemic credentials.

I claim that this might very well be good reasoning, even if my own belief that the Holocaust occurred is based on F1 . . . Fn, and I arrived at this belief because I judged that these considerations support it. 12 But if Independence* is true, then it would not be legitimate for me to reason in this way. Therefore, Independence* is false.
The case of the Holocaust denier is, I think, a counterexample to principles like Independence* or Independence. In assessing the example, it's important to be clear about the way in which it differs from similar cases that the proponent of Independence* definitely can handle. Consider, for example, the following kind of case. 13 I have vivid apparent memories of having had eggs for breakfast this morning, and no reason to distrust these apparent memories; I am thus extremely confident that I had eggs for breakfast this morning. I then learn that you (who were not at breakfast with me) are, unsurprisingly, much less confident that I had eggs for breakfast. Intuitively, it's perfectly reasonable for me to retain a high degree of confidence that I had eggs for breakfast even after learning that you invest much less confidence in the same proposition. More generally, it seems perfectly reasonable for me to conclude that your beliefs about what I had for breakfast are lacking in "epistemic credentials" compared to my own beliefs about the same subject, and to take this into account in deciding how to revise (or not revise) my own views in the light of yours. One might be tempted to think that this fact already makes trouble for strong Independence principles. After all, if the reason why I initially invest so much confidence in I had eggs for breakfast is simply that I remember that I had eggs for breakfast, then appealing to your ignorance of this same fact in evaluating the epistemic credentials of your belief about what I had for breakfast seems to violate Independence*. However, notice that in this case, there are other, readily available routes by which I can conclude that your beliefs about what I had for breakfast are lacking in epistemic credentials compared to my own, routes that do not violate Independence*. That is, even if the reasoning given by (6)–(8) violates Independence*, the reasoning given by (9)–(11) does not seem to violate the principle:

(6) (I remember that) I had eggs for breakfast this morning.
(7) You are ignorant of the fact that I had eggs for breakfast this morning.
(8) Therefore, your opinions about what I had for breakfast this morning are lacking in epistemic credentials compared to mine.

(9) I have vivid apparent memories about what I had for breakfast this morning.
(10) Because you were not at breakfast with me, you have no apparent memories (vivid or otherwise) about what I had for breakfast this morning.
(11) Therefore, your opinions about what I had for breakfast this morning are lacking in epistemic credentials compared to mine.

12 That is, in the case as we have described it, my judging that (2) is true plays an essential role both in the reasoning that leads me to believe that the Holocaust occurred, as well as in the reasoning that leads me to conclude that the Holocaust denier's belief about whether the Holocaust occurred is untrustworthy.

13 For suggesting this case as a comparison, as well as for making clear to me how his view handles it, I am indebted to Christensen (personal communication).
In this case then, I am still in a position to conclude that your opinion about what I had
for breakfast is likely to be less reliable than mine even after I bracket the specifi c basis
for my own opinion, namely, my apparent memories that I had eggs for breakfast this morn-
ing . (That is, even if I do not appeal to the evidence on which I base my belief that I had
eggs for breakfast this morning, namely, the apparent memories with that specifi c con-
tent, I can still appeal to the fact that I have some apparent memories or other about what I had
for breakfast while you have no such memories .) But this is quite diff erent from the case of the
Holocaust denier as presented above. In the case of the Holocaust denier, that the denier
is ignorant of the facts on which I base my belief that the Holocaust occurred is the very
reason that I take his opinion to be essentially worthless; and if I did not take those facts
to strongly confi rm the claim that the Holocaust occurred, then I would have no basis
for my negative evaluation of his belief (nor would I take myself to have any such basis).
Of course, by changing the Holocaust denier case, or adding certain details to it, one
could make it more like the breakfast case in relevant respects. For example, perhaps
disagreement and the burdens of judgment 43
I recently visited the Holocaust Museum, and I remember that this is where I learned
F1 . . . Fn, the facts on which I currently base my belief that the Holocaust occurred.
Perhaps I have no reason to think that the Holocaust denier has done the same thing, or
anything similar. In that case, I would be in a position to reason as follows:
(12) I know that I’ve recently visited the Holocaust Museum, but I have no reason
to think that the Holocaust denier has been exposed to any similarly reliable
source of information about the Holocaust.
(13) Therefore, the Holocaust denier’s opinions about the Holocaust are lacking in
epistemic credentials compared to my own.
In this variant of the story, there is a route by which I can arrive at a negative evaluation of the Holocaust denier’s opinion, even while bracketing the basis for my own opinion in the way that Christensen recommends. However, even if no such alternative route is available, it does not follow that I lack a legitimate basis for concluding that the Holocaust denier’s opinion is untrustworthy. For I might still be in a position to reach that conclusion by the kind of reasoning given by (1)–(5), despite the fact that such reasoning flagrantly violates Independence*.
4 Independence and dogmatism
Although I believe that the case of the Holocaust denier is a counterexample to principles like Independence(*), I don’t want to rest too much on that claim. In the course of defending Independence against objections raised by Ernest Sosa (2010) and Jennifer Lackey (2010), Christensen (2011) displays great ingenuity in parrying apparent counterexamples. Perhaps he or some other Conciliationist could show that there is some way in which I could reach a sufficiently negative assessment of the Holocaust denier’s belief even while bracketing my assessment of those considerations that underwrite my own, opposite belief. Even in that event, however, I think that we should be quite suspicious of the suggestion that the kind of bracketing exercise envisaged by Christensen plays an important role in how I should take the Holocaust denier’s opinion into account. Offhand, it seems akin to the suggestion that, in a case in which I discover that I hold two inconsistent beliefs, I should evaluate the credentials of one belief while bracketing my assessment that I have overwhelming evidence for the other.
The suspicion that Independence(*) is too strong might be enhanced when we recall the considerations which Christensen cites as motivation for adopting such principles in the first place:

The motivation behind the principle is obvious: it’s intended to prevent blatantly question-begging dismissals of the evidence provided by the disagreement of the others. It attempts to capture what would be wrong with a P-believer saying, for example, “Well, so and so disagrees with me about p. But since P is true, she’s wrong about p. So however reliable she may generally be, I needn’t take her disagreement about p as any reason at all to change my belief.”
There is clearly something worrisome about this sort of response to the disagreement of others. Used as a general tactic, it would seem to allow a non-expert to dismiss even the disagreement of large numbers of those he took to be experts in the field.

An observation: the reasoning that Christensen describes in this passage is really quite awful. It is paradigmatically dogmatic, in the pejorative sense of that term. In view of the awfulness of such reasoning, it would be rather surprising, I think, if we needed to invoke a principle as strong as Independence or Independence* in order to explain what’s wrong with it. That is, it would be surprising if we needed to adopt a principle that makes it obscure how I can legitimately appeal to my historical knowledge in evaluating the Holocaust denier’s belief, in order to explain (e.g.) what would be wrong with my dogmatically dismissing a consensus of experts, or a person of arbitrarily great reliability, on the basis of my own, non-expert opinion.
This observation suggests a strategy that I will pursue in what follows. I’ll attempt to
show how someone who is either agnostic about strong Independence principles or
who (like me) thinks that such principles are false can account for the badness of the
relevant kind of reasoning in a perfectly natural way. This will leave such principles
unmotivated, and, in view of their strength and the problems that they face, leave us with
good reason to think that they are false.
Suppose that I arrive at the belief that p; I then hear you sincerely assert your belief
that not-p. On the basis of your impressive track record, I know that you’re generally
reliable about this sort of question. Imagine the speech mentioned by Christensen in
my mouth, addressed to you:
Well, you disagree with me about p. But since p is true, you must be wrong about p. So even
though you’re very reliable in general, I needn’t take your disagreement about p as any reason at
all to change my mind.
When I appeal to my belief to completely dismiss your contrary opinion, I am in effect inferring, from my belief that p, that your sincere assertion that not-p is misleading evidence about the truth of p. What account could someone who eschews appeal to strong Independence principles give of what is wrong with such reasoning?
Consider first a case in which my initial belief that p is unreasonable (that is, I unreasonably believe that p even before learning that you think otherwise). When I later infer that your testimony that not-p is misleading, my procedure is unreasonable and dogmatic, inasmuch as I lack a reasonable basis for drawing the relevant conclusion: the proposition p, from which I infer the misleadingness of your testimony, is not among the things that I believe reasonably. 14
Of course, even if my belief was reasonable prior to receiving your testimony, this does not mean that I can then reasonably infer that your testimony is misleading once
14 In the usual case, if it’s unreasonable for me to believe p prior to receiving your testimony that not-p,
then it will still be unreasonable for me to believe p after receiving your testimony. There are unusual, trick
cases in which this condition fails to hold, but I will ignore them here.
I receive it. For even if I am initially justified in believing p, your testimony that not-p might undermine my justification, in which case I’m in no position to reasonably conclude that your testimony is misleading. Indeed, as Kripke (1971) emphasized, even if one initially knows that p, it might be unreasonable and dogmatic to dismiss subsequently encountered considerations that suggest that not-p. For once one is presented with those considerations, one might no longer know that p and thus no longer be in a position to rationally infer that the considerations are misleading (Harman 1973). Of course, if one once knew that p, then p is true, so the considerations that suggest that p is false must be misleading. But one is in no position to reasonably conclude this, once one’s belief has been undermined.
So here is the short story about why it will often be unreasonable and dogmatic for me to dismiss your contrary opinion in the envisaged way: after I add the fact that you believe as you do to my stock of evidence, it will no longer be reasonable for me to believe that p, given what is then my total evidence. And if it’s no longer reasonable for me to believe that p, then I lack any rational basis for inferring that your sincere testimony is misleading evidence. Of course, at one level Conciliationists will agree that that’s the correct story; they will take themselves to have provided a deeper explanation. My current point is the modest one that the story outlined here about why the relevant kind of dogmatism is bad reasoning is certainly not the exclusive property of the Conciliationist; it can be told just as easily by the proponent of the Total Evidence View, and by others who either reject or are agnostic about Independence(*).
At this point, it’s worth bearing in mind that there are cases in which it is reasonable for one to discount genuine evidence on the grounds that it is misleading (Sorensen 1988; Kelly 2008). Consider the following extreme case:

True Story. I live with my family at 76 Alexander Street. On a fairly regular basis, we receive mail for a person named Frederick Jacobs at this address. This mail provides genuine evidence that someone named Jacobs lives at 76 Alexander Street. (Consider: when a passerby on the street, curious about who lives at this address, opens our mailbox and finds mail addressed to Jacobs, this increases the credibility of the relevant proposition for the passerby.) Nevertheless, on the basis of my knowledge that only members of my family live at 76 Alexander Street and that Jacobs is not a member of my family, I reasonably conclude that this evidence is misleading and dismiss it without further ado. 15
Why isn’t my behavior in True Story unreasonable and dogmatic? Answer: given all of the evidence available to me which bears on the question of whether someone named Jacobs lives in my house—including those considerations that suggest that he does—it’s still reasonable for me to believe that he does not, and thus, to conclude that those considerations are misleading. This is what it’s reasonable for me to believe, given my total evidence.
15 Although all of the details of the example are nonfictional, the inspiration for using them in this way is due to Crispin Wright (2004).
Notice that (with a bit of stretching) we might even construe True Story as involving a piece of testimony: given the relevant conventions, an envelope that displays the name “Jacobs” immediately above the address “76 Alexander St” is a kind of written testimony which in this case constitutes misleading evidence for a false proposition. Even if we think of it in this way, however, it would certainly be foolish for me to bracket my assessment of all of the evidence which I take to underwrite my belief that no one named Jacobs lives in my house, in considering how to adjust my credence in the light of this piece of evidence.
The case of the Holocaust denier should be understood along these lines, I think. After I add the fact that he believes as he does to what I know, it’s still reasonable for me to have tremendous confidence that the Holocaust occurred, and (therefore) to infer that his belief is false. Contrast Christensen’s example in which I take my own non-expert opinion as a basis for concluding that all of the many experts who disagree with me are mistaken. The reason why this will typically be dogmatic and unreasonable is simply the following: even if my belief was initially reasonable, it will typically not be reasonable once I learn that all of the experts think otherwise, given how that will affect my epistemic situation. And given that it will no longer be reasonable for me to hold my original opinion, I will no longer have any legitimate basis from which to infer that the experts are mistaken.
So if the thought that is supposed to motivate Independence(*) is that we need some such principle in order to block dogmatism, or to account for why dogmatic reasoning is bad, I don’t see it. Someone who rejects such principles can still account for the badness of intuitively dogmatic reasoning simply by appealing directly to the normative requirement that one take into account one’s total evidence (as opposed to some proper subset of one’s total evidence). In short, the Principle of Total Evidence can do all of the work that needs doing.
Objection: but consider cases such as Christensen’s restaurant case, in which one’s original evidence literally entails the correct answer. If one’s original evidence entails that p, then it seems like one’s total evidence will always support the belief that p, no matter how much misleading testimonial evidence one subsequently acquires, so long as that original evidence remains part of the total set. So it looks like we do need to invoke a principle like Independence after all, in order to allow such misleading counter-evidence to change (at least eventually) what it is reasonable for one to believe about p.
Reply: there is a genuine puzzle here, but it is a mistake to think that that puzzle motivates the adoption of Independence or Independence*. After all, as Christensen himself would be the first to agree, 16 whenever one performs a non-trivial calculation, one should not be perfectly confident of one’s answer even before another person comes on the scene (given one’s awareness that one is fallible, etc.). But once it is granted that one should not be perfectly confident even before one’s view is contradicted by a peer, there is no additional mystery or formal difficulty as to how acquiring that misleading
16 See especially his 2007b.
testimonial evidence can push the credence that it is reasonable for one to have still lower. Of course, it is difficult to make sense of the idea that someone who possesses entailing evidence should invest less than maximal credence in the entailed proposition; indeed, orthodox theories of evidential probability would seem to rule this out (at least in cases in which the person is certain of the entailing evidence itself). But if, as both Christensen and I think, this is a genuine phenomenon, then there is presumably some story to be told here. The crucial point is that there is no reason to think that the story in question entails Independence or Independence*, since those principles explicitly concern how one should assess the beliefs of other people, and the phenomenon arises even before other people come on the scene.
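The reply turns on a simple Bayesian point: once one’s pre-disagreement credence is anything short of 1, ordinary conditionalization lets peer testimony lower it, with no special principle needed. A minimal sketch (all numbers are hypothetical, chosen only for illustration, not drawn from any of the cases above):

```python
# Illustrative sketch (hypothetical numbers): once a fallible calculator's
# credence is short of 1, a peer's contrary testimony can lower it by Bayes'
# rule alone; with credence exactly 1, no testimony can move it.
def update_on_peer_denial(prior, p_deny_given_true=0.2, p_deny_given_false=0.8):
    """Credence in p after a peer sincerely asserts not-p (Bayes' rule)."""
    numerator = prior * p_deny_given_true
    denominator = numerator + (1 - prior) * p_deny_given_false
    return numerator / denominator

# A fallible calculator holds credence 0.95, not 1, before any disagreement:
print(update_on_peer_denial(0.95))  # credence falls to roughly 0.83
# With (idealized) perfect certainty, testimony cannot move the credence:
print(update_on_peer_denial(1.0))   # stays exactly 1.0
```

The second call illustrates the “formal difficulty” noted in the text: with maximal credence, the update is inert, which is why the interesting question arises before any peer appears on the scene.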
5 Two views about dogmatism
I have argued that we do not need to appeal to a principle such as Independence* in order to account for the badness of dogmatic reasoning. What might lead someone to think otherwise? Here is a speculation. 17 I believe that many Conciliationists—and many others as well—tend to think about the intellectual vice of dogmatism in a particular way, a way that’s quite natural and intuitive, but ultimately mistaken. (Having said that, I should also say at once that I think that the issues here run deep and I won’t be able to pursue them very far in this paper. But I do want to at least put them on the table.) My hypothesis is that many people subscribe, even if only implicitly, to a thesis that I would express as follows:
Dogmatism is a formal vice.
What is it for something to be a “formal” vice? Consider first a paradigm from the moral sphere: the vice of hypocrisy. Someone who demands that others conform to putative moral standards that he himself transgresses is guilty of hypocrisy. This is a moral failing even if the standards that he fails to meet are not genuine moral standards at all. Because of this, one can be in a position to correctly charge another with hypocrisy without entering into potentially messy and difficult-to-resolve issues about what the correct moral standards are. Indeed, one might realize, from the first-person perspective, that a given action would be hypocritical and treat this as a reason not to perform it. In neither the first-person nor the third-person case does one have to make a substantive moral judgment about the correctness of the agent’s moral standards, violation of which would constitute hypocrisy on his part. Thus, justified judgments of hypocrisy do not generally presuppose the truth of substantive claims about morality.
17 Because I have focused on the views of Christensen to this point, I want to explicitly disavow the suggestion that I am offering a speculative diagnosis as to why he accepts Independence. Having discussed these issues over a number of years with many Conciliationists and their fellow travelers, however, I am confident that implicit acceptance of the picture that I am about to describe is at least sometimes an operative factor. See also the discussion of Elga’s (2007) “[no] bootstrapping” argument below.
It is tempting to think of the intellectual vice of dogmatism in a parallel way: that at least in the usual case, whether someone has behaved dogmatically, or whether one would be dogmatic in responding to the disagreement of another person in a certain way, is something that can be determined without relying on a substantive and potentially precarious judgment about all-things-considered reasonableness or justifiedness. But I think that at least in the interesting cases, our sense that this is so is an illusion: dogmatism is not a formal vice, inasmuch as justified attributions of it typically presuppose the truth of substantive claims about what it is reasonable to believe and infer given the totality of evidence available to the believer.
As a way of getting at the general issue, consider the status of so-called “Moorean” responses to revisionary philosophical theorizing. You present me with certain metaphysical principles that you endorse and the arguments for them; taken together, the principles entail that there are no tables and chairs, a consequence of which you are well aware and embrace as an exciting philosophical discovery. I consider your principles and supporting arguments. I then reason as follows: “Well, the principles seem quite plausible, and I don’t have an intellectually satisfying diagnosis of where the arguments go wrong (if in fact they do). But of course, there are tables and chairs. Therefore, the theory is false.” Is this dogmatic on my part?
Someone who thinks of dogmatism as a formal vice will be strongly inclined to answer in the affirmative. But I think that that’s the wrong answer. Rather, as in the cases presented in section 1 above, there simply isn’t enough information provided in the fiction for us to know whether this particular bit of Moorean reasoning on my part is reasonable or not. Everything depends on whether, having considered your theory and the arguments for it, it’s still reasonable for me to believe that there are tables and chairs, given what is now my total evidence. If it’s not currently reasonable for me to believe this, then the way in which I dismiss your theory and the arguments is objectionable. But on the other hand, if it’s still reasonable for me to believe that there are tables and chairs, then this piece of Moorean reasoning is not dogmatic. Again, it all depends on what it is reasonable for me to believe given my overall epistemic situation, and that’s not something that is specified in the fiction, or something that we’re in a position to figure out given what is specified. There is, I think, no general objection to Moorean reasoning of this kind on the grounds of dogmatism, although of course, certain instances of such reasoning are objectionable in virtue of being dogmatic. And that’s because dogmatism, unlike hypocrisy, is not a formal vice. 18
Return to the case of disagreement, and suppose that one did think of dogmatism as a
formal vice. Notice that in this case, there is strong pressure to wheel in some principle
like Independence(*) in order to explain what’s wrong with the kind of paradigmatically
18 In fact, I think that one effect of the tendency to think of dogmatism as a formal vice is that the strength of Moorean responses to revisionary philosophical theorizing is often greatly underestimated. For a defense of such reasoning, see Kelly (2005), Kelly (2008). For more general reflections on the possibility that dogmatism is not a formal vice and related matters, see my (2011).
bad reasoning described by Christensen, in which one uses one’s non-expert opinion as a basis for completely dismissing an expert consensus, or the view of some hyper-reliable individual. Specifically: the fact that the reasoning is dogmatic is supposed to be something that holds (and therefore, is in principle recognizable) independently of the fact that the belief from which such reasoning proceeds is unreasonable on the dogmatist’s total evidence. So we need some principle that counts the reasoning as bad/dogmatic, other than the Requirement of Total Evidence. But once we give up on the idea that dogmatism is a formal vice, we give up on the project of identifying instances of dogmatism in a way that does not presuppose substantive judgments about what is and isn’t reasonable to believe given the subject’s overall epistemic situation.
Objection: “You yourself have assumed that Christensen’s putative examples of dogmatic reasoning are genuine instances. But of course, Christensen never said anything about the subject’s total evidence! So you too think that we can typically identify instances of dogmatism, independently of knowing what it’s reasonable to believe given the subject’s total evidence. That is, you yourself think that dogmatism is a formal vice after all.”
Reply: it’s true that Christensen never tells us about the subject’s total evidence, or about all of the subject’s total evidence that bears on p. But he does tell us that part of the subject’s total evidence is made up of extremely substantial and strong evidence that not-p (the consensus of the experts that not-p, etc.). Given that this evidence is included in the subject’s total evidence, one would (at the very least) have to tell an Extremely Unusual Story about the rest of the subject’s total evidence, in order for it to be reasonable for the subject to believe p in a way that would render the envisaged reasoning non-dogmatic. So naturally, when we think about the case, we implicitly fill in the details in a way that makes the reasoning dogmatic: that is, we implicitly assume that nothing like the Extremely Unusual Story is true in the fiction. (Compare the way in which, in judging that the protagonists in Gettier’s paper lack knowledge, we assume such things as that their beliefs are not overdetermined in ways that render them knowledge after all.)
Again, someone who denies that dogmatism is a formal vice will tend to think that
there are fewer genuine epistemic norms than someone who assumes that it is a formal
vice. In order to see how this plays out in a concrete case, consider what is perhaps the
single most prominent argument for Conciliationism in the literature, Adam Elga’s
“no bootstrapping” argument for his version of The Equal Weight View. It is worth
quoting in full:
Suppose that you and your friend are to judge the truth of a claim, based on the same batch of evidence. Initially, you count your friend as an epistemic peer—you think that she is about as good as you at judging the claim. In other words, you think that, conditional on a disagreement arising, the two of you are equally likely to be mistaken. Then the two of you perform your evaluations. As it happens, you become confident that the claim is true, and your friend becomes equally confident that it is false.

When you learn of your friend’s opposing judgment, you should think that the two of you are equally likely to be correct. The reason is [this]. If it were reasonable for you to give your
own evaluation extra weight—if it were reasonable to be more than 50 per cent confident that you are right—then you would have gotten some evidence that you are a better evaluator than your friend. But that is absurd.

The absurdity is made more apparent if we imagine that you and your friend evaluate the same long series of claims. Suppose for reductio that whenever the two of you disagree, you should be, say, 70 per cent confident that your friend is the mistaken one. It follows that over the course of many disagreements, you should end up extremely confident that you have a better track record than your friend. As a result, you should end up extremely confident that you are a better evaluator. But that is absurd. Without some antecedent reason to think that you are a better evaluator, the disagreements between you and your friend are no evidence that she has made most of the mistakes. (2007: 487)
Elsewhere 19 I have criticized this argument at some length; here I will concentrate on those aspects that intersect most directly with the present issue.

Elga takes the argument of this passage to successfully undermine any alternative to The Equal Weight View. In particular, he takes the argument offered here to undermine both what he calls “The Extra Weight View”—according to which each party to the dispute is permitted to give some special, presumptive weight to his or her own judgment—as well as views akin to The Total Evidence View, on which it potentially matters which of the parties has in fact done a better job evaluating the evidence. 20 However, I believe that while the argument has considerable force against the former sort of view, it has little to none against the latter.
In order to see this, let’s focus our attention directly on the situation in which Elga claims the absurdity of any alternative to The Equal Weight View is most apparent, namely, the situation in which you and your friend each evaluate a long series of claims. Elga formulates the argument as a reductio ad absurdum. The supposition from which the absurd consequences are alleged to follow is this:

Whenever you and your friend disagree, you should be, say, 70 per cent confident that your friend is the mistaken one.
The crucial fact here is the following: this supposition is not something to which a proponent of The Total Evidence View is committed. That is, the proponent of The Total Evidence View is not committed to the idea that, whenever you and your friend disagree, you should be n per cent confident that your friend is the one who has made the mistake (where n is some number greater than 50). Indeed, on the contrary: the proponent of The Total Evidence View will stand with Elga in rejecting any such general policy as an unreasonable one. On The Total Evidence View, it’s not true, in general, that you should be more confident that your friend has made the mistake whenever the two of
19 See Kelly (2007, section 5.4). The next two paragraphs are borrowed from that discussion.
20 Elga makes the last point explicit on the same page: “Again, this absurdity is independent of who has
in fact evaluated the claims properly. Even if in fact you have done a much better job than your friend at
evaluating the claims, simply comparing your verdicts to those of your friend gives you no evidence that this
is so” (487).
you disagree. Nor is there some general answer to the question of how confident you should be that it’s your friend who has made the mistake (as there is on both The Extra Weight View and on The Equal Weight View). And this is because how confident it’s reasonable for you to be that your friend has made a mistake is not something that floats entirely free of the evidence on which he bases his opinion. Thus, since the proponent of The Total Evidence View would not accept the supposition from which Elga derives the absurd consequence, the reductio ad absurdum on offer cannot show that her view is false.
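The track-record step of Elga’s reductio can be checked with a little arithmetic. On the supposed policy, each disagreement independently gives you 70 per cent confidence that your friend erred, so by your own lights the chance that your friend made most of the mistakes over a long series is a binomial tail probability. A sketch (the fixed-70% policy is Elga’s supposition for reductio, not a rule either view endorses):

```python
from math import comb

def prob_friend_made_most_mistakes(n, p=0.7):
    """By your own lights: P(friend erred in more than half of n disagreements),
    treating each disagreement as an independent verdict with confidence p
    that the friend is the mistaken one (the policy supposed for reductio)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Over 100 disagreements the tail probability is nearly 1: you would end up
# "extremely confident" that you are the better evaluator, which is the
# absurdity Elga's argument exploits.
print(prob_friend_made_most_mistakes(100))
```

Kelly’s reply above does not dispute this arithmetic; it denies that the Total Evidence View is committed to any such fixed-n policy in the first place.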
In giving this argument, Elga simply assumes that there must be a general norm which has something like the following form:

In any case of peer disagreement, you should conclude that your peer’s opinion is n per cent likely to be correct, and revise your own opinion accordingly.

Notice that a norm of this form will say nothing at all about the reasonableness or justifiedness of anyone’s opinion, or about what the total evidence supports (either before or after the disagreement is discovered). The assumption that there is a genuine norm of this kind is thus a very substantial one. Of course, once one makes the assumption that there is such a norm, the suggestion that the correct value for n is “50” is absolutely irresistible.
Certainly, given what it is to count someone as an epistemic peer, it would be completely bizarre to suspect that some number other than 50 might be the real value of n. But there is another possibility: namely, that how one is rationally required to respond to a disagreement is not typically something that is fixed independently of substantive normative facts about how well-supported one’s original view is. On the alternative picture, how confident one is rationally permitted to be that some proposition is true upon discovering that a peer thinks otherwise might vary significantly from case to case. To take this alternative picture seriously is to take seriously the possibility that there is no genuine norm of belief revision that has the same general form as the one endorsed by Elga.
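For concreteness, the n = 50 instance of a norm of this general form is often glossed as simply splitting the difference between the two credences. The averaging rule below is my illustration of that gloss, not a formulation from Kelly or Elga:

```python
# One common way to make the n = 50 norm concrete (an illustration only):
# on discovering peer disagreement, split the difference between credences.
def equal_weight_revision(my_credence, peer_credence):
    """Revised credence under a fixed equal-weight (n = 50) policy."""
    return (my_credence + peer_credence) / 2

# Fully opposed peers land on agnosticism, whatever the first-order
# evidence was:
print(equal_weight_revision(0.9, 0.1))  # 0.5
```

The feature Kelly objects to is visible in the signature: a rule of this shape takes only the two credences as input, so the evidence on which the original opinion was based drops out entirely.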
Of course, in some ways it would make our cognitive lives much easier if there were genuine norms of the relevant kind. Upon learning that a peer disagrees with an opinion that one holds, one would learn that one is now rationally required to invest a certain credence in the relevant proposition. Simply on the basis of applying the norm to the original credences, one would be in a position to know what credence one is rationally required to have now. That is, one would be in a position to know what credence one is rationally required to have, without needing to make a substantive judgment about what it is reasonable for one to believe given one’s epistemic situation, a type of judgment that is highly fallible, especially in the kind of “hard cases” that are salient in this context. 21 So it would, I think, be much easier to figure out what one is rationally
required to believe if there really were norms of the relevant kind. But the fact that it would make our cognitive lives easier if there were such norms is not itself a good reason to think that they exist. Faced with a peer who disagrees, knowing how one is rationally required to respond will typically require an extremely substantive judgment about one's overall epistemic situation, as opposed to the straightforward application of a general norm that dictates agnosticism in all such cases. Such are the burdens of judgment.22

21 To be sure, someone who has an Elga-like picture will readily agree that the fact that such-and-such an attitude (in the canonical case, agnosticism) is rationally required is a matter of its being rationally required given one's total evidence. (I'm certainly not accusing such a person of having to deny the Requirement of Total Evidence.) The point is rather that the judgment that agnosticism is rationally required given one's total evidence is in effect trivial and not substantive: for the fact that one's total evidence now includes a particular piece of information about credences suffices, in connection with the norm, to determine that agnosticism is the rationally required attitude.
References
Bogardus, Tomas (2009) “A Vindication of the Equal-Weight View,” Episteme 6 (3): 324–35.
Christensen, David (2007a) “Epistemology of Disagreement: the Good News,” The Philosophical
Review , 116 (2): 187–217.
—— (2007b) “Does Murphy’s Law Apply in Epistemology? Self-Doubt and Rational Ideals,”
Oxford Studies in Epistemology 2 (2007): 3–31.
—— (2009) “Disagreement as Evidence: The Epistemology of Controversy,” Philosophy Compass
4/5: 756–67.
—— (2010) “Higher Order Evidence,” Philosophy and Phenomenological Research 81 (1):
185–215.
—— (2011) "Disagreement, Question-Begging and Epistemic Self-Criticism," Philosophers' Imprint 11 (6): 1–22.
Cohen, Stewart (this volume) “A Tentative Defense of the (Almost) Equal Weight View.” In
David Christensen and Jennifer Lackey (eds.) The Epistemology of Disagreement (Oxford:
Oxford University Press).
Elga, Adam (2007) "Reflection and Disagreement," Noûs 41 (3): 478–502.
—— (2010) "How to Disagree About How to Disagree," in R. Feldman and T. A. Warfield (eds.)
Disagreement (Oxford: Oxford University Press), 175–86.
Feldman, Richard (2003) Epistemology (Upper Saddle River, NJ: Prentice Hall).
—— (2006) “Epistemological Puzzles About Disagreement,” in Stephen Hetherington (ed.)
Epistemology Futures (Oxford: Oxford University Press), 216–36.
—— (2007) “Reasonable Religious Disagreements,” in Louise Antony (ed.) Philosophers With-
out Gods: Meditations on Atheism and the Secular Life (Oxford: Oxford University Press),
194–214.
—— (2009) “Evidentialism, Higher-Order Evidence, and Disagreement,” Episteme 6 (3):
294–312.
Harman, Gilbert (1973) Thought (Princeton, NJ: Princeton University Press).
Jehle, David and Branden Fitelson (2009) “What Is the ‘Equal Weight View’?” Episteme 6: 280–93.
Kelly, Thomas (2005) “Moorean Facts and Belief Revision: Can the Skeptic Win?” in John Haw-
thorne (ed.) Philosophical Perspectives, xix: Epistemology (Oxford: Blackwell), 179–209.
22 Earlier versions of this paper were presented at New York University, at California State (Fullerton), at the Pontifícia Universidade Católica do Rio Grande do Sul in Porto Alegre, Brazil, at a meeting of Fritz Warfield's graduate seminar at the University of Notre Dame, and at a meeting of my Fall 2011 graduate seminar at Princeton; I am grateful to those audiences for their feedback. Thanks also to David Christensen and Nate King for written comments on earlier drafts.
—— (2007) "Peer Disagreement and Higher Order Evidence." Official version online at <http://www.princeton.edu/~tkelly/papers>. Also in R. Feldman and T. A. Warfield (eds.)
Disagreement (Oxford: Oxford University Press, 2010), 111–74; and in A. Goldman and
D. Whitcomb (eds.) Social Epistemology: Essential Readings (Oxford: Oxford University Press,
2011), 183–217.
—— (2008) “Common Sense as Evidence: Against Revisionary Ontology and Skepticism,” in
Peter French (ed.) Midwest Studies in Philosophy , 32: ‘Truth and Its Deformities’ (Oxford:
Blackwell), 53–78.
—— (2011) “Following the Argument Where It Leads,” Philosophical Studies 154 (1): 105–24.
—— (manuscript) "Believers as Thermometers," to appear in a volume on "the ethics of belief" edited by Jonathan Matheson.
Kornblith, Hilary (2010) "Belief in the Face of Controversy," in R. Feldman and T. A. Warfield
(eds.) Disagreement (Oxford: Oxford University Press), 29–52.
Kripke, Saul (1971) “On Two Paradoxes of Knowledge,” lecture delivered to the Cambridge
Moral Sciences Club.
Lackey, Jennifer (2010) "A Justificationist View of Disagreement's Epistemic Significance," in
A. Haddock, A. Millar, and D. Pritchard (eds.) Social Epistemology (Oxford: Oxford University
Press), 298–325.
Matheson, Jonathan (2009) “Conciliatory Views of Disagreement and Higher-Order Evidence,”
Episteme: A Journal of Social Epistemology 6 (3): 269–79.
Pryor, James (2000) “The Skeptic and the Dogmatist,” Noûs 34 (4): 517–49.
Sorensen, Roy (1988) “Dogmatism, Junk Knowledge, and Conditionals,” The Philosophical Quar-
terly 38 (153): 433–54.
Sosa, Ernest (2010) “The Epistemology of Disagreement,” in A. Haddock, A. Millar, and
D. Pritchard (eds.) Social Epistemology (Oxford: Oxford University Press), 278–97.
Wright, Crispin (2004) “Wittgensteinian Certainties,” in Denis McManus (ed.) Wittgenstein and
Scepticism (Oxford: Routledge), 22–54.
3

Disagreements, Philosophical, and Otherwise

Brian Weatherson

This paper started life as a short note I wrote around New Year 2007 while in Minneapolis. It was originally intended as a blog post. That might explain, if not altogether excuse, the flippant tone in places. But it got a little long for a post, so I made it into the format of a paper and posted it to my website. The paper has received a lot of attention, so it seems like it will be helpful to see it in print. Since a number of people have responded to the argument as stated, I've decided to just reprint the note warts and all, with a couple of clarificatory footnotes added, and then after it I'll make a few comments about how one of the key arguments was supposed to work, and how I see the overall argument of the note in the context of the subsequent debate.
Disagreeing about Disagreement (2007)
I argue with my friends a lot. That is, I offer them reasons to believe all sorts of philosophical conclusions. Sadly, despite the quality of my arguments, and despite their apparent intelligence, they don't always agree. They keep insisting on principles in the face of my wittier and wittier counterexamples, and they keep offering their own dull alleged counterexamples to my clever principles. What is a philosopher to do in these circumstances? (And I don't mean get better friends.)

One popular answer these days is that I should, to some extent, defer to my friends. If I look at a batch of reasons and conclude p, and my equally talented friend reaches an incompatible conclusion q, I should revise my opinion so I'm now undecided between p and q. I should, in the preferred lingo, assign equal weight to my view as to theirs. This is despite the fact that I've looked at their reasons for concluding q and found them wanting. If I hadn't, I would have already concluded q. The mere fact that a friend (from now on I'll leave off the qualifier 'equally talented and informed', since all my friends satisfy that) reaches a contrary opinion should be reason to move me. Such a position is defended by Richard Feldman (2005, 2006), David Christensen (2007), and Adam Elga (2007).
This equal weight view, hereafter EW, is itself a philosophical position. And while some of my friends believe it, some of my friends do not. (Nor, I should add for your benefit, do I.) This raises an odd little dilemma. If EW is correct, then the fact that my friends disagree about it means that I shouldn't be particularly confident that it is true, since EW says that I shouldn't be too confident about any position on which my friends disagree. But, as I'll argue below, to consistently implement EW, I have to be maximally confident that it is true. So to accept EW, I have to inconsistently both be very confident that it is true and not very confident that it is true. This seems like a problem, and a reason to not accept EW. We can state this argument formally as follows, using the notions of a peer and an expert. Two people are peers if they are equally philosophically talented and informed, and one person is more expert than another if she is more informed and talented than the other.

1. There are peers who disagree about EW, and there is no one who is an expert relative to them who endorses EW.
2. If 1 is true, then according to EW, my credence in EW should be less than 1.
3. If my credence in EW is less than 1, then the advice that EW offers in a wide range of cases is incoherent.
4. So, the advice EW offers in a wide range of cases is incoherent.

The first three sections of this paper will be used to defend the first three premises. The final section will look at the philosophical consequences of the conclusion.
1 Peers and EW
Thomas Kelly (2005) has argued against EW and in favor of the view that a peer with the irrational view should defer to a peer with the rational view. Elga helpfully dubs this the "right reasons" view. Ralph Wedgwood (2007: ch. 11) has argued against EW and in favour of the view that one should have a modest 'egocentric bias', that is, a bias towards one's own beliefs. On the other hand, as mentioned above, Elga, Christensen, and Feldman endorse versions of EW. So it certainly looks like there are very talented and informed philosophers on either side of this debate.

Now I suppose that if we were taking EW completely seriously, we would at this stage of the investigation look very closely at whether these five really are epistemic peers. We could pull out their grad school transcripts, look at the citation rates for their papers, get reference letters from expert colleagues, maybe bring one or two of them in for job-style interviews, and so on. But this all seems somewhat inappropriate for a scholarly journal. Not to mention a little tactless.1 So I'll just stipulate that they seem to be peers in the sense relevant for EW, and address one worry a reader may have about my argument.

An objector might say, "Sure it seems antecedently that Kelly and Wedgwood are the peers of the folks who endorse EW. But take a look at the arguments for EW that have been offered. They look like good arguments, don't they? Doesn't the fact that Kelly and Wedgwood don't accept these arguments mean that, however talented they might be

1 Though if EW is correct, shouldn't the scholarly journals be full of just this information?
in general, they obviously have a blind spot when it comes to the epistemology of disagreement? If so, we shouldn't treat them as experts on this question." There is something right about this. People can be experts in one area, or even many areas, while their opinions are systematically wrong in another. But the objector's line is unavailable to defenders of EW.

Indeed, these defenders have been quick to distance themselves from the objector. Here, for instance, is Elga's formulation of the EW view, a formulation we'll return to below.

Your probability in a given disputed claim should equal your prior conditional probability in that claim. Prior to what? Prior to your thinking through the claim, and finding out what your advisor thinks of it. Conditional on what? On whatever you have learned about the circumstances of how you and your advisor have evaluated the claim. (Elga 2007: 490)

The fact that Kelly and Wedgwood come to different conclusions can't be enough reason to declare that they are not peers. As Elga stresses, what matters is the prior judgment of their acuity. And Elga is right to stress this. If we declared anyone who doesn't accept reasoning that we find compelling not a peer, then the EW view would be trivial. After all, the EW view only gets its force from cases as described in the introduction, where our friends reject reasoning we accept, and accept reasons we reject. If that makes them not a peer, the EW view never applies. So we can't argue that anyone who rejects EW is thereby less of an expert in the relevant sense than someone who accepts it, merely in virtue of their rejection of EW. So it seems we should accept premise 1.
2 Circumstances of evaluation
Elga worries about the following kind of case. Let p be that the sum of a certain series of numbers, all of them integers, is 50. Let q be that the sum of those numbers is 400e. My friend and I both add the numbers, and I conclude p while he concludes q. It seems that there is no reason to defer to my friend. I know, after all, that he has made some kind of mistake. The response, say defenders of EW, is that deference is context-sensitive. If I know, for example, that my friend is drunk, then I shouldn't defer to him. More generally, as Elga puts it, how much I should defer should depend on what I know about the circumstances.

Now this is relevant because one of the relevant circumstances might be that my friend has come to a view that I regard as insane. That's what happens in the case of the sums. Since my prior probability that my friend is right given that he has an insane-seeming view is very low, my posterior probability that my friend is right should also, according to Elga, be low. Could we say that, although antecedently we regard Wedgwood and Kelly as peers of those they disagree with, the circumstance of their disagreement is such that we should disregard their views?
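The context-sensitive deference just described can be put in toy Bayesian terms. This is my illustration, not Elga's own formalism, and the joint prior below is entirely made up for the example:

```python
def posterior_friend_right(joint, circumstance):
    """P(friend is right | circumstances), computed from a joint prior
    mapping (friend_is_right, circumstance) pairs to probabilities."""
    num = joint.get((True, circumstance), 0.0)
    den = num + joint.get((False, circumstance), 0.0)
    return num / den

# Hypothetical prior: the friend is almost never right when his answer
# seems insane (an integer sum allegedly equal to 400e), and it is a
# toss-up when the disagreement looks ordinary.
joint = {
    (True, "ordinary"): 0.45, (False, "ordinary"): 0.45,
    (True, "insane"): 0.001,  (False, "insane"): 0.099,
}

print(posterior_friend_right(joint, "ordinary"))          # 0.5
print(round(posterior_friend_right(joint, "insane"), 6))  # 0.01
```

Conditional on an ordinary disagreement the posterior splits evenly, while conditional on an insane-seeming answer it is negligible, which is just the pattern the EW defender appeals to.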
It is hard to see how this would be defensible. It is true that a proponent of EW will regard Kelly and Wedgwood as wrong. But we can't say that we should disregard the views of all those we regard as mistaken. That leads to trivializing EW, for reasons given above. The claim has to be that their views are so outrageous that we wouldn't defer to anyone with views that outrageous. And this seems highly implausible. But that's the only reason that premise 2 could be false. So we should accept premise 2.
3 A story about disagreement
The tricky part of the argument is proving premise 3. To do this, I'll use a story involving four friends, Apollo, Telemachus, Adam, and Tom. The day before our story takes place, Adam has convinced Apollo that he should believe EW, and organize his life around it. Now Apollo and Telemachus are on their way to Fenway Park to watch the Red Sox play the Indians. There have been rumors flying around all day about whether the Red Sox's injured star player, David Ortiz, will be healthy enough to play. Apollo and Telemachus have heard all the competing reports, and are comparing their credences that Ortiz will play. (Call the proposition that he will play p.) Apollo's credence in p is 0.7, and Telemachus's is 0.3. In fact, 0.7 is the rational credence in p given their shared evidence, and Apollo truly believes that it is.2 And, as it turns out, the Red Sox have decided but not announced that Ortiz will play, so p is true.

Despite these facts, Apollo lowers his credence in p. In accord with his newfound belief in EW, he changes his credence in p to 0.5. Apollo is sure, after all, that when it comes to baseball Telemachus is an epistemic peer. At this point Tom arrives, and with a slight disregard for the important baseball game at hand, starts trying to convince them of the right reasons view on disagreement. Apollo is not convinced, but Telemachus thinks it sounds right. As he puts it, the view merely says that the rational person believes what the rational person believes. And who could disagree with that?
Apollo is not convinced, and starts telling them the virtues of EW. But a little way in, Tom cuts him off with a question. "How probable," he asks Apollo, "does something have to be before you'll assert it?"

Apollo says that it has to be fairly probable, though just what the threshold is depends on what issues are at stake. But he agrees that it has to be fairly high, well above 0.5 at least.

"Well," says Tom, "in that case you shouldn't be defending EW in public. Because you think that Telemachus and I are the epistemic peers of you and Adam. And we think EW is false. So even by EW's own lights, the probability you assign to EW should be 0.5. And that's not a high enough probability to assert it." Tom's speech requires that Apollo regard him and Telemachus as epistemic peers with regard to this question. By premises 1 and 2, Apollo should do this, and we'll assume that he does.

So Apollo agrees with all this, and agrees that he shouldn't assert EW any more. But he still plans to use it, that is, to have a credence in p of 0.5 rather than 0.7. But now Telemachus and Tom press on him the following analogy.
2 This is obviously somewhat of an idealization, since there won’t usually be a unique precise rational
response to the evidence. But I don’t think this idealization hurts the argument to follow. I should note that
the evidence here excludes their statements of their credences, so I really mean the evidence that they
brought to bear on the debate over whether p .
Imagine that there were two competing experts, each of whom gave differing views about the probability of q. One of the experts, call her Emma, said that the probability of q, given the evidence, is 0.5. The other expert, call her Rae, said that the probability of q, given the evidence, is 0.7. Assuming that Apollo has the same evidence as the experts, but regards them as experts at evaluating evidence, what should his credence in q be? It seems plausible that it should be a weighted average of what Emma says and what Rae says. In particular, it should be 0.5 only if Apollo is maximally confident that Emma is the expert to trust, and not at all confident that Rae is the expert to trust.
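The weighted-average claim is simple arithmetic; here is a minimal sketch (the function name is mine, and the recommendations 0.5 and 0.7 are just the numbers from the example):

```python
def deferred_credence(trust_in_emma, emma_says=0.5, rae_says=0.7):
    """Apollo's credence in q: a weighted average of the two expert
    recommendations, weighted by his trust in each expert."""
    return trust_in_emma * emma_says + (1 - trust_in_emma) * rae_says

print(deferred_credence(1.0))             # 0.5: only full trust in Emma gives 0.5
print(round(deferred_credence(0.5), 10))  # 0.6: equal trust splits the difference
```

Only maximal trust in Emma yields 0.5; any intermediate trust yields an intermediate credence, which is the point pressed against Apollo below.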
The situation is parallel to the one Apollo actually faces. EW says that his credence in p should be 0.5. The right reasons view says that his credence in p should be 0.7. Apollo is aware of both of these facts.3 So his credence in p should be 0.5 iff he is certain that EW is the theory to trust, just as his credence in q should be 0.5 iff he is certain that Emma is the expert to trust. Indeed, a credence of 0.5 in p is incoherent unless Apollo is certain EW is the theory to trust. But Apollo is not at all certain of this. His credence in EW, as is required by EW itself, is 0.5. So as long as Apollo keeps his credence in p at 0.5, he is being incoherent. But EW says to keep his credence in p at 0.5. So EW advises him to be incoherent.
That is, EW offers incoherent advice. We can state this more carefully in an argument.

5. EW says that Apollo's credence in p should be 0.5.
6. If 5, then EW offers incoherent advice unless it also says that Apollo's credence in EW should be 1.
7. EW says that Apollo's credence in EW should be 0.5.
8. So, EW offers incoherent advice.

Since Apollo's case is easily generalizable, we can infer that in a large number of cases, EW offers advice that is incoherent. Line 7 in this argument is hard to assail given premises 1 and 2 of the master argument. But I can imagine objections to each of the other lines.
Objection: Line 6 is false. Apollo can coherently have one credence in p while being unsure about whether it is the rational credence to have. In particular, he can coherently have his credence in p be 0.5, while he is unsure whether his credence in p should be 0.5 or 0.7. In general there is no requirement for agents who are not omniscient to have their credences match their judgments of what their credences should be.

Replies: I have two replies to this, the first dialectical and the second substantive.
3 Added in 2011: This is a bit quick. I think I was assuming here that Apollo adopts a version of EW where conflicting peer judgments don't defeat the support his evidence provides for p, but rather screen it off. So he could take EW as given, and still know that given merely the evidence he had before talking to Telemachus, the rational credence in p is 0.7. That's not, on Apollo's view, the rational credence in p given all his evidence, because Telemachus's judgment counts for a lot. But it is the rational credence given initial evidence, and he knows that even after talking to Telemachus. Since the rational credence given his initial evidence just is the credence the right reasons view recommends, he knows it is what the right reasons view recommends. Or, at least, he knows it is what the right reasons view recommends on the view that opposing judgments screen off underlying evidence. But Apollo could hold the EW view without holding that view about screening. If Apollo thinks that Telemachus's judgment defeats his initial reasons for thinking the rational credence in p given his (prior) evidence is 0.7, then he won't know what right reasons recommends. Thanks here to David Christensen.
The dialectical reply is that if the objector's position on coherence is accepted, then a lot of the motivation for EW fades away. A core idea behind EW is that Apollo was unsure before the conversation started whether he or Telemachus would have the most rational reaction to the evidence, and hearing what each of them says does not provide him with more evidence. (See the 'bootstrapping' argument in Elga (2007) for a more formal statement of this idea.) So Apollo should have equal credence in the rationality of his judgment and of Telemachus's judgment.

But if the objector is correct, Apollo can do that without changing his view on EW one bit. He can, indeed should, have his credence in p be 0.7, while being uncertain whether his credence in p should be 0.7 (as he thinks) or 0.3 (as Telemachus thinks). Without some principle connecting what Apollo should think about what he should think to what Apollo should think, it is hard to see why this is not the uniquely rational reaction to Apollo's circumstances. In other words, if this is an objection to my argument against EW, it is just as good an objection to a core argument for EW.
The substantive argument is that the objector's position requires violating some very weak principles concerning rationality and higher-order beliefs. The objector is right that, for instance, in order to justifiably believe that p (to degree d), one need not know, or even believe, that one is justified in believing p (to that degree). If nothing else, the anti-luminosity arguments in Williamson (2000) show that to be the case. But there are weaker principles that are more plausible, and which the objector's position has us violate. In particular, there is the view that we can't both be justified in believing that p (to degree d), while we know we are not justified in believing that we are justified in believing p (to that degree). In symbols, if we let Jp mean that the agent is justified in believing p, and take box and diamond to be epistemic modals, we have the principle MJ (for Might be Justified).

MJ   Jp → ♢JJp

This seems like a much more plausible principle, since if we know we aren't justified in believing we're justified in believing p, it seems like we should at least suspend judgment in p. That is, we shouldn't believe p. That is, we aren't justified in believing p. But the objector's position violates principle MJ, or at least a probabilistic version of it, as we'll now show.
We aim to prove that the objector is committed to Apollo being justified in believing p to degree 0.5, while he knows he is not justified in believing he is justified in believing p to degree 0.5. The first part is trivial: it's just a restatement of the objector's view, so it is the second part that we must be concerned with.

Now, either EW is true, or it isn't true. If it is true, then Apollo is not justified in having a greater credence in it than 0.5. But his only justification for believing p to degree 0.5 is EW. He's only justified in believing he's justified in believing p if he can justify his use of EW in it. But you can't justify a premise in which your rational credence is 0.5. So Apollo isn't justified in believing he is justified in believing p. If EW isn't true, then Apollo isn't even justified in believing p to degree 0.5. And he knows this, since he knows EW is his only justification for lowering his credence in p that far. So he certainly isn't justified in believing he is justified in believing p to degree 0.5. Moreover, every premise in this argument has been a premise that Apollo knows to obtain, and he is capable of following all the reasoning. So he knows that he isn't justified in believing he is justified in believing p to degree 0.5, as required.
The two replies I've offered to the objector complement one another. If someone accepts MJ, then they'll regard the objector's position as incoherent, since we've just shown that MJ is inconsistent with that position. If, on the other hand, someone rejects MJ and everything like it, then they have little reason to accept EW in the first place. They should just accept that Apollo's credence in p should be, as per hypothesis the evidence suggests, 0.7. The fact that an epistemic peer disagrees, in the face of the same evidence, might give Apollo reason to doubt that this is in fact the uniquely rational response to the evidence. But, unless we accept a principle like MJ, that's consistent with Apollo retaining the rational response to the evidence, namely a credence of 0.7 in p. So it is hard to see how someone could accept the objector's argument, while also being motivated to accept EW. In any case, I think MJ is plausible enough on its own to undermine the objector's position.4
Objection: Line 5 is false. Once we've seen that the credence in EW is 0.5, then Apollo's credence in first-order claims such as p should, as the analogy with q suggests, be a weighted average of what EW says it should be, and what the right reasons view says it should be. So, even by EW's own lights, Apollo's credence in p should be 0.6.

Replies: Again I have a dialectical reply, and a substantive reply.
The dialectical reply is that once we make this move, we really have very little motivation to accept EW. There is, I'll grant, some intuitive plausibility to the view that when faced with a disagreeing peer, we should think the right response is halfway between our competing views. But there is no intuitive plausibility whatsoever to the view that in such a situation, we should naturally move to a position three-quarters of the way between the two competing views, as this objector suggests. Much of the argument for EW, especially in Christensen, turns on intuitions about cases, and the objector would have us give all of that up. Without those intuitions, however, EW falls in a heap.
The substantive reply is that the idea behind the objection can't be coherently sustained. The idea is that we should first apply EW to philosophical questions to work out the probability of different theories of disagreement, and then apply those probabilities to first-order disagreements. The hope is that in doing so we'll reach a stable point at which EW can be coherently applied. But there is no such stable point. Consider the following series of questions.
Q1 Is EW true?
Two participants say yes, two say no. We have a dispute, leading to our next question.
Q2 What is the right reaction to the disagreement over Q1?
4 Added in 2011: I still think there's a dilemma here for EW, but I'm less convinced than I used to be that MJ is correct. In particular, MJ seems a little too close to the "ratificationist" views I attack in section 6 to be confident that it is true.
EW answers this by saying our credence in EW should be 0.5. But that's not what the right reasons proponents say. They don't believe EW, so they have no reason to move their credence in EW away from 0. So we have another dispute, and we can ask
Q3 What is the right reaction to the disagreement over Q2?
EW presumably says that we should again split the difference. Our credence in EW might now be 0.25, halfway between the 0.5 it was after considering Q2, and what the right reasons folks say. But, again, those who don't buy EW will disagree, and won't be moved to adjust their credence in EW. So again there's a dispute, and again we can ask
Q4 What is the right reaction to the disagreement over Q3?
This could go on for a while. The only “stable point” in the sequence is when we assign
a credence of 0 to EW. That’s to say, the only way to coherently defend the idea behind
the objection is to assign credence 0 to EW. But that’s to give up on EW. As with the
previous objection, we can’t hold on to EW and object to the argument.
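The instability is easy to see numerically. Assuming, with the objection, that each round splits the difference between EW's current credence in itself and the unmoved right-reasons theorists at 0, the Q2, Q3, Q4, ... sequence can be sketched as:

```python
# Credence in EW after Q2 is 0.5; each further question halves it again,
# since the right-reasons side never moves its credence in EW from 0.
cred_ew = 0.5
history = [cred_ew]
for _ in range(20):    # Q3, Q4, ...
    cred_ew = (cred_ew + 0.0) / 2
    history.append(cred_ew)

print(history[:3])     # [0.5, 0.25, 0.125]
print(cred_ew < 1e-5)  # True: the sequence heads to 0, the only stable point
```

Each iteration halves the remaining credence, so the only fixed point of the update is 0, which is the concession the text describes.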
4 Summing up

The story I've told here is a little idealized, but otherwise common enough. We often have disagreements both about first-order questions, and about how to resolve this disagreement. In these cases, there is no coherent way to assign equal weight to all prima facie rational views both about the first-order question and the second-order, epistemological, question. The only way to coherently apply EW to all first-order questions is to put our foot down, and say that despite the apparent intelligence of our philosophical interlocutors, we're not letting them dim our credence in EW. But if we are prepared to put our foot down here, why not about some first-order question or other? It certainly isn't because we have more reason to believe an epistemological theory like EW than we have to believe first-order theories about which there is substantive disagreement. So perhaps we should hold on to those theories, and let go of EW.
Afterthoughts (2011)
5 The regress argument
The argument at the end of section 3 is too condensed. There are a couple of distinct
arguments that need disentangling. One concerns stubborn interlocutors, the other
concerns whether the EW theory is believable.
Here's a simple version of the stubborn interlocutors argument. After they hear each other's views, Apollo's credence in p goes to 0.5, but Telemachus's credence stays at 0.3. Well, Apollo and Telemachus are peers, so by EW now Apollo's credence should be halfway between these two values. That is, it should be 0.4. But at the time Apollo forms this credence, he and Telemachus are peers, who have the same evidence and have acted on it to the fullest extent of their ability. So Apollo, if he believes in EW, should move his credence to 0.35, and so on until he fully concedes to Telemachus. But it is absurd that EW should force complete concession to a stubborn interlocutor, so EW is false.
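The arithmetic of the concession can be sketched directly (the iteration count is arbitrary; any large number makes the point):

```python
# Apollo starts at 0.7, Telemachus stays stubbornly at 0.3; each
# application of EW moves Apollo to the midpoint of the two credences.
apollo, telemachus = 0.7, 0.3
steps = [apollo]
for _ in range(30):
    apollo = (apollo + telemachus) / 2
    steps.append(apollo)

print([round(x, 4) for x in steps[:4]])  # [0.7, 0.5, 0.4, 0.35]
print(round(apollo, 6))                  # 0.3: full concession to the stubborn peer
```

The gap to Telemachus halves on every round, so repeated application drives Apollo all the way to 0.3, which is the absurd complete concession at issue.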
Now there are a couple of things one might say to that argument. Perhaps one will reject the last step, and deny that it is a reductio of EW. Perhaps one will say that there's a principled reason to apply EW once. I don't think either will work, but they do seem like the most attractive ways out of this problem. We'll return to this below, after looking at the other thread of the regress argument.
Arguably, Apollo should have credences that are justifiable by his own lights. And that requires that his credences be in some kind of equilibrium. But many credences won’t satisfy this requirement. Let’s say that his credence in both p and in EW is 0.5. Then he doesn’t have any justification for having a credence of 0.5 in p. After all, he has already conceded that he can only justify having a credence of 0.5 in p if he is certain that EW is true. But he can’t be certain that EW is true, since it is subject to peer disagreement.
So what kind of state would be an equilibrium for Apollo? This question requires a little care. Say a 0th order proposition is a proposition that is in no way about what to do when faced with a disagreement. Say that a (k + 1)th order proposition is one that is about how to resolve disagreements about kth order propositions. We’ll assume for simplicity that the only options on the table are the right reasons view and EW. I doubt doing this will significantly stack the deck in my favor, though I don’t have much of a proof of this.
We can carve up the equal weight view and the right reasons view into a number of different sub-views. The nth order equal weight view (for n ≥ 1) says that when faced with a disagreement about (n − 1)th order propositions, one should assign equal weight to the credences of each peer. The nth order right reasons view, by contrast, says that one should assign no weight to the credences of the peers, and just follow the reasons where they lead, when faced with such a disagreement. This way of stating things makes it clear that one consistent view is to hold the first-order equal weight view, and the right reasons view for second and higher orders. Indeed, such a view is defended by Adam Elga (2010).
Now we need an equilibrium condition on credal states. Let r be an nth order proposition, for any n ≥ 0. Assume that the agent believes that the (n + 1)th order equal weight view recommends assigning credence x_E to r, and the (n + 1)th order right reasons view recommends assigning credence x_R to r. Finally, assume that the agent’s credence in the (n + 1)th order equal weight view is y, and hence their credence in the (n + 1)th order right reasons view is 1 − y. Then the equilibrium condition is that their credence in r is x_E y + x_R (1 − y). If that’s not their credence in r, then they aren’t treating their philosophical views as experts, so there is a disequilibrium between their lower-order and higher-order beliefs.
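In code, the equilibrium condition is just a mixture of the two recommendations, weighted by the agent’s credence in each view; a minimal sketch (the function name and the sample numbers are mine):

```python
# Equilibrium condition on credences: the agent's credence in r should be
# each higher-order view's recommendation, weighted by the agent's
# credence in that view.
def equilibrium_credence(x_e, x_r, y):
    """x_e: what the equal weight view recommends for r;
    x_r: what the right reasons view recommends for r;
    y: the agent's credence in the equal weight view itself."""
    return x_e * y + x_r * (1 - y)

# If EW recommends 0.5, right reasons recommends 0.8, and the agent is
# 60% confident in EW, equilibrium requires credence 0.62 in r.
print(equilibrium_credence(0.5, 0.8, 0.6))
```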
Now let f be a function from non-negative integers into [0, 1] such that f(0) is Apollo’s credence in p, and f(n) is his credence in the nth order equal weight view. Apollo thinks the reasons in favor of the equal weight view are good, so for any n ≥ 1, he thinks the (n + 1)th order right reasons view recommends having some higher credence, call it x, in
disagreements, philosophical, and otherwise 63
the nth order equal weight view.5 But he knows his peers disagree, so he thinks the (n + 1)th order equal weight view recommends having credence 0.5 in the nth order equal weight view. And of course things are a little different at the base level, where the first-order equal weight view recommends credence 0.5 in p, and the first-order right reasons view recommends some other credence y in p.6 So we need an f with the following features:
f(0) = 0.5 f(1) + y (1 − f(1))

while for n ≥ 1

f(n) = 0.5 f(n + 1) + x (1 − f(n + 1))
     = x + (0.5 − x) f(n + 1)
Now let’s find the values of f, x, y such that the minimal value of f(n) is maximized (for n > 0). I’m assuming here that the full equal weight view is the conjunction of each of the orders of equal weight views, and the credence in a conjunction is no greater than the minimal value of a credence in the conjuncts. Note that if f(k + 1) > 2/3, then f(k) < 2/3, since

f(k) = 0.5 f(k + 1) + x (1 − f(k + 1))
     ≤ 0.5 f(k + 1) + 1 − f(k + 1)
     = 1 − 0.5 f(k + 1)
     < 2/3
So the best case scenario, if our aim is to maximize the minimal value of f(n), is that x = 1, and f(n) = 2/3 for all n > 0, while f(0) = (1 + y)/3. That is, the highest credence that can coherently be held in EW is 2/3.
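The fixed-point claim can be verified numerically; a sketch under the best-case assumption x = 1 from above (the function name is mine; y = 0.7 is the value from footnote 6):

```python
# With x = 1 the recurrence is f(k) = 1 - 0.5 * f(k+1), a contraction
# with fixed point 2/3: iterating from any start in [0, 1] converges there.
def backward_iterate(start, steps):
    f = start
    for _ in range(steps):
        f = 1 - 0.5 * f
    return f

print(backward_iterate(1.0, 60))   # approaches 2/3 from any start

# At the base level, with f(1) = 2/3:
# f(0) = 0.5 * (2/3) + y * (1 - 2/3) = (1 + y) / 3.
y = 0.7
print(0.5 * (2/3) + y * (1 - 2/3), (1 + y) / 3)   # the two agree
```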
Now that isn’t a great conclusion, since a theory that says that even if you find the arguments for it completely compelling, you should still only give it credence 2/3 is not looking too great. But it is different from the earlier conclusion. The stubborn interlocutors argument said that an EW theorist should completely concede to a stubborn interlocutor. A special case of this is that if we are faced with a stubborn interlocutor, our credence in EW goes to 0. The “equilibrium” argument I’ve just been running, on the other hand, says that when faced with any disagreeing peer, even one who is not stubborn, one’s credence in EW should fall far enough that one no longer believes EW. I think the argument at the end of section 3 of the original paper runs together the equilibrium argument and the stubborn interlocutors argument.
One could resist the equilibrium argument by denying that it is a requirement of
rationality that one’s credences be in this kind of equilibrium. For what it’s worth, I don’t
5 I think x should be 1. That’s because Apollo starts off with credence 1 in the nth order equal weight view, and conditional on the right reasons view being correct, he doesn’t have any reason to deviate from that because someone disagrees. But this doesn’t seem to persuade a lot of people. And while I don’t think failure to persuade is evidence of falsity, it is evidence of failure of persuasiveness. And I like my arguments to be persuasive. So I won’t assume x = 1.
6 For reasons given in the previous footnote, I think y = 0.7, but I won’t assume that here. I will assume
y ≤ 0.7, since it is hard to see how that could be violated.
think that it is a requirement of rationality that one’s credences be in this kind of equilibrium. But it is hard to square that denial with many of the motivations for EW. So I think most EW defenders won’t want to do that. We’ll come back to this point, in effect, in the next section.
Interestingly, one other natural response to the argument is to say that the right credence in the second-order right reasons view is 1. That is, in effect, the line that Adam Elga takes in his (2010). In the terms we have here, we end up with f(0) = 0.5, f(1) = 1, f(n) = 0 for n ≥ 2. That gives up on the full version of EW, but maybe it is enough to rescue the first-order version. And, interestingly, it might provide a way out of the stubborn interlocutor argument. If f(2) = 0, then perhaps we can take Telemachus’s unwillingness to adjust his credences in the light of disagreement to be a sign that he isn’t Apollo’s peer when it comes to debates about second-order propositions. So Apollo need not keep adjusting his credences, because he need not keep regarding Telemachus as a peer. Now we could question whether treating first-order and higher-order equal weight so differently is well justified. And we could question whether this approach to the stubborn interlocutor question is really consistent with the general attitude towards peerhood that EW proponents adopt. But I don’t want to get into those debates here. Rather, I just wanted to separate out two different strands in the regress argument from the end of section 3, two strands that are excessively blurred in the original note.
6 The philosophical significance of philosophical disagreements
I now think that the kind of argument I presented in the 2007 note is not really an argument against EW as such, but an argument against one possible motivation for EW. I also think that alternate motivations for EW are no good, so I still think it is an important argument. But I think its role in the dialectic is a little more complicated than I appreciated back then.
Much of my thinking about disagreement problems revolves around the following
table. The idea behind the table, and much of the related argument, is due to Thomas
Kelly ( 2010 ). In the table, S and T antecedently had good reasons to take themselves to
be epistemic peers, and they know that their judgments about p are both based on E . In
fact, E is excellent evidence for p , but only S judges that p ; T judges that ¬ p . Now let’s
look at what seems to be the available evidence for and against p .
Evidence for p              Evidence against p
S’s judgment that p         T’s judgment that ¬p
E
Now that doesn’t look to me like a table where the evidence is equally balanced for and
against p . Even granting that the judgments are evidence over and above E , and granting
that how much weight we should give to judgments should track our ex ante judgments
of their reliability rather than our ex post judgments of their reliability, both of which
strike me as false but necessary premises for EW, it still looks like there is more evidence
for p than against p . 7 There is strictly more evidence for p than against it, since E exists.
If we want to conclude that S should regard p and ¬ p as equally well supported for
someone in her circumstance, we have to show that the table is somehow wrong. I know
of three possible moves the EW defender could make here.
David Christensen (2010a), as I read him, says that the table is wrong because when we are representing the evidence S has, we should not include her own judgment. There’s something plausible to this. Pretend for a second that T doesn’t exist, so it’s clearly rational for S to judge that p. It would still be wrong of S to say, “Since E is true, p. And I judged that p, so that’s another reason to believe that p, because I’m smart.” By hypothesis, S is smart, and that smart people judge things is reason to believe those things are true. But this doesn’t work when the judgment is one’s own. This is something that needs explaining in a full theory of the epistemic significance of judgment, but let’s just take it as a given for now.8 Now the table, or at least the table as is relevant to S, looks as follows.
Evidence for p              Evidence against p
E                           T’s judgment that ¬p
But I don’t think this does enough to support EW, or really anything like it. First, it won’t be true in general that the two sides of this table balance. In many cases, E is strong evidence for p, and T’s judgment won’t be particularly strong evidence against p. In fact, I’d say the kind of case where E is much better evidence for p than T’s judgment is against p is the statistically normal kind. Or, at least, it is the normal kind of case modulo the assumption that S and T have the same evidence. In cases where that isn’t true, learning that T thinks ¬p is good evidence that T has evidence against p that you don’t have, and you should adjust accordingly. But by hypothesis, S knows that isn’t the case here. So I don’t see why this should push us even close to taking p and ¬p to be equally well supported.
The other difficulty for defending EW by this approach is that it seems to undermine the original motivations for the view. As Christensen notes, the table above is specifically for S. Here’s what the table looks like for T.
Evidence for p              Evidence against p
S’s judgment that p
E
It’s no contest! So T should firmly believe p. But that’s not a way of saying the two peers should have equal weight. Instead, it looks like a way of saying that, at least in this case, the right reasons view is right; T should simply follow the evidence, and believe as if the disagreement never happened. Obviously there are going to be many views that say that the right reasons view happens to correlate with the right answer in this case without
7 By ex ante and ex post I mean before and after we learn about S and T ’s use of E to make a judgment
about p . I think that should change how reliable we take S and T to be, and that this should matter to what
use, if any, we put their judgments, but it is crucial to EW that we ignore this evidence. Or, at least, it is
crucial to EW that S and T ignore this evidence.
8 My explanation is that evidence screens any judgments made on the basis of that evidence, in the sense
of screening to be described below.
being generally correct. (The right reasons view, to this way of thinking, has the virtues
of a clock stopped at the correct time.) So this isn’t an argument for right reasons. But it
does seem to undermine strong versions of equal weight.
The second approach to blocking the table is to say that T’s judgment is an undercutting defeater for the support E provides for p. This looks superficially promising. Having a smart person say that your evidence supports something other than what you thought it did seems like it could be an undercutting defeater, since it is a reason to think the evidence supports something else, and hence doesn’t support what you thought it does. And, of course, if E is undercut, then the table just has one line on it, and the two sides look equal.
But it doesn’t seem like it can work in general, for a reason that Kelly (2010) makes clear. We haven’t said what E is so far. Let’s start with a case where E consists of the judgments of a million other very smart people that p. Then no one, not even the EW theorist, will think that T’s judgment undercuts the support E provides to p. Indeed, even if E just consists of one other person’s judgment, it won’t be undercut by T’s judgment. The natural thought for an EW-friendly person to have in that case is that since there are two people who think p, and one who thinks ¬p, then S’s credence in p should be 2/3. But that’s impossible if E, that is, the third person’s judgment, is undercut by T’s judgment. It’s true that T’s judgment will partially rebut the judgments that S, and the third party, make. It will move the probability of p, at least according to EW, from 1 to 2/3. But that evidence won’t be in any way undercut.
And as Kelly points out, evidence is pretty fungible. Whatever support p gets from other people’s judgments, it could get very similar support from something other than a judgment. We get roughly the same evidence for p by learning that a smart person predicts p as by learning that a successful computer model predicts p. So the following argument looks sound to me.

1. When E consists of other people’s judgments, the support it provides to p is not undercut by T’s judgment.
2. If the evidence provided by other people’s judgments is not undercut by T’s judgment, then some non-judgmental evidence is not undercut by T’s judgment.
3. So, some non-judgmental evidence is not undercut by T’s judgment.

So it isn’t true in general that the table is wrong because E has been defeated by an undercutting defeater.
There’s another problem with the defeat model in cases where the initial judgments are not full beliefs. Change the case so E provides basically no support to either p or ¬p. Here’s one case where that might happen.9 S and T are two old blowhards discussing an upcoming football match, as it turns out a cup final, between two teams about which they know next to nothing. All that they know is that one of the teams is full of players with “big match experience,” and the other team isn’t. Both S and T are aware that there
9 This is a variant of a case Ishani Maitra uses for a different purpose.
are mountains of studies showing no correlation between big match experience and
success in cups. But they dismiss these because, well, what would pointy-headed stats
geeks know about winning a cup? Let E be what they know about the two teams, and p
be the proposition that the team with the big match experience will win. S doesn’t
think there’s enough reason to believe p , since they know so little about the teams, while
T thinks the big match experience will be enough, so p will be true. Also assume that S
and T are peers; they know that they have a similar, and similarly bad, track record at
predicting these kinds of games. Here’s the table then:
Evidence for p              Evidence against p
T’s judgment that p
Since E is irrelevant, it doesn’t appear, either before or after we think about defeaters. And since T is not very competent, that’s not great evidence for p. But EW says that S should “split the difference” between her initial agnosticism and T’s firm belief in p. I don’t see how that could be justified by S’s evidence.
So that move doesn’t work either, and we’re left with the third option for upsetting
the table. This move is, I think, the most promising of the lot. It is to say that S ’s own
judgment screens off the evidence that E provides. So the table is misleading, because it
“double-counts” evidence.
The idea of screening I’m using here, at least on behalf of EW, comes from Reichenbach’s The Direction of Time, and in particular from his work on deriving a principle that lets us infer that events have a common cause. The notion was originally introduced in probabilistic terms. We say that C screens off the positive correlation between B and A if the following two conditions are met:

1. A and B are positively correlated probabilistically, that is, Pr(A ∣ B) > Pr(A).
2. Given C, A and B are probabilistically independent, that is, Pr(A ∣ B ∧ C) = Pr(A ∣ C).
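The two conditions can be checked on a toy common-cause joint distribution; a sketch in which the numbers are mine, chosen only so that C drives both A and B:

```python
from itertools import product

# Toy joint distribution: C is a common cause. Given C (or given not-C),
# A and B are independent; each is likely exactly when C obtains.
p_c = 0.5
p_a = {True: 0.8, False: 0.2}   # Pr(A | C), Pr(A | not-C)
p_b = {True: 0.8, False: 0.2}   # Pr(B | C), Pr(B | not-C)

def pr(event):
    """Probability that event(a, b, c) holds, summing over the 8 worlds."""
    total = 0.0
    for a, b, c in product([True, False], repeat=3):
        w = p_c if c else 1 - p_c
        w *= p_a[c] if a else 1 - p_a[c]
        w *= p_b[c] if b else 1 - p_b[c]
        if event(a, b, c):
            total += w
    return total

# Condition 1: A and B are positively correlated, Pr(A|B) ≈ 0.68 > Pr(A) = 0.5.
pr_a_given_b = pr(lambda a, b, c: a and b) / pr(lambda a, b, c: b)
print(pr_a_given_b, pr(lambda a, b, c: a))

# Condition 2: given C, B adds nothing about A; both conditionals ≈ 0.8.
pr_a_given_bc = pr(lambda a, b, c: a and b and c) / pr(lambda a, b, c: b and c)
pr_a_given_c = pr(lambda a, b, c: a and c) / pr(lambda a, b, c: c)
print(pr_a_given_bc, pr_a_given_c)
```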
I’m interested in an evidential version of screening. If we have a probabilistic analysis of evidential support, the version of screening I’m going to offer here is identical to the Reichenbachian version just provided. But I want to stay neutral on whether we should think of evidence probabilistically.10 When I say that C screens off the evidential support that B provides to A, I mean the following. (Both these clauses, as well as the statement that C screens off B from A, are made relative to an evidential background. I’ll leave that as tacit in what follows.)

1. B is evidence that A.
2. B ∧ C is no better evidence that A than C is.11
10 In general I’m skeptical of always treating evidence probabilistically. Some of my reasons for skepticism are in (Weatherson 2007).
11 Branden Fitelson pointed out to me that the probabilistic version entails one extra condition, namely that ¬B ∧ C is no worse evidence for A than C is. But I think that extra condition is irrelevant to disagreement debates, so I’m leaving it out.
Here is one stylized example of where screening helps conceptualize things. Detective Det is trying to figure out whether suspect Sus committed a certain crime. Let A be that Sus is guilty, B be that Sus was seen near the crime scene near the time the crime was committed, and C be that Sus was at the crime scene when the crime was committed. Then both clauses are satisfied. B is evidence for A; that’s why we look for witnesses who place the suspect near the crime scene. But given the further evidence C, B is neither here nor there with respect to A. We’re only interested in finding out whether Sus was near the crime scene because we want to know whether he was at the crime scene. If we know that he was there, then learning he was seen near there doesn’t move the investigation along. So both clauses of the definition of screening are satisfied.
When there is screened evidence, there is the potential for double-counting. It would be wrong to say that if we know B ∧ C we have two pieces of evidence against Sus. Similarly, if a judgment screens off the evidence it is based on, then the table double-counts the evidence for p. Removing the double-counting, by removing E, makes the table symmetrical. And that’s just what EW needs.
So the hypothesis that judgments screen the evidence they are based on, or JSE for short, can help EW respond to the argument from this table. And I think it gets the right results in the puzzle cases mentioned above. It helps with the “cup final” case, because it says that since T has such a poor track record, S shouldn’t be confident in p on the basis of T’s judgment. It provides a principled reason why the number of peers on either side of a disagreement matters, since it denies that conflicting judgments are defeaters for each of the peers. And it explains why even the mistaken party should “split the difference” rather than concede to the other person, since the evidence E that would motivate a concession has been screened off. So I think it’s an attractive way forward for EW.
But JSE is vulnerable to regress arguments. I now think that the argument in “Disagreeing about Disagreement” is a version of the regress argument against JSE. So really it’s an argument against the most promising response to a particularly threatening argument against EW.
Unfortunately for EW, those regress arguments are actually quite good. To see this, let’s say an agent makes a judgment on the basis of E, and let J be the proposition that that judgment was made. JSE says that E is now screened off, and the agent’s evidence is just J. But with that evidence, the agent presumably makes a new judgment. Let J′ be the proposition that that judgment was made. We might ask now: does J′ sit alongside J as extra evidence, is it screened off by J, or does it screen off J? The picture behind JSE, the picture that says that judgments on the basis of some evidence screen that evidence, suggests that J′ should in turn screen J. But now it seems we have a regress on our hands. By the same token, J″, the proposition concerning the new judgment made on the basis of J′, should screen off J′, and the proposition J‴ about the fourth judgment made should screen off J″, and so on. The poor agent has no unscreened evidence left! Something has gone horribly wrong.
I think this regress is ultimately fatal for JSE. But to see this, we need to work through
the possible responses that a defender of JSE could make. There are really just two moves
that seem viable. One is to say that the regress does not get going, because J is better evidence than J′, and perhaps screens it. The other is to say that the regress is not vicious, because all these judgments should agree in their content. I’ll end the paper by addressing these two responses.
The first way to avoid the regress is to say that there is something special about the first level. So although J screens E, it isn’t the case that J′ screens J. That way, the regress doesn’t start. This kind of move is structurally like the move Adam Elga (2010) has recently suggested. He argues that we should adjust our views about first-order matters in (partial) deference to our peers, but we shouldn’t adjust our views about the right response to disagreement in this way.
It’s hard to see what could motivate such a position, either about disagreement or about screening. It’s true that we need some kind of stopping point to avoid these regresses. But the most natural stopping point is the very first level. Consider a toy example. It’s common knowledge that there are two apples and two oranges in the basket, and no other fruit. (And that no apple is an orange.) Two people disagree about how many pieces of fruit there are in the basket. A thinks there are four, B thinks there are five, and both of them are equally confident. Two other people, C and D, disagree about what A and B should do in the face of this disagreement. All four people regard each other as peers. Let’s say C’s position is the correct one (whatever that is) and D’s position is incorrect. Elga’s position is that A should partially defer to B, but C should not defer to D. This is, intuitively, just back to front. A has evidence that immediately and obviously entails the correctness of her position. C is making a complicated judgment about a philosophical question where there are plausible and intricate arguments on each side. The position C is in is much more like the kind of case where experience suggests a measure of modesty and deference can lead us away from foolish errors. If anyone should be sticking to their guns here, it is A, not C.
The same thing happens when it comes to screening. Let’s say that A has some evidence that (a) she has made some mistakes on simple sums in the past, but (b) tends to massively over-estimate the likelihood that she’s made a mistake on any given sum. What should she do? One option, in my view the correct one, is that she should believe that there are four pieces of fruit in the basket, because that’s what the evidence obviously entails. Another option is that she should be not very confident there are four pieces of fruit in the basket, because she makes mistakes on these kinds of sums. Yet another option is that she should be pretty confident (if not completely certain) that there are four pieces of fruit in the basket, because if she were not very confident about this, this would just be a manifestation of her over-estimation of her tendency to err. The “solution” to the regress we’re considering here says that the second of these three reactions is the uniquely rational reaction. The idea behind the solution is that we should respond to the evidence provided by first-order judgments, and correct that judgment for our known biases, but that we shouldn’t in turn correct for the flaws in our self-correcting routine. I don’t see what could motivate such a position. Either we just rationally respond to the (first-order) evidence, and in this case just believe there are four pieces of fruit in the basket, or we keep
correcting for errors we make in any judgment. It’s true that the latter plan leads either to regress or to the kind of ratificationism we’re about to critically examine. But that’s not because the disjunction is false, it’s because the first disjunct is true.
A more promising way to avoid the regress is suggested by some other work of Elga’s, in this case a paper he co-wrote with Andy Egan (Egan and Elga 2005). Their idea, as I understand them, is that for any rational agent, any judgment they make must be such that when they add the fact that they made that judgment to their evidence (or, perhaps better given JSE, replace their evidence with the fact that they made that judgment), the rational judgment to make given the new evidence has the same content as the original judgment. So if you’re rational, and you come to believe that p is likely true, then the rational thing to believe given you’ve made that judgment is that p is likely true.
Note that this isn’t as strong a requirement as it may first seem. The requirement is not that any time an agent makes a judgment, rationality requires that they actually reflect, and say on reflection that it is the correct judgment. Rather, the requirement is that the only judgments rational agents make are those judgments that, on reflection, they would reflectively endorse. We can think of this as a kind of ratifiability constraint on judgment, like the ratifiability constraint on decision-making that Richard Jeffrey uses to handle Newcomb cases (Jeffrey 1983).
To be a little more precise, a judgment is ratifiable for agent S just in case the rational judgment for S to make conditional on her having made that judgment has the same content as the original judgment. The thought then is that we avoid the regress by saying rational agents always make ratifiable judgments. If the agent does do that, there isn’t much of a problem with the regress; once she gets to the first level, she has a stable view, even once she reflects on it.
It seems to me that this assumption, that only ratifiable judgments are rational, is what drives most of the arguments in Egan and Elga’s paper on self-confidence. So I don’t think this is a straw-man move. Indeed, as the comparison to Jeffrey suggests, it has some motivation behind it. Nevertheless it is false. I’ll first note one puzzling feature of the view, then one clearly false implication of the view.
The puzzling feature is that in some cases there may be nothing we can rationally do which is ratifiable. One way this can happen involves an example Egan and Elga offer (in a footnote) about the directionally challenged driver.12 Imagine that when I’m trying to decide whether p, for any p in a certain field, I know (a) that whatever judgment I make will usually be wrong, and (b) if I conclude my deliberations without making a judgment, then p is usually true. If we also assume JSE, then it follows there is no way for me to end deliberation. If I make a judgment, I will have to retract it because of (a). But if I think of ending deliberation, then because of (b) I’ll have excellent evidence that p, and it would be irrational to ignore this evidence. (Nicholas Silins (2005) has used the idea that failing to make a judgment can be irrational in a number of places, and those arguments motivated this example.)
12 A similar example is discussed in (Christensen 2010b).
This is puzzling, but not obviously false. It is plausible that there are some epistemic
dilemmas, where any position an agent takes is going to be irrational. (By that, I mean it
is at least as plausible that there are epistemic dilemmas as that there are moral dilemmas,
and I think the plausibility of moral dilemmas is reasonably high.) That a case like the
one I’ve described in the previous paragraph is a dilemma is perhaps odd, but no reason
to reject the theory.
The real problem, I think, for the ratifiability proposal is that there are cases where unratifiable judgments are clearly preferable to ratifiable judgments. Assume that I’m a reasonably good judge of what’s likely to happen in baseball games, but I’m a little over-confident. And I know I’m over-confident. So the rational credence, given some evidence, is usually a little closer to 1/2 than I admit. At risk of being arbitrarily precise, let’s say that if p concerns a baseball game, and my credence in p is x, the rational credence in p, call it y, for someone with no other information than this is given by:
y = x + sin(2πx)/50
To give you a graphical sense of how that looks: [Figure: the dark curve is y = x + sin(2πx)/50, plotted against the lighter diagonal line y = x, for x from 0 to 1.]
Note that the two lines intersect at three points: (0, 0), (1/2, 1/2), and (1, 1). So if my credence in p is either 0, 1/2, or 1, then my judgment is ratifiable. Otherwise, it is not. So the ratifiability constraint says that for any p about a baseball game, my credence in p should be either 0, 1/2, or 1. But that’s crazy. It’s easy to imagine that I know (a) that in a particular game, the home team is much stronger than the away team, (b) that the stronger team usually, but far from always, wins baseball games, and (c) I’m systematically a little over-confident about my judgments about baseball games, in the way just described. In such a case, my credence that the home team will win should be high, but less than 1. That’s just what the ratificationist denies is possible.
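The claim about which credences are ratifiable can be checked directly; a sketch (the grid search and tolerance are mine):

```python
import math

def corrected(x):
    # The rational credence given my announced credence x, per the
    # over-confidence correction in the text.
    return x + math.sin(2 * math.pi * x) / 50

# Ratifiable credences are the fixed points corrected(x) = x, i.e. the
# points where sin(2*pi*x) = 0. On [0, 1] those are exactly 0, 1/2, and 1.
fixed_points = [k / 1000 for k in range(1001)
                if abs(corrected(k / 1000) - k / 1000) < 1e-9]
print(fixed_points)   # [0.0, 0.5, 1.0]

# Everywhere else the correction nudges credences toward 1/2:
print(corrected(0.25), corrected(0.75))   # ≈ 0.27 and ≈ 0.73
```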
This kind of case proves that it isn’t always rational to have ratifiable credences. It would take us too far afield to discuss this in detail, but it is interesting to think about the comparison between the kind of case I just discussed, and the objections to backwards induction reasoning in decision problems that have been made by Pettit and Sugden (1989), and by Stalnaker (1996, 1998). The backwards induction reasoning they criticize is, I think, a development of the idea that decisions should be ratifiable. And the clearest examples of when that reasoning fails concern cases where there is a unique ratifiable decision, and it is guaranteed to be one of the worst possible outcomes. The example I described in the last few paragraphs has, quite intentionally, a similar structure.
The upshot of all this is that I think these regress arguments work. They aren’t, I
think, directly an argument against EW. What they are is an argument against the most
promising way the EW theorist has for arguing that the table I started with misstates S ’s
epistemic situation. Given that the regress argument against JSE works, though, I don't
see any way of rescuing EW from this argument.
References
Christensen, David (2007) “Epistemology of Disagreement: The Good News,” Philosophical
Review 116: 187–217. doi:10.1215/00318108-2006-035.
— — (2010a) “Disagreement, Question-Begging and Epistemic Self-Criticism,” Philosophers’
Imprint 11: 1–22.
— — (2010b) “Higher-Order Evidence,” Philosophy and Phenomenological Research 81: 185–215.
doi:10.1111/j.1933-1592.2010.00366.x.
Egan, Andy, and Adam Elga (2005) “I Can’t Believe I’m Stupid,” Philosophical Perspectives 19:
77–93. doi:10.1111/j.1520-8583.2005.00054.x.
Elga, Adam (2007) "Reflection and Disagreement," Noûs 41: 478–502. doi:10.1111/j.1468-0068.2007.00656.x.
— — (2010) "How to Disagree About How to Disagree," in R. Feldman and T. A. Warfield (eds.)
Disagreement (Oxford: Oxford University Press), 175–87.
Feldman, Richard (2005) “Respecting the Evidence,” Philosophical Perspectives 19: 95–119.
doi:10.1111/j.1520-8583.2005.00055.x.
— — (2006) "Epistemological Puzzles About Disagreement," in Stephen Cade Hetherington (ed.) Epistemology Futures (Oxford: Oxford University Press), 216–26.
Jeffrey, Richard C. (1983) The Logic of Decision (Chicago: University of Chicago Press).
Kelly, Thomas (2005) "The Epistemic Significance of Disagreement," Oxford Studies in Epistemology
1: 167–96.
— — (2010) "Peer Disagreement and Higher-Order Evidence," in R. Feldman and T. A. Warfield
(eds.) Disagreement (Oxford: Oxford University Press), 111–74.
Pettit, Philip, and Robert Sugden (1989) “The Backward Induction Paradox,” Journal of Philosophy
86: 169–82.
Silins, Nicholas (2005) “Deception and Evidence,” Philosophical Perspectives 19: 375–404.
doi:10.1111/j.1520-8583.2005.00066.x.
Stalnaker, Robert C. (1996) “Knowledge, Belief and Counterfactual Reasoning in Games,” Eco-
nomics and Philosophy 12: 133–63. doi:10.1017/S0266267100004132.
— — (1998) “Belief Revision in Games: Forward and Backward Induction,” Mathematical Social
Sciences 36: 31–56. doi:10.1016/S0165-4896(98)00007-9. < http://www.sciencedirect.com/
science/article/B6V88-3T51PP8-X/2/d5bdbf6bf115c628d5bbeb7ceb1bf466 > accessed 16
November 2012.
Weatherson, Brian (2007) “The Bayesian and the Dogmatist,” Proceedings of the Aristotelian Society
107: 169–85. doi:10.1111/j.1467-9264.2007.00217.x.
Wedgwood, Ralph (2007) The Nature of Normativity (Oxford: Oxford University Press).
Williamson, Timothy (2000) Knowledge and its Limits (Oxford: Oxford University Press).
B. Conciliation
4

Epistemic Modesty Defended 1

David Christensen

Many recent writers have embraced one version or another of the thought that the disagreement of equally informed, equally skillful thinkers can (in at least some circumstances) require a rational agent to revise her beliefs (to at least some extent)—even if her original assessment of the common evidence was correct. There has, of course, been much disagreement as to the amount of revision required in certain cases, and as to the theoretical underpinnings of the required revisions. But a common thread uniting all these views is the recognition that we may make mistakes in assessing evidence; that the disagreement of others who have assessed the same evidence differently provides at least some reason to suspect that we have in fact made such a mistake; and that reason to suspect that we've made a mistake in assessing the evidence is often also reason to be less confident in the conclusion we initially came to. The rationale for revision, then, expresses a certain kind of epistemic modesty.

So far, this may seem like little more than epistemic common sense. But it turns out that the sort of modesty in question has some puzzling consequences. The consequences have come out mainly in discussions of positions advocating highly conciliatory responses to disagreement. On such positions, if I hold some philosophical view, for example, and find myself in disagreement with other philosophers—philosophers familiar with all the same arguments, and possessing philosophical skills equivalent to mine—I often should become much less confident in my philosophical view, perhaps, in categorical-belief terms, withholding belief on the topic. 2

1 Versions of this paper were presented at the Collège de France, University of Oxford and The Ohio State University; thanks to all three audiences for helpful discussion. I'd also like to thank Stew Cohen, Adam Elga, Sophie Horowitz, Jennifer Lackey, John Pittard, Josh Schechter, Jonathan Vogel, and the participants in my graduate seminar for valuable discussion and/or comments on earlier drafts.

2 Of course, the description of the case needs filling out in various ways. I'm assuming that there is not a huge imbalance in the number of philosophers holding the different views, that there is not a preponderance of more highly skilled philosophers on one side, that I have reason to think that the stated views of the philosophers involved reflect their ordinary and honest appraisal of the arguments, not bizarre brainwashing or joking, etc.
Suppose I hold such a view (call it CV, for Conciliatory View 3 ), and that I practice what I preach. So, for example, when I think about the arguments directly relevant to a certain version of mentalist Internalism about epistemic justification, it seems very likely to me that it's true. But in response to the disagreement of epistemologists I respect, I become much less confident in Internalism. Now as it turns out, I'm also aware of the current controversy about disagreement, and know that a number of epistemologists reject CV in favor of positions toward the "steadfast" end of the spectrum: they hold that one may (often, at least in large measure) maintain one's confidence in one's initial beliefs despite knowledge of disagreement by those who seem, independent of the disagreement, to be as well positioned as oneself to arrive at accurate views on the disputed matter. I also quite reasonably respect epistemologists who hold steadfast views and reject CV. Insofar as I practice what I preach, it seems that CV requires me to become much less confident in CV as well.
This puts the advocate of CV in a situation that’s puzzling in a number of ways. For
one thing, it would seem that, in the present epistemological climate, at least, CV has
the property that one cannot rationally believe it (at least very strongly), even if it’s
true. But this in itself isn’t obviously mysterious or deeply problematic. After all, there
would seem to be other situations—ones in which all epistemologists accept CV, for
instance—in which one could rationally believe in CV. So CV isn’t obviously intrin-
sically impossible to believe rationally. The present situation might, for all we’ve seen
so far, simply be the sort of situation we confront on all kinds of topics all the time:
one in which the truth on some matter is not rationally believable, because our evi-
dence, misleadingly, points away from it.
But there are a couple of more serious worries in this general neighborhood. In the next section, I'll look at the worry that the self-undermining character of CV makes it impossible to maintain any stable view of disagreement that doesn't reject CV completely. In the sections that follow, I'll look at an argument which uses self-undermining to show that CV, and many other principles expressing epistemic modesty, are inconsistent, and thus must be rejected. I will argue that, despite these difficulties, epistemic modesty can be defended.
1 Self-undermining and instability
The first worry emerges when we think in more detail about how someone who is initially convinced by conciliationist arguments should react to the anti-CV beliefs of philosophers she respects. We can see the problem even in a simple case where we abstract from the wider debate and consider just Connie, a conciliationist, and Steadman, who holds a steadfast view. Suppose that Connie and Steadman are unaware of the wider debate. Connie, in her study, thinks hard about the issue and becomes highly confident of CV—say, she reaches credence 0.99—and goes to tell Steadman. He tells her that he, too, has been thinking hard about disagreement, but that he has become equally confident of a steadfast view SV. They discuss the issue thoroughly, each offering his or her own arguments, and conscientiously attending to the arguments of the other. At the end of the day, unfortunately, they still disagree just as strongly. Let us assume, for the sake of argument, that Connie's original take on these arguments is correct; these arguments do in fact support CV, and in fact Steadman has misjudged them.

3 An often-used term to describe certain views of this sort is "Equal Weight View." This term was used by Elga ( 2007 ) to describe his own view, but has been used by various writers, not all of whom use it the same way. My term is intended to be more general, and not to suggest limitation to, e.g., Elga's version of the position. The term "conciliatory" is actually also taken from Elga (see his 2010 ).
Connie, back in her study, reflects on the day. She considers Steadman her philosophical equal; in fact, she has strong reason to believe that Steadman is as likely to get things right in philosophy as she is. And she has every reason to think that Steadman's general parity in philosophical reasoning would extend to thinking about the epistemology of disagreement (at least, insofar as she puts aside the fact that he's arrived at SV, which seems wrong to her). So, good conciliationist that she has become, Connie reduces her confidence in CV dramatically. Say, for the sake of simplicity, that she now has about equal confidence in CV and SV.
But there seems to be something unsatisfactory about where Connie has ended up.
From her perspective when she emerged from her study, she did the right thing in fully
conciliating with Steadman. But from her present perspective, there’s only about half a
chance that CV is correct (in which case she reacted correctly to the fact of Steadman’s
disagreement). There’s also about half a chance that SV is correct—in which case she
should ignore disagreement and maintain the belief that’s supported by the original
arguments; that view, by her lights (and in fact) is CV. What should her reaction be to
this new uncertainty about the correct rules for belief-management?
A natural suggestion is something like this: insofar as she divides her credence between two different rules for belief-management, and those rules dictate different credences for some proposition, she should adopt a credence in between what the two rules recommend: a mixture of the recommended credences. If she's much more confident of the first of the rules, her new credence in the relevant proposition should be closer to that recommended by the first rule. In cases such as the present one, where she has about equal confidence in the two rules, she should adopt a credence in the disputed proposition that's about halfway between the credences recommended by the two rules.
Let us suppose for the sake of argument that something along the lines of this natural thought is correct, and that Connie sees this. It seems that she should decide that her present credence in CV is too low—that she went too far in her initial compromise with Steadman. For while CV recommends her present credence of about 0.5, SV recommends that she have 0.99 credence. If she mixes those recommendations equally, she'll arrive at around 0.75 credence in CV.
But suppose she does this. It seems that she's still in trouble. For now she again thinks that it is much more likely that she should be conciliatory than that she should be steadfast. So it should now seem to her that she hasn't conciliated enough. Applying the commonsense thought again, she should readjust her credence in CV to a mixture of about 75 per cent the conciliatory compromise between her original assessment and Steadman's, and 25 per cent the SV-mandated credence produced by her original assessment of the arguments. This will land her in the vicinity of having 0.62 credence in CV. And the process continues. 4
A couple of different questions arise at this point. One of them is whether there is any
end to the series of adjustments that Connie has begun. A blog post by Matt Weiner 5
suggests that there is. We might formalize the commonsense thought somewhat as fol-
lows, at least for the special case where an agent’s credence about the right epistemic
rules is divided between rule A and rule B (let Ap be the credence in p that rule A would
recommend, and Bp be the credence in p that rule B would recommend, and let A and
B stand, respectively, for the claims that rules A and B are correct):
(*) cr(p) = cr(A)∙Ap + cr(B)∙Bp
In other words, the agent’s credence in p should be a weighted average of the credences
rule A and rule B would recommend for her, where the weighting is determined by
how much credence she has in the correctness of rules A and B. 6
Weiner assumes that the agent in question begins by having credence 1 in CV, and her
acknowledged peer begins with credence 1 in SV. 7 Is there a stable position for the agent
to take, once she applies (*) to her view about disagreement? Given the description of the case, let us suppose that CV recommends that she split the difference, arriving at 0.5 credence in CV, and that SV recommends that she retain her original view: full credence in CV. Putting these recommendations into (*) above, we get:

cr(CV) = cr(CV)∙0.5 + (1 − cr(CV))∙1.
Weiner points out that the equation balances when cr(CV) is 2/3. In other words, if the
agent adopts 2/3 credence in CV, her view is consistent with (*), which represents our
commonsense thought about how to react to uncertainty about epistemic rules. So
perhaps this is where Connie should end up in our beginning example. Weiner also
indicates how this solution can generalize to other cases of the same sort, but where the
agents in question begin with diff erent degrees of credence in CV and SV. Thus
defenders of CV may hope that the view does not, after all, lead to problematic insta-
bility, at least in Connie’s sort of case.
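Weiner's balance point can be computed directly. Treating (*) in this two-rule case as the equation cr = cr∙a + (1 − cr)∙b, where a is the credence CV recommends and b the credence SV recommends, rearranging gives cr = b/(1 − a + b); the function below is my rearrangement, not Weiner's notation:

```python
from fractions import Fraction

def stable_credence(cv_rec, sv_rec):
    """Fixed point of (*) for two rules: solves cr = cr*cv_rec + (1-cr)*sv_rec,
    which rearranges to cr = sv_rec / (1 - cv_rec + sv_rec)."""
    return sv_rec / (1 - cv_rec + sv_rec)

# Weiner's case: CV recommends splitting the difference (1/2),
# SV recommends retaining full credence (1).
print(stable_credence(Fraction(1, 2), Fraction(1)))  # 2/3

# The same formula generalizes to other starting credences, e.g. 0.99:
print(stable_credence(0.5, 0.99))  # about 0.664
```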
4 The worry that a defender of CV, confronted with disagreement about CV, will not be able to form a
stable credence in CV is due to Weatherson ( 2007 ), reprinted as part of his contribution to this volume.
Weatherson uses a somewhat more complex argument for his conclusion, but I think that the argument
sketched in the text, which depends on the same sort of assumption about how beliefs about the correct rule
for belief-management interact with beliefs in general, makes the same point.
5 Weiner ( 2007 ); this is a response to Weatherson ( 2007 ). Weiner’s post centers around an example more
like the one discussed here.
6 Very similar principles are put forth by Weatherson and Weiner in their discussions of instability.
7 There may be problems with this if credence 1 is interpreted in standard ways. But for present purposes,
let us put this aside; the example works equally well with credences near, but not at, the extremes.
However, it is worth noticing that there is something quite odd about the above discussion. Consider how Connie should think about her credence in CV, supposing that she's obeying (*), and has settled on 2/3 credence in CV and 1/3 credence in SV. The principle (*) captures the thought that Connie should weight the recommendations of the two rules according to how likely she thinks they are to be correct—that is, to correctly describe rational belief. But while this seems plausible when put in the abstract, its plausibility is considerably strained when one considers how Connie should regard her credences in the correctness of those epistemic rules.
The principle (*) would have Connie balance two credence-components: the credence recommended by CV, and the credence recommended by SV. As we've described the story so far, CV recommends a much lower credence in CV than SV recommends. Thus to the extent that Connie favors the CV-recommended credence, she'll be less confident in CV. But she's supposed to weight these factors by her credences in the respective correctness of the two rules—by how likely she takes them to be. Thus to the extent that she thinks CV is correct, she'll be led to lower her credence in CV; and to the extent that she thinks SV is correct (and thus CV is incorrect), she'll be led to raise her credence in CV! And this fact is perfectly transparent to Connie. It's not at all clear that, in this situation, Connie's following (*) would be reasonable.
To put the point a slightly different way, consider how Connie should think about her own credence. It would be natural for her to think through her own complying with (*) somewhat as follows:

Well, suppose that CV is true. In that case, I shouldn't be very confident of it. And CV is probably correct. So I shouldn't be too confident in it.

Suppose SV is correct. In that case, I should stick to my guns and be highly confident that SV is false. There's a decent chance that SV is correct. So I should have a decent amount of credence that it's incorrect.
Shouldn’t Connie arrange her beliefs instead so that, to the extent that CV is likely to
be true, she has high credence in CV? There seems to be something fundamentally
incoherent in Connie’s reasoning in a way that manifestly reverses this relationship.
It's worth noting that this is not a difficulty with Weiner's solution to the instability problem. The difficulty arises from the commonsense thought which generated the instability in the first place, as formulated by (*). CV itself just says that Connie should reduce her confidence in a certain proposition (that CV is correct) after talking to Steadman. This in itself is not sufficient to generate any instability at all. But when Connie's reduced confidence in this proposition is taken to require her to change her epistemic procedures in the way recommended by (*), then the difficulties arise. And while it seems extremely plausible that one's beliefs about things in general ought in some way to be sensitive to one's beliefs about the rules of rationality, 8 we've just seen that one seemingly natural way of implementing this plausible idea generates puzzling results. 9 And this puzzlement is independent of whether there is a stable way for Connie to comply with (*).

8 Isn't that one of the main reasons people have given for being interested in epistemology?
So for the present, I’d like to leave off worrying about instability per se. But I want to
keep one lesson from the above discussion in mind: the interaction between CV and
(*) suggests that the oddity of self-undermining epistemic principles is closely tied to
questions about how credences about what credences are rational constrain credences
in general. This connection will, I think, emerge more clearly in what follows.
2 Self-undermining and inconsistency
The second worry about CV is not centered around stability, but around whether it is intrinsically incoherent. The defender of CV, as noted above, might start out being quite sanguine about the possibility that CV is not rationally believable at present, since plenty of true things are not rationally believable, given our current evidence. He might note that the opinions of others are important evidence, but that it's quite possible that the evidence in this case is misleading. Of course, he'll have to admit that, though he defends CV, he's not rationally confident in its truth. But why not go on defending it anyway, while others defend other views? After all, research goes best when different investigators pursue different lines simultaneously. Thus David Christensen ( 2009 : 763) optimistically suggests: "Of course, we may still work hard at producing and disseminating arguments for the view, hoping to hasten thereby the day when epistemic conditions will brighten, consensus will blossom, and all will rationally and whole-heartedly embrace Conciliationism." But it seems to me that this degree of sanguinity underestimates the difficulty of the problem considerably. 10
The reason is as follows: the sanguine line admits that CV, in the present epistemic
circumstances, requires me to have only moderate credence in its own correctness. Plau-
sibly, this means that I shouldn’t completely follow CV in my present circumstances. But
if that’s right, then CV does not accurately describe how I should govern my beliefs in
my present circumstances. But if CV were a correct general principle, one might argue,
it would give correct direction in the present case. So it’s not a correct principle. 11 Indeed,
the argument suggests that even if conciliationists were tremendously successful in
swaying philosophical opinion, so that the experts were uniformly confi dent that CV
was correct, that this would be a case where the evidence provided by expert opinion
turned out to be misleading!
9 For discussion of a principle ("Rational Reflection") that is closely related to (*), see Christensen (2010b), where more dramatic problems with the principle are developed. I should note that unpublished work by Adam Elga argues that these problems can be avoided by a revised principle, which still captures much of the intuitive appeal of Rational Reflection. Interestingly, I believe that Elga's principle may also avoid the instability problem described above. But it would not be appropriate for me to enter into the details here.
10 Thanks to Josh Schechter for helping me to realize this.
11 Weatherson ( 2007 ) presses an argument very close to this.
A version of this problem is explored in detail by Adam Elga, who takes it to be an
instance of a more general problem that occurs whenever an inductive method calls for
its own rejection: 12
It is incoherent for an inductive method to recommend two incompatible responses to a single course of experience. But that is exactly what a method does if it ever recommends a competing method over itself.

For example, suppose that inductive methods M and N offer contrary advice on how to respond to the course of experience "See lightning, then see a rainbow." In particular, suppose:

1. Method M says: "In response to seeing lightning and then a rainbow, adopt belief state X."

2. Method N says: "In response to seeing lightning and then a rainbow, adopt belief state Y."

(Assume that it is impossible to adopt both belief states X and Y.) But also suppose that M sometimes calls for its own rejection:

3. Method M says: "In response to seeing lightning, stop following method M and start following method N."

Then method M offers inconsistent advice. On the one hand, it directly recommends belief state X in response to seeing lightning and then a rainbow. But on the other hand, it also says that seeing lightning should make one follow method N, which recommends belief state Y in response to seeing lightning and then a rainbow. And it is impossible to follow both pieces of advice. So method M gives incoherent advice about how to respond to seeing lightning then a rainbow. And a similar conflict arises in any case in which an inductive method recommends a competing method over itself. (2010: 181–2)
The argument would seem to apply to CV as follows: suppose that the direct philosophical arguments and evidence strongly support Internalism about epistemic justification, and that I assess the arguments and evidence correctly. CV, as noted above, will require that I still not be highly confident in Internalism, given that so many epistemologists I respect take the arguments and evidence to support Externalism. Suppose that, on CV, I should move from 0.9 credence in Internalism (which is where I'd be on the basis of my considering the direct arguments alone), to 0.52 on the basis of disagreement. And suppose that I, having been convinced of CV, do move to 0.52 on Internalism. Then I'm confronted by disagreement about CV itself, and, again following CV, become much less confident of it. But now that I have serious doubts about whether CV is correct, I am more confident that I should pay less attention to disagreement. In light of that, it seems that I should now not compromise so much with others on Internalism, and should adopt a credence higher than 0.52. But by hypothesis, any credence other than 0.52 would violate CV. So following CV requires me to violate CV. So CV is inconsistent. Call the argument instantiated here the Inconsistency Argument against CV.
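The arithmetic of the last step can be made explicit. The 0.9 and 0.52 figures are from the text; the credence of 0.5 in CV after the disagreement about CV itself, and the use of a (*)-style mixture, are my illustrative assumptions:

```python
cv_rec = 0.52  # credence in Internalism that CV mandates after disagreement
sv_rec = 0.9   # credence a steadfast view keeps (the direct assessment)

cr_cv = 0.5  # serious doubt about CV, induced by disagreement about CV itself

# Mix the two rules' recommendations by how likely each is to be correct,
# in the style of (*):
mixed = cr_cv * cv_rec + (1 - cr_cv) * sv_rec
print(round(mixed, 2))  # 0.71 -- but any credence other than 0.52 violates CV
```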
Elga takes this argument to apply to CV, or any complete inductive method that includes CV, since, as we've seen, a follower of CV is sometimes bound by CV to lose confidence in CV. It's worth noting that, on this way of seeing things, it is not the fact that CV recommends against itself in the present circumstances that causes the problem. Even if everyone happened to believe in CV, CV would still be incoherent, according to this argument. For it would still offer inconsistent recommendations for the possible situation in which many experts reject CV. Elga takes the problem to be decisive: "There is no good reply. Conciliatory views stand refuted" (2010: 182).

12 Elga attributes the general argument to Field ( 2000 ); a related argument occurs in Lewis ( 1971 ).
I take the sort of argument Elga presents to provide a very powerful challenge to CV. But before giving a final assessment of its force, I'd like to examine the argument more closely.
3 The Inconsistency Argument and level-connections
One thing that seems to lie a bit below the surface of the argument as presented above is
that it depends on some sort of level-connecting principle: some principle of rationality
which connects credences about what credences are rational with credences in general.
We saw in section 1 that the Instability Argument relied on a particular version of this
idea. But the general idea seems implicit in the Inconsistency Argument as well.
To see this, note that CV is a rule for believing. It constrains my credences on con-
troversial topics under certain conditions. The Inconsistency Argument begins by
noting that CV will, in certain circumstances, apply to my credence in CV itself: it
will prevent me from being highly confident in the correctness of CV. But the argument then takes this to have implications for what I should believe about other things,
such as Internalism. These implications do not, strictly speaking, follow from CV. If I
lose confidence in CV (due to disagreement), this does not entail that I change my
credence in Internalism. In fact, it would seem that I could obey CV completely, sim-
ply by maintaining my CV-mandated credence in Internalism, even after drastically
lowering my credence in CV—the very principle that mandated that credence. So it
seems that CV, taken neat, does not after all offer inconsistent advice.
To be sure, there would be something quite unattractive about making the doxastic
move just envisioned. I would be regulating my credences in accordance with a prin-
ciple whose correctness I seriously doubted—after all, CV provided my only reason
for not being highly confident in Internalism. Can I rationally just ignore the fact that my lowered credence in Internalism was justified by a rule in which I no longer have confidence? There does seem to be something epistemically wrong with divorcing
what it’s rational for me to believe in general from what it’s rational for me to believe
about what’s rational for me to believe. So my point here is not at all that dependence
on such “level-connecting” intuitions deprives the Inconsistency Argument of force.
The point is just that the considerable force the argument does have derives in part
from a commitment to level-connecting.
Two other points seem worth making about this aspect of the argument. First, deny-
ing level-connections is not only intrinsically implausible; it also should be a particularly
uncomfortable option for the defender of CV. As Weatherson points out, the intuitive
plausibility of CV derives in large part from the appeal of some sort of level-connection.
My reason for losing confidence in Internalism when I find out that others disagree
derives from worrying that my response to the direct evidence and arguments was not,
after all, the most rational one. I worry that I’ve incorrectly assessed the direct evidence
and arguments. If this sort of worry about the rationality of my belief created no rational
pressure for me to revise my beliefs, CV would be unmotivated.
The second point is that while the Inconsistency Argument depends on some sort of
level-connection principle, it does not seem to depend on any very particular version of
the idea, as does the Instability Argument. It seems to require that rational doubts about
the correctness of a certain epistemic principle should weaken the extent to which that
principle governs one’s beliefs in general. But it does not seem to require, for example,
that one believe in accordance with (*). I take this to be a way in which the Inconsist-
ency Argument is more powerful than the Instability Argument. 13
4 The scope and power of the Inconsistency Argument
Let us now look at a reason why one might be quite suspicious of the Inconsistency
Argument. The argument is naturally raised against heavily conciliatory views of dis-
agreement. But it seems clear that the argument, if sound, would have much wider
application.
Consider first the spectrum of views on the rational reaction to disagreement. Many critics of strongly conciliatory views of disagreement have advocated positions on which one need not compromise much with apparent epistemic peers in many cases—say, because the reasons behind one's original judgment on the disputed issue can do double duty and support the judgment that one's apparent peer has made a mistake in evaluating the evidence relevant to that particular issue. Still, such moderately steadfast views typically concede that the disagreement of others has some tendency to affect one's rational degree of confidence—just not nearly as strong a tendency as is claimed by more conciliatory views. 14 But it seems clear that such views are every bit as vulnerable to the Inconsistency Argument as are strongly conciliatory views. For insofar as disagreement has any power to reduce one's rational credences in general, it will presumably have the power to reduce one's rational credence in one's moderately steadfast view. And insofar as that will require one to adopt credences different from those recommended by the moderately steadfast view, the Inconsistency Argument will apply.
13 For example, I believe that the particular level-connecting principle Elga proposes as an improvement on Rational Reflection will enable the Inconsistency Argument, even if it does not enable the Instability Argument.
14 Some examples of such views can be found in Kelly ( 2010 ), Lackey ( 2010 , 2010a ), Sosa ( 2010 ), and
Wedgwood ( 2010 ). Even Kelly ( 2005 ), which advocates a strongly steadfast response to peer disagreement,
acknowledges that disagreement of epistemic superiors—say those who one has reason to believe are less
likely than oneself to be biased—calls for epistemic deference.
86 david christensen
In fact, it would seem that any view about disagreement short of absolute steadfastness is subject to the same problem. Consider:

Minimal Humility: If I've thought about some complex issue P for ten minutes, and have decided P is true, and then find out that many people, most of them smarter than I, have thought long and hard about P, and have independently but unanimously decided that P is false, I should become less confident in P.
Clearly, Minimal Humility is subject to the Inconsistency Argument.15 Indeed, as Bryan Frances points out,

If you accept any thesis T of the form "If such-and-such conditions obtain with respect to one's belief in P, then one should withhold belief in P," then provided the antecedent of T can be true when P = T, then you have to deal with a version of the Self-Application Objection. The probability that some such principle T should be true gives us good grounds to think that there must be something amiss with the Self-Application Objection.16
Frances's abstract way of putting the point brings out that it is not just about the belief-undermining power of disagreement.17 Suppose, for example, one has recently adopted a philosophical view on some moderately complex matter, and then learns that (a) one was subject to incredibly reliable brainwashing techniques designed to produce exactly the opinion one holds, and (b) one was dosed with special drugs which quite reliably both enhance people's susceptibility to brainwashing and leave them feeling mentally clear, and highly confident in their opinions. Should this discovery diminish one's confidence in one's philosophical view? It surely seems so. Yet any general principle which would mandate such a response would also seem to require, when the philosophical view in question is that very principle, that one lose confidence in it. And this would get the Inconsistency Argument up and running. Thus the Inconsistency Argument would seem to rule out countless instances of epistemic modesty which are much less intuitively questionable than CV.
This surely provides a serious reason to be suspicious of the argument.18 So maybe we shouldn't take the abstract possibility of self-undermining as a decisive objection to any of these principles. But before delving deeper into the question of what to make of the Inconsistency Argument, I'd like to look at a line of response suggested by Frances—a line which is intended to be independent of understanding why, or if, the Inconsistency Argument goes wrong.19
15 The example is taken from Christensen (2009).
16 See Frances (2010: 457). Frances's "Self-Application Objection" is essentially similar to what I'm calling the Inconsistency Argument.
17 Elga (2010) also makes clear that the target of the Inconsistency Argument is not limited to theories of disagreement.
18 For some more reasons, see Schechter (forthcoming).
19 What follows is not exactly Frances's argument, which concerns a principle of his own that's a close relative of CV. In adapting his argument to the general case of CV, I think I'm remaining faithful to his intentions. For the original argument see Frances (2010: 457–9).
epistemic modesty defended 87
Frances begins by pointing out that there is a difference between believing a principle true and arranging one's doxastic life in accordance with the principle. With this in mind, he suggests the following course of epistemic action (adapted to CV): withhold judgment on CV, but arrange one's doxastic life in accordance with it anyway. Now this might seem to amount to simply denying level-connection principles which would require one's beliefs about the correctness of epistemic rules to constrain one's doxastic practice. But that is not what Frances has in mind. For while he withholds judgment on whether his conciliatory principle is correct, he does believe that it's in the vicinity of the truth:
As already mentioned, I don't believe that [CV] is true. But I do think that it's closer to the truth than other principles (such as [a version of steadfastness]). I don't mean to suggest that it has a high degree of truth but less than 1 (where degree 1 is the full truth). It might be perfectly false, whatever that means. What I base my decision on, and have confidence in, is the idea that [CV] is in the vicinity of an important truth. I think that [CV] is a good rule of thumb, where all that means is that it's in the vicinity of a truth (endorsing a rule of thumb doesn't mean endorsing its truth). (2010: 459)
The idea, then, is not to dispense with level-connections. Rather, the idea is to endorse
a claim about epistemic rules that is weaker than CV, but which is still strong enough to
cohere with forming lower-level beliefs in a conciliatory way.
If this strategy worked, it would save something very much like CV from the difficulty the Inconsistency Argument poses. And in doing this, it would lessen the worry about the Inconsistency Argument engendered by its wide application, for we could presumably take a similar attitude toward the many extremely plausible principles that it seems to preclude. The strategy would provide a way for prescriptions of epistemic modesty and the Inconsistency Argument to coexist peacefully.
I don't think, though, that the strategy will work in the end. It does allow the person who's attracted to the arguments for CV to follow CV without flouting it by believing CV in the face of excellent epistemologists' disagreement. But the agent pursuing this strategy ends up running afoul of CV anyway. For she remains confident in the weaker proposition: that CV is in the vicinity of the truth—that is, that CV is closer to the truth than steadfast views are. And this weaker claim is also denied by the excellent epistemologists who support steadfast views; they think that steadfast views are closer to the truth.
Moreover, although I'm quite confident that there are many excellent epistemologists working today who would deny that CV is closer to the truth than steadfast views are, it is worth noting that the difficulty with using the above strategy as a way of avoiding the Inconsistency Argument does not depend on this fact. For consider CCV—the view that CV is Close to the truth, and closer than steadfast views. Even if there weren't a sufficient number of actual excellent epistemologists who reject CCV, it's clear that there are possible situations in which there are. And these are situations in which CCV would entail that high confidence in CCV was irrational. And this, as we've seen, is all
that's needed to launch the Inconsistency Argument. So the strategy of maintaining confidence in a weakened conciliatory view does not seem to me to provide a way of escaping the Inconsistency Argument.
If this is right, it does raise the pressure the Inconsistency Argument exerts on CV. However, it also reinforces our reason to think that there must be something wrong with the conclusion we seem to be driven to by the Inconsistency Argument, since it can't seem to get along with even weakened versions of the extremely plausible principles that, in some possible circumstances, call for their own rejection.
5 Partial conciliation and the self-exemption response
As noted above, Elga holds that CV is refuted by the Inconsistency Argument. But he does not think that this means that all highly conciliatory positions are refuted. In response to the Inconsistency Argument, Elga proposes a modified form of CV. According to this view, one should be conciliatory about almost all topics, but not about the correct way of responding to disagreement; this is what he calls the "partially conciliatory view." If I were to adopt such a view, call it PCV, I might be very conciliatory with respect to my belief in Internalism. But I'd be dogmatic with respect to PCV itself.

In fact, Elga suggests that in degree-of-belief terms, I should accord the correctness of the rule I adopt probability 1: I should be absolutely certain that it's correct.
This, as Elga points out, may seem arbitrary at first. But he argues that it really isn't. After all, the Inconsistency Argument applies to any epistemic policy that, in any circumstance, says that one should lose confidence in its correctness. As Elga puts it, "In order to be consistent, a fundamental policy, rule or method must be dogmatic with respect to its own correctness."20 So, since all acceptable fundamental rules are dogmatic with respect to their own correctness, it's not ad hoc to take such a feature to be present in our view which tells us how to respond to disagreement.
I think there is something clearly right in Elga's point that the justification for exempting PCV from its own scope is general, and thus that the charge of ad hoc-ness is not clearly apt. Moreover, the suggestion that I remain fully confident in PCV, while being conciliatory about all manner of other things, allows me to respect the intuitions behind CV very widely, while also respecting the level-connecting idea that my doxastic practice should cohere with my higher-order views about rational belief formation.
Nevertheless, I think that there is something unsatisfying about the resulting position. And though I don't want to press the charge of ad hoc-ness, my reasons for dissatisfaction with PCV do stem from the kind of observations one might make in explaining why PCV seems ad hoc. In particular, it seems to me that the prescriptions of PCV in certain cases will be sharply counterintuitive, and that these prescriptions will be counterintuitive for much the same reason that the prescriptions of strongly steadfast views are often counterintuitive.
20 Elga (2010: 183). Elga defines a "fundamental" method as "one whose application is not governed or evaluated by any other method."
So even granting that there is a non-arbitrary reason for exempting PCV from its own scope, the view faces intuitive difficulties similar to those facing completely steadfast views.
Consider the sort of reasoning that might convince me that PCV was correct. It would include, in the first place, thinking through and evaluating the complex arguments and counterarguments offered for and against conciliationism in general. It would also include thinking through the meta-epistemological considerations adduced in mounting the Inconsistency Argument, and those adduced in support of adding the dogmatic self-exemption to deal with the problem that the Inconsistency Argument presents. All this thinking is, to all appearances, exactly the same type of thinking I do when I consider whether to accept Internalism, or any other complex and controversial philosophical view. Clearly, this type of thinking is highly fallible. But if that's right, then it seems that I must take seriously the evidence I may get that I've made a mistake somewhere in my own thinking about PCV. And the disagreement of the many excellent epistemologists who reject PCV would seem to constitute just this sort of evidence.
The oddness of refusing to take this sort of evidence on board in the present case can be brought out by considering how remaining absolutely confident in PCV should fit into my general reflective view about myself. Suppose, that is, that I follow PCV and remain absolutely confident in its correctness, despite the fact that it's rejected by many epistemologists I respect, and even rate as my superiors in philosophical skill. How should I view my own reasoning on this topic? Should I think that while I'm generally only moderately reliable when I think about philosophy, nevertheless when I think about arguments for general conciliation, and for not being conciliatory about conciliation, I'm especially immune from error? That seems extremely dubious. There is nothing about this particular topic that would make my way of thinking about it special, or especially immune from my usual sort of blunders.
Should I count myself just lucky, then? This seems more natural: given my general fallibility in thinking philosophically, it would indeed be lucky if I, rather than all those more-talented philosophers who reject partial conciliation, am the one who is right this time. But can it possibly be rational for me to have absolute certainty that I'm the one who lucked out in this case? That, too, seems extremely unpalatable. On what basis could I conclude that I'm the one who got lucky, rather than those who reject PCV? Of course, if PCV is correct, then the direct arguments on the topic actually do support PCV, and hence indirectly support the claim that I'm correct in this particular belief, and have not made a mistake. But that sort of support is available in any disagreement when I've in fact evaluated the basic evidence correctly; the intuitive appeal of conciliatory views of disagreement (and of other principles of epistemic modesty) flows from rejecting that sort of reasoning as begging the question.
Thus it doesn't seem to me that it would be rational for me to be highly confident (let alone certain) that I'm either very lucky or using especially reliable methods in thinking about the topic of rational responses to disagreement. And so PCV, despite fitting in a natural way with the Inconsistency Argument, does not seem to me to provide a satisfactory solution to our problem.
6 A defense of epistemic modesty
Let us take stock. The Inconsistency Argument poses a strong prima facie threat to CV. But it turns out that the problem is not that CV is intrinsically inconsistent: one could consistently obey CV in the face of disagreement by losing confidence in CV, but then continuing to follow it anyway. The problem is that doing this is inconsistent with plausible ways of taking beliefs in general to be rationally constrained by beliefs about what beliefs are rational. But even if no such level-connecting principle is entailed by CV, some such level-connection idea seems to be inseparable from the main motivation for CV. So there is a real tension inherent in conciliatory views of disagreement. Moreover, as we've seen, the tension extends to a myriad of other views that encode a certain kind of epistemic modesty: views that allow evidence that I've made an epistemic mistake in thinking about P to affect the degree of confidence it's rational for me to have in P. And we have seen that some initially attractive ways of reacting to the Inconsistency Argument do not fully succeed in dissolving this tension.
Of course, there may be other ways of dissolving the tension—perhaps with some more subtle level-connection principle that can motivate principles of epistemic modesty without enabling the Inconsistency Argument. But at this point, I can't see any.
One might, of course, give up entirely on epistemic modesty. But I think that such a radical approach would be misguided. We are fallible thinkers, and we know it. We know that it often happens that we evaluate the arguments and evidence on a certain topic—as carefully and conscientiously as we possibly can—and reach the wrong conclusion. That is to say, we often make epistemic mistakes. And we know that simply looking over the arguments and evidence again, no matter how carefully and conscientiously, cannot be expected to disclose our mistakes to us.
That being so, it seems clear that a person who was interested in having accurate beliefs, and, thus, in correcting her epistemic errors, would not be rational to let her confidence in P be unaffected by evidence that she was especially prone to making epistemic mistakes about P. It would be irrational even in instances where the person had in fact managed to avoid epistemic error in her original thinking about P. To give one example: suppose a doctor, after reaching a confident diagnosis based on a patient's symptoms and test results, comes to realize that she's severely sleep-deprived, that she's under the influence of powerful judgment-distorting drugs, that she's emotionally involved with the patient in a way likely to warp her judgment, or that many sober clinicians, on the basis of the same symptoms and tests, have reached a contrary diagnosis. Perhaps she learns all of these things! In such a case, it seems quite clear to me that it would be highly irrational for her to maintain undiminished confidence in her diagnosis.21 So I don't think that we may plausibly resolve the tension by denying epistemic modesty entirely.
21 I have argued for this at greater length in (2010b).
One might, then, ask whether a less radical response is possible. Is there a way of defending the insights of CV, and other expressions of epistemic modesty, from the challenge posed by the Inconsistency Argument?

It seems to me that there is. It cannot be formulated precisely without having in mind precise forms of CV and other relevant principles. But the structure of the general idea can be illustrated using vague and rough approximations of these views. Here is a sketch:
First, we should recognize rational ideals of the following two types:

1. Respecting evidence of our epistemic errors

This sort of ideal requires, for example, that in typical cases where one is initially confident that P, and one encounters good evidence that one's initial level of confidence in P is higher than that supported by one's first-order evidence (say, for example, skillful thinkers who share one's first-order evidence about P are confident that not-P), one will give significant credence to the claim that one's initial level of credence is too high. This sort of requirement applies even when one hasn't actually made an error in one's initial assessment of the evidence.
2. Level-connection

This sort of ideal requires that one's confidence in P be constrained by one's beliefs about what level of confidence the evidence supports. For example, such an ideal may preclude being highly confident of P while simultaneously believing that that high degree of confidence is much higher than that supported by one's evidence.
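One familiar precisification of this type of ideal—offered here only as an illustration, not as the precise principle the sketch deliberately leaves vague—is in the vicinity of the Rational Reflection principle that note 13 mentions Elga as refining. In degree-of-belief terms:

```latex
% Illustrative level-bridging schema (a rough rendering of Rational
% Reflection; the text itself does not commit to this formulation):
% an agent's credence in A, conditional on pr's being the credence
% function her evidence in fact supports, should match pr's verdict.
\[
  \mathrm{cr}\bigl(A \mid \text{$pr$ is the rational credence function}\bigr) \;=\; pr(A)
\]
```

On any schema of this shape, being highly confident of P while believing that one's evidence supports only low confidence in P comes out as incoherent, which is just the sort of combination the ideal above is said to preclude.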
Putting ideals of types 1 and 2 together will yield rational principles of epistemic modesty such as CV, Minimal Humility, and principles forbidding confident beliefs in many cases involving conclusions reached while sleep-deprived, or brainwashed, or on certain kinds of powerful drugs, and so forth.
Next, we should recognize that the rational principles of modesty may apply to themselves. So in certain cases (for example, where one has strong evidence that one has made mistakes in thinking about such a principle), it may not be rational to have full confidence in its correctness. At that point, ideals of level-connection may exert rational pressure against fully obeying the principle of modesty. For insofar as one is rational to doubt the correctness of the principle of modesty, one may well be rational to believe, for example, that the level of confidence the principle has prescribed for some other proposition P is too low. In such cases, one must fail to respect one of the ideals fully. For following the level-connection ideal in this instance will mean raising one's confidence in P to a point higher than the principle of modesty would permit. And violating the principle of modesty will mean violating one of the ideals from which it flows. This is the problem exploited by the Inconsistency Argument.
But the fact that there is this tension among our epistemic ideals need not mean that
any of them is incorrect. It might just mean that in certain situations (in particular, when
one gets good evidence against the correctness of what are in fact the correct ideals),
one will end up violating some ideal or other, no matter what one ends up believing.
This position—call it the conflicting-ideals view—is not an entirely comfortable one. But I would argue that the discomfort it involves is not a new one—it arises quite independently of the Inconsistency Argument. In fact, it arises in a great many cases where an agent correctly appreciates the import of her first-order evidence (for example, she sees that her evidence entails P, or that P is the best explanation for her evidence), but then receives powerful higher-order evidence that her take on the first-order evidence is mistaken. To the extent that she respects the higher-order evidence by reflecting it in her first-order beliefs (say, by lowering her confidence in P), her first-order belief will diverge from what is supported by the first-order evidence alone. In thus falling short of, for example, respecting logic, or inference to the best explanation, her beliefs will fall short of certain rational ideals.22 So the motivation for the conflicting-ideals view does not just come from wanting to avoid the Inconsistency Argument.
In the context of thinking about the Inconsistency Argument, the conflicting-ideals view offers significant attractions. Perhaps the most important one is that it allows us to avoid the absurdities entailed by blanket rejections of all expressions of epistemic modesty. These absurdities (such as the one illustrated in the doctor case above) should trouble even those who find highly conciliatory views of disagreement implausible.

Another attraction, it seems to me, is that seeing the Inconsistency Argument as involving conflicting ideals also avoids having to hold that certain propositions about difficult issues in epistemology are immune from rational doubt. One might question this—after all, won't an instance of the Inconsistency Argument show that, at least for the most fundamental epistemic rules, doubting their correctness will lead to inconsistency?
I think it’s worth pausing to examine this question.
Let us suppose, for argument's sake, that there is one absolutely comprehensive epistemic rule that encodes the epistemically best response to every possible situation; call it the Über-rule.23 The Über-rule will surely qualify as a fundamental rule, in the sense that its application is not governed by any other rule: by construction, what the rule says an agent is most rational to believe in a certain situation is exactly what the agent is most rational to believe in that situation, so an agent cannot rationally diverge from the rule's prescriptions. Such a rule might not be statable in any compact natural way; it might
22 I have argued for the conflicting-ideals view at length in (2010a), and in more compressed forms in (2007) and (2011). I also defended a particular version of a level-connection principle this way in (2010b), but the unpublished work by Elga referred to above has persuaded me that this last invocation of the strategy may have been unnecessary. Joshua Schechter (forthcoming) writes that he suspects that this sort of diagnosis applies to what he calls the "fixed-point argument"—a version of what I'm calling the Inconsistency Argument. But he expresses some reservation, since he sees it as a "radical view." But if the view is in fact motivated independently of the Inconsistency Argument, invoking it here doesn't require making any new radical commitments. And as we've seen, other natural treatments of the Inconsistency Argument involve more radical departures from intuitive judgments about epistemic rationality.
23 I do not wish to argue that there is, in fact, an Über-rule which specifies the unique best response to every situation. One might hold that rationality is permissive, and that more than one response will be tied for best in certain situations. One might even hold that in certain situations, several permissible responses will be incomparable with one another. I'm raising the possibility of the Über-rule to bring out sharply a certain possible problem with the conflicting-ideals view.
well be enormously complex, and exceedingly difficult to formulate and think about. But if anyone were lucky enough to formulate it correctly, it's a safe bet that it would be controversial among skilled epistemologists. And given the difficulty of the topic, and the attendant controversy, it intuitively would seem irrational for the formulator to be extremely confident that it was exactly correct. So it's very plausible that the epistemically best option for the agent—that is, the option recommended by the Über-rule—would involve the agent having less than full confidence that the rule she's formulated is correct.
However, once we allow this, one might worry that trouble will ensue. If the agent continues to follow the Über-rule while doubting its correctness, it seems inevitable that she will in some cases violate the sort of level-connection ideal we've been discussing. So to the extent that the agent rationally doubts that the Über-rule is correct, it seems that she will be rationally required to violate a level-connection ideal. Does this mean that our initial assumption—that there are cases where an agent should doubt the correctness of the Über-rule—must be rejected, so that we must after all hold the Über-rule immune from rational doubt? Not necessarily. For on the conflicting-ideals view, it may be that the epistemically best option for the agent will involve violating a (perfectly legitimate) level-connection ideal. There is a sense in which this violation is epistemically regrettable, but that doesn't mean that there was a better option for the agent. After all, having full confidence in the correctness of the Über-rule would fly in the face of powerful evidence (such as the disagreement of excellent epistemologists) bearing on her own fallible thinking about abstract issues in epistemology. Disrespecting that evidence would also involve violation of an epistemic ideal.24
I should note that the conflicting-ideals view does not entail that, in every particular case, the option of violating a level-connection ideal will be better than the option of disregarding evidence of one's error. The view leaves room for both types of response. The point is just that the view allows for the possibility (which strikes me as very plausible) that the most rational option facing an agent will sometimes involve having some degree of doubt about the correctness of even the most fundamental epistemic rule. So we are not stuck with having to say that the epistemically best option for all agents must include having absolute confidence in the correctness of a particular subtle and complex position in epistemology.
Finally, the conflicting-ideals view explains how defenders of CV can hold on to the motivation for their views, while acknowledging that the level-connecting ideals that lie behind their views must, in certain circumstances, be violated. For the conflicting-ideals view recognizes that an ideal that must sometimes be violated may yet have force. If we ask why the disagreement of other competent thinkers with the same evidence should affect my confidence, the correct explanation may still be that since their disagreement is evidence that my initial belief was based on an epistemic error, it creates rational pressure to give credence to the claim that my initial belief was based on error,
24 Compare the discussion of Über-rules in Aarnio (forthcoming).
and that (as ideals of level-connection would have it) this creates rational pressure to
back off of that initial belief to at least some extent.
It's worth emphasizing that the advantages I've been advertising are not just available to defenders of conciliatory views of disagreement. For one thing, they are equally available to defenders of even moderately steadfast views of disagreement. Such views may see first-order considerations as being more robust in the face of the higher-order doubts prompted by disagreement, but to the extent that they recognize that the disagreement of others (even, e.g., large numbers of demonstrably smarter agents) should temper one's confidence to some extent, they depend on level-connecting principles for their motivation. And, as we've seen, such views have just as much trouble with the Inconsistency Argument as does CV. And the same point also goes, of course, for non-disagreement-based prescriptions of epistemic modesty, for example, in the face of evidence that one has been brainwashed or drugged. So the conflicting-ideals view is not just good for conciliationists.
There are, of course, concerns one might reasonably have about adopting this approach. One such worry is that it would make it much harder to give a theoretically tractable account of rational belief.25 For example, consider an account on which rational credences are given by some sort of epistemic probabilities. This sort of account seems to offer important insight into the structure of rational belief—it shows how credences are constrained by logic. And it does this by means of a clear, precisely describable formal condition. But one might think that the conflicting-ideals view poses severe difficulties for this picture. After all, it would seem that a rational agent's confidence in some logical truth—which, if one's credences are probabilistically coherent, must be 1—may well be rationally undermined by evidence of her epistemic error. Perhaps she gets told by an expert logician that it's not a logical truth, or learns she was drugged while thinking through the proof, and so on. The conflicting-ideals view may allow (and I think it should allow) that in some such cases, the rationally best credence for the agent will fall short of 1. In such cases, the rationally best credences will not be probabilistically coherent.
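The formal condition at issue is, roughly, conformity to the standard probability axioms. Stated for a credence function cr:

```latex
% Probabilistic coherence, roughly: for all propositions A and B,
\begin{align*}
  &\mathrm{cr}(A) \ge 0; \\
  &\mathrm{cr}(\top) = 1 \quad \text{for any logical truth } \top; \\
  &\mathrm{cr}(A \lor B) = \mathrm{cr}(A) + \mathrm{cr}(B)
    \quad \text{when $A$ and $B$ are logically incompatible.}
\end{align*}
```

It is the second clause that generates the present tension: evidence of one's own error can make a credence below 1 in a logical truth seem rationally best, and any such credence violates coherence.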
So we do lose the ability to take probabilistic coherence as a necessary condition on the beliefs it would be most rational for an agent to adopt. Moreover, it seems doubtful that some other relatively simple formal condition—or even some precisely statable informal condition—will be able to take its place. Finally, we have no reason to suppose that, if an ideal of probabilistic coherence is balanced against other ideals, say by means of an Über-rule, the recipe for combining them will be capturable in any tidy formula.
I think that this is an important point. I suspect that insofar as one's account of epistemic rationality takes agents' reflection on their own fallibility seriously—in the sense
25 Thanks to Timothy Williamson for prompting me to address this worry. See Aarnio (forthcoming) for an extended development of a related worry about any way of allowing for epistemic modesty (at least in the context of an epistemology that sees rational belief in terms of applying correct epistemic rules).
of encoding epistemic modesty—it is unlikely to provide a clear, relatively simple
account of rational belief. How serious a problem is this?
I cannot enter here into a full discussion of the methodological issues raised by this
worry. But it seems to me that there are a couple of points to keep in mind.
The first is that we don't lose the ability to theorize about the ideals that contribute to epistemic rationality; and these ideals may well include clear, precisely describable formal conditions. For example, we're free to put aside worries about self-doubt in thinking about how logic constrains rational credence. Such thinking may convince us that a formal condition such as probabilistic coherence correctly encodes the rational pressure that logic puts on degrees of belief.26
In fact, the conflicting-ideals view is in a way particularly hospitable to theorizing about rationality in a way that utilizes a condition such as probabilistic coherence. A major obstacle to this sort of theorizing comes from considering cases where agents seem rationally required to violate coherence. But the conflicting-ideals view allows us to see how the ideal can have force even in such situations, by allowing that there's something epistemically imperfect about the beliefs of the agent who violates coherence as a result of rationally accommodating evidence about her own malfunction.
So it is important to see that adopting the conflicting-ideals view does not amount to
giving up on epistemic theorizing. And it does not render useless the formal tools many
have employed to illuminate aspects of epistemic rationality; in fact, it may help make
room for just this sort of theorizing.
Another point to notice is that, insofar as we’re persuaded that in cases such as the
doctor case discussed above, there would be something epistemically defective about
the doctor’s undiminished confidence in her diagnosis after she receives the powerful
evidence of her own malfunction, it seems that epistemologists should be interested in
explaining what that epistemic defect is. Of course, one could respond by walling off a
certain aspect or dimension of epistemic appraisal—for example, by deciding to theorize
about a notion of rationality on which the rationality of an agent’s confidence in P
was by definition not affected by evidence raising doubts about whether the
agent made a mistake in thinking about P. One might thus develop a sanitized notion of
rational belief, on which the doctor’s undiminished high confidence in her diagnosis, in
the face of the strong evidence of her cognitive malfunction, was perfectly rational. One
would then need to say that the doctor’s undiminished confidence embodied some
other epistemic defect. But one would still be left with the task of theorizing about that
defect. So it seems that at some point, epistemology must confront the problem raised by
this sort of evidence. And to my mind, the intuitive irrationality of the doctor’s
undiminished confidence in this situation speaks strongly in favor of doing that theorizing
in the context of describing rational belief. 27
26 For more on this point, see Christensen (2007).
27 Compare the last section of Aarnio (forthcoming).
96 david christensen
Another worry one might have about the conflicting-ideals view is that it’s too promiscuous,
like a general-purpose “get out of jail free” card for perpetrators of crimes
against coherence. I think that this is also a reasonable concern. But the concern may
perhaps be mitigated by noting that the cases in which ideals conflict share an important
feature: they all involve the results of agents reflecting critically on their own thinking.
Perhaps it is not so surprising that insofar as it is rational to take seriously one’s critical
assessments of one’s own beliefs, certain kinds of incoherence will result.
Suppose a certain rational ideal applies to beliefs. Suppose also that one can’t always
rationally be certain whether the ideal is in fact a correct one, or, alternatively, whether
one is actually obeying this ideal. In particular, suppose that it can be rational to have
such doubts, even in cases where the ideal is correct, and one is in fact obeying it. Suppose
also that this sort of critical reflection on one’s own beliefs is not merely an idle
exercise—that rational doubts about whether one’s beliefs meet the correct ideals may
make it rational to change the beliefs in question. In such cases, one may well come
under rational pressure to violate a rational ideal.
On this picture, allowing for the possibility of conflicting ideals reveals no penchant
for promiscuity. The idea is not that everything is permitted—in certain cases it will be
positively irrational to satisfy a particular ideal. The idea is just to make room for modesty.
The conflicting-ideals view simply allows us to recognize the rationality of
acknowledging, and then taking serious account of, the possibility that we’ve fallen
short of epistemic perfection. If we can accommodate that sort of modesty in our
account of rational belief, it seems to me that it will be well worth the price of abandoning
the hope that some cleanly specifiable notion of coherence is satisfied by the maximally
rational response to every evidential situation.
References
Aarnio, M. L. (forthcoming) “The Limits of Defeat,” Philosophy and Phenomenological Research.
Christensen, D. (2007) “Does Murphy’s Law Apply in Epistemology? Self-Doubt and Rational
Ideals,” Oxford Studies in Epistemology 2: 3–31.
—— (2009) “Disagreement as Evidence: The Epistemology of Controversy,” Philosophy Compass
4: 756–67.
—— (2010a) “Higher-Order Evidence,” Philosophy and Phenomenological Research 81.1:
185–215.
—— (2010b) “Rational Reflection,” Philosophical Perspectives 24: 121–40.
—— (2011) “Disagreement, Question-Begging and Epistemic Self-Criticism,” Philosophers’
Imprint 11 (6): 1–22.
Elga, A. (2007) “Reflection and Disagreement,” Noûs 41: 478–502.
—— (2010) “How to Disagree About How to Disagree,” in R. Feldman and T. A. Warfield (eds.)
Disagreement (New York: Oxford University Press).
Feldman, R. and T. A. Warfield, eds. (2010) Disagreement (New York: Oxford University Press).
Field, H. (2000) “Apriority as an Evaluative Notion,” in P. Boghossian and C. Peacocke (eds.)
New Essays on the A Priori (New York: Oxford University Press).
Frances, B. (2010) “The Reflective Epistemic Renegade,” Philosophy and Phenomenological
Research 81 (2): 419–63.
Kelly, T. (2005) “The Epistemic Significance of Disagreement,” Oxford Studies in Epistemology 1:
167–96.
—— (2010) “Peer Disagreement and Higher-Order Evidence,” in R. Feldman and T. A. Warfield
(eds.) Disagreement (New York: Oxford University Press).
Lackey, J. (2010) “A Justificationist View of Disagreement’s Epistemic Significance,” in A. Haddock,
A. Millar, and D. Pritchard (eds.) Social Epistemology (Oxford: Oxford University Press),
298–325.
—— (2010a) “What Should We Do when We Disagree?” Oxford Studies in Epistemology 3:
274–29.
Lewis, D. (1971) “Immodest Inductive Methods,” Philosophy of Science 38: 54–63.
Schechter, J. (forthcoming) “Rational Self-Doubt and the Failure of Closure,” Philosophical
Studies.
Sosa, E. (2010) “The Epistemology of Disagreement,” in A. Haddock, A. Millar, and D. Pritchard
(eds.) Social Epistemology (Oxford: Oxford University Press).
Weatherson, B. (2007) “Disagreeing about Disagreement,” [online] <http://brian.weatherson.
org/DaD.pdf> accessed 16 November 2012, incorporated into his contribution to this
volume.
Wedgwood, R. (2010) “The Moral Evil Demons,” in R. Feldman and T. A. Warfield (eds.)
Disagreement (New York: Oxford University Press).
Weiner, M. (2007) [online] <http://mattweiner.net/blog/archives/000781.html> accessed 16
November 2012.
5

A Defense of the (Almost) Equal Weight View

Stewart Cohen

Often when there are disagreements, the parties to the dispute possess different evidence regarding the disputed matter. In such cases, rationality requires the disagreeing parties to take into account these differences in revising their beliefs. If it is known that one has important evidence the other lacks, it is uncontroversial that the party in the inferior evidential position should defer to the judgment of the party in the superior evidential position. If we disagree about what the weather is like in Calcutta in May, and you but not I have spent a lot of time in Calcutta in May, then that constitutes a reason for me to defer to your judgment. More generally, non-experts should defer to experts about matters within their area of expertise. This is straightforward.

Matters are considerably less clear when the parties to the dispute have the same evidence. Of course no two people ever share exactly the same evidence. But in many cases, there is enough shared evidence that there is no reason to suppose that either party to the dispute is in an evidentially superior position. In such a situation, what does rationality require of the disputants? The problem is complex because when the relevant evidence is shared, the opinion of each of the disputants counts as evidence that the other has reasoned incorrectly from the shared evidence.

A special case of this problem arises when the parties to the dispute are in general equal in their reasoning abilities, or at least close enough that there is no basis for supposing either party is in general the superior reasoner. 1 When parties to a disagreement have the same evidence and are equal in their reasoning abilities, they are epistemic peers.

What does rationality require when one discovers that one has an epistemic peer who disagrees about some matter? In his seminal paper, “Puzzles About Rational Disagreement,” Richard Feldman cogently defends what has come to be called “The Equal Weight View” (EW). 2

1 Of course relative reasoning abilities might depend on the subject matter.
2 Feldman (2006).

Several other writers have also defended the view in various
forms. According to EW, there is an evidential symmetry in virtue of which each party
to the dispute should give equal weight to his own and his peer’s opinion.
I myself think EW, or something in the neighborhood, has to be correct. But in a
recent paper, Tom Kelly develops an ingenious challenge to EW. 3 Kelly argues that EW
fails to take into account certain evidence that can create an evidential asymmetry in a
peer disagreement. In such a situation, one peer should give extra weight to his own
opinion. He proposes an alternative to EW he calls “The Total Evidence View” (TE).
According to Kelly, TE properly takes into account all the evidence that plays a role in
determining how one should revise, and in particular, the evidence that EW overlooks.
It is a truism that one should revise one’s opinion by taking into account one’s total
evidence. The challenge for the EW proponent is not to show that one should in fact
ignore the evidence in question. Rather the task for the EW proponent is to show that
EW is consistent with this truism, that is, that EW is itself a version of TE. This is the task
I undertake in this paper.
My defense of EW will be hedged. I will fully defend the position that when one is at
the rationally correct credence on one’s evidence, one should give equal weight to one’s
peer’s view. But for reasons raised by Kelly, matters are more complicated when one is
not at the rationally correct credence on one’s evidence. I will tentatively defend the
view that EW applies even in these cases.
1 EW and symmetry
EW says I should give my peer’s opinion the same weight I give my own. EW can seem
quite plausible when one considers that our status as peers entails a symmetry between our
epistemic positions. We have the same evidence, and we are equally good at reasoning from
the evidence. Neither of us would seem to have any basis for favoring his own credence
over his peer’s. A familiar principle in ethics says that the mere fact that an action is mine
rather than someone else’s cannot be relevant to the moral status of the action. What holds
for morality holds for (epistemic) rationality as well. The mere fact that it is my opinion
rather than my peer’s cannot be relevant to the rational status of that opinion.
It seems to follow from EW that if I believe h and my peer believes not-h, we should
each suspend judgment regarding h. 4 Some have argued, against EW, that rationality
permits me to remain steadfast in the face of peer disagreement and not revise my
opinion. 5 That may seem plausible to some (though not to me) when the problem is
viewed within a binary belief framework. However, Kelly convincingly argues that to
address the rational disagreement problem in full generality, we must formulate the
problem in terms of graded belief (credences). For if I believe h and my peer suspends
judgment concerning h, what does EW tell us to do? When the rational disagreement
problem is formulated within a credence framework, the view that I can remain steadfast
3 Kelly (2010). All page references are to this work.
4 Feldman (2006).
5 van Inwagen (1996), Plantinga (2000), and Rosen (2001).
when confronted with peer disagreement is extremely implausible. Such a view would
have it that when my peer disagrees with me, I need not make any change in my credence.
This means either that my peer’s disagreement is no evidence whatsoever against my
credence, or that it is permissible, in some instances, to ignore evidence. Neither position
is defensible. This means that the rational disagreement problem, within a credence
framework, concerns not whether one should revise in the face of peer disagreement,
but rather to what extent one should revise. EW says that rationality requires that each
subject give equal weight to his and his peer’s credence. This implies that when peers
discover they disagree, each should adopt the simple average of their credences, that is,
they should split the difference between their credences. If I am at 0.8 and my peer is at
0.2, then we should each move to 0.5.
There is, however, an important restriction on when one is required to give equal
weight to one’s peer’s credence. By stipulation, my peer is someone who reasons as well as
I do in general. I can give less weight to his credence if I have grounds for thinking that in the
particular circumstances of our disagreement, his reasoning is not up to the usual standards.
I might also have grounds for thinking that my peer is being insincere, which would
also allow me not to give equal weight to his credence. But as Christensen and Elga have
argued, I cannot appeal to my own reasoning from the evidence for P as a basis for ignoring
(or giving less weight to) my peer’s credence. 6 I cannot simply reason again from the
evidence to my credence and infer on that basis that my peer’s credence is incorrect.
Rather, if I am to ignore, or give less weight to, my peer’s credence, it must be on independent
grounds. The justification for this is straightforward. As I noted, my peer’s disagreement
calls into question the correctness of my reasoning in support of my credence.
This follows from our having the same evidence. Thus it would be irrational for me to
appeal to that very reasoning as a basis for giving less weight to my peer’s credence.
One way to see this is to note that if such reasoning were allowed, I could use it to
downgrade the credences of arbitrarily many peers. 7 I could even use this reasoning to
downgrade the credence of an epistemic superior (or indeed many superiors), even
when the superior is an expert and I am not. Clearly this would be irrational.
The equal weight view says that when peers discover they disagree, they should split
the difference between their credences. But EW can be viewed as a special case of a
more general view concerning how to respond to disagreement. Whenever someone
with any credibility disagrees with you, this constitutes some evidence that you are
wrong. To accommodate this new evidence, you have to make a relative assessment of
your and your peer’s reasoning and evidence. According to EW, when a peer disagrees
with you, you should adjust your credence by taking a simple average of your credences.
But suppose the person who disagrees is not a peer, but rather an epistemic inferior or
superior. In that case rationality requires that you assign the appropriate relative weight
to each credence, and adjust your own by taking the weighted average.
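The two revision rules just described come down to simple arithmetic, which can be sketched as follows. This is an illustrative sketch only: the function names are mine, and the 1:3 weighting in the non-peer example is a hypothetical value, not one given in the text (the 0.8/0.2 pair echoes the peer case above).

```python
def split_the_difference(my_credence, peer_credence):
    """EW revision for epistemic peers: the simple average of the two credences."""
    return (my_credence + peer_credence) / 2

def weighted_revision(my_credence, other_credence, my_weight, other_weight):
    """Generalized revision for non-peers: a weighted average, with weights
    reflecting the relative epistemic standing of the two parties."""
    return (my_weight * my_credence + other_weight * other_credence) / (my_weight + other_weight)

# Cohen's peer case: I am at 0.8, my peer at 0.2; we should each move to 0.5.
print(split_the_difference(0.8, 0.2))               # 0.5

# Hypothetical superior case: weighting an expert's credence three times my own
# pulls my revised credence most of the way toward hers.
print(round(weighted_revision(0.8, 0.2, 1, 3), 2))  # 0.35
```

With equal weights the weighted average just is the simple average, which is why EW can be read as the special case of the general rule that applies when neither party outranks the other.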
6 Christensen (2007), Elga (2007).
7 Jennifer Lackey (2008) calls this the Many to One problem.
2 EW and uniqueness
In his defense of EW, Feldman argues for what he calls “The Uniqueness Thesis.” 8
Uniqueness: Given a proposition h, and a body of evidence e, there is a unique
attitude toward h that is rational on e.
We can interpret “attitude” as referring either to binary beliefs or credences. Feldman
defends the principle under a binary belief interpretation and appeals to it in defense of EW.
Kelly argues that Uniqueness, under a credence interpretation, is very dubious, but that EW
is committed to it. Kelly goes on to argue that even if we assume Uniqueness, EW is implau-
sible. But if Kelly is right about both the implausibility of Uniqueness, and EW’s commit-
ment to it, EW is in trouble even if his argument based on granting Uniqueness fails.
There are actually two uniqueness theses in play, Uniqueness, and what I will call
“Doxastic Uniqueness.”
Doxastic Uniqueness: A subject cannot rationally believe there are two (or more)
rational credences for h on e, while rationally holding either.
While I agree with Kelly that Uniqueness is probably false, I will argue that EW does
not entail it. EW does entail Doxastic Uniqueness, but I will argue it is true. 9
Kelly’s argument that EW is committed to Uniqueness proceeds by posing a counter-
example to EW that he claims can be avoided only by endorsing Uniqueness. Suppose
I am at 0.7 for h on e. It turns out that a slightly lower credence for h is also rational on e,
say, 0.6. Moreover, I recognize that 0.6 is also rational for h on e. My peer is at 0.6 and
recognizes that 0.7 is also rational. Kelly argues:
At time t1, we meet and compare notes. How, if at all, should we revise our opinions? According
to The Equal Weight View, you are rationally required to increase your credence while I am
rationally required to decrease mine. But that seems wrong. After all, ex hypothesi, the opinion
that I hold about H is within the range of perfectly reasonable opinion, as is the opinion that
you hold. Moreover, both of us have recognized this all along. Why then would we be rationally
required to change?
Kelly claims the only way for EW to respond is to accept Uniqueness, thereby ruling out
the conditions that give rise to the case. But that is not correct. All EW requires is Doxastic
Uniqueness. That is, the EW proponent can allow that both 0.6 and 0.7 are rational on e,
but balk at allowing that a subject could know this and remain rational at either credence.
All the same, Kelly’s example, if correct, would show that even Doxastic Uniqueness
is false. Kelly’s argument appeals to something like the following principle:
(D): The only way my peer’s credence can force me to revise my own credence is
by constituting evidence that my credence is irrational.
8 Feldman (2006). See also Christensen (2007).
9 Roger White (2005), like Kelly, conflates Permissiveness with Doxastic Permissiveness.
Uncontroversially, a principle similar to D that applies to equally rational subjects
who have different evidence is false. Suppose I encounter such a subject. A reliable third
party who knows what evidence each of us possesses tells us that each is rational on his
evidence. With this stipulation, my peer’s credence need not constitute evidence that
my credence is irrational. Yet clearly, his credence exerts rational pressure on me to
revise my own. What is the source of this rational pressure?
Consider the same case viewed from the perspective of binary belief. Suppose that
my peer and I have different evidence for P. Again, a reliable third party tells us we are
each rational on our respective evidence. Although my peer’s belief need not constitute
evidence I am irrational on my original evidence, there is rational pressure for me to
revise my belief. Even though we may both be rational on our evidence, I know one of
us has a false belief.
Return to the credence version of this case (again where my peer and I have different
evidence). As we noted, in this case as well, I need not have evidence my credence is
irrational on my evidence. All the same, there is rational pressure for me to revise. Here
we cannot say that I have evidence that my credence is false, since credences do not have
truth-values. So is it just a brute fact that in this situation, there is rational pressure to
revise my credence? Surely there is some notion in a credence framework that plays the
same role in exerting rational pressure for revising credences that evidence of falsity
plays in a binary framework.
Jim Joyce has suggested that we can evaluate credences for their accuracy, as well as
their rationality. 10 Accuracy is a graded notion. Intuitively, the higher one’s credence for
a true proposition, and the lower one’s credence for a false proposition, the more accur-
ate one’s credence is. Credences of 1 for a true proposition, and 0 for a false proposition
represent perfect accuracy. The gradational accuracy of a credence is a measure of the
distance between the value of that credence and perfect accuracy.
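One concrete way to render this measure (an illustrative gloss of mine; Joyce's own discussion allows a family of such scoring rules) is the distance between a credence and the truth-value of the proposition, coded as 1 for a truth and 0 for a falsehood:

```python
def inaccuracy(credence, truth_value):
    """Gradational inaccuracy: the distance between a credence (a number in
    [0, 1]) and perfect accuracy, i.e. 1 for a true proposition and 0 for a
    false one. Absolute distance is used here for simplicity; squared
    (Brier-style) distance is another standard choice."""
    return abs(credence - truth_value)

# Credence 1 in a truth, or 0 in a falsehood, is perfectly accurate:
assert inaccuracy(1.0, 1) == 0.0 and inaccuracy(0.0, 0) == 0.0
# The higher one's credence in a truth, the more accurate it is:
assert inaccuracy(0.9, 1) < inaccuracy(0.6, 1)
# The lower one's credence in a falsehood, the more accurate it is:
assert inaccuracy(0.2, 0) < inaccuracy(0.7, 0)
```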
I propose that evidence of inaccuracy plays the same role in a credence framework
that evidence of falsity plays in a binary framework. Just as evidence of falsity exerts
pressure for one to revise one’s beliefs, so evidence of inaccuracy exerts pressure on one
to revise one’s credence. Of course any credence other than 0 or 1 will be inaccurate to
some degree. One is forced to revise when one has evidence that one’s credence would
be more accurate if revised in a particular direction.
This is precisely how my peer’s credence can exert rational pressure on me to revise,
even when it is not evidence that my own credence is irrational. While our credences may
be equally rational on the evidence, they cannot be equally accurate. Moreover, given our
status as peers, I have no reason to suppose my own credence is more accurate than my
peer’s. Thus when my peer disagrees with me, I have evidence that my own credence
would be more accurate if revised in the direction of her credence. This explains why there
is pressure on me to revise in Kelly’s case and why Kelly’s assumption D is false.
10 Joyce (1998).
One might wonder how accuracy can exert pressure in a case of shared evidence.
When my peer and I have different evidence, my peer’s disagreement is evidence of
evidence I do not possess that rationally supports a credence different from my own.
Because evidence of evidence is evidence, this explains why I possess evidence that
I would be more accurate by revising in my peer’s direction. I would thereby take
account of my peer’s evidence that I lack.
But what is the source of the accuracy pressure in the case where my peer and I have
the same evidence? In such a case, I do not have evidence of relevant evidence I don’t
possess. My peer’s disagreement, however, is evidence of a way of reasoning from my
evidence that rationally supports a credence different from my own. As I have no reason
to believe my way of reasoning is more accurate than my peer’s, I have evidence that my
credence would be more accurate if revised in the direction of her credence. This makes
it irrational for me to remain at my credence.
Note that I do not need to encounter a peer at a different credence for there to be
accuracy pressure on my credence. Simply recognizing a rational credence different
from my own is enough to undermine the rationality of my credence—thus the truth of
Doxastic Uniqueness. In such a case, the same pressure exists to revise in the direction of
the other credence. That a peer happens to hold that credence is further evidence, only
insofar as it confirms my judgment that the credence is rational on the evidence. Either
way, Kelly’s example provides no reason to doubt Doxastic Uniqueness, and so EW’s
commitment to it is unproblematic. 11
3 Evidential asymmetry
The argument for EW proceeds by appealing to the apparent symmetry in peer dis-
agreement. By stipulation, my peer (Peer) and I have the same evidence and we are
equally adept at reasoning from that evidence. Thus there is no reason for me to favor my
own credence over Peer’s, that is, I should give equal weight to each. Kelly objects that if
there is symmetry in the evidence, it is at best, symmetry in what he calls the “psycho-
logical evidence.” He notes that, in addition to our credences, there is the evidence upon
which we base our credences. Suppose e in fact supports a credence of 0.2 but does not
support a credence of 0.8. Then Peer is at the correct credence on e and I am at an incorrect
credence. This would seem to be a significant epistemic difference. Kelly argues that
the symmetry claim results from considering only the (higher-order) psychological
evidence, thereby assuming that the psychological evidence “swamps” the (first-order)
non-psychological evidence. But Kelly asks rhetorically, “why should the normative
significance of E completely vanish in this way?” In essence, Kelly argues that even if
there is a higher-level symmetry, the fact that e rationally supports Peer’s credence but
11 For other arguments that EW is not committed to Uniqueness, see Ballantyne and Coffman (forthcoming)
and Lee (unpublished).
not mine results in a first-order asymmetry. Thus the total evidence (both the first-order
and higher-order evidence) supports giving additional weight to Peer’s opinion.
According to Kelly:
If you and I have arrived at our opinions in response to a substantial body of evidence, and your
opinion is a reasonable response to that evidence while mine is not, then you are not required
to give equal weight to my opinion and to your own.
And because Peer can give more weight to his own credence than mine, he needn’t, as
dictated by EW, split the difference between our credences. As Kelly puts it:
What is quite implausible, I think, is the suggestion that you and I are rationally required to make
equally extensive revisions in our original opinions, given that your original opinion was, while
mine was not, a reasonable response to our original evidence.
Kelly argues that there is more evidence supporting Peer’s credence than mine. But it
doesn’t follow from the existence of this asymmetry that Peer should give extra weight to
his own opinion. How Peer should revise depends on what his evidence supports. So the
existence of the asymmetry provides a basis for Peer to favor his own credence only if his
evidence supports the existence of the asymmetry. I will argue by reductio that it cannot.
The following principle is uncontroversial:
(1) Whether and to what extent one should revise one’s credence depends only on
what one’s evidence supports.
Not surprisingly, Kelly explicitly endorses (1). Moreover, as the case is specified:
(2) Peer’s evidence does not bear on whether my credence is rational on e.
If it did, there would be no problem of rational disagreement. Peer would be rational in
ignoring my credence in virtue of his evidence supporting the claim that my credence is
irrational. It is part of Peer’s evidence that he judges that my credence is irrational on e.
But my judging that my credence is rational on e blocks Peer from rationally inferring
that my credence is irrational on e.
Kelly argues for (3):
(3) If Peer’s credence is rational on e and my credence is irrational on e, Peer should
not revise by splitting the difference between our credences.
Now suppose, as Kelly urges, that Uniqueness is false. On this assumption, there will be cases
where both Peer and I are rational on e. I argued in section 2 that where my peer and I disa-
gree, even if I know both of us are rational on the original evidence, I must still revise my
credence owing to accuracy pressures. Because in such a case, our evidential positions are
symmetrical—we are equally rational on the evidence—Kelly should presumably agree
with EW that in this case, Peer should split the difference. That is, Kelly should accept
(4) If both Peer’s credence and my credence are rational on e, Peer should revise by
splitting the difference between our credences.
But (2), (3), and (4) entail the falsity of (1). Propositions (3) and (4) entail that how Peer
should revise depends on whether I am rational on e. According to (2), Peer’s evidence
does not bear on whether my credence is rational on e. It follows that Peer should revise
on the basis of considerations that cannot be inferred from his evidence. This contradicts
(1). Although there may be an asymmetry between e’s support for Peer’s credence and
e’s support for my credence, that asymmetry is not supported by Peer’s evidence and so
does not affect how Peer should revise.
It’s important to see that my argument does not involve a level confusion. I am not
claiming that Peer must have evidence that e supports his credence rather than mine in
order for e to support his credence rather than mine. Rather I am claiming that in order
to revise in a way that favors his credence over mine, Peer needs evidence that in fact e
does favor his credence over mine. And as the case is specified, Peer does not have such
evidence.
So which of (2), (3), and (4) should we give up? It is hard to see how (2) could be
resisted. And as both Kelly and defenders of EW should agree on (4), it’s clear that (3)
has to go. So Peer should split the difference whether or not I am rational on e. That
extra-evidential fact cannot make a difference in how Peer should revise.
4 An objection: unmediated appreciation of
the evidence
Premise (2) of the reductio argument says that Peer’s evidence does not bear on whether
my credence is rational on e. But the relationship between a body of evidence e and
judgments about whether a particular credence is rational on e is obscure. Kelly
acknowledges that in a peer disagreement, neither peer has independent higher-order
evidence that favors her own credence over her peer’s. By stipulation, peers are equally
good at assessing evidence. So when there is disagreement, neither has any independent
evidence for who has responded correctly to the first-order evidence. All the same,
Kelly holds that on some occasions, one’s rationally responding to the evidence is due to
one’s recognizing, via an unmediated appreciation of one’s evidence, that one’s evidence
supports one’s belief. According to Kelly:
It is implausible that every case in which one recognizes that a given belief is supported by one’s
first-order evidence is a case in which one’s recognition depends on one’s having some independent,
higher-order evidence to the effect that one’s evidence supports that belief. Rather, in
some cases, one’s recognition that one’s evidence supports a given belief is based on an unmediated
appreciation of that evidence itself. Thus, in such cases, one’s first-order evidence not only
confirms the belief in question; it also confirms a proposition to the effect that it is reasonable
for one to hold that belief. (52)
But one’s evidence e is not itself evidence for what one is reasonable to believe on
the basis of e. I can confidently judge that x’s appearing red is evidence that x is red.
And I can make this judgment even though I’m not looking at x and indeed have no
idea how x looks. If I then look at x and it appears to be red, I have good evidence
that x is red. But I am in no better position to judge that x’s looking red is evidence
that it is red. So x’s looking red does not confirm that x’s looking red is evidence that
it is red. We make judgments about evidential support relations a priori on the basis
of the content of evidential propositions. 12 Possessing the evidence is simply not
relevant.
I suspect Kelly meant to say that one’s having the first-order evidence can confirm
that one is rational. If I appreciate that e is evidence for h, my possessing e confirms
that I’m rational in believing h. This is certainly true, but the important point, I take it,
is that one can have an unmediated appreciation of one’s rationality on one’s
evidence.
How does this bear on the reductio argument? I argued in defense of (2) that Peer’s
evidence does not bear on whether I am rational on e. But it is unclear how (1) applies
in cases of a priori rationality. We can finesse this issue by treating an a priori rational
proposition as trivially (though defeasibly) supported by any evidence. Does this help
TE? I argued in defense of (2) that if Peer’s evidence did support that my credence is
irrational on e, Peer would be rational to maintain his credence. But again, Kelly does
not endorse that. This suggests that whether I am rational is not (trivially) inferable from
Peer’s evidence.
5 Second-order credences
Perhaps the case for TE can be improved if we think of second-order belief states as graded rather than binary. 13 Kelly claims that the EW proponent cannot simply assume that:
When you correctly recognize that the evidence supports p, you are no more justified in thinking that the evidence supports p than I am in thinking that the evidence supports not-p when I mistakenly take the evidence to support not-p.
Kelly expresses the point in terms of degrees of justification for binary believing. On the supposition that second-order belief states are graded, Kelly is suggesting the possibility that if Peer correctly recognizes that he is at the correct credence on e, it will be rational for him to have a higher credence that he is rational on e than it will be for me to have that I am rational on e.
This issue is difficult to approach because it is obscure how we make these second-order judgments, and in particular, what they are based on. But let us suppose for the sake of argument that correctly recognizing that one is rational allows one to have a higher credence for one's own rationality than a peer who incorrectly judges she is
12 I don’t mean to commit to judgments about evidential relations being a priori. It is obscure in general
how we make these judgments.
13 Both Tom Kelly and David Christensen suggested this to me.
a defense of the (almost) equal weight view 107
rational. 14 Kelly argues that if this is the case, there is an important asymmetry that undermines EW:
If you were better justified in thinking that your response was reasonable than I was in thinking that my response was reasonable, then this would break the putative higher-level symmetry and provide a basis for favoring your original belief over mine.
Put in terms of credences, Kelly is saying that if it is rational for Peer to have a higher credence for the rational correctness of his response than it is for me to have for the rational correctness of my response, then this would provide a basis for Peer to favor his response over mine. But Kelly needs a further premise, namely, that after Peer and I meet, it remains rational for Peer to have a higher credence than me. According to Kelly, what initially justifies Peer's higher credence for the correctness of his response is his recognition that his response is correct. But one recognizes P is the case only if one knows P is the case. And presumably, it is uncontroversial that after we meet, Peer no longer knows that his credence is correct. If he did, it would be hard to explain why he could not simply ignore the evidence that I am at a different credence. So as far as Kelly's argument is concerned, there is no reason to suppose that after we meet, the higher-level asymmetry still exists. 15
But suppose it were true that this higher-level asymmetry in our rational credences for the correctness of our respective responses remains even after we meet. How does this provide Peer with a basis for favoring his response over mine? We have already seen that an asymmetry in the first-order evidence does not give Peer a basis for favoring his credence over mine, provided the asymmetry is not inferable from his evidence. This is no less true for higher-order credences than for first-order credences. The asymmetry Kelly alleges is this:
(5) It is rational for Peer to have a higher credence for the rational correctness of his response than it is for me to have for the rational correctness of my response.
Does this have any implications for how Peer should revise upon learning that I am at a different credence? Surely the mere fact that (5) is true is not relevant for how Peer should revise his response. Consider an analogy drawn by Kelly:
Compare a situation in which you are better justified in thinking that your thermometer is functioning properly than I am in thinking that my thermometer is functioning properly.
Kelly's point is that in such a case, you would have a basis for favoring what your thermometer reads over what my thermometer reads. But this is simply not true. What matters
14 Here is one scenario where despite the fact that my peer is correct about what the evidence supports and I am not, I am more justified in thinking the evidence supports my credence than my peer is in thinking the evidence supports his credence. Suppose that my peer's reasoning from the evidence, though correct, is extremely complicated. Suppose that my reasoning from the evidence is simple and straightforward, though incorrect because of an extremely hard to see defeater in my evidence. In such a case it is not clear why it would not be more rational for me to think I am correct about what the evidence supports than my peer is.
15 Kelly recognizes this point, but seems to think that as long as an asymmetry exists prior to our encounter, an asymmetry will remain after our encounter. But given that the asymmetry depends on Peer recognizing his credence is correct, and given that after we meet, he no longer recognizes this, I do not see how the asymmetry will remain.
is not whether you are better justified in thinking your thermometer is functioning properly than I am in thinking mine is functioning properly. Rather, what matters is what you are justified in thinking regarding the relative functioning of our respective thermometers. If you are justified in thinking your thermometer is functioning better than mine, then you are rational to favor your own thermometer's reading. But it does not follow from your being better justified in thinking your thermometer is functioning properly than I am in thinking my thermometer is functioning properly, that you are justified in thinking that your thermometer is functioning better than mine. For all we have said, you may have no idea how my thermometer is functioning.
Analogously (putting the point in terms of credences),
(6) It is rational for Peer to have a higher credence for the rationality of his response than it is for me to have for the rationality of my response.
does not entail
(7) It is rational for Peer to have a higher credence for the rationality of his response than it is for him to have for the rationality of my response.
Can (7) be true in virtue of Peer's unmediated appreciation of what the evidence supports? It certainly could. Suppose that before Peer and I meet, Peer considers whether 0.8 or 0.2 is rational on e. There is no reason why he could not thereby rationally become more confident that 0.2 is rationally correct than that 0.8 is rationally correct. In order to preserve the symmetry, we must suppose that I consider whether 0.8 or 0.2 is rational on e and thereby become more confident (to the same degree) that 0.8 is rationally correct than that 0.2 is rationally correct. It is no part of EW that one must give equal weight to one's peer's credence if one has spent more time evaluating the evidence than one's peer has.
Now we have a third-order peer disagreement concerning what our confidence should be regarding the rationality of our confidence in the rationality of our first-order credences. Does Peer have any basis for favoring his third-order credence over mine? Here the dialectic simply repeats. For any level n, Peer's n-level confidence that his n-1 level credence is more rational than my n-1 level credence could be matched by my equal n-level confidence that my n-1 level confidence is more rational. At no level would Peer be entitled to be more confident than I.
6 A worry about my splitting the difference as well?
I have argued that Peer should split the difference between our credences, even though he is rational on e, but I am not. How should I revise my credence? According to EW, I should split the difference as well. But here Kelly argues that EW encounters trouble. According to EW, by moving to the midpoint between our credences, both Peer and I arrive at rational credences. One might wonder how I end up being as rational as Peer, given that Peer, but not I, was rational before we split the difference. Even more puzzling, suppose that neither of us had initial rational credences on e. EW would still dictate that we split the difference between our credences. How do we manage to arrive
at rational credences simply by splitting the difference between our two irrational credences? According to Kelly:
On The Equal Weight View, our high level of confidence that H is true [after splitting the difference] is automatically rational, despite the poor job that each of us has done in evaluating our original evidence. (Indeed, it would be unreasonable for us to be any less confident than we are at that point.) However, it is dubious that rational belief is so easy to come by.
In defense of EW, David Christensen argues that Kelly misinterprets EW. 16 According to Christensen:
The Equal Weight Conciliationist is committed to holding, in Kelly's cases, that the agents have taken correct account of a particular bit of evidence—the evidence provided by their peer's disagreement. But having taken correct account of one bit of evidence cannot be equivalent to having beliefs that are (even propositionally) rational, all things considered.
What does Christensen mean by an agent's having "taken correct account of a particular bit of evidence"? It is a truism that one must always take into account one's total evidence. Christensen seems to be saying that if one is irrational on one's initial evidence, one can still take correct account of further evidence in a way that does not require one to correctly take account of one's initial evidence.
We can gain some clarity on this by considering an example within a binary belief framework. Suppose I irrationally believe (say, on the basis of consulting a Ouija board) that
(8) London is in France.
I then learn that
(9) Obama is in London.
From (8) and (9), I infer
(10) Obama is in France.
Is my inference rational?
Here it is useful to distinguish between two kinds of rationality principles—wide scope and narrow scope. 17 Wide-scope principles concern coherence requirements, but do not in themselves make rational any particular inferences. Narrow-scope principles do make rational particular credences. If you think I am rational to infer (10), you are appealing to an instance of a narrow-scope principle:
N: If I believe that London is in France and that Obama is in London, then it is rational for me to infer that Obama is in France.
16 Christensen (2010).
17 Broome (1999).
In N, only the consequent of the conditional is within the scope of the rationality operator. It thus permits detachment of the consequent when the antecedent is satisfied. Now consider the wide-scope version of the principle:
W: Rationality requires that if I believe that London is in France and that Obama is in London, then I believe that Obama is in France.
W has the entire conditional within the scope of the rationality operator. This prevents the kind of detachment allowed by N. Instead, W states a weaker coherence requirement that entails that I am irrational if the antecedent is true and the consequent is false. W does not require that when the antecedent is satisfied, I should believe that Obama is in France. I can satisfy W by instead giving up my belief that London is in France.
I strongly doubt that N states a requirement of rationality. Given that my belief that London is in France is irrational, surely rationality requires that I give up this belief rather than that I make an inference from this irrational belief. If N were true, rationality would require that I adopt an irrational belief. I doubt that this is coherent.
Of course, if I retain my belief that London is in France, and do not infer (upon learning that Obama is in London) that Obama is in France, I would be incoherent and perhaps thereby guilty of further irrationality. But in the first instance, rationality requires that I give up my irrational belief that London is in France.
This kind of narrow-scope principle is no more plausible when applied to credences than when applied to beliefs. In the credence case we get the narrow-scope principle:
N': If I have a credence of 0.8 for h, on e, and I encounter a peer at 0.2, on e, then rationality requires that I revise my credence for h to 0.5.
The wide-scope version of the principle yields:
W': Rationality requires that if I have a credence of 0.8 for h, on e, and I encounter a peer at 0.2 for h, on e, then I revise my credence for h to 0.5.
W' states a coherence requirement that entails I am irrational if the antecedent is true and the consequent is false. W' does not say that when the antecedent is satisfied, I should move to a credence of 0.5. I can satisfy W' by giving up my 0.8 credence for h/e (and presumably, my 0.8 credence for h).
N' is no more plausible than N. Given that my credence of 0.8 for h (on e) is irrational, surely rationality requires my giving up my conditional credence for h on e (as well as my credence for h), rather than my revising on the basis of my irrational credence. Again, if N' were true, rationality would require that I adopt an irrational credence. I suggest that whatever intuitive appeal this kind of narrow-scope principle has comes from failing to distinguish it from its wide-scope counterpart.
So what can we make of Christensen's notion of taking correct account of a particular bit of evidence? As Christensen notes, taking correct account of a particular bit of evidence does not ensure that one arrives at a rational credence. So perhaps we can interpret Christensen as endorsing only the wide-scope principle W'. If I retain my irrational conditional credence for h on my original evidence e but fail to move to 0.5, I would suffer from a kind of incoherence and perhaps thereby be guilty of further irrationality.
But how does the wide-scope principle enable EW to respond to Kelly's objection that EW gives us the wrong result in a case where at least one peer is at an irrational credence on the original evidence? Christensen seems to be saying that EW need not endorse as rational my moving to 0.5. Rather, EW simply tells me not to make a bad situation worse. My 0.8 credence is irrational on e, and to simply remain at 0.8 in the face of Peer being at 0.2 would make me even more irrational. The wide-scope principle permits me to give up my 0.8 credence rather than move to 0.5, but that will result in a rational credence only if I thereby move to a credence rational on my total evidence. So on Christensen's interpretation of EW, it is silent on what my rational credence is. EW is a theory only of how one should respond to peer disagreement when one is at the correct credence on the original evidence.
In defense of his interpretation, Christensen says:
If one starts out by botching things epistemically, and then takes correct account of one bit of evidence, it's unlikely that one will end up with fully rational beliefs. And it would surely be asking too much of a principle describing the correct response to peer disagreement to demand that it include a complete recipe for undoing every epistemic mistake one might be making in one's thinking.
But TE does precisely what Christensen says cannot be done. It tells me how to undo my original mistake, namely, adjust my credence to whatever is rational on my total evidence—e and Peer's being at 0.2. So one cannot defend EW by claiming that it would be too demanding to require that it tell us how to revise an irrational credence. Rather, on Christensen's interpretation, EW is an incomplete account of how one should revise in a peer disagreement situation.
7 Is it rational for me to split the difference?
I've argued against Kelly that the rational credence for Peer is 0.5. We are now considering whether, as EW enjoins, my credence should also be 0.5. Kelly has objected that because my original credence of 0.8 is irrational on e, allowing that my moving to 0.5 is rational would make it too easy for me to acquire a rational credence.
This is an important objection to EW. It may be that EW can be defended only as a theory of how one should revise when one is at a rational credence. If we give up EW for cases where one is at an irrational credence, how should one revise in such a case? Consider how I should revise given that Peer is at a rational credence but I am not. My evidence consists of e along with Peer's being at 0.2. We have stipulated that 0.2 is the correct credence on e. Thus it seems that all of my evidence supports 0.2 as the correct credence. So the rational credence for me is 0.2. Of course, I would not be in a position to know that 0.2 is the rationally correct credence for me. But one is not always in a position to know what the rationally correct credence is on one's evidence.
This conclusion may strike some as odd. Previously, I argued that Peer, whose credence is rational on e, should split the difference with me and move to a credence of 0.5. And I have just argued that I, being at an irrational credence on e, should move to Peer's original credence of 0.2, which is in fact the rationally correct credence on e. So if at this point, both Peer and I revise rationally, I, but not Peer, will arrive at the correct credence on e. This means that in a peer disagreement where with respect to the original evidence one peer is rational and the other is irrational, if they both revise rationally, the irrational peer ends up in a better position than the rational peer. But this simply reflects the fact that in the disagreement, the rational peer receives misleading evidence whereas the irrational peer does not. So as a result of our disagreement, Peer is rationally required to move off of the correct credence on e, whereas I am rationally required to move to the correct credence (on e) of 0.2.
Having said that, there may be a way for EW to explain how I end up with a rational credence at 0.5. We are assuming that when I learn of Peer's credence, I should treat it as evidence against the rationality of my credence. And that is because, by stipulation, I have reason to think Peer is generally rational. Then by definition (of "peer"), I have equally good reason to think I'm generally rational. That counts as (defeasible) evidence in favor of the rationality of my credence. That is to say,
(11) My credence for h on e is n, and I'm generally rational
is a (defeasible) reason for
(12) My credence of n for h on e is rational.
Typically one does not reflect on these higher-order considerations, and they play no role in determining the rationality of one's first-order credence. One bases one's first-order credence on only the first-order evidence. But in a peer disagreement, these higher-order considerations come into play. In order to determine how I should revise my credence, I must reflect on the credences Peer and I hold on the basis of e, and the extent to which each of us is generally rational. This explains how it is that I can move from an irrational credence to a rational credence. I do so on the basis of my higher-order evidence that did not figure in the rational basis for my original credence, namely, the evidence that I am at 0.8 and the evidence of my general rationality. Entering into the peer disagreement brings positive evidence to bear on the rationality of my credence that was not operative before. There is nothing puzzling about how I can move from an irrational credence to a rational credence, if I do so on the basis of new evidence in favor of my general rationality.
This response to Kelly's objection on behalf of EW assumes that my being at a particular credence can be evidence for me. In different contexts, both Christensen and Kelly call this explanation into question.
Christensen argues that there is an asymmetry between the evidential force of one's own credence and the evidential force of one's peer's credence. 18 Here is Christensen discussing a peer disagreement over the answer to an addition problem:
Suppose I do some calculations in my head, and become reasonably confident of the answer 43. I then reflect on the fact that I just got 43. It does not seem that this reflection should occasion any change in my confidence. On the other hand, suppose I learn that my [peer] got 43. This, it seems, should make me more confident in my answer. Similarly, if I learn that [my peer] got 45, this should make me less confident . . . we may take the first-person psychological evidence to be incapable of providing the sort of check on one's reasoning that third-person evidence provides. In this sense, it is relatively inert. So the important determinants of what's rational for [my peer] to believe are the original evidence E1 . . . and [my] dissent . . . In contrast, the determinants of what [I] should believe are E1 . . . and [my peer's] belief.
If Christensen is correct, then contrary to the way I have argued, my holding a particular credence cannot be a determinant of what credence is rational for me. I agree with Christensen that the mere fact that I got the answer 43 should not, in itself, raise my confidence in my answer. But similarly, the mere fact that Peer got 45 should not, in itself, lower my confidence in my answer. Peer's getting 45 is evidence against my answer of 43 just in case it is reasonable for me to think that Peer is generally rational. But similarly, if it's reasonable for me to think that I am generally rational, then my getting 43 is evidence for my answer of 43.
This is not to say that in the typical case, my evidence for my own credence derives from reflecting on my own rationality. I am merely noting what is the case when I do so reflect. Subjects who know they are generally rational have a general (defeasible) reason to suppose that their own credences are rationally correct.
Having said this, I think there is an asymmetry between the evidential force of my own credence and the evidential force of Peer's credence. Christensen is correct to say that reflecting on my own credence (even along with my general rationality) cannot provide me with a reason to revise my first-order credence. Such reflection can at most provide a basis for boosting my second-order confidence in my first-order credence, or provide a justification for my first-order credence where none existed before. Peer's credence (along with his general rationality) can force me to revise my first-order credence.
In a different context, Kelly argues that if my higher-order evidence concerning my own rationality were sufficient for the rationality of my first-order belief, the distinction between rational and irrational belief would collapse: 19
It seems as though the only principled, not ad hoc stand for the proponent of The Equal Weight View to take is to hold that the psychological evidence swamps the non-psychological evidence even when the psychological evidence is exhausted by what you yourself believe. . . . after one arrives at some level of confidence—in the present example, a degree of belief of
18 Christensen (2010).
19 Kelly (2010).
0.7—how confident one should be given the evidence that one then possesses is . . . 0.7. Of course, if one had responded to the original evidence in some alternative way—say, by giving credence 0.6 or 0.8 to the hypothesis—then the uniquely reasonable credence would be 0.6 or 0.8. On the picture of evidence suggested by The Equal Weight View, the distinction between believing and believing rationally collapses.
If Kelly is right, then the view I am defending leads to an absurd consequence. But it is surely too quick to say that this view collapses the distinction between believing and believing rationally. One may have no evidence of one's own rationality; for example, one could be suffering from amnesia. Or one could simply be a bad reasoner and know it. In these cases, if one reasons badly from one's evidence, one thereby ends up with an irrational belief.
Despite this, Kelly could argue that this view would make it impossible for anyone who knows she is generally rational to have an irrational credence. This would still be an absurd consequence. But it is not clear that the view has even this result.
We can distinguish between subtly incorrect reasoning and obviously incorrect reasoning. As an instance of the first, consider van Inwagen's consequence argument for incompatibilism. 20 Lewis's response demonstrates a subtle mistake in van Inwagen's argument—we can suppose. 21 I claim that in this case, prior to encountering Lewis's argument, van Inwagen could have reasoned from his general rationality to the rationality of his belief in incompatibilism based on his argument.
Compare this with a subject, Smith, who engages in a flagrantly hasty generalization. Smith is generally rational, but for irrational reasons, dislikes people with blonde hair. After observing a person with blonde hair commit a crime, he infers that all people with blonde hair are criminals. Now suppose Smith reasons from his general rationality to the conclusion that his belief that all people with blonde hair are criminals is rational. Here I want to say that the obvious irrationality of his reasoning defeats his inference from his general rationality to the rationality of his belief that all people with blonde hair are criminals.
So what is the difference between van Inwagen and Smith? Why is Smith's inference defeated and not van Inwagen's? The difference between the two cases consists in just how they were described—van Inwagen commits a subtle error, and Smith commits an obvious error. The obvious irrationality of Smith's reasoning constitutes a defeater of his inference from his general rationality to the rationality of his belief that all people with blonde hair are criminals. But van Inwagen, because his error is so subtle—it took a mind like Lewis's to discover it—does not have such a defeater (prior to his encounter with Lewis). That is why van Inwagen, but not Smith, has rational support for his belief.
This account raises issues about what it is to possess defeating evidence. Suppose I reason rationally from e to h. I possess some evidence d such that a line of reasoning from d, which only a super-genius could appreciate, defeats my inference from e to h. Do I rationally believe h? I have argued that in such a case, because the defeating reasoning
20 van Inwagen (1975).
21 Lewis (1981).
can only be appreciated by a super-genius, I do believe h rationally. But I can see someone arguing that it does not matter how opaque the reasoning to the defeater is. If there is any way to reason from my evidence to a defeater of my inference, then I do not believe h rationally.
Perhaps we can distinguish between distinct notions of rationality. The first notion does not tolerate any defeaters being supported by one's evidence, no matter how subtle the reasoning is from the evidence to the defeater. We can call this "Ideal Rationality." The second notion allows that one can be rational in believing h on the basis of e, even though one's evidence supports a defeater, provided that the reasoning from the evidence to the defeater is not obvious.
Here it is natural to inquire, "obvious to whom?" In normal parlance, we can say that a subject is overlooking something obvious, even though it is not obvious to him. So to whom is it obvious? My suggestion is that ascriptions of obviousness are indexed to a particular standard for reasoning ability. In different contexts, different standards are in play. So just as we may disparage someone who is not very bright for failing to see the obvious validity of a straightforward modus tollens argument, so a race of super-geniuses may disparage us for failing to see the "obvious" proof of Fermat's Last Theorem. We can call this second notion "Intersubjective Rationality." 22
The upshot, on the supposition that Lewis discovered a subtle error in van Inwagen's consequence argument, is that we can say that van Inwagen's belief in the conclusion is intersubjectively rational, but not ideally rational. Smith's belief is neither.
8 EW and the commutativity of evidence acquisition
According to EW, when disagreeing peers meet, they should split the difference between their credences. But this leads to an absurd result. Suppose you encounter two peers in succession. Intuitively, the order in which you encounter them is irrelevant to the credence you should have after you have met them both.
Suppose I am at 0.9 for p, and I encounter a peer at 0.1. EW says that I should split the difference between our credences and revise my own to 0.5. Next I encounter another peer at 0.7. EW tells me to again split the difference and revise to 0.6.
But suppose we reverse the order in which I encounter the peers. Starting with my original credence of 0.9, I first encounter the peer at 0.7. EW says that I should revise to 0.8. But then when I meet the second peer at 0.1, I should move to 0.45. So EW dictates that I adopt different credences depending simply on the order in which I encounter the two peers. But this is absurd. The rationality of my credence is a function of my total evidence, regardless of the order in which I acquired the evidence. 23
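The order-dependence can be checked with simple arithmetic. Here is a minimal sketch; the helper `pairwise_split` is just illustrative shorthand for moving to the midpoint of two credences, not anything from the literature:

```python
def pairwise_split(mine, peer):
    """Split the difference: move to the midpoint of the two credences."""
    return (mine + peer) / 2

# Meet the peer at 0.1 first, then the peer at 0.7.
order_a = pairwise_split(pairwise_split(0.9, 0.1), 0.7)

# Reverse the order: the peer at 0.7 first, then the peer at 0.1.
order_b = pairwise_split(pairwise_split(0.9, 0.7), 0.1)

# Different final credences: 0.6 versus 0.45.
print(round(order_a, 10), round(order_b, 10))
```

The repeated midpoint operation is not commutative, which is exactly the source of the alleged absurdity.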
EW has this absurd result only if we interpret it to mean that one should split the difference with every peer one encounters. But EW should not be construed in that way.
22 For more on this notion of intersubjective rationality, see Cohen (1987).
23 This problem is discussed by Juan Comesana in an unpublished paper. See also Kelly (2010).
The core idea of EW is that I should give equal weight to my peer's judgment and my own judgment. So in the special case where I've encountered only one peer, I should split the difference, i.e. move to the average of our credences. But I can encounter indefinitely many peers. If I split the difference with each one, I am in effect giving increasing weight to each new peer I encounter. 24 Surely this is not in the spirit of EW. Just as I should give the same weight to a peer's judgment as I give to my own, I should give equal weight to the judgments of all of my peers. This means that as I encounter new peers, I should revise by continually averaging over all of them. So regardless of the order in which I encounter them, I will end up with the same credence.
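The repaired rule, averaging one's own original credence together with the original credences of all peers encountered so far, is order-independent because addition is commutative. A minimal sketch (the function name is illustrative):

```python
def equal_weight(my_credence, peer_credences):
    """Average my original credence with the original credences of
    every peer encountered so far, giving each judgment equal weight."""
    all_credences = [my_credence] + list(peer_credences)
    return sum(all_credences) / len(all_credences)

# The same two peers, encountered in either order, yield the same credence.
a = equal_weight(0.9, [0.1, 0.7])
b = equal_weight(0.9, [0.7, 0.1])
print(round(a, 10), round(b, 10))  # both print as 0.5666666667
```

Under this rule, each new peer is one more term in a single running average, rather than a party with whom one splits the difference afresh.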
9 Approximately splitting the difference
Christensen has been a defender of the spirit of EW. But he observes that there is reason to depart from the letter. Because of what Jennifer Lackey calls "personal information," one should only approximately split the difference with one's peer. 25 Personal information concerns what one knows about one's own condition. I can be more certain that I am not rationally impaired by drugs, emotional stress, sleep deprivation, and so on, than I can be about my peer. Thus in a typical peer disagreement, it is rational for me, on this basis, to give slight additional weight to my own opinion. In cases where my peer holds an opinion that seems crazy, for example, he denies a Moorean proposition, my personal information will make it rational for me to give substantial extra weight to my credence. 26
This is surely correct. Strictly speaking, EW is only approximately correct. It is worth
noting that a stability issue arises in connection with approximately splitting the
difference. If I revise my credence in this manner, and I know nothing about whether or how
my peer has revised, then I can rationally remain at my new credence. But if I learn my
peer has approximately split the difference as well, that gives me less reason to suspect he
is rationally impaired. This would require me to revise further in the direction of my
peer. If he also revises in this way, that gives me still less reason to suspect he is rationally
impaired, and so I should revise in the direction of my peer again. In a case where my
peer is not impaired, this process will continue indefinitely, so long as each of us knows
of the other’s revisions. This raises game-theoretic issues that I will not pursue, except to
note that my peer and I will not agree at the limit. In addition to worries about my peer’s
rationality, there are worries about whether he is intentionally misleading me about his
credences. As he could be systematically misleading me with each revision, that is,
mimicking a rational agent, there is no way to eliminate that possibility. 27
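The iterative revision just described can be illustrated with a toy model (my construction, not Cohen’s own formalism; the self-weight value is an assumption). Each party gives a fixed extra weight to their own credence on the basis of personal information, then revises again upon learning the other has revised. At every finite round the credences still differ, so a further revision is always called for; the simplification of a fixed self-weight does not capture the further worries about deception that bear on what happens in the limit.

```python
# Toy model of the stability issue. W is an assumed extra self-weight
# (from personal information); any value in (0.5, 1) behaves the same way.
W = 0.55

def revise(own, other, w=W):
    """Approximately split the difference, slightly favoring one's own credence."""
    return w * own + (1 - w) * other

me, peer = 0.8, 0.2
for n in range(1, 6):
    # Each learns of the other's latest revision and revises again.
    me, peer = revise(me, peer), revise(peer, me)
    print(n, round(me, 6), round(peer, 6))
# The gap shrinks by a factor of 2W - 1 per round, so at every finite
# stage the two credences still differ and a further revision is due.
```

With W = 0.55 the gap shrinks tenfold per round (0.6, 0.06, 0.006, ...), which makes vivid why the back-and-forth never terminates at any finite stage.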
24 Thanks to Peter Milne for this way of putting the point.
25 Lackey ( 2008 ).
26 Lackey ( 2008 ), Christensen ( 2010 ).
27 First and foremost, I thank David Christensen who, in the course of many conversations, helped me
greatly to understand the issues at stake in the rational disagreement debate. For helpful discussion, I also thank
Nathan Ballantyne, Jessica Brown, Mark Budolfson, Juan Comesana, Josh Dever, John Devlin, David Enoch,
Ian Evans, Tom Kelly, Jennifer Lackey, Matthew Lee, Levi Spectre, Brian Weatherson, and Ruth Weintraub.
a defense of the (almost) equal weight view 117
References
Ballantyne, Nathan, and E. J. Coffman (forthcoming) “Conciliationism and Uniqueness,”
Australasian Journal of Philosophy .
Broome, John (1999) “Normative Requirements,” Ratio 12: 398–419.
Christensen, David (2007) “Epistemology of Disagreement: the Good News,” The Philosophical
Review 116.
—— (2010) “Higher-Order Evidence,” Philosophy and Phenomenological Research 81 (1):
185–215.
Elga, Adam (2007) “Reflection and Disagreement,” Noûs 41 (3): 478–502.
Feldman, Richard (2006) “Epistemological Puzzles About Disagreement,” in Stephen Hether-
ington (ed.) Epistemology Futures (Oxford: Oxford University Press).
Joyce, James (1998) “A Nonpragmatic Vindication of Probabilism,” Philosophy of Science 65 (4):
575–603.
Kelly, Tom (2005) “The Epistemic Significance of Disagreement,” Oxford Studies in Epistemology
1: 167–96.
—— (2010) “Peer Disagreement and Higher-Order Evidence,” in Richard Feldman and Ted
Warfield (eds.) Disagreement (Oxford: Oxford University Press).
Lackey, Jennifer (2008) “What Should We Do When We Disagree?” in Tamar Szabó Gendler and
John Hawthorne (eds.) Oxford Studies in Epistemology (Oxford: Oxford University Press).
Lee, Matthew (unpublished) “Is the Conciliationist Committed to Uniqueness?”
Lewis, David (1981) “Are We Free to Break the Laws?” Theoria 47 (3): 113–21.
Plantinga, Alvin (2000) “Pluralism: A Defense of Religious Exclusivism,” in Philip L. Quinn and
Kevin Meeker (eds.) The Philosophical Challenge of Religious Diversity (Oxford: Oxford
University Press), 172–92.
Rosen, Gideon (2001) “Nominalism, Naturalism, Epistemic Relativism,” Noûs 35 (15): 69–91.
van Inwagen, Peter (1975) “The Incompatibility of Free Will and Determinism,” Philosophical
Studies 27: 185–99.
—— (1996) “It is Wrong, Everywhere, Always, and for Anyone, to Believe Anything on
Insufficient Evidence,” in Jeff Jordan and Daniel Howard-Snyder (eds.) Faith, Freedom, and
Rationality: Philosophy of Religion Today (London: Rowman and Littlefield).
White, Roger (2005) “Epistemic Permissiveness,” in Philosophical Perspectives, xix: Epistemology
(Oxford: Blackwell).
PART II
Disagreement in Philosophy
Do you get worried when you discover that some of your philosophical colleagues
disagree with your philosophical views? What if you know these colleagues to be smarter
and, with respect to the relevant issues, better informed than you? Suppose the situation is
more worrisome still: you find that a large number and percentage of the people you know
full well to be smarter and better informed than you in the relevant area hold positions
contrary to your own. Will you, or should you, waver in your view? And then there is the
nightmare scenario: suppose the belief in question, the one denied by a large number
and percentage of the group of people you fully admit to be your betters on the relevant
topics, is one of the most commonsensical beliefs there are. Are you going to retain your
belief in that situation?
It is common for philosophers to have philosophical opinions on topics outside their
areas of specialty. Not only that: a good portion of those opinions are highly
controversial. Take an analytic metaphysician who has firm opinions on content externalism,
evidentialism, and the Millian view of proper names despite having done no serious
research in the philosophy of mind, epistemology, or the philosophy of language. If this
is an average philosopher, then she will know that some of the people who work in
those other areas and disagree with her are not just better informed but smarter than she
is. How can she rationally think they are all wrong?
In this essay I examine the “nightmare” case described above both generally with
regard to any area of inquiry and with respect to philosophy. After looking at the general
case I will focus on its most extreme application: learning that one’s epistemic superiors
deny just about the most commonsensical claims imaginable.
Children do not say the damndest things; philosophy professors do. Some of them say
that there are no baseballs. Or that nothing is morally wrong. Or that twice two isn’t
four. Others truly believe that there are no beliefs. Or that taking one cent from a rich
person can instantly make them no longer rich. Or that no claims using vague concepts
are true (not even “Dogs are dogs”). Some hold that nothing is true, as truth is an
inconsistent concept. Many think that fire engines aren’t red (or any other color). And of
course some hold that we know next to nothing about the external physical world.
6
Philosophical Renegades
Bryan Frances
The proponents of these views aren’t fools. Many of them are among our best and
brightest. They are as intellectually virtuous and accomplished as anyone in philosophy.
Some are outright geniuses. Over many years they look at the evidence as carefully and
competently as anyone, and then with full sobriety come to contravene common
sense—and often enough they have to fight hard against their own prejudices in order
to endorse their radical conclusions.
And yet, if you are a typical philosopher then you probably believe that they’re all
wrong in endorsing those “error theories.” This wouldn’t be so bad but for the fol-
lowing facts: you are well aware that often enough the philosophers who hold the
anti-commonsensical opinions are generally more informed than you are on those
topics, they have more raw intelligence, they have thought and investigated whether
the claims in question are true longer and in more depth than you have, they have
thought about and investigated the relevant topics longer and in more depth than
you have, they are just as or even more intellectually careful than you are, and they
have understood and fairly and thoroughly evaluated virtually all the evidence you
have seen regarding those claims (and usually quite a bit more evidence). I know this
is all true for me compared with many philosophers who endorse anti-commonsen-
sical philosophical theories, and if you don’t think it’s true for you, you’re probably
deluded. You know perfectly well that you’re not a top philosopher on the meta-
physics of material composition. Or the metaphysics of cognition. Or metaethics.
Or the philosophy of mathematics. Or vagueness. Or theories of truth. Or the meta-
physics of color. Or the nature of knowledge. At the very least, let’s assume that the
above list of facts apply to you with respect to some of those philosophical areas.
Despite having genuine respect for people you know to be your epistemic superiors
with regard to the relevant topics, you continue to disagree with them on those very
issues: you are an epistemic renegade. This certainly looks epistemically naughty.
In this essay I present an argument for metaphilosophical skepticism, the thesis that in
the nightmare scenario if one retains one’s belief, then in most interesting cases that
belief retention is seriously epistemically defective. What the defect is, however, depends on
the circumstances as well as how various epistemic notions are interrelated. If the belief
is highly theoretical (e.g., Millianism for proper names, evidentialism, content external-
ism, four-dimensionalism), then I hold that the renegade’s belief will be unwarranted,
unjustified, and blameworthy. However, if the belief is a commonsensical one held in
the face of an error theory (e.g., “I have hands” versus a philosophical theory that says
there are no composite objects), that belief may well be justifi ed and amount to know-
ledge even after being retained; what the serious epistemic defect amounts to in that
case depends on how the central epistemic concepts are related to one another. This
amounts to a new, peculiar, and highly contingent kind of radical skepticism. 1
1 I argued for a related thesis in my 2010. That essay had major flaws, including an imprecise argument
and a different thesis. The present essay is perfect.
If the skeptical thesis is false, then we do have full-fledged knowledge in the nightmare
scenario involving error theories and our retaining our commonsensical beliefs is epis-
temically a-okay. As I will argue this would mean that many of us know perfectly well
that many of the most popular philosophical theories are false despite the facts that
we are well aware of our status as amateurs with respect to those topics, that we are
aware of the impressive arguments for those theories, and that those arguments are gen-
erally as good as and even superior to the arguments in other philosophical areas. The
startling consequence is that large portions of metaphysics, the philosophy of language,
the philosophy of logic, the philosophy of physics, and metaethics are bunk and phil-
osophers should give up most of their error theories despite the fact that their support-
ing arguments are generally as good as or even better than other philosophical
arguments. Thus, whether or not the skeptical thesis is true for the error theory cases we get
interesting epistemological results.
This is a long essay, so it is prudent to give its structure here. In sections 1–4 I discuss
the general phenomenon of disagreement with recognized epistemic superiors and
articulate why it seems that if a person retains her belief upon finding herself in that
situation, then her doing so will thereby be significantly diminished in some important
epistemic respect. In section 5 I will present the official rigorous argument for metaphilosophical
skepticism. Objections to the premises of that argument make up sections 6–11.
In section 12 I consider what the epistemic defect should be in the cases of a common-
sensical belief held in the face of disagreement with an error theory. Finally, in section 13
I consider the epistemological consequences of the falsity of metaphilosophical skepti-
cism, which in my view are just as interesting as the consequences of its truth.
1 The purely scientific case
Suppose you believe Jupiter has fewer than ten moons because that’s what you heard
when you were in school thirty years ago. However, suppose also that over the interven-
ing years evidence has accumulated in favor of a theory that there are over 200 moons
orbiting Jupiter. As a result, a large number and percentage of professional astronomers
have, independently of one another, come to accept the new theory. You become aware
of these two facts, about the evidence and the resultant professional opinion. Still, you
reject the new theory even though you admit the hard truth that the professionals have
all your evidence and much more. You just think that they must have made some mistake,
as it seems absurd to you that a planet could have over 200 moons. You are aware of their
opinion, their comparative expertise, and their epistemic advantage over yourself. And
yet you think they are wrong and you do so even though you fully admit that you have
no evidence that they lack or have overlooked. You’re not a professional astronomer.
Presumably, you’ll say in your defense that they just must have made a mistake somewhere
in digesting the new evidence, although you don’t know what the new evidence even is
or what the mistake might be.
On the face of it, your belief that there are fewer than ten moons of Jupiter won’t
amount to knowledge. Your belief might be true of course; the professionals aren’t
infallible. And your belief started out with some impressive justification, as it was acquired in
what we can assume is the usually reliable way of reading reliable science textbooks
(though many of the authors of those books have since recanted, so your belief-forming
process really doesn’t look reliable under that highly relevant description). But given
that you are perfectly aware of the large percentage and number of specialists who dis-
agree with you, you admit that they are your epistemic superiors on the topics in ques-
tion, and you admit that you have no evidence that they lack, your belief won’t amount
to knowledge even if it’s true.
The numbers and percentages of specialists matter. If there were just a couple outlier
professional astronomers who thought Jupiter had over 200 moons, but you were aware
of a great many other specialists who insisted the number was fewer than ten even
though they were well aware of the outliers’ opinion, reasoning, and evidence, then per-
haps you could still know that Jupiter had fewer than ten moons even though you admit
that the outliers are genuine experts and have all the evidence you have as well as much
more evidence. You note that all the other specialists think the outliers are wrong and so
you conclude on that basis that the outliers must have made a mistake somewhere in
evaluating the evidence, or they lack some evidence that the other experts have, even
though you may not have the slightest idea what the mistake or extra evidence is.
One’s awareness of the specialists’ views also matters. Suppose the “over 200 moons”
theory is based on evidence that was generated with some new technology that has
been proven to work in many areas but is now being applied in an area for which it is not
suited. Suppose further that there was no then-current way the scientists could have
foreseen this limitation. Now pretend that inter-country communication among
astronomers is poor, so even though there is a large group of astronomers in the UK, say,
who are well aware of and perhaps using the new technology (and thus taking the “over
200 moons” theory very seriously), in the US very few astronomers have even heard of
the new technology let alone used it. (This scenario isn’t realistic today, but that hardly
matters.) Finally, pretend that you’re an amateur astronomer in the US who has never
heard of the new technology or the “over 200 moons” theory and who believes—
correctly and on the basis of sound if ultimately inconclusive evidence—that Jupiter has
no more than ten moons. While it’s true that if you had occasion to learn about the new
technology you would not be able to even suggest that there is anything wrong with or
inapplicable about it, the mere fact that some people with whom you don’t communi-
cate have made an error that you could not rectify doesn’t seem to sabotage your know-
ledge that Jupiter has fewer than ten moons. Since you’re not part of their epistemic
community, their mistake doesn’t infect you in an epistemic manner.
To make this clearer, consider an amateur astronomer in the UK who is your coun-
terpart in the following ways: she also believes that Jupiter has fewer than ten moons and
holds this belief on the basis of the same evidence that you use. However, unlike you, the
UK amateur knows that there is impressive evidence that Jupiter has well over ten
moons. She knows this because she knows that the top UK astronomers wouldn’t be
convinced of the “over 200 moons” theory if they didn’t have impressive evidence
(again, this is not to say that she thinks the top UK astronomers are infallible or even
right on this particular issue). Roughly, since the UK amateur knows that there is
impressive evidence against her belief, and she has no counter to that evidence, her
belief is thereby epistemically diminished compared to yours. Her familiarity with the
new technology and with the excellent sociological status among experts of both the
soundness of the new technology and the subsequent “over 200 moons” theory
sabotages her knowledge. We will see later that this epistemic difference between you and
your UK counterpart has signifi cance also for metaphilosophical skepticism.
The conclusions drawn in the previous paragraphs seem pretty reasonable, even
though we’ve skipped over important qualifications (which I’ll get to below). But when
we substitute philosophical for scientifi c inquiry nothing seems so obvious anymore.
Pretend that 45 per cent of the philosophers who have thoroughly examined all the
apparently relevant issues in metaphysics conclude that there are no non-living
composite material objects; another 45 per cent have come to the opposite conclusion; and
the final 10 per cent remain agnostic on this point. The philosophers in the first group
think there are no baseballs even though there are swarms of particles that are arranged
baseball-wise. Suppose further that these philosophers don’t hedge their view. They
hold that whenever an ordinary person says, under ordinary circumstances, something
like “There are four baseballs in the trunk of the car,” what she says—and believes—is
just plain false even though perfectly reasonable and useful. In such a society would an
ontologically commonsensical philosopher who is an amateur at metaphysics but who
nevertheless is aware of the impressive status of the “no non-living composites” view fail
to know that there are baseballs?
2 Introduction to metaphilosophical skepticism
In recent works I have explored this issue as it applies to hypotheses that have significant
empirical backing as well as philosophical interest. 2 In this essay I want to do two things:
radically revise my treatment of the general case and then examine the extraordinary
case when the anti-commonsensical hypothesis is “purely philosophical” in the sense
that there is little or no straightforward scientific evidence for it and the reasons offered
in its defense come from purely philosophical thought alone—often but not exclusively
metaphysics. Thus, for the most part I will set aside views such as color error theory (no
ordinary physical objects are colored), which has significant empirical backing, in order
2 In my 2005a, 2005b, 2008, 2010, and 2012. There has been excellent recent work on the general topic
of the epistemic consequences of disagreement with epistemic peers (see the papers and references in this
book as well as the Feldman and Warfield volume). In the cases examined in this essay, however, the
disagreement lies with one’s admitted epistemic superiors, not peers. On the face of it, it’s more of a challenge to
rationally retain one’s view in the presence of disagreement with multiple recognized epistemic superiors
than a single peer.
to focus on purely philosophical matters such as compositional nihilism (there are
no composite objects; Rosen and Dorr 2002 , Sider ms.), artifact nihilism (there are
no artifacts; van Inwagen 1990 , Merricks 2001 ), moral error theory (there are no
positive, first-order, substantive moral truths; Mackie 1977), mathematical error
theory (there are no positive, first-order, substantive mathematical truths; Field 1980,
Field 1989, Balaguer 1998), semantic nihilism (no assertions made with or beliefs
expressed with vague terms are true; Sider and Braun 2007 ), and radical skepticism
(there is virtually no knowledge). 3 The line between the philosophical theses with little
or no scientific backing and those with significant scientific backing is usually
considered to be generously fuzzy, but there are obvious examples on either side of the line.
I focus here on the examples far on the philosophy side.
In evaluating metaphilosophical skepticism we need not evaluate the radical anti-
commonsensical philosophical theories themselves. Indeed, I will assume for the sake of
argument that they are all false and the commonsensical beliefs listed earlier are all true.
Clearly, appropriate awareness of the credentials of a false theory can still ruin one’s
knowledge of a fact. Consider the Jupiter case again but now imagine that all profes-
sional astronomers have long accepted the “over 200 moons” theory (and there are
many of these astronomers, they are independent thinkers, and so forth). You become
aware that there is unanimous favorable expert opinion on the “over 200 moons” theory.
As before, you think that they must have made some mistake, and your one and only
reason is that you think the idea that a planet could have over 200 moons is just plain
nuts. In this case you don’t know that Jupiter has fewer than ten moons, even if against all
odds your belief is true and the old evidence described in your childhood science texts
was sound.
3 The renegade
In this section I present two conditions that the subject satisfies in scientific cases like the
one involving Jupiter’s moons. Then in the following two sections I’ll explain how the
metaphilosophical skeptic uses these conditions in her argument to reach her new kind
of skepticism.
The fi rst condition, immediately below, looks complicated but the idea behind it
isn’t: all it’s really doing is making precise the vague thought that hypothesis H is taken
by the significant portion of the relevant specialists to be true, and person S (who is an
amateur with respect to H) knows that fact about the specialists as well as the fact that
H’s truth means P’s falsehood.
Condition 1 : Person S is familiar with hypothesis H (e.g. “Jupiter has over 200 moons”) and
with many of the issues surrounding H, including knowing the key facts that H is inconsist-
3 Philosophers occasionally use “error theory” to refer not to the theories listed above but to subsidiary
claims regarding how those theories can account for why common sense goes wrong.
ent with P 4 (e.g. “Jupiter has fewer than ten moons”) and that H is “live” in the following
strong sense:
(i) For many years many of the best members of her contemporary intellectual community
(e.g. professional astronomers who study the large bodies of our solar system) have
thoroughly investigated H in order to determine whether it is true.
(ii) H is judged to have an excellent chance to be true by a significant number and
percentage of the professionals in the field(s) relevant to H. These same specialists also think that
P is probably false, for the simple reasons that the evidence for H is also against P and H
and P are obviously mutually inconsistent. 5 , 6
(iii) Many of the professionals who endorse (or nearly endorse) H and reject P are S’s
epistemic superiors with respect to the topics most relevant to H and P: they are
generally more informed than S is on the topics involving H, they have more raw
intelligence than she has, they have thought about and investigated whether H is
true longer and in more depth than she has, they have thought about and investi-
gated the topics surrounding H longer and in more depth than she has, they are
just as or even more intellectually careful than she is, they are no more relevantly
biased than she is, and they have understood and fairly and thoroughly evaluated
virtually all the evidence and reasons she has regarding H and P (and usually much
additional evidence or reasons). 7
(iv) Those professionals reached that favorable opinion of H and ~P based on arguments
for H and ~P that are highly respected even by professionals who reject H and
endorse P.
A hypothesis needs a lot of sociological support in order to satisfy (i)–(iv). For instance,
merely being endorsed for years, by some excellent professionals, even internationally
famous ones, will not mean that a hypothesis satisfies (i)–(iv). Condition 1 is quite
demanding—and keep in mind that it says that (i)–(iv) not only are true but person S
knows that they are true. For all the good press epistemicism, for instance, gets nowadays
it might fail to satisfy (ii) (I suspect it does, but this is not an armchair matter). Further-
more, suppose that theory T was overwhelmingly voted the theory that is the best one
we know of and the one that is most likely to be true. It might still fail to satisfy (ii) because
the voters could also consistently insist that although T is the best and most likely to be
true, it is unlikely to be true. The voters could think that we are still quite far from
finding a theory that has any real chance at truth. (This won’t be the case in which one
4 In section 5 we will see that it’s not strictly necessary that H and P be logically incompatible.
5 Here and throughout I use “evidence” to include epistemic support provided by philosophical argument,
including a priori argument.
6 If A obviously entails ~B and X is evidence for A, this usually means that X is evidence against B. There
might be exceptions, but they won’t apply to the cases at issue in which it’s clear to everyone involved in
discussions regarding H and P that the evidence for H is also evidence against P.
7 These professionals I will occasionally refer to as “experts,” which means only that they satisfy (i)–(iv).
theory is simply the negation of another.) Even the so-called “advocates” of T might
not advocate its truth but its status as the best theory around in the sense of being the
one on the shortest path to the truth.
When I say that S knows that so-and-so is her epistemic superior with respect to the
topics most relevant to H and P, I don’t mean she dwells on that fact whenever she
reflects on her belief in P. Nor do I mean that she has consciously thought about and
explicitly accepted each of the components of the above characterization of epistemic
superiority. Although I don’t want any requirement that stringent, I do want something
a bit stronger than “S is disposed to accept that so-and-so fi ts all the superiority circum-
stances in (iii) from Condition 1.” I want something highly realistic, something that
actually applies to many of us philosophers who are confident enough to disagree with
David Lewis regarding some metaphysical claim, say, and yet wise and reflective enough
to readily admit that he knew and understood quite a bit more than we do regarding the
topics directly relevant to the claim in question. I want to pick out an epistemic situation
we often actually find ourselves in when we contemplate how our views conflict with
those had by people we know full well to be better philosophers than we are in these
areas. But I’m not sure what that really amounts to; as a result I won’t offer stipulations
regarding “epistemic superior” or “knows that so-and-so is S’s epistemic superior.”
Perhaps something along these lines conveys the relevant kind of awareness of dis-
agreement in the face of epistemic superiority:
• I have consciously recognized that Lewis believes the opposite of what I believe.
• I am disposed to accept virtually all of the conditions in the characterization of
epistemic superiority with respect to P, applied to Lewis and myself (part (iii) of
Condition 1).
• I am disposed to admit that Lewis is my epistemic superior while simultaneously
realizing that we disagree regarding P.
I may not have ever actually said, all in one breath, “Lewis is my epistemic superior
regarding P but I’m right and he’s wrong about P,” but I have admitted in one way or
another all three conditions on several occasions. The fundamental problem—I seem
epistemically problematic in continuing to believe P—is worse, perhaps, if I have asserted
all the parts of (iii) in one breath. But even without the simultaneous assertion, I don’t
look too good.
When I say that we admit that so-and-so is our epistemic superior on a certain topic
or individual claim, I mean to rule out the situation in which you think a group of
“experts” are frauds or massively deluded in spite of the fact that they are more intelligent
than you are and have thought about H much longer than you have. If you think they are
deluded, then you will hardly count them as your epistemic superiors. For instance, sup-
pose there is a large group of people with high “raw intelligence” who formulate and
examine all sorts of very complicated astrological theories. You don’t understand even
10 per cent of what they are talking about, but you won’t take them to be your epistemic
superiors on the task of knowing the future because you think they are, well, idiots.
But there are other cases where you don’t think the people in question are deluded
even though you disagree with them on a massive scale. For instance, suppose you are an
atheistic philosopher who respects the work done in the portion of the philosophy of
religion that takes theism for granted. You look at the philosophical work on the trinity,
for instance, and admire its philosophical quality—at least in some sense. The scholars
are grappling with the three-in-one problem of God, Jesus, and the Holy Spirit. As an
atheist, you probably think this is a one-in-one issue, as neither God nor the Holy Spirit
exists. Will you consider these scholars your epistemic superiors on the topic of the
trinity, or on some specific claim C on that topic such as “The oneness in question is not
numerical identity”?
Recall that one of the conditions on “recognized epistemic superiority” is this: you
have to know that the superior is no more relevantly biased than you are. If you strongly
believe that the trinity-believers are more biased than you are when it comes to reli-
gious matters, then of course they won’t satisfy that part of (iii); so, you won’t consider
them superiors. You escape the skeptic’s snare. But even if you thought that they are no
more relevantly biased than you are, they still might not satisfy another part of (iii): you
have to know that they are more informed than you are on the topics involving the claim
C. If you are an atheist, then in one pertinent sense you will consider any theist to be
much less informed about C and other religious matters compared to yourself, since
you will think that almost all of her interesting religious beliefs are false (or based on
false presuppositions). But in another sense you will judge her to be your epistemic
superior on religious matters, and even particular claim C, as she knows a lot more about
religion and various theories of the trinity than you do. You will also take her to know,
or at least justifiably believe, many more conditionals regarding religion and the trinity
than you do (e.g. “If God has quality X, then according to view Y Christ must have qual-
ity Z”). Presumably, most of these conditionals have never even occurred to you, as you
think the topic is bunk and as a result don’t stay abreast of research into the trinity. These
observations show that we have to be careful how we understand “relevant matters” and
“informed” as they occur in (iii). How they apply to the philosophical cases we’re inter-
ested in is a topic for section 6.
Here is a preview of what the skeptic's argument will be like. On the face of it, when S learns about her epistemic superiors, as in Condition 1, S's belief in P now faces a threat: she's got good reason to think her belief is false. If that threat is not neutralized, then her belief in P won't amount to knowledge. But Condition 2, which I'll get to immediately below, suggests that she has nothing that neutralizes the threat. Therefore, her belief in P no longer amounts to knowledge. That is the very rough line of argument the skeptic will use, although I will be offering significant changes to every bit of it, especially the conclusion.
In order to introduce Condition 2, suppose that Condition 1 applies to subject S. That is, S knows full well that a large percentage of the numerous relevant professionals agree that H is true and P is false and that they have this opinion as the result of epistemically upstanding research over many years; further, S knows that she is no expert
130 bryan frances
on any of these matters. We can suppose that the metaphilosophical skeptic is right to think that in these circumstances S has strong evidence that there is strong evidence for H and against P. In spite of all that, S might still know that her belief P is true, that the contrary hypothesis H is false, and that all those top professionals are just plain wrong. There are several ways S might still know those three things. First, S might know that she has some impressive evidence or other epistemically relevant item that the pros don't have or have insufficiently appreciated. This might be some evidence for P and against H. Or, S might have some rare evidence that shows that the evidence for H is fatally flawed, even though the facts about expert opinion strongly suggested that that evidence for H was good. These possibilities are consistent with the truth of Condition 1 applied to S. I'll be going over these and other possibilities when we turn to the philosophical cases. Condition 2 closes off some but not all of these possibilities:
Condition 2: If S knows of any evidence or other epistemically relevant items that seem to cast doubt on H, ~P, or the alleged evidence for H, then such items, if carefully, competently, and fairly considered by the members of her community of professionals who are thoroughly familiar with H and P (including her recognized epistemic superiors on the relevant topics), would be nearly universally and confidently judged to offer only quite weak evidence against H, ~P, and the alleged evidence for H. (In fact, very often even the specialists on H who agree with S that H is false and P is true would say as much.)
I designed Condition 2 to fit the following type of case (it may need fiddling with). Pretend you are a graduate student in paleontology who is aware of several rival hypotheses about the demise of the dinosaurs and who happens to believe the true meteor hypothesis: a meteor wiped out the dinosaurs. Your PhD supervisor asks what you plan to say about alternative hypothesis H in your dissertation (H might be the hypothesis that the dinosaurs were wiped out by a series of supervolcano eruptions). You say that that theory isn't very plausible but you're happy to throw in a section showing why it's implausible. She agrees with you that the meteor hypothesis is correct and H is false, but she asks you what you plan to say against H. You give your spiel and she tells you flat out that what you've said is inadequate and you should either do much better with a more critical section or drop it entirely and say in a footnote that you'll be merely assuming the falsehood of H. After all, professor so-and-so right down the hall is an advocate of H, he's certainly no dope, he isn't alone in his expert opinion, and you've offered nothing that puts any pressure on his view or his reasons for his view.
Condition 2 is saying that the experts who accept H and reject P (as well as the specialists who are agnostic on both H and P) know of S's evidence and other epistemically relevant items that do or might support P or ~H. They haven't overlooked anything S has—just like in the graduate student case above. But just because they haven't missed anything doesn't mean that S fails to have an epistemic item that suffices for knowledge of P and ~H: there remains the possibility that they underestimated the epistemic significance of some of S's epistemic items of relevance to P and H. In the cases we're interested in, in which S is a philosopher, H is a purely philosophical error theory, and P is an ordinary commonsensical claim, this epistemic item that S has need not be an argument that suffices for knowledge of P—at least, it need not be a dialectically effective argument. But our epistemic items go beyond our argumentative-persuasive abilities, especially if epistemological externalism is true.
Call the person who satisfies Conditions 1 and 2 a well-informed mere mortal with respect to H. She is well informed because she is aware of H, H's good status, and the relation of H to P; she is a mere mortal because, roughly put, she has no epistemic item regarding H and P that the experts have overlooked or would not reject, and she knows that the advocates for H and ~P are her epistemic superiors on those topics. The graduate student in paleontology is a well-informed mere mortal, "mere mortal" for short. A child and any adult unfamiliar with the field relevant to H are not, as they fail to satisfy any of the demanding epistemic requirements of Condition 1. Another kind of mere mortal is an expert in the general field whose specialization lies elsewhere. Professor Smith teaches various paleontology classes. She is perfectly aware of H but wouldn't be able to say anything interesting against it. She has the true meteor-killed-the-dinosaurs belief, but like the graduate student her belief is too lucky (in some sense) to amount to knowledge. That's the type of person I have in mind as a well-informed mere mortal. I hope that Conditions 1 and 2 capture the important aspects of her epistemic position. The renegade is the well-informed mere mortal who retains her belief in P. She is the target of the metaphilosophical skeptic's argument.
I would be surprised if "S believes P" wasn't polysemous. So, it is important that we not get confused regarding the notion of belief that is relevant here. When you're asked "What is your take on P?" it seems that in at least some conversational contexts, especially ones concerning philosophical questions, you are being asked to take the first-order, direct evidence you know of regarding P and announce how it strikes you as bearing on P. You are not being asked to give your considered judgment on all sources of evidence or take into account what anyone else thinks. Instead, you're being asked for something like your phenomenological intellectual reaction to that limited body of evidence. You're being asked this: when you consider this body of considerations, in which direction are you inclined: P, ~P, or neither? Never mind whether you "follow" or "give in" to that inclination, thereby coming to accept P for instance; that's another issue entirely. Correlative with this task of responding to "What is your take on P?" is a notion of belief that is similarly restricted. When you reply with "I believe P is true" you are not offering an objective assessment but rather a subjective reaction: here is the doxastic direction in which I happen to find myself moved when I weigh those considerations.
This is not an unreflective notion of belief, as it might be the result of years of study, but it's still a mere doxastic inclination in response to first-order evidence. Neither is it a weakly held belief, as the inclination in question might be very strong.
I find that a great many highly intelligent students interpret philosophical questions in this manner. I find it fascinating, partly because I find it foreign. I have the doxastic inclinations like everyone else, but I never thought of them as being beliefs. It took me years to figure it out, but I now suspect that when I ask my students for their beliefs regarding a view or argument, what they hear is often something along the lines described above. In particular, when I go over basic issues in the epistemology of disagreement in a classroom, there always are a bunch of students who are initially completely puzzled as to why disagreement with other people over philosophical issues should have any relevance at all to the epistemic status of their own opinions—even when I tell them that almost every philosopher in the world disagrees with them (as the case may be). Contrary to popular belief among philosophy teachers, these students aren't closet relativists or anything like that; they just interpret the notion of philosophical belief differently from me. I have also met professional philosophers who seem to have the same notion of belief in mind. This inclination-first-order notion of belief even shows up in scientific contexts. For instance, I have had plenty of students hear the questions "Was there a beginning to the physical universe?" and "Assuming there was a beginning, did it have a cause?" in this inclination-first-order manner. I suspect that people hear a question in that peculiar manner when (i) they are aware of some relevant first-order considerations regarding P, so they have some considerations to work with when responding to the question (e.g. unless they are astronomers they will not hear the question "How many moons does Jupiter have?" in this way), and (ii) either they think the question is highly philosophical or they think the relevant experts have come to no consensus, so they are free to ignore them entirely.
In any case, I am examining the case when the philosopher's considered judgment (not doxastic inclination) of all the relevant considerations (not just the first-order ones) is that P is true. A worry here, which I don't know how to finesse, is that the inclination-first-order notion of belief might be very common among philosophers who are voicing opinions outside their specialty areas, if not across the board. If Jen doesn't do any extensive research on free will or determinism, and you ask her what her "take" is on that topic, she might say "I am an incompatibilist" even though all she is reporting is the direction of her inclination after gazing at the first-order evidence she is aware of. If this notion of belief is especially common, then the scope of metaphilosophical skepticism is thereby diminished, but only at the cost of drastically reducing the scope of "full" philosophical belief.
4 The metaphilosophical skeptic’s principles
As I conceive the matter, the metaphilosophical skeptic is convinced that there is something epistemically bad about what the renegade is doing: when the well-informed mere mortal retains her belief, then she has thereby gone wrong in some important epistemic respect. Notice that the focus is on something the renegade does, an intellectual reaction to the discovery of being a renegade. Thus, the skeptic's thesis is just this: the renegade's action of retaining her belief is seriously epistemically deficient. The skeptic thinks that this epistemic defect holds of the renegade almost no matter what the true story of knowledge, justification, evidence, reason, and epistemic blame/permissibility turns out to be, and it holds for virtually any renegade, regardless of her particular circumstances (e.g. the specific H and P involved, the manner in which the belief in P was formed, whether S's belief in P initially amounted to knowledge, and so forth). Therefore, she does not hold her thesis as a result of some complicated epistemological theory (e.g. "Assuming the truth of internalism and evidentialism . . ."); she thinks it follows from general principles that virtually any epistemological theory will embrace. As a result, she needs to be flexible regarding the nature of the "serious epistemic defect," as it can't be captive to the truth of how evidence, knowledge, and warrant are related (for instance). Only when I'm finished with the argument for her thesis will we be in a position to see how to interpret this "defect" in this flexible way.
In order to introduce the first principle the skeptic will use in her argument, suppose that Condition 1 is true of person S. On the face of it, this means that her epistemic superiors have some evidence that she doesn't have—evidence that must be pretty strong, since it convinced the superiors that the renegade's belief P is false. But there are other possibilities. Maybe they have the same evidence as S but have "digested" that evidence properly whereas S has not. And due to that difference, they have seen that the evidence points toward P's falsehood, not truth. Then again, maybe S isn't deficient in either of those ways (namely, lacking evidence or improperly digesting the commonly held evidence) but has made some calculation error that her superiors avoided. (This might happen if P is the answer to some arithmetic problem.) A calculation error doesn't seem like the evidential mistakes in the first two possibilities, at least according to my understanding of "evidence."
For my primary purpose in this essay—the application of the principles below to philosophers—I know of no relevant difference between the first two possibilities. Further, the possibility of a calculation error won't apply in virtually any philosophical case, since very few if any philosophical disagreements pivot on calculation errors.
There are other possibilities, of course. Perhaps S disagrees with her superiors because of a difference in "starting points," and not anything about evidence or calculation. There are various ways of understanding starting points. But I think that it is easy to overemphasize their importance. For instance, if I'm an epistemologist who doesn't work on vagueness or the philosophy of language and logic generally, then when I find out that a large number and percentage of the specialists in the philosophy of logic and language endorse some anti-commonsensical view regarding vagueness (e.g. supervaluationism, epistemicism, non-classical logic) after many years of rigorous investigation, the obvious thing for me to conclude, by far, is that they must have some decent arguments for their view—arguments I don't know about and that must be pretty impressive given that they have actually turned so many quality philosophers against common sense. On the face of it, there is no reason to think there is some mysterious difference in "starting points," whatever they are supposed to be, that is leading them against common sense.
In any case, here is the first principle:
Evidence of Evidence (Ev-of-Ev): if Condition 1 is true of person S, then S has strong evidence E1 regarding her recognized epistemic superiors (her knowledge of the various socio-epistemic facts about top professional endorsement, such as the four parts of Condition 1) that either (1) there is evidence E2 (e.g. astronomical evidence) that S doesn't have or has underappreciated, that her recognized superiors have, and that strongly supports the idea that H is true and P is false (after all, it actually convinced many of those superiors that H is true and P is false), or (2) S has made a relevant calculation error that the superiors have avoided.
I chose the name of the principle based on the view that disjunct (2) will not play a role
in what follows.
Suppose Homer knows that Sherlock Holmes has an excellent track record in murder investigations; he knows that Holmes is investigating the murder of the maid; he then hears Holmes announce after a long investigation that he has done as thorough a job as he has ever done and that the butler definitely did it. At that point Homer acquires excellent evidence E1 (Holmes' word and track record) that there is excellent evidence E2 (the "detective evidence" Holmes uncovered in his investigation, evidence Homer has yet to hear) that the butler did it. Ev-of-Ev is an extension of that idea, applying it to a community of top professionals (instead of just one person, as in the Holmes case).
What if Homer is the butler, Homer is innocent, and he knows perfectly well that he's innocent? I don't think anything changes. Homer still has excellent evidence E1 that Holmes has excellent evidence E2 that he, Homer, killed the maid. It's just that Homer also has other evidence regarding the situation. His evidence that he didn't kill the maid is much, much stronger than Holmes' evidence E2, at least under any ordinary circumstances. Now, if Holmes revealed to Homer that Homer's memory of the relevant particulars was incredibly bad, then Homer would begin to take seriously Holmes' strong detective evidence E2 that Homer committed the crime, as his own evidence that he's innocent—from memory—would now be undermined.
Ev-of-Ev says that when there is a significant number and percentage (both of those) of relevant people who endorse H, where H is obviously inconsistent with commonsensical belief P, then E1 is strong evidence that there is strong evidence E2 that H is true and P is false (where S lacks or has underappreciated E2 and the superiors have E2). I've already addressed my use of "relevant" in parts (ii)–(iv) of Condition 1. Now I will say a few things about the uses of "significant" and "strong."
The use of "significant" comes from part (ii) of Condition 1. One is tempted to ask "How large a percentage and number is significant?" I think this might be like asking "How much money do I have to lose before I'm no longer rich?" or, more to the point, "How much evidence do I need before I'm justified in thinking the butler did it?" Suppose that H is the hypothesis that there are no composite artifacts and P is the claim that I have four baseballs in the trunk of my car. If there are 10,000 metaphysicians and philosophers of physics in the world and 90 per cent of them say H and ~P are true or quite likely to be true (and there is no funny business, such as "There was just one of them a year ago and he cloned himself 9,999 times"), then it looks as though the consequent of Ev-of-Ev is pretty reasonable. If there are just six in the world and five say H and ~P are true, then the consequent of Ev-of-Ev isn't plausible (good percentage but sample size too small). If there are 10,000 of them and just 16 per cent say H and ~P, then the consequent of Ev-of-Ev is much less plausible (good sample size but percentage too small) unless there are extenuating circumstances (e.g. the 16 per cent are the clear superiors of the 84 per cent).8 I doubt whether there are any magic numbers that would make the following statement reasonable: "Ev-of-Ev is true only when 'significant' picks out a number greater than A and percentage greater than B."
The reason S has for thinking her belief is false is a strong reason, something that makes it highly likely that the belief is false. I doubt whether a probabilistic reading of "strong reason" is appropriate here, but something along that line is in order. If you want to make it more precise, think of cases from science or mathematics. With respect to the Jupiter story, you first acquired knowledge of the fact that a large number and percentage of astronomers endorse the "over 200 moons" theory. You rightly took that information, E1, to be excellent evidence that there is some evidence E2 that (a) you don't have, (b) they do have, and (c) strongly supports the "over 200 moons" theory.
We can now introduce the rest of the principles she uses in her argument. To begin, assume that S believes P, S satisfies Condition 1, Ev-of-Ev is true, and the possibility of a relevant calculation error is remote. Then this claim follows:
(i) S has strong evidence E1 regarding her recognized epistemic superiors that there is evidence E2 that S doesn't have or has underappreciated, that her recognized superiors have, and that strongly supports the idea that H is true and P is false.
It's natural to think that the truth of (i) means that S faces an epistemic "threat" to her belief in P. We are familiar with defeaters and defeater-defeaters. For instance, you start out believing P, learn some fact Q that suggests P is false or that your evidence for P is inadequate (so Q is a defeater), but then you learn yet another fact R that shows that Q is false or your evidence for Q is inadequate (so R is a defeater-defeater). Something similar applies in S's case. Although her learning E1 presents a threat to her belief P, or so the skeptic claims, we can easily imagine that she has some extra information that "overwhelms" E1. (I brought up these possibilities when discussing Condition 2 earlier.) If she has that extra information (that in some sense overwhelms E1), then she is not epistemically deficient in any way in retaining her belief in P. Moreover, it seems that in order to avoid all epistemic deficiency in retaining P she must have some epistemic "item" (evidence, reason, reliability, etc.) that overwhelms E1; it's not just an option. The next principle makes this idea explicit:
Evidence & ~Skepticism → Defeater (Ev&~Sk→D): if S has strong evidence E1 regarding her recognized epistemic superiors that there is evidence E2 that S doesn't have or has underappreciated, that her recognized superiors have, and that strongly supports the idea that H is true and P is false, then if in spite of having E1 her retaining her belief in P suffers no serious epistemic defect, then she must have some epistemic item that overwhelms E1.
8 The "unless" clause is important in practice. For instance, the percentage of philosophers who endorse the anti-commonsensical epistemicism increases greatly with familiarity with the relevant topics.
At this early stage we leave open what this item might be (e.g. evidence, reason, reliability, etc.) and what it means for it to "overwhelm" E1.9 Later we will look at concrete proposals. The skeptic says that since (i) and Ev&~Sk→D are true, we have the result that if S escapes metaphilosophical skepticism, then she has some item that cancels out E1.
But remember that S satisfies Condition 2, which according to the skeptic makes it unlikely that S has any item that makes her escape the skeptical pit. Here we are appealing to a principle that allows exceptions:
Condition 2 → No Defeater (2→~D): if S satisfies Condition 2, then it's highly likely that S fails to have any epistemic item that "overwhelms" E1.
When we add that principle to what we have already concluded about S's belief in P, we get the result that it's highly likely that S is caught in the skeptical snare: her retaining her belief in P is seriously epistemically deficient. Again, what the "serious epistemic defect" is will be addressed below in section 12.
5 The metaphilosophical skeptic’s argument
Here is the positive argument schema for metaphilosophical skepticism, which uses the material from the previous section (the negative arguments, which consist of responses to objections, are in sections 6–11):
(a) A large number and percentage of the members of our intellectual community of contemporary philosophers and their advanced students satisfy Condition 1 with respect to some claims P and H. Moreover, the philosophers are renegades with respect to those claims: they think P is true.
(b) Ev-of-Ev is true for the cases mentioned in (a): if Condition 1 is true of one of the philosophers mentioned in (a), then she has strong evidence E1 regarding her recognized epistemic superiors that either (1) there is evidence E2 that she doesn't have or has underappreciated, that her recognized superiors have, and that strongly supports the idea that H is true and her contrary belief P is false, or (2) she has made a relevant calculation error that the superiors have avoided.
(c) But in the case of the philosophers and theories mentioned in (a), possibility (2) from (b) is not realized.
(d) Thus, by (a)–(c) each of the philosophers mentioned in (a) has strong evidence E1 regarding her recognized epistemic superiors that there is evidence E2 that she doesn't have or has underappreciated, that her recognized superiors have, and that strongly supports the idea that H is true and the contrary belief P is false.
(e) Ev&~Sk→D is true for the cases mentioned in (a): if one of the philosophers mentioned in (a) has strong evidence E1 regarding her recognized epistemic superiors that there is evidence E2 that she doesn't have or has underappreciated, that her recognized superiors have, and that strongly supports the idea that H is true and the contrary belief P is false, then if in spite of having E1 her retaining her belief P suffers no serious epistemic defect, then she must have some epistemic item that overwhelms E1.
(f) Thus, by (d) and (e), for each of the philosophers mentioned in (a), if in spite of having E1 her retaining her belief P suffers no serious epistemic defect, then she must have some epistemic item that overwhelms E1.
(g) But most of the philosophers mentioned in (a) satisfy Condition 2 with respect to H: if they have or know of any evidence or other epistemically relevant items that seem to cast doubt on H, or the negation of P, or the evidence the advocates of H have for H, then such items, if carefully, expertly, and fairly considered by the members of their community of professionals who are thoroughly familiar with the relevant issues, would be nearly universally and confidently rejected as insufficient to rule out H, the evidence for H, or the negation of P.
(h) 2→~D is true for the cases mentioned in (a): if one of the philosophers in (a) satisfies Condition 2, then it's highly likely that she fails to have any epistemic item that "overwhelms" E1.
(i) Thus, by (f)–(h) it's highly likely that her retaining her belief P suffers a serious epistemic defect.
9 I am not saying that the item in question has to be a defeater as currently understood in the literature; I just couldn't think of a better term to use in the principle. What the item may amount to will become clear in section 11.
In what follows I am going to assume without argument that the argument (a)–(i) is sound for hypotheses that are live in virtue of scientific evidence ("scientifically live," so to speak): when the live hypothesis that conflicts with your belief firmly belongs to science, then your retaining your renegade belief is seriously epistemically defective. So the argument is assumed to work in the Jupiter case. My project in the rest of this essay is threefold: present the case that the argument is sound for ordinary philosophical disagreements (that don't go violently against common sense), present the case that the argument is sound for the error theory cases, and see what follows from the hypothesis that metaphilosophical skepticism is false in the error theory cases.
The objector to the metaphilosophical skeptic needs to defend her claim that only scientific liveness and mere mortality, and not purely philosophical liveness and mere mortality, are strong enough to generate a skeptical result. She needs, in other words, to point out some relevant epistemological difference between science and philosophy, one that shows that purely philosophical liveness and mere mortality are epistemically impotent. There are loads of differences, of course, even interesting epistemological ones. The goal is to find an epistemological one that justifies the thought that the argument fails for purely philosophical hypotheses even though it's sound for scientific hypotheses.10
10 There are many objections to metaphilosophical skepticism (e.g. no one is forced to bite the bullets
error theorists bite) that I won’t deal with because they fail to even suggest any weakness in the original
argument or thesis.
6 Comments on premise (a)
Premise (a), which says that the renegade situation occurs in philosophy, is pretty clearly true for many philosophers and philosophical claims. I can think of just two objections. Here is the first:
The metaphilosophical skeptic is assuming that there are philosophical experts, just like there are
astronomical experts. But this is a dubious assumption. For instance, if there are philosophical
experts, then one would think that both Dennett and Chalmers are experts in the philosophy
of consciousness. But given the diversity of their views on consciousness, at least one of them is
almost completely wrong about the nature of consciousness, thereby ruining his chance at
expertise. And if virtually no one is an expert on material composition or vagueness, for instance,
then the fact that a significant number and percentage of those non-experts endorse radical
error theories doesn’t mean that there is strong evidence for those theories.
The idea here is vague, but I assume the objection is targeting Condition 1, which could
be crudely summarized with “Person S knows that lots of the experts disagree with her.”
However, in the careful presentation of the metaphilosophical skeptic's argument there is no mention of philosophical "experts." The closest claim occurs in part (iii) of Condition 1:
Many of the professionals who endorse H and reject P are generally more informed than S is on
the topics involving H, they have more raw intelligence than she has, they have thought and
investigated whether H is true longer and in more depth than she has, they have thought about
and investigated the topics surrounding H longer and in more depth than she has, they are just
as or even more intellectually careful than she is, they are no more relevantly biased than she is,
and they have understood and fairly and thoroughly evaluated virtually all the evidence and
reasons she has regarding P (and usually much additional evidence or reasons).
Whether the philosophers described count as "experts" depends on what one means by that vague term. In any case, there is no reason I know of for thinking that this epistemic condition isn't known to be satisfied for many of us with regard to many of our philosophical beliefs (setting skepticism aside).
Here is another objection to (a):
Even our best and brightest are utterly epistemically ill-equipped to find the truth regarding the philosophical problems that we all work on. We are intelligent enough to pose questions that we are incapable of answering. We might as well be young children wondering what it's like to be married for fifty years. In some sense the twenty-year-old is in a better position than the eight-year-old, but since both are so far from epistemic adequacy, part (iii) of Condition 1 doesn't really apply to anyone in philosophical matters, as we are all about equally awful when it comes to investigating philosophical questions.
Sometimes I am inclined to accept this depressing view. (Why else are we discussing the very same things Aristotle investigated so many centuries ago, and in largely the same terms?) But if it's true, then surely metaphilosophical skepticism is true too, even if the argument for that thesis given above fails.
philosophical renegades 139
I conclude that premise (a) is true for many philosophers and their philosophical
beliefs. The only wrinkle is whether it is true for error theories, the extreme case. In the
rest of this section I argue that it is true for those theories.
To begin, I’m stipulating that the philosophical hypotheses are genuine error theories.11 For instance, as I understand them, compositional nihilism and artifact nihilism say three things: there are no laptops; ordinary occurrences of “I have a laptop,” “Laptops exist,” and “Some laptops are over $1000” are all just plain false; and ordinary people have the corresponding ordinary false beliefs about laptops (so it isn’t the case that the sentences are all false but our beliefs have some fancy semantics that makes them true even when “properly” expressed by the false sentences). Theories that merely look error-theoretic about laptops (e.g. “Laptops exist and many cost about $800 but none of them really exist, or exist in the fundamental sense,” or some contextualism about “exist” or “laptop”) are not to the point (e.g. the theories of Horgan and Potrč (2000) or Ross Cameron (2010) are probably not error theories in my sense).12 I get to pick and choose among theories here, landing on the ones genuinely contrary to common sense.
Even with that clarification, I can think of four promising objections to premise (a) applied to error theories.13
First, some philosophers might be tempted to say “Well, no one really believes radical error theories that are genuinely inconsistent with common sense beliefs; so those theories are never live and Condition 1 is thereby not met”; such philosophers are misinformed. Please keep in mind that it is neither here nor there whether as a matter of contingent fact any of these particular genuine error theories is live right this minute; we should not be parochial. Philosophers will believe almost anything—even today, when the percentage of philosophers who endorse the truth of commonsense belief is peculiarly high, historically considered. All one has to do is peruse philosophy journals from fifty or a hundred years ago to get a sense of how what seems obviously true in one era is judged obviously false in another era. And in that exercise we are looking at just one century of the actual world.
Here is a second objection to premise (a) applied to error theories:
Although I recognize the anti-commonsensical philosophers to be much more informed and productive than I am when it comes to the topics in question, they aren’t my superiors when it comes to evaluating those anti-commonsensical theories. David Lewis, for instance, might leave me in
11 Whether “error theory” can be defined is beside the point. I’m using that phrase to pick out the theories listed in the essay as well as others that similarly deny huge portions of beliefs that are nearly universal, in being commonsensical in almost all cultures throughout almost all history (including today).
12 Compositional nihilism strikes some people as incoherent: it says that there are some particles arranged tree-wise, it says that there are no trees, and yet this conjunction is incoherent because the first condition is metaphysically sufficient for the falsity of the second condition. But that alleged metaphysical connection is denied by the nihilist. Nihilism might be necessarily false (like many philosophical theses) but it’s not obviously so.
13 Some objections clearly won’t work. Hegelians deny that we philosophers ever disagree with one another in any “substantive” way (Hegel 1995). A desperate move to say the least, despite suggesting deep thoughts about philosophical progress from a temporally wide perspective.
140 bryan frances
the dust when it comes to generating various worthwhile arguments, concepts, claims, and theories, but he is no better than me when it comes to determining whether those theories and claims are really true. In effect, there are far fewer mere mortals than the metaphilosophical skeptic thinks.
I suppose that view might be correct in a few isolated cases, but for the most part people who are my philosophical superiors in all those ways will be better than I am at evaluating claims in the corresponding area. Take a particular case: vagueness. Timothy Williamson, Roy Sorensen, Hud Hudson, Paul Horwich, and other experts on vagueness are epistemicists (in the possibility we’re envisioning, not to mention our actual present time). I, on the other hand, barely even understand supervaluationism, how the “definitely” operator works or what it means, the issues relevant to Gareth Evans’ famous argument against vague identity, and so on. It’s silly to think that I’m anywhere near as good as Williamson, Sorensen, Horwich, Hudson, and the others in evaluating the pros and cons of epistemicism. While it is certainly possible that I stumble on an argument that they don’t know about or haven’t sufficiently appreciated that dooms their anti-commonsensical theory, we should stick with the most common scenario, in which the mere mortal has not had the rare fortune to discover some crucial bit of evidence that all the anti-commonsensical philosophical experts have missed or failed to sufficiently appreciate. Part (iii) of Condition 1 in particular is definitely true of me with respect to the topic of vagueness (as well as many other topics that generate radical error theories), and there is nothing exceptional about that fact.
A third objection says that there are no “genuine” error theories as characterized
earlier. Consider the following rough train of thought.
When people “accept” a certain claim in ordinary life, do they think it’s literally true or are they best interpreted as thinking that it’s true for all practical purposes? For the most part, they aren’t even aware of the contrast, so how do we interpret their assent to “There are four baseballs in the trunk”? (It won’t help to ask them, as they don’t know the difference.) And what kind of commitment is sufficient for belief? Does it have to be literal truth or just practical truth? Or is “belief” polysemous? Maybe it’s indeterminate whether they have “literal belief” or “practical belief.”
One might take those and similar reflections and (somehow) argue that charity of interpretation requires us to say that although (a) the error theorist truly believes that there aren’t four baseballs in the trunk, (b) she truly believes that her belief is literally true, (c) the ordinary person truly believes that there are four baseballs in the trunk, and (d) she truly believes that her belief is literally true, “belief” is polysemous and the two operative notions of belief differ in such a way that there is no disagreement: the two baseball beliefs can both be true with no inconsistency (perhaps “true” is polysemous too). I don’t know how the argument for this combination of claims would go. For one thing, it hardly seems “charitable” to say that the error theorist isn’t disagreeing with the ordinary belief when she insists that she is denying the ordinary person’s baseball belief. But what if this no-disagreement view is true anyway?
These are deep waters, but I don’t think they matter to the metaphilosophical skeptic’s argument: strictly speaking, H need not be logically inconsistent with P. Consider again the Jupiter story. You think Jupiter has fewer than ten moons. You read in the science section of the New York Times a long article detailing the fact that 95 per cent of astronomers have over the last quarter century come to think that Jupiter has over 200 moons. The article includes descriptions of the new methods used to come to the new consensus. However, it turns out that the reporter failed to understand that the astronomers are employing an alternative conception of a moon in such a way that their “over 200 moons” belief isn’t at all inconsistent with your “fewer than ten moons” belief. This difference in conceptions is missed by the reporter (imagine that). Even if all of that is true, it seems to me that in this situation you have still been presented with excellent evidence E1 that there is excellent evidence E2 against your belief (so premise (c) is true; thus, (a) can be altered so that H and P need not be logically inconsistent). It turns out that E2 is not excellent evidence against your “fewer than ten moons” belief even though it may be excellent evidence for the astronomers’ “over 200 moons” belief. But you have no evidence for that fact (the fact that E2 is not excellent evidence against your belief) and plenty of evidence against it. So even if the error theorists aren’t really disagreeing with the ordinary person’s belief—or the amateur metaphysician’s belief—it seems that the lesson applies anyway, just as in the Jupiter case.
If the amateur philosopher knows, or at least has good overall evidence, that her belief isn’t really contradicted by the error theorist’s theory, then perhaps she has not been presented with excellent evidence that there is significant evidence against her commonsensical belief (as Ev-of-Ev says). But I am confident that not many philosophers are in such a position. So, even when the objection succeeds it will have vanishingly small significance. Near the end of section 11 I will remark on the peculiar way that error theories deny common sense.
Now for the fourth objection to (a) applied to error theories. Assuming that there are genuine error theories and that they are often live, premise (a) makes the additional claim that many contemporary philosophers satisfy Condition 1 with respect to those error theories. But that last claim can be questioned. In section 3 I gave the trinity example, in which the atheistic philosopher will often think that some theistic philosophers are “more informed” than she is regarding the trinity, at least in two senses: they will know more about the various theories of the trinity and they will know many more conditional truths about the trinity. But in a more substantive sense of “informed,” the atheistic philosopher will judge the theistic philosophers to be less informed than she is when it comes to the trinity: after all, she thinks there is no trinity to puzzle about (all there is is the one human, Jesus).
The reason this phenomenon is relevant is that some philosophers have similarly dismissive views of whole swaths of philosophical inquiry. For instance, some people think analytic metaphysics is nonsense all the way through. I once had a colleague who thought that epistemologists had nothing interesting to work on, and as a consequence was dismissive of the entire enterprise. These philosophers will not count as renegades with respect to theses in those philosophical areas, thereby escaping the metaphilosophical skeptic’s argument.
I will offer just a few comments regarding these “dismissive” philosophers. First, I will be considering the possibility that many renegades have epistemic items—such as Moorean moves—that are sufficient to avoid metaphilosophical skepticism. It’s not the case that the only way to avoid skepticism is to think that Kit Fine and Ted Sider are not your epistemic superiors when it comes to material composition and persistence through time, Tim Williamson and Paul Horwich aren’t your superiors regarding vagueness, and so on. There is hope for the commonsensical philosopher even if she respects her colleagues and isn’t arrogant! Second, notice that the error theories aren’t all from metaphysics (although most are). Error theories show up in the philosophy of language and logic, the philosophy of mathematics, the philosophy of mind, metaethics, and the philosophy of physics as well. And don’t forget traditional skepticism, which is an epistemological error theory. So, in order to escape the clutches of Condition 1 via the dismissive attitude, a philosopher would have to be dismissive of an enormous portion of philosophy. Although I won’t argue the matter here, I strongly suspect the cure is worse than the disease: the epistemic sin of rejecting the relevant areas of philosophy (thereby avoiding the metaphilosophical skeptic’s snare) is larger than the epistemic sin of being a well-informed mere mortal who retains her commonsensical beliefs. Furthermore, one will be faced with the task of responding to the objection that one’s own philosophizing is hardly better than that found in the dismissed areas. Indeed, it is difficult for me to see how an informed philosopher could be epistemically responsible in dismissing any one of the areas that the error theories fall into, let alone all of them. For instance, all one has to do in order to see the merit in various odd theories in metaphysics is spend a few months thinking about the Statue-Clay case, the Tibbles-Tib case, the Ship of Theseus case, the problem of the many, and a few other puzzles (see sections 1–3 of chapter 5 of Sider 2001 for an introduction to some of these issues). To see the merit in inconsistency views of truth, spend a few months grappling with the semantic paradoxes. I’ll consider the Moorean move in section 11.
7 Comments on premise (b)
Premise (b), which is Ev-of-Ev, says that if Condition 1 applies to S, then S has superb evidence E1 (her knowledge of facts about “expert” endorsement) that there is strong evidence E2 (e.g. philosophical arguments) that H is true and P is false. This principle should not be terribly controversial. It doesn’t mean that S should think that E2 really is strong evidence; it doesn’t even say E2 exists. S might have other evidence E3 that suggests—or even proves—that E2 is quite weak or non-existent, where E3 is more impressive than E1. For instance, S might know of some relevant fact that the superiors possess but have failed to sufficiently appreciate. (Though Condition 2 will close this possibility off.) S might know that although a significant number and percentage (say 65 per cent) of the relevant superiors think H is true and P is false, a whopping 100 per cent of the thirty or so superiors commonly acknowledged to be the most knowledgeable about H and P are firmly convinced that H is false and P is true—despite the fact that these thirty philosophers are fiercely independent thinkers who disagree with one another all the time on many related issues. In such a case S can quite reasonably (not to say truthfully) conclude that the many advocates of (H & ~P) have made some error somewhere that in turn their epistemic superiors have noticed, even though S might not have the slightest idea what it is. (This is similar to the Jupiter case with the small number of renegade astronomers.) So S starts out knowing P, becomes a bit concerned when she finds that 65 per cent of the superiors think ~P (since she has been presented with good sociological evidence E1 that there’s good philosophical evidence E2 against her belief), but then is reassured when she later learns that 100 per cent of the top philosophers independently think P (as she has now been presented with good sociological evidence E3 that the philosophical evidence E2 against her belief has been neutralized). In this scenario in which she retains her belief, it seems pretty reasonable that she keeps her knowledge (the metaphilosophical skeptic can admit all of this). But it doesn’t do anything to suggest that Ev-of-Ev is wrong.
Here is an objection to Ev-of-Ev.
We defer to scientific expertise and liveness; and we ought to. There seems to be a pretty tight connection between being scientifically live and being probably-roughly-true: if a hypothesis has the former quality, then there is good reason to think it has the latter quality. Crudely put, we are all aware that science is reliable. That is why a scientifically live hypothesis that conflicts with your belief poses a formidable epistemic threat to your belief, a threat that must be defused in order for the belief to be knowledge (at least provided you’re aware of the threat). But no such connection holds between the characteristic of being philosophically live and being probably-roughly-true. Crudely put, we all know that philosophy is unreliable. So expert endorsement fails to amount to significant evidence. A philosophically live hypothesis doesn’t threaten our contrary beliefs.
The target of this objection seems to be Ev-of-Ev, the principle that mere mortals have strong evidence of strong evidence for H. Alternatively, perhaps the objector is agreeing that such evidence E1 exists but is saying that by going through the above reasoning the mere mortal gets an epistemic item sufficient for overwhelming E1; that would mean the objection really targets 2→~D. In any case, I think the objection fails.
The supporter of this objection needs to explain why the fact that we often defer to scientists but not philosophers is epistemically significant. Clearly, if we did so merely because philosophers smell worse than scientists, this would not mean that philosophical liveness was less epistemically potent than scientific liveness. So the strength of the objection lies in the plausibility of its explanation for the difference in deferment practice: the objector has to explain why we justifiably fail to defer to philosophers.
The objection makes an attempt: we defer to one but not the other because we are aware that the connection between being scientifically live and being probably-roughly-true is much tighter than the connection between being philosophically live and being probably-roughly-true. But I think that anyone who has actually done some science knows that that explanation is empirically false. Scientists put forth false views all the time, and in large quantities. Philosophers and laypeople end up hearing about just the supremely best ones, but the oodles of run-of-the-mill false ones are there too. Nevertheless, the objection is worth taking seriously because we are aware that the primary scientific theories, the ones that have been around a long time and are pivotal for research, are epistemically much better than the analogous philosophical theories.
That sounds reasonable: we are aware that long-term vigorous endorsement of a scientific theory by scientists who are experts in the relevant area provides much more reason to believe that theory than long-term vigorous endorsement of a purely philosophical theory by philosophers who are “experts” in the relevant areas. But this argument has a gap in the crucial spot. It doesn’t matter whether we know that the significant scientific endorsement → probably roughly true connection is stronger than the significant philosophical endorsement → probably roughly true connection. The metaphilosophical skeptic can agree with that comparative claim. Her point, which is obviously true, is that the comparative claim is moot. The only thing that matters is this: does long-term vigorous endorsement of a purely philosophical theory by the top philosophers in the relevant areas provide enough reason to sabotage my retention of my amateur belief that those theories are false and the relevant part of common sense is true—even if it isn’t as epistemically powerful as the analogous scientific endorsement? That science beats the hell out of philosophy in this one respect gives us no reason at all to think that philosophical liveness is not epistemically significant enough for the truth of metaphilosophical skepticism. That’s the gap in the objection.
What if a philosopher comes to reasonably believe not only that the significant philosophical endorsement → probably roughly true connection is much weaker than the analogous science connection (a belief that the metaphilosophical skeptic may agree with) but also that it is so weak that E1 does not, in fact, supply her with strong evidence that there is strong evidence against P—despite the fact that the above objection supplies no reason for thinking this? This would mean, of course, that she thinks Ev-of-Ev is false. But so what? The skeptic relies on just the truth of that principle; the renegade doesn’t have to believe it in order for it to do its work in the skeptic’s argument.
But suppose I’m wrong and Ev-of-Ev is false for philosophy. As with the last objection to premise (a), the cure is epistemically worse than the disease. If Ev-of-Ev is false for philosophy, then we have scenarios such as this: when you learn that 44 per cent of philosophers of logic and language say H with respect to theories of truth, you have not acquired strong evidence that they have strong evidence for H. Why might that be? The only answer I can think of: it’s because those philosophers don’t have any strong evidence for H, even though they’ve been evaluating H for many years and they started out not only with no emotional attachment to H (H isn’t anything like “God exists”) but a strong disposition to reject H (recall that H is an anti-commonsensical claim). If all that is true, then it says something epistemically horrible about philosophy.
8 Comments on premise (e)
I have already tried to motivate premise (e), the Ev&~Sk→D principle, in section 3. This principle says that if one of the philosophers mentioned in (a) has strong evidence E1 that there is evidence E2 that she doesn’t have or has underappreciated (and so on), then if in spite of having E1 her retaining her belief P suffers no serious epistemic defect, then she must have some epistemic item that overwhelms E1. The skeptic is not putting any untoward limits on the items that can do the work in overwhelming E1. Perhaps all that is needed is a huge amount of warrant for P obtained in the utterly ordinary way (e.g. seeing a baseball in the trunk of one’s car, for the compositional nihilism case), so nothing like an argument against H or E2 is required. Further, the item need not “counter” E2 at all. For instance, a person who is mathematically weak will still know that 1 ≠ 2 even if she can’t find any error in a “proof” that 1 = 2 (the “proofs” in question usually illicitly divide by zero at some point). She has no direct counter to the derivation; all she has is a phenomenal amount of warrant for her commonsensical belief that 1 ≠ 2. Perhaps something similar holds in the philosophical cases we are examining, at least for the error theory cases. I will consider that possibility in section 11. All the Ev&~Sk→D principle is saying is that the philosopher needs some item that can do the trick; the skeptic is not demanding that the item counter E2 or H at all—even if she would be right to make such a demand.
I will consider just one objection to Ev&~Sk→D.
(i) In truth, there is no evidence for the purely philosophical error theories, and (ii) because of that fact their sociological liveness does not threaten our beliefs (by lowering their warrant levels enough so that they don’t amount to knowledge). The arguments supporting those theories are not obviously mistaken in any way, which is why excellent philosophers continue to endorse them, but that hardly means that those arguments supply honest-to-goodness evidence for their conclusions. For instance, if all the experts who endorse hypothesis H and reject common belief P are using fatally flawed methods in gathering evidence for H and against P, then such methods are not generating evidence for H or against P, since “evidence” is a kind of success term that rules out this kind of pseudo-support. This observation is especially warranted for pure philosophy since there is serious reason to think that large parts of purely philosophical argument (e.g. synthetic a priori reasoning) are irredeemably flawed. In other words, in order to escape metaphilosophical skepticism it is not necessary to have any interesting or impressive epistemic item when E2 doesn’t actually exist; and for the error theories E2 does not in fact exist.
I doubt whether claim (i) is true, for reasons I’ll get to in section 11, but I won’t evaluate it here, as the objection fails on claim (ii), whether or not claim (i) is true. An example will prove this. Pretend that all the science behind radiometric dating (the main method for figuring out that many things on earth are many millions of years old) is fatally flawed in some horrendous manner, so those methods don’t generate (real) evidence (in the “success” sense of “evidence”). Even so, since I have become aware of the fact that the scientific community is virtually unanimous in the view that radiometric dating is accurate and shows that the Earth is hundreds of millions if not several billion years old, my creationist belief that the Earth is just a few thousand years old doesn’t amount to knowledge, even if it’s true and did amount to knowledge before I heard about radiometric dating and scientific opinion. Awareness of significant expertly endorsed “pseudo-evidence,” if you want to call it that, is sufficient to sabotage belief retention in many cases.
Further, the metaphilosophical skeptic need not hold that the evidence for purely philosophical error theories is often, or at least sometimes, good enough to warrant belief in the truth of those theories. On the contrary, she might insist that the evidence for error theories is almost never that good, so the philosophers who actually believe these theories are doing so unjustifiably.
9 Comments on premise (g)
I don’t see much basis for quarreling with premise (g), which is the claim that Condition 2 applies to many typical philosophers with respect to philosophical claims they disagree with. Premise (g) says that the specialists who accept H and reject P (as well as the experts who are agnostic on both H and P) are aware of and unimpressed by S’s evidence and other epistemically relevant items that do or might support P or cast doubt on H or E2. With regard to the error theories, in particular, it is implausible to think that I, the amateur, have some special piece of evidence or whatnot that the epistemicists or compositional nihilists or moral error theorists have overlooked, as I don’t do expert work in any of those areas. Perhaps the error theorists have seriously underestimated the epistemic warrant supplied by, for instance, the alethic reliability of the belief-producing cognitive processes that led to my belief in P, but that is another matter—a potentially important one I’ll deal with in section 11. Of course, I might be one of the lucky ones who have reasons that not only cast doubt on H, ~P, and/or E2 but that would be judged by the advocates of and agnostics on H to be new and impressive; premise (g) allows for the existence of such people. The metaphilosophical skeptic’s point with (g) is that these people are uncommon.
10 Initial comments on premise (h)
This premise, principle 2→~D, says that if a philosopher in (a) also satisfies Condition 2, then it’s highly likely that she fails to have any epistemic item that overwhelms E1. Here is one objection to this premise:
When I look at the diversity of opinion of my epistemic superiors regarding H and P, and I see that they are utterly divided on whether H is true despite the fact that they are in constant and rich communication with one another concerning the considerations for and against H, that tells me that those considerations are just plain inconclusive. Let’s face it: despite their great intelligence and best efforts they are confused by the arguments they are working on. In some real sense the specialists’ opinions cancel out when they are divided, as is the case when, say, 40 per cent accept H, 30 per cent reject H, and 30 per cent withhold judgment. And once their views cancel out, we are left where we started, with our initial belief P unthreatened.
The idea here is that the renegade could go through this reasoning and thereby acquire an epistemic item sufficient for overwhelming E1.
The first problem here is that none of this singles out pure philosophy over pure science; the second problem is that it’s pretty clearly false in the scientific case (which suggests it won’t be true in the philosophy case). Just think of the Jupiter case again: if 40 per cent of the astronomers accept the “over 200 moons” theory, 30 per cent reject it, and 30 per cent withhold judgment, this shows that the issue is unresolved in the public square, which is exactly where the mere mortal lies. She would be a fool to think her childhood belief regarding the number of moons was correct.
Here is another objection to 2→~D:
Some philosophers have given very general arguments that suggest that virtually all purely philosophical error theories have to be false or at least not worthy of serious consideration due to radically insufficient overall evidence (e.g. Mark Johnston 1992a, 1992b, 1993; Paul Horwich 1998; Frank Jackson 1998; Crispin Wright 1994; Thomas Kelly 2008; for rebuttals to these arguments see Alex Miller 2002 and Chris Daly and David Liggins 2010). Those philosophical arguments supply me with good evidence E3 that the evidence E1 (based on expert endorsement) for the evidence E2 (based on philosophical arguments) for the error theory is misleading. That is, the anti-error theory arguments show that even though E1 exists and is good evidence for the existence and strength of evidence for an error theory, the evidence E3 presented by the anti-error theory philosophers cancels out E1. So, most renegades are exceptions to 2→~D.
Let us assume on behalf of the objector that philosopher Fred knows the soundness of some general line of reasoning that really does show that E1 is misleading evidence for E2, since the line of reasoning Fred knows about shows that E2 is actually quite weak. So we are assuming that there really is a general line of argument that successfully proves that all these error theories are virtually without any support. Hence, Fred is one of the exceptions to 2→~D. Unfortunately, all this will demonstrate is how lucky Fred is, since the great majority of philosophers are not so epistemically fortunate, regardless of whether such arguments are possible or actual or even actually known to be sound by a few fortunate individuals such as Fred. Matters would be different if it were very widely known that such anti-error-theory arguments are correct, but it goes without saying that this isn’t the case now; nor was it the case in the past (and there is no reason I know of to think it will become widely known in the near future). The metaphilosophical skeptic can consistently combine her skepticism with the assertion that she, like Fred, knows perfectly well that all error theories are false and have no significant supporting evidence (in the “success” sense of “evidence”).
There is another way to construe the objection. Perhaps the idea is that if a philosopher sincerely thinks she has an excellent argument against all (or a good portion) of the error theories, then that’s enough to render her belief retention reasonable. Even if she’s wrong about having such an argument, if she merely thinks she has one, then she can hardly be blamed for sticking with her commonsensical belief. Her conviction that she has the magic argument against error theories is enough to win the reasonableness of her retaining her commonsensical belief; the conviction suffices as the required epistemic item.
There’s some truth to that line of reasoning. If one really is so clueless that one thinks one has, as an amateur with respect to the operative subfields, an argument of staggering philosophical consequence—which is obviously what her argument would have to be—then there is something epistemically appropriate in her sticking with her belief. The skeptic should allow this: the belief retention is epistemically flawless. But then she finds epistemic fault somewhere else: the commonsensical philosopher’s belief that she has the magic pro-commonsense argument that “should” be rocking the philosophical community. Only the desperately naïve, arrogant, or otherwise epistemically flawed professional philosopher could have such a belief. So, although she may escape the metaphilosophical skeptical snare, she is merely trading one epistemic sin for another.
In spite of my rejection of those two objections to 2→~D, in my judgment this premise is the one that is most likely to be false for the cases we're interested in. The obvious candidates for epistemic items that are up to the job are, I think, already listed in Condition 2; so they won't help us avoid the skeptical snare. The whole interest in this premise lies in the possible non-obvious exceptions: do typical philosophers who satisfy Condition 1 and Condition 2 usually have epistemic items strong enough that they suffer no epistemic defect in retaining their commonsensical beliefs?
11 The Overwhelming Warrant objection to premise (h)
The one objection to metaphilosophical skepticism that I think has a prayer of working offers an affirmative answer to the question just posed: a considerable percentage of renegades have epistemic items that are sufficient for "overwhelming" E1. Here is the objection:
Let us admit that when a large number and percentage of recognized "experts" in philosophy believe an error theory based on epistemically responsible investigations over many years and what they consider to be multiple lines of significant yet purely philosophical support, we are faced with impressive sociological evidence E1 that those theories have impressive philosophical evidence E2 in their favor—where E2 is impressive enough to actually convince all those legitimate experts that the error theory is really literally true. After all, it's not plausible that the epistemic weight of their considered opinion all of a sudden vanishes as soon as they move from scientifically relevant considerations to purely philosophical ones.
Despite all that, our commonsensical beliefs have huge amounts of warrant backing them up and that's an epistemic item that suffices for the consequent of Ev&~Sk→D. Perhaps reliability comes in here: commonsensical beliefs such as "I have a laptop" and "Dogs are dogs" are formed via extremely reliable and otherwise epistemically upstanding processes, and so the resulting beliefs have a great deal of warrant—even if we aren't aware of its strength. In any case, it takes a correspondingly very powerful body of evidence to render those beliefs unjustified overall, and although we have good reason to think the purely philosophical arguments E2 for error theories are good, they are not that good. Science might be up to the task (as science rejected the "Earth doesn't move" bit of common sense) but not pure philosophy.
It’s worth noting right away that this objection has no applicability outside of error
theories. It would be over the top to think that one’s belief P had overwhelming war-
rant when it comes to content externalism, four-dimensionalism, the Millian view of
proper names, or hundreds of other philosophical theses. Thus, this objection will not
justify the renegade in any of those cases.
philosophical renegades 149
There are a couple of ways to fill out the objection. It definitely makes this comparative claim W, which I will focus on below: the warrant that renegades have for P overwhelms the warrant supplied by E2 against P (when it comes to philosophical error theories). At this point the objection can proceed in either of two ways. It can say that the mere truth of W gives the renegade an epistemic item strong enough that her belief retention is not epistemically defective—so the renegade doesn't also need to be aware that W is true. Alternatively, it can say that the renegade is off the hook only if she is aware that W is true: only with that awareness does she actually possess an epistemic item sufficient to secure her belief retention. Here is a Moorean speech that would express the renegade's endorsement of W:
I am epistemically justified in thinking that any argument that says there are no cars, for instance, has just got to have a premise that is less justified than the claim that there are cars—even if I can't put my finger on which premise in the anti-commonsensical argument is guilty. I am justified in gazing at error theories and just saying "That can't be right." Notice that I am engaging in a reliable belief retention scenario: although E1 is indeed strong evidence that there is strong evidence E2 for H and against P, E2 is not strong evidence for H when two conditions hold: E2 comes exclusively from philosophy and P is a bit of universal common sense. Philosophy has a lousy record of refuting common sense! I grant you that the specialists who consider error theories to have a good chance at being true know a lot more than I do regarding the relevant philosophical matters, but they're not my superiors when it comes to that simple judgment about philosophy. Further, since this Moorean response is so well known, a great many of us renegades have a good reason to think that the evidence E2 for error theories stinks. So, we renegades have an epistemic item good enough to defang the skeptic; we fall into the class of exceptions to 2→~D.
Although there are several problems with the Overwhelming Warrant objection, I will look just at claim W, offering five criticisms of it.

First criticism. Much of the warrant for the commonsensical P is also warrant for the anti-commonsensical H. For instance, much of the warrant for "Here is a tree" is also warrant for "Here are some particles arranged tree-wise" (which is the corresponding sentence compositional nihilism offers). In fact, it's often remarked that perception offers no warrant for the former that it does not also offer for the latter. This holds for many philosophical error theories. And if that's right, then it's hard to see how the comparative warrant claim W can be true.
Second criticism. We are familiar with the fact that science, including mathematics, often overthrows bits of common sense. Philosophers often respond with "Yes, but that's science; philosophy's different." I looked at that objection in section 7. But the lesson I want to press here can just grant that philosophy will never, ever overthrow common sense. My objection starts with the obvious observation: we are already used to common sense being proven wrong or highly doubtful. It has already failed; the flaws are plain to see to anyone with some knowledge of history. So why on earth should we keep on thinking it's so epistemically powerful given that we have already proven that it's false or at least highly contentious in many stunning cases? Sure, you can find many commonsensical
beliefs that have escaped scientific refutation thus far, but so what? What makes you think that with regard to your favorite bits of common sense this time common sense has got things right? Pretend that commonsensical beliefs are all listed in one giant book (more plausibly, the book consists of just a great many paradigmatic commonsensical beliefs that are representative of the others). The book was around two thousand years ago. Over the centuries many of the beliefs have been refuted by science: pages and pages have been crossed off. So why should we think the remainder—the ones that have not yet been refuted by science—are so epistemically secure? The remainder at the start of the twentieth century wasn't secure; why think the remainder at the start of the twenty-first century is secure? On the contrary, we know that the book is unreliable, and the mere fact that some beliefs are not yet crossed off gives no reason to think they never will be.
This is not to say that we should withhold judgment on every commonsensical claim
not yet refuted. This isn’t the Cartesian reaction to surprising science. Instead, it’s the
much more plausible idea that we should no longer think that the warrant for the
remaining commonsensical beliefs is enormous.
Thus, I find it hard to swallow the idea that today's common sense that has yet to be refuted by science has some enormous body of warrant backing it up. And that makes me wary of the comparative warrant claim W that the warrant that renegades have for P overwhelms the warrant supplied by E2 against P: it's not clear that the first bit of warrant is being correctly estimated.
Third criticism. In my judgment the arguments in favor of at least some error theories are especially strong, which again puts serious doubt on W. The complete and utter failure to defuse certain anti-commonsensical philosophical arguments suggests that the philosophical reasoning in those arguments, E2, is not weak—on the contrary it's very strong. For instance, some philosophers have noted that the basic argument for epistemicism (or at least sharp cutoffs in truth conditions) has just about the best chance to be the strongest argument in the history of philosophy, notwithstanding the fact that few philosophers can bring themselves to accept it—although it's certainly telling that the percentage of accepting philosophers increases enormously the more familiar one is with the relevant issues. Often the paradox is introduced via a simple argument form such as this:
1. A person with $0 isn’t rich.
2. If a person with $n isn’t rich, then a person with $(n + 1) isn’t rich, for any whole
number n.
3. Thus, no one is rich.
It’s easy to see how (1)–(3) make up a serious philosophical problem: just consider the
following fi ve individually plausible yet apparently collectively inconsistent claims.
• Claim (1) is true.
• Claim (3) is false.
• The argument is logically valid.
• If claim (2) were false, then there would have to be a whole number n such that a person with $n isn't rich but a person with just one more dollar is rich.
• But such sharp cutoffs don't exist for predicates like "person x is rich."
Even the most elementary logic says that you have to give up at least one of those bullet points, and yet no matter which bullet point you give up, you end up with an error theory!14 That's a very strong meta-argument for the truth of at least one error theory, although it doesn't say which error theory will be the true one. I have been assuming in this essay that all the error theories are false (in order to give the renegade her best shot at epistemic salvation), but I must confess that I don't see how that assumption could come out true. I won't argue the matter here, but I think the same lesson—we simply must adopt an error theory, no matter what logical option we choose in resolving the paradox—holds for the group of semantic paradoxes, as well as the group of material composition paradoxes. So for at least some of the cases we're interested in, E2 is indeed strong evidence against P even if the H in question is really a small disjunction of error theories (P will have to be a small disjunction as well).
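The bind created by the bullet points can be made vivid computationally. Here is a minimal sketch, in Python, of the epistemicist's escape route: if "rich" had a sharp cutoff, premise (2) would fail at exactly one value of n and nowhere else. The cutoff figure below is a purely hypothetical placeholder, not a claim about wealth.

```python
# Sketch of the epistemicist option: posit a sharp cutoff for "rich".
# CUTOFF is a hypothetical placeholder chosen only for illustration.
CUTOFF = 1_000_000

def rich(dollars: int) -> bool:
    """'A person with $n is rich' under the assumed sharp-cutoff semantics."""
    return dollars >= CUTOFF

# Premise (2) says: if $n isn't rich, then $(n + 1) isn't rich either.
# Collect every n near the cutoff where that conditional fails.
counterexamples = [n for n in range(CUTOFF - 3, CUTOFF + 3)
                   if not rich(n) and rich(n + 1)]

print(counterexamples)  # [999999]: premise (2) fails at exactly one n
```

Denying premise (2) thus commits one to exactly such a sharp boundary, which is what the final bullet point rejects; keeping (1), (2), and the argument's validity instead forces conclusion (3). Either way, some bullet point goes, and an error theory results.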
Of course, one may always hold out for a commonsensical solution to the paradoxes. One can insist that there simply must be some approach consistent with common sense that no one has thought of, despite the fact that the large group of geniuses who have been working on the paradoxes for centuries have failed to find one. This sounds like a desperate, baseless hope to me.
I know the following remark won't make me any friends, but I think that in many cases a Moorean response to the philosophical error theories is endorsed only by those people ignorant of the extreme difficulties that error theorists in the philosophy of language, philosophy of logic, philosophy of physics, metaethics, and metaphysics are dealing with. The Moorean response assumes that we are justified in thinking that some premise in the error theorist's argument is far less supported than the corresponding commonsensical claim (which is obviously inconsistent with the error theory). But of course that's precisely what's at issue: the error theorist says that the overall evidence is in favor of her premises and not in favor of the commonsensical proposition whose negation is entailed by those premises. More importantly, the philosopher who thinks some error theory has got to be true—although she is agnostic on which one is true—has an even better argument than the individual error theorists. For what it's worth, I was once a Moorean, and I agree that Mooreanism is the most rational way to start doing philosophy. In particular, when one first hears about a philosopher who says "There are no trees" one should, rationally, adopt the Moorean approach. But as soon as one educates oneself about (a) the sanity, level-headedness, and intelligence of people who say these weird things; (b) the incredibly long history of the failure to find commonsensical
14 In fact, if it turns out that the five bullet points are not collectively inconsistent, contrary to appearances, that still gives us an error theory.
solutions to the paradoxes; (c) the meta-arguments for anti-commonsensical theories; and (d) the history of common sense being refuted by science, then one has to sober up. We don't employ the Moorean move when faced with teams of physicists or biologists who claim to have upended some bits of common sense; we act that way because of the experimental data and inventions those investigators generate (roughly put). The philosophical error theorists generate neither. Instead, they have (a)–(d). After I took a close look at the philosophical paradoxes and reflected on how science has already shown that common sense is not as impressive as it's cracked up to be, I dropped my Mooreanism.15
Fourth criticism. Consider a purely scientific case: that of astronomers saying that common sense is wrong about the Sun going around the Earth. They have a strong explanation of why every single day it sure looks as though the Earth is still and the Sun is going around it, even though that's all false. But the purely philosophical error theorists also have these explanations, which suggests that they are successfully following the scientific way of combating common sense. I know of no argument at all that suggests that there is anything awful with the compositional nihilist's explanation that we perceive things as composite just because the simples are arranged in certain tight and stable ways. On the contrary, usually it's simply granted as perfectly obvious that no possible experiment could tell the difference between a world with composite trees and a world with just tree-arranged simples! As noted above, it is commonly thought that perception, for instance, offers no support for common sense over nihilism. The fact that "There are trees" is part of common sense comes from the fact that we have certain perceptions; that's the evidential basis. But we would get the very same perceptions if the error theory were true. And if there were something awful with the compositional nihilist's explanation, then why on earth would those people embrace it after so much long-term sophisticated reflection by our best and brightest who are trying to retain common sense?
My fifth reason for not accepting the Overwhelming Warrant objection has to do with the source of discomfort philosophers have with error theories: I think that in some cases it's the result of a misunderstanding, and this causes them to underestimate the warrant for error theories (which in turn leads them to endorse W). Let me explain.

I think there might be a specific disposition at work in some philosophers who insist that error theories don't have much backing warrant, a disposition that accounts for a good deal (but certainly not all) of their hard resistance to such theories despite the fact that the arguments supporting them—including the error explanations mentioned in the previous paragraphs—are quite reasonable and evidentially supported.
15 Kelly (2008) attempts to show that the philosophical "moderate," who rejects error theories but also thinks there is something procedurally wrong with Moorean moves, lands in all sorts of trouble. He casts the debate as involving the error theorist, the Moorean, and the Moderate. But he fails to consider a fourth participant: the philosopher who rejects the Moorean move but is agnostic regarding the error theories, neither accepting nor rejecting them. This character escapes the woes of moderation.
When a theory says that your belief that fire engines are red is incorrect, you should initially be stunned. The right first reaction should be something like "Well, are they dark orange or purple or what?" Same for other theories: when a theory says that 2^10 ≠ 1024, or that what Hitler did wasn't morally wrong, or that Moore didn't believe that we have knowledge, or that there are no baseballs, one is disposed to look for explanations such as "2^10 = 1044, not 1024," "Hitler was morally okay because he actually had good intentions," "Moore was lying in all those articles and lectures," and "The whole edifice of baseball has been an elaborate joke erected just to fool you!" And we can't take those explanations seriously for two reasons: there is no evidence for them, and there is no evidence that we made such ridiculous mistakes.
But the error theorist isn't accusing us of any mistake like those. Indeed, although she is alleging false beliefs it seems strained to call them mistakes at all. Believing that fire engines are red when in fact they're orange is a mistake; believing they are red when no ordinary physical object in the universe is red or any other color but appears colored in all perfectly good viewing conditions is something else entirely, even though it's still a false belief. When one has a visceral reaction to error theories (you've probably witnessed them: rolling of the eyes, knowing smiles, winks, "that's preposterous," "that just can't be right," "get serious," and so on), often enough it's not reason that is doing the talking. Instead, and here is my attempt at armchair psychology, what is at work here is the disposition to treat philosophical false beliefs as something akin to mistakes, in the ordinary sense of "mistake." And that's a mistake. When a nut says that twice two isn't four but seven, she's saying that we've all made an arithmetic error; when a philosopher says that twice two isn't four or any other number, she isn't accusing us of any arithmetic error. And she isn't accusing us of some philosophical error, some error in philosophical reasoning. Instead, she's saying that there is a naturally occurring error in a fundamental part of our conceptual scheme that oddly enough has no untoward consequences in any practical or even scientific realm. And that's why the error survives to infect our commonsensical beliefs. The presence of such an error is by no means outrageous; for instance, surely there's little reason why evolutionary forces would prevent such errors. The "mistake" the error theorist is accusing us of is akin to (but of course different from) the error a child makes when counting fish in a picture that has whales and dolphins in it: it's a "mere technicality" that happens to be philosophically interesting (depending on one's philosophical tastes of course). The error theorist is saying that like the child we have made no gaffe, or blunder, or slip-up, or oversight. If you throw a chair at a composition nihilist, she ducks anyway.
None of this is meant to convince you that the error theories are true; throughout this essay I have assumed that they are all false. Instead, I'm trying to block the objection to metaphilosophical skepticism that runs "philosophical error theories shouldn't be taken to have a serious amount of warrant because they have lousy explanations for our false beliefs." Due to the peculiar nature of the errors being attributed, I think the error theories are not profitably thought of as radically against common sense (although I stick with the vulgar and call them "radical" from time to time). To say that twice two is seven
and fire engines are purple is to express a view radically at odds with common sense; to say that twice two isn't anything and fire engines have no color at all is not to express a view radically at odds with common sense (although of course it does go against common sense).
It’s also worth noting that in the usual case the philosophical error theorist comes to
her anti-commonsensical view kicking and screaming , which again suggests that when they
endorse an error theory they are doing so on the basis of impressive warrant (and not, for
instance, some weakness for anti-commonsensical theories), which again suggests that
W is false. Many philosophers consider certain religious views (e.g. there is something
morally wrong about non-heterosexual sex, the bread becomes the body of Christ) to be
comparably odd and hold that extremely intelligent people have these views only because
they have been indoctrinated, usually as children. Needless to say, this isn’t the case for
philosophical error theories. For instance, Williamson began the project that led to his
epistemicism with the explicit goal of refuting epistemicism ( Williamson 1994 : xi)! I
myself initially found epistemicism, color-error theory, compositional nihilism, and tradi-
tional skepticism highly dubious, but after soberly looking at the relevant arguments over
several years I became somewhat favorably disposed to all those theories even if I never
quite accepted any of them. In addition, I don’t see how anyone can not take error theo-
ries seriously if they are actually aware of the array of problems that any theory of material
composition would have to solve in order to be comprehensive (e.g. Statue-Clay, Tibbles-
Tib, Ship of Theseus, vagueness, problem of the many, and so on). Or just look at the vari-
ous logical options for dealing with the sorites or semantic paradoxes. The point is this:
these error theorists became convinced of the error theories based on the argu mentative
evidence, since quite often they were initially strongly inclined to disbelieve them. So it’s
over the top to suggest that most philosophers who take error theories seri ously do so
based only on some weakness for weird ideas, as opposed to strength of evidence. 16
12 The nature of the epistemic defect
I don’t see any good way to defeat the metaphilosophical skeptic’s argument applied to
“ordinary” philosophical disputes (i.e. those not involving error theories). For example,
16 This also casts doubt on the idea that some pernicious selection effect skews the percentages of philosophers who are in favor of or at least hospitable to (i.e. not dismissive of) error theories in the philosophy of language, philosophy of logic, metaethics, philosophy of physics, and metaphysics. One might initially wonder whether it's virtually an entrance requirement into the club of philosophers who publish on certain topics that one is strongly disposed to favor error theories. If so, then the fact that a large number and percentage of philosophers endorse error theories might not indicate significant evidence in favor of those theories. (The reverse holds as well: for some clubs one must toe the commonsense line.) I don't deny that there are selection effects, but a blameworthy weakness for error theories strikes me as implausible (especially over the last few decades compared to other periods in the history of philosophy, as today's philosophers have tended to put a higher than usual epistemic weight on common sense). If anything, a strong aversion to error theories causes philosophers to avoid areas in which they are rife, thereby causing pernicious selection effects in other areas.
if you are a content internalist but not a philosopher of mind and you satisfy Condition 1 and Condition 2, then retaining your internalist belief after finding out about the large number and percentage of disagreeing philosophers of mind means that you suffer a serious epistemic defect.17 But what is this defect?

The obvious answer: the belief retention is defective in the sense that the retained belief does not amount to knowledge, it's not justified, and the believer is blameworthy for retaining it. So the action of belief retention is defective because the retained belief is defective in familiar ways. However, I think matters might not be so simple here. I agree that in the ordinary, non-error theory cases the renegade's belief isn't justified or warranted and doesn't amount to knowledge. I'm less sure about the blameworthiness point. For one thing, philosophy is a very individualistic enterprise. Appeals to authority, for instance, are viewed as virtually worthless. Given all that, perhaps the blame that applies to the renegade with respect to non-error theories is relatively mild. So, I would recommend that the metaphilosophical skeptic adopt the modest view that the epistemic defect in ordinary philosophical cases includes lack of knowledge and justification along with at least a mild kind of blameworthiness.
The obvious view regarding the epistemic defect in the error theory cases is that it's the same as in the non-error theory cases: the renegade's belief retention is defective in the sense that her retained belief won't be justified or amount to knowledge, even if it was justified and amounted to knowledge before she found out about her disagreeing epistemic superiors. However, a wise skeptic who thinks her thesis is true pretty much no matter what the correct theory of central epistemological notions turns out to be will want to allow the epistemic possibility that in the actual world (if not all possible worlds) propositional knowledge is not only cheap and easy but very hard to knock down once established (so once you know something, it is very difficult to encounter counterevidence powerful enough to ruin that knowledge). Perhaps the "bar" or threshold for justification and warrant is much, much lower than philosophers have thought over the centuries (even now, with race-to-the-bottom reliabilism so popular!). If that's right, then the renegade's retained belief in P might amount to knowledge even if it suffers from a serious epistemic defect: upon learning about her disagreeing superiors her belief's overall warrant decreases considerably—that's the defect the skeptic is insisting on—but remains high enough to meet the very low threshold for warrant and justification. The belief still amounts to knowledge but this is impoverished knowledge compared to its status before learning about the disagreeing superiors.
However, although I think this is a theory of the epistemic defect that the skeptic should allow for (as a plausible option), even this theory might not be optimal because it is hostage to the results of the relations among epistemic qualities. As bizarre as it might sound, I think the skeptic should admit that even if her skeptical thesis is true in the error
17 The only way I see around this is the idea that the renegade's belief is merely of the inclination-first-order kind I mentioned at the end of section 3.
theory cases, if the renegade's circumstances and the true epistemological theories are peculiar enough (as discussed below) then all the following might be true as well:

1. The renegade's belief P starts out justified, warranted, and amounting to knowledge.
2. Upon learning of his epistemic superiors his belief retains those statuses.
3. Upon learning of his epistemic superiors the warrant that his belief has doesn't change. Thus, the discovery of significant contrary epistemic superior opinion does not diminish the warrant had by the renegade's belief.
4. Upon learning of his epistemic superiors he doesn't acquire any evidence against his belief.
5. He is blameless in retaining his belief.
When I say that the metaphilosophical skeptic should “keep open the possibility” that
(1)–(5) are true even though her skeptical thesis is true as well (again, applied only to
purely philosophical error theories), I don’t mean to imply that the skeptic should hold
that her thesis is metaphysically consistent with the conjunction of (1)–(5). All I mean is
that she should say something like “For all I can be certain about, (1)–(5) might be true
along with my skepticism.”
I now have two tasks in the rest of this section: explain why the metaphilosophical skeptic should admit that despite the truth of her position, (1)–(5) might be actually true as well; and explain what "seriously epistemically deficient" means in light of that explanation.
Before I carry out those two tasks, I need to make sure we agree that the epistemic status of the belief retention can be quite different from the epistemic status of the retained belief. For instance, if my belief starts out unjustified and I encounter a small amount of evidence for it (e.g. a recognized epistemic peer agrees with me, as she has made the same faulty assessment of the evidence), the reasonable thing for me to do in response to the peer is keep the belief: the belief retention is reasonable but the retained belief is not. It is harder to see how the situation could arise in which the belief starts out justified, the belief retention is unreasonable, and the retained belief is justified. However, I am in the process of giving reasons for the idea that the skeptic should be willing to say that that may be the situation when it comes to commonsensical beliefs and purely philosophical error theories.18
Suppose Mo thinks he might have disease X. He goes to his doctor who administers a test meant to determine whether or not Mo has X. The doctor tells Mo, correctly, that the test has two excellent features: if someone has X and takes the test, the test will correctly say "You have X" a whopping 99.9 per cent of the time and it will incorrectly say "You don't have X" a measly 0.1 per cent of the time; and if someone doesn't have X and takes the test, the test will correctly say "You don't have X" 99.9 per cent of the time
18 It’s somewhat easier to see how a belief could start out unjustifi ed, the belief retention is unreasonable,
and the retained belief is justifi ed: the initial unjustifi ed element is cancelled out by the second unreasonable
element.
and it will incorrectly say "You have X" 0.1 per cent of the time. Mo is impressed with these facts and comes to have belief B: if the test says you have X, then there's an excellent chance you have X. I take it that this is a reasonable belief for Mo to have; his belief is also blameless. (If that's not clear, assume that almost all diseases that afflict people in Mo's country occur relatively frequently (nothing like one in a million), and this fact is generally known by the medical establishment and at least dimly appreciated by laypeople such as Mo.) Finally, the belief is true.
Next, the doctor tells Mo that only one out of a million people have X. Let’s assume
that the doctor has slipped up: in reality, about one out of a hundred people has the dis-
ease, the doctor knows this, but she misspoke. At this point Mo doesn’t change his belief:
he still believes that if the test says you have X, then there’s an excellent chance you have
X. He doesn’t realize that this new piece of information devastates his belief, as follows.
Suppose everything the doctor said were true, including that only one out of a million
people have the disease. Suppose further that ten million people take the test. Since about
one in a million actually have the disease X, or so we’re supposing, about ten of the ten
million people will have the disease. When those ten people take the test, the odds are that the
test will say “You have X” all ten times (as it is 99.9 per cent accurate in that sense). But now
consider the remaining people, the 9,999,990 folks who don’t have X. When they take the
test, 0.1 per cent of the time the test will mistakenly say “You have X.” Of 9,999,990 folks, 0.1
per cent is about 10,000. So all told, the test will say “You have X” about 10,010 times: that’s
the first ten (who really do have X) plus the next 10,000 (who don't have X). But only ten of
those times is the test right. Thus, when the test says “You have X,” which is about 10,010 times,
the test is wrong about 10,000 out of 10,010 times: it’s wrong about 99.9 per cent of the
time! So if the doctor were right, then Mo’s belief that if the test says you have X then there’s
an excellent chance you have X, would be about as false as it can get. 19
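The arithmetic in this paragraph is an instance of Bayes' theorem, and readers who want to verify it can do so with a short calculation. (The following is an illustrative sketch of my own, not the author's; the function name is mine. The second call carries out the "little calculation" mentioned in footnote 19.)

```python
# Illustrative sketch: the Mo example as a Bayes' theorem calculation.
# Sensitivity and specificity are both 99.9 per cent, as the doctor reports.

def prob_x_given_positive(base_rate, sensitivity=0.999, specificity=0.999):
    """P(has X | test says 'You have X'), by Bayes' theorem."""
    true_positives = base_rate * sensitivity                # e.g. ~10 of 10 million
    false_positives = (1 - base_rate) * (1 - specificity)   # e.g. ~10,000 of 10 million
    return true_positives / (true_positives + false_positives)

# Under the doctor's misreported one-in-a-million base rate, a positive
# result is almost always wrong (about 10 right out of 10,010):
print(prob_x_given_positive(1 / 1_000_000))  # ≈ 0.001

# Under the true one-in-a-hundred base rate, belief B comes out true:
print(prob_x_given_positive(1 / 100))        # ≈ 0.91
```

The first figure matches the text's frequency count (ten correct verdicts out of roughly 10,010 positives); the second shows why, given the actual base rate, a positive result really does make it an excellent bet that one has X.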
However, and this is a crucial bit of the story, we are assuming that Mo doesn’t cur-
rently have the background to grasp these mathematical matters, and as a consequence
he does not see how the new (mis)information, about X’s extreme rarity (one in a mil-
lion), ruins the overall warrant he has for his belief. On the other hand, if Jo is a mathe-
matician who had heard precisely the same things as Mo and likewise came to have and
then retain belief B, then given her mathematical competence she would be in a posi-
tion to see the relevance of the one-in-a-million claim. We would say of Jo, but not of
Mo, that she “should know better” than to retain her belief B. Both Jo and Mo have
made a mistake in retaining B, both have an epistemic defect, but only Jo is blameworthy
since only she has the cognitive tools and training to see the relevance of the one-in-a-
million claim. It would take hours to get Mo to see the relevance, as the above paragraph
is too advanced for him; Jo should have seen it immediately.
Moreover, if knowledge is cheap and tenacious—in the sense of being little more
than true belief, with a low threshold for warrant, and very hard to dislodge—then Mo’s
19 Given that one in a hundred people actually have disease X, Mo’s belief B is true, as a little calculation
will show. The case described is a variant of ones used to illustrate the base-rate fallacy.
158 bryan frances
true belief might amount to knowledge even after hearing and accepting the doctor’s
misinformation. Given that he knew the 99.9 per cent facts, the brute fact that the dis-
ease afflicts about one out of a hundred people, and Mo is dimly aware that virtually all
diseases have similar likelihood rates, Mo, like many informed people in his community,
started out knowing that if the test says you have X then there’s an excellent chance you
have X. When he hears his doctor's one-in-a-million claim he cannot see how it goes
against his belief. Since propositional knowledge is so cheap, he retains his knowledge.
In addition, because justified belief also has a low threshold, his belief remains justified.
The Mo story can be used to motivate several interesting epistemological theses. For
our purposes, it is best thought of as illustrating how the following might come about:
• A person starts out with a belief B that is true, reasonable, justified, and amounts
to knowledge.
• Next, he acquires a belief C that in some sense "goes against" the first belief B
(this can happen in several ways, either targeting the truth of B, as in the Mo
example, or perhaps targeting his justification for B).
• Despite coming to believe C, the person retains the first belief B.
• The retained belief B is still justified and amounts to knowledge, as the "bars" for
knowledge and justification are very low.
• But his retaining B is still seriously epistemically defective.
I am not saying that renegades with respect to error theories are in the same position as
Mo. More specifically, renegades aren't guilty of a cognitive deficiency like Mo is. Even
so, the illustration is useful because I think the renegade philosophers might be blame-
less just like Mo is. The renegade's retained beliefs are doxastically central (if one of them is
blameworthy, then a fantastic amount of one's beliefs will be blameworthy) and held
with a very high confidence level. Given those two facts, it seems a bit extreme to have the
threshold for blamelessness be so high that one is blameworthy in retaining beliefs that
are both central and held with the highest levels of confidence. Humans aren't that
rational. The metaphilosophical skeptic should allow for the possibility that standards
for epistemic blame are relative to the epistemic and doxastic capacities of the subject
(or perhaps her community or species).
In order to bring out the relevant complexities concerning the relative standards for
epistemic blame, consider a non-epistemic example. A twelve-year-old girl plays third
base for her Little League baseball team. The batter hits a sharp grounder to her right;
she dives to get it but misses and falls down; the ball shoots off down the left field line for
a double. No one is going to blame her for missing the ball: not her teammates, not her
parents, not the manager, not even herself. However, if Brooks Robinson, who was one
of the greatest fielding third basemen in Major League history, had had the very same
opportunity and did exactly what the child did, then he would have been blameworthy
(for one thing, he would have been accorded an error by the game's official scorer).
Despite that difference, which depends on the individual circumstances—especially
the relative strengths and weaknesses of the two players—both attempts to field the ball
are “baseball defective.” It’s a good question as to why the child’s attempt is defective
even though she was charged with no error and is blameless. The answer is probably
something along the lines of "Being able to field that ball is a goal associated with the
role of a third baseman, and one that is typically attained by those who pursue the posi-
tion seriously.” I am saying that the metaphilosophical skeptic might want to allow that
something similar is true of Mo and Jo: although only Mo is blameless, both belief reten-
tions are “epistemically defective.” Despite his present inability to see the error of his
ways, Mo’s belief retention is epistemically defective because he is an epistemic agent
and part of a community that is able to handle the mathematics needed to see the error.
It would not exactly kill him to see the relevance of the rarity of the disease.
To be sure, questions of blame are tricky—and in ways that may seem to matter to the
skeptic’s position. Consider a diving leap by Brooks Robinson that fails to snag the ball.
He really had almost no chance at getting it, so his attempt is hardly defective. Only
Superman could have got the ball! But now imagine a race of superpeople who watch
our baseball games. They say that Robinson’s dive is “baseball defective,” as their best
third basemen are usually able to snag the ball when faced with that play. But it seems
incorrect for us to say that Robinson’s play is defective. Whether it’s defective depends on
the relevant standards, and Robinson’s attempt does not fail to meet any of our standards
even if it does fail to meet the alien standards. Something similar may hold in epistemic
cases. Perhaps there is a group of super-epistemic aliens for whom induction is just plain
stupid. Only horribly weak creatures, they say, employ inductive reasoning. After all, it’s
not even guaranteed to be truth preserving! They look upon us as primitive epistemic
beings. Even if they are right about all of that, it is hard to see how it would make it true
for us to say that all merely inductive reasoning is epistemically defective.
Thus, perhaps some renegades are not blameworthy. However, some will be blame-
worthy: it depends on the tenacity of their belief in P. We can imagine two philosophical
renegades, Barb and Ron. Both of them endorse W from the Overwhelming Warrant
objection. We then confront them with “Yes, but suppose that 98 per cent of your
10,000 epistemic superiors firmly believed P was false, and it had been this way for cen-
turies; what would you think then?” Barb responds in a level-headed way: “Well, I’d at
least suspend judgment, if not accept their view.” Ron responds with bluster: “That
would just mean those philosophers are nuts.” I think that in that scenario Ron comes
out blameworthy, perhaps because his dispositions reveal an unreliable belief retention
type. In any case, the charge of blame is so complicated that the skeptic is best advised to
steer clear of it for the most part.
This shows how the skeptic should admit that for all she can be certain of, her thesis
could be true even if (1), (2), and (5) are all true of S. The following story is intended to
deal with (3) and (4). 20
20 I couldn’t think of the best case: one that handles all of (1)–(5) at once. But keep in mind that all I’m
doing here is arguing that the skeptic should be willing to admit that, for all she can be certain of, (1)–(5) are
true even though her thesis is true too. I’m not arguing that skepticism plus (1)–(5) really can all be true.
Suppose Cho is an undergraduate physics major taking a standard course in classical
mechanics. She does a problem from her textbook and gets the answer “4π.” But the
answer key in the back of the book says "8π²." And the better students in the class agree
with the book’s answer. She rechecks her answer to make sure there is no calculation
error. The teaching assistant for the class agrees with the book’s answer. As does the pro-
fessor. Cho consults a couple of other professors she likes and took courses from in the
past: they also agree with the "8π²" answer. Cho admits that these people are her epis-
temic superiors when it comes to classical mechanics, as the evidence for this judgment
is obviously quite good and objective. At this point she has excellent evidence E₁—the
reports from recognized superiors—that there is excellent evidence E₂ against her
answer. Despite all that, Cho sticks with her old answer, merely shrugging her shoulders
and stubbornly figuring that what the superiors say "doesn't prove anything" as "no one
is perfect.” I think it’s pretty clear that Cho is being irrational given that she meets Con-
dition 1 and Condition 2; in particular, by meeting the latter condition Cho doesn’t
have anything up her sleeve, so to speak, that her professors and other superiors are miss-
ing. In at least one important epistemic sense she should give up her belief. The reason
she should give up her belief is this: she has been given excellent reason to think it is false, the
reason being E₁.
That conclusion seems right. But the interesting point I want to make here is this:
even though Cho has been given good reason that "4π" is the wrong answer (and that
"8π²" is the right answer), this may not be the same thing as saying that she has been
given good evidence that "4π" is the wrong answer. What she actually has is testimonial E₁.
The evidence against her belief P (P is the claim that the answer is "4π") is E₂, which is
some alternative calculation with which Cho is unfamiliar. Some philosophers would
say that E₁ is evidence that her belief is false: at least in most cases if E is evidence that
there is evidence against Q, then E itself is evidence against Q. I think there might be
counterexamples to that bare principle, although if we throw in more conditions rele-
vant to the disagreement cases the counterexamples may vanish (for all I know). What I
want to emphasize here is that even if having E₁ (and recognizing that it's good evidence
for the claim that there is good evidence against P) isn't necessarily to have good evidence
for thinking P is false, it remains true that having E₁ gives one excellent reason to think P
is false. The issue isn't whether E₁ is evidence. It's definitely evidence of something. The
issue here is whether E₁—that body of knowledge which S possesses—is evidence
against P. Whether it is will depend on what the true story of evidence is, a question on
which the metaphilosophical skeptic wants to avoid taking a position since she thinks
her thesis is true independently of those details, as philosophically important as they are.
This separation of evidence from reason isn’t terribly counterintuitive, as the physics
story shows. The Homer-Holmes case works as well. Homer knows that Sherlock Hol-
mes has a fantastic track record in murder investigations and has recently announced
that he’s as confi dent as he’s ever been that he’s cracked the new case of the maid’s mur-
der: the butler did it. When Homer learns all this about Holmes he acquires excellent
reason to think the butler did it. But does he acquire evidence that the butler did it? Well,
he certainly doesn’t have any of Holmes’ evidence. A philosopher could become
convinced that although Homer has excellent reason to think the butler’s guilty, he has
no evidential reason.
Cho’s physics case is diff erent from Mo’s disease case. As we saw earlier, given Mo’s
epistemic defi ciencies, the skeptic might allow that Mo has been given no reason to
discard his belief. That’s not true of Cho; she has excellent reason to give up her
belief. But Cho’s case illustrates how one’s retaining a belief can be epistemically
defi cient even though its evidential status has not changed—given certain views
about evidence. And if its evidential status hasn’t changed, then perhaps its warrant
status hasn’t changed either. Naturally, it’s a signifi cant step from “Her evidential sta-
tus hasn’t changed” to “Her level of warrant hasn’t changed.” But as I’ve said a couple
times already, the skeptic is trying to articulate her thesis in such a way that it’s not
hostage to how various debates about evidence, warrant, knowledge, and justifi ca-
tion turn out (either next week or in the next century). Her central insight, if it
deserves that status, is that if upon learning about her epistemic superiors the well-
informed mere mortal retains her belief, then that action of hers thereby suff ers a
serious epistemic defect.
But what on earth could the defect in the action be, now that we’ve allowed for the
possibility that the retained renegade belief is warranted, is justifi ed, is blameless, amounts
to knowledge, and has no evidence against it? What epistemic defect could the action
possibly have? It leads to a belief that is as good as it gets: it amounts to knowledge!
In order to answer the question I first need to explain how the action might be epis-
temically defective even though the retained belief isn’t. Then I’ll say what I think the
defect is.
Bub thinks claim C is true. But his belief is based on a poor reading of the evidence.
Bub sticks with his belief in C despite the poor evidence just because he has a raging,
irrational bias that rules his views on this topic. Suppose C is “Japan is a totalitarian state”
and Bub has always been biased against the Japanese.
Then he meets George. After long discussion he learns that George is his peer when it
comes to politics and Japan. He then learns that George thinks C is true. This is a case of
peer agreement, not disagreement.
I take it that when Bub learns all this about George, he has not acquired some new
information that should make him think “Wait a minute; maybe I’m wrong about
Japan." He shouldn't lose confidence in his Japan belief C merely because he found
someone who is a peer and who agrees with him!
The initial lesson of this example: Bub's action of not lowering his confidence in his belief as
a result of his encounter with George is reasonable even though his retained belief itself is unreason-
able. The right answer to "Should you lower your confidence level in reaction to a rec-
ognized peer disagreement?" can be "no" even though the right answer to "If you don't
lower your confidence level in that situation, is your belief reasonable?" is also "no."
Bub’s assessment of the original evidence concerning C was irrational, but his reaction
to George was rational; his subsequent belief in C was (still) irrational. The simplistic
question, “Is Bub being rational after his encounter with George?” is ambiguous and
hence needs to be cut in two parts: “Is his retained belief in C rational after his encounter
with George?" vs. "Is his response to George rational?" The answer to the first question
is “No” while the answer to the second question is “Yes.”
This story is sufficient to show that one's action of retaining one's belief can be epis-
temically fine even though the retained belief is epistemically faulty. If we alter the story
just a bit, we can see how the action can be faulty while the belief is fine.
Suppose that Bub started out with confidence level 0.95 in C. And he found George
to have the same confidence level. And suppose that the original evidence Bub had only
justifies a confidence level of 0.2. So, as we said before, Bub has grossly misjudged his
evidence. If in reaction to his encounter with George Bub did lower his confidence in C
to 0.2 or 0.4 or whatever—whatever level is merited by the correct principles of ration-
ality that make his belief in C rational 21 —he would be irrational. If you have a certain level
of confidence in some claim and five minutes go by in which you correctly and
knowingly judge that you have not been presented with any new evidence whatsoever that
suggests that your confidence level is mistaken, then you would be irrational to change
your confidence level—even if you happened to adjust so that your belief itself was
rational. Now, if you took some time to reassess your original evidence, then of course
you might learn something that would justify your changing your confidence level. But
if nothing like that has happened, as in Bub's case, then you would be irrational to
change your confidence level. So if Bub did adjust his confidence level to 0.2 or 0.4 or
0.6, say, then although his subsequent confidence level might accurately reflect his total
body of evidence—so his position on C would now be rational—his process to that
rational position would be irrational. 22
The renegade case is like this second story: the action is faulty while the retained
belief might be fine. Now that I've explained the difference in the epistemic statuses of
the retention action and retained belief, I can offer a conjecture as to what the serious
epistemic defect might be in the case of the renegade who retains her belief in the face
of her awareness of purely philosophical error theories.
In the imagined possibility that makes (1)–(5) true (for all we can be certain of),
knowledge, warrant, and blamelessness are “lower” epistemic qualities. So, perhaps wis-
dom, deep understanding, cognitive penetration, and mastery (e.g. of a topic) are signifi-
cantly "higher" epistemic qualities. And being a renegade with respect to error theories
inhibits the development of such qualities, at least with respect to the topics relevant to
the error theories. If so, that would mean that the belief retention is still epistemically
defective, and in a “serious” manner.
21 I’m assuming that there is a rational level of confi dence for his retained belief. I’m not sure how to
argue for this claim.
22 This seems to show that Bub is epistemically at fault no matter what he does in response to his discov-
ery of George’s agreement with him. However, it doesn’t show that he is utterly epistemically doomed: he
could go back and discover that his initial judgment was unjustified. In that sense it's not quite a case of
“damned if you do and damned if you don’t.”
So here is my theory regarding the epistemic defect in question, the one
I recommend for the metaphilosophical skeptic:
• If the true story of epistemic relations is more or less what philosophers have
thought, then renegades with respect to error theories lose the warranted and
justified statuses of their belief (and hence the knowledge status).
• On the other hand, if the threshold for justifi ed belief and knowledge is much
lower than philosophers have thought, then the renegade’s retained belief might
exceed those thresholds (contrary to the first bullet point) even though its war-
rant has significantly decreased.
• Finally, if the thresholds are low, if reason and evidence are separated (as in the Cho
story), and if evidence and warrant are not separated (as in the Cho story), then
although the warrant for the renegade’s retained belief might not change upon her
learning of her superiors (contrary to the first two bullet points), the belief retention
process type is not conducive to epistemic qualities such as wisdom, deep understand-
ing, cognitive penetration, and mastery with respect to the topics the error theories
are about (the topics: truth, material existence, meaning, morality, and so on).
I close this section with a few comments on what is meant by “the action of retaining
the commonsensical belief P.”
One’s view of philosophical error theories is utterly divorced, psychologically, from
real life. Even an advocate of an error theory will not have her view change her behav-
ior. A color error theorist still says that she likes red cars and yellow bananas; a composi-
tional nihilist will insist to his insurance company that he really did have a television
ruined by the flood in his basement. The same holds for philosophers like myself who
have become agnostic on the truth-value of error theories. In fact, in many contexts the
error theorists and agnostics will say things like “I know that her bag was red,” “I think
there are four extra chairs in the next room,” which when interpreted straightforwardly
strongly suggest that they are retaining the commonsensical beliefs. Given that we walk
and talk exactly like someone who retains her commonsensical beliefs, if the common-
sensical philosopher’s belief retention is seriously epistemically defective, it’s not defective
in virtue of almost any of her real-life behavior, including much that is linguistic. Depend-
ing on what the notion of belief amounts to, it might even be true to say that in one sense
of belief the error theorist and agnostic believe that some cars are red. Instead, the differ-
ence between the commonsensical philosopher and the error theorist (or agnostic)
comes to the fore in things like their dispositions and certain episodes of behavior, as
when she says to herself things like “H can’t be right; P is right instead” while the error
theorist and agnostic typically end up saying to themselves contrary things.
13 What if metaphilosophical skepticism is false?
I will now show that no matter what your take on the metaphilosophical skeptic’s argu-
ment, pro or con, you get a new and philosophically interesting conclusion. We all win
in that respect.
So let us now assume for the sake of argument that contrary to the Will of God and all
forms of justice I have made a fatal error in the previous sections: metaphilosophical
skepticism is completely false. So, the renegade’s belief in P and ~H amounts to know-
ledge, is fully justified, and her belief retention suffers no epistemic defect whatsoever.
She is perfectly aware of the respected status of compositional nihilism, moral error
theory, and color error theory; she knows that she is nothing remotely like an expert in
those areas; she has no amazing piece of evidence or knowledge that the error theorists
have missed; and yet there is nothing epistemically wrong in her just concluding that all
those theorists are wrong and she’s right.
If that’s the way things are, then by my lights something is terribly amiss in a great
portion of philosophy. However, what is amiss in the philosophical community is not
that the arguments for H must be pretty weak even though they are the well-respected
products of our best and brightest working over many years. At least, we have no reason
to leap to that conclusion even if we reject metaphilosophical skepticism. We already
admitted in the discussion of the Overwhelming Warrant objection that the error-theory-
supporting arguments are at least on a par with many other philosophical arguments
and are quite strong. The only problem with the former, it was alleged, was the rare
strength of their opponent: virtually universal commonsense.
What must be true if metaphilosophical skepticism is false is that purely philosophical
theories against commonsense are virtually never worthy of belief. And if that is true, then large
parts of philosophy have to change, for the following reason. Virtually all wide-ranging
metaphysical theories regarding composition, parthood, persistence over time, and so
on, are radically anti-commonsensical at some point (it’s not hard to cleave to common-
sense if one isn’t comprehensive). The same holds for all theories of truth (that don’t
ignore the semantic paradoxes) and all theories of vagueness and meaning. And yet, if
skepticism is false, none of these theories is any good, as we philosophers know full well
that they are false even if we are aware of our status as amateurs and are perfectly aware of
the impressive arguments for those theories.
Ontology, and metaphysics in general, is almost always said to be extremely hard; same
for the philosophy of language, logic, and physics. But if the view being presently con-
sidered is true, then large parts of these areas are very easy. After all, under the current
assumption we already know that any theory that goes against virtually universal com-
mon sense is false—because the commonsense beliefs are known, we know which ones
they are, and we’ve done the elementary deduction to see that they entail the falsehood
of all the popular error theories.
As pointed out earlier, one can’t rely on distaste for metaphysics here. Error theories
show up in the philosophy of language, the philosophy of mind, epistemology, the
philosophy of logic, the philosophy of mathematics, metaethics, and the philosophy of
physics as well. Further, metaphysical thought is not always the source of those error
theories.
To me, that sounds like a justification for saying that many areas of philosophy
are bunk. If I, as someone with a definitely amateur understanding of much of the
philosophical areas just mentioned, can know that all sorts of error theories are false even
though I have absolutely nothing at all interesting to say against those theories—and if
this knowledge of mine is not some anemic thing and there is nothing epistemically
wrong with my retaining my commonsensical beliefs—then there is something deeply
wrong with those areas of philosophy, since many of the most popular and expertly
endorsed theories are error theories. Obviously, that last conclusion, “something is
deeply wrong with those areas of philosophy,” has been endorsed for centuries with
respect to some parts of metaphysics, but now we have a novel argument for the novel
proposition that leads to it—as well as a much more expansive conclusion, going well
beyond metaphysics. Not only that: we conclude that those areas of philosophy are bunk
despite their relying on arguments as good as or even better than those found in other areas
of philosophy. That is a paradoxical conjunction.
This gives us my essay’s disjunctive thesis that one of the following is true:
• Metaphilosophical skepticism is true. When it comes to ordinary philosophical
disagreements, the renegade's belief is unjustified, unwarranted, and at least mildly
blameworthy. When it comes to error theories, either we don’t know P or our
retaining our belief in P is epistemically impoverished in the ways described at the
end of section 12.
• A good portion of the philosophy of language, philosophy of mathematics, phil-
osophy of logic, metaethics, philosophy of physics, and metaphysics is bunk and
error theorists should give up most of their error theories despite the fact that
their supporting arguments are generally as good as or even better than other
philosophical arguments. 23
References
Balaguer, Mark (1998) Platonism and Anti-Platonism in Mathematics (Oxford: Oxford University
Press).
Cameron, Ross (2010) "Quantification, Naturalness and Ontology," in Allan Hazlett (ed.)
New Waves in Metaphysics (Basingstoke: Palgrave-Macmillan).
Daly, Chris and David Liggins (2010) “In Defence of Error Theory,” Philosophical Studies 149:
209–30.
Feldman, Richard and Ted Warfield (2010) Disagreement (Oxford: Oxford University Press).
Field, Hartry (1980) Science Without Numbers (Princeton, NJ: Princeton University Press).
—— (1989) Realism, Mathematics, and Modality (New York: Basil Blackwell).
Frances, Bryan (2005a) Scepticism Comes Alive (Oxford: Oxford University Press).
—— (2005b) “When a Skeptical Hypothesis Is Live,” Noûs 39: 559–95.
—— (2008) “Spirituality, Expertise, and Philosophers,” in Jon Kvanvig (ed.) Oxford Studies in
Philosophy of Religion 1: 44–81.
23 Thanks to David Christensen and Margaret Frances for comments. Thanks to Ariella Mastroianni for
insight regarding alternative conceptions of belief.
—— (2010) "The Reflective Epistemic Renegade," Philosophy and Phenomenological Research 81:
419–63.
—— (2012) “Discovering Disagreeing Epistemic Peers and Superiors,” International Journal of
Philosophical Studies 20: 1–21.
Hegel, G. W. F. (1995) Lectures on the History of Philosophy , trans. E. S. Haldane and F. H. Simson
(Lincoln, NE: University of Nebraska Press).
Horgan, Terence and Matjaž Potrč (2000) "Blobjectivism and Indirect Correspondence," Facta
Philosophica 2: 249–70.
Horwich, Paul (1998) Truth , 2nd edn. (Oxford: Oxford University Press).
Jackson, Frank (1998) From Metaphysics to Ethics (Oxford: Clarendon Press).
Johnston, Mark (1992a) “Reasons and Reductionism,” Philosophical Review 101: 589–618.
—— (1992b) “How to Speak of the Colors,” Philosophical Studies 68: 221–63.
—— (1993) "Objectivity Reconfigured: Pragmatism Without Verificationism," in J. Haldane and
C. Wright (eds.) Realism and Reason (Oxford: Oxford University Press), 85–130.
Kelly, Thomas (2008) “Common Sense as Evidence: Against Revisionary Ontology and Skepti-
cism,” in Peter French and Howard Wettstein (eds.) Midwest Studies in Philosophy: Truth and Its
Deformities 32 (Oxford: Blackwell), 53–78.
Mackie, J. L. (1977) Ethics: Inventing Right and Wrong (New York: Penguin Books).
Merricks, Trenton (2001) Objects and Persons (Oxford: Oxford University Press).
Miller, Alex (2002) “Wright’s Argument Against Error-Theories,” Analysis 62: 98–103.
Rosen, Gideon and Cian Dorr (2002) “Composition as Fiction,” in Richard M. Gale (ed.)
The Blackwell Guide to Metaphysics (Oxford: Blackwell).
Sider, Theodore (2001) Four-Dimensionalism (Oxford: Oxford University Press).
—— (forthcoming) "Against Parthood," in Karen Bennett and Dean W. Zimmerman (eds.)
Oxford Studies in Metaphysics , viii (Oxford: Oxford University Press).
—— and David Braun (2007) “Vague, So Untrue,” Noûs 41: 133–56.
van Inwagen, Peter (1990) Material Beings (Ithaca, NY: Cornell University Press).
Williamson, Timothy (1994) Vagueness (London: Routledge).
Wright, Crispin (1994) “Response to Jackson,” Philosophical Books 35: 169–75.
In this paper I want to develop a problem that arises when we think about peer disagree-
ment in the context of the often-contentious claims we make in the course of doing phil-
osophy. 2 The problem, in a nutshell, is that (in the following MASTER ARGUMENT)
four propositions, each independently plausible, imply an intolerable conclusion, as follows:
1. In cases in which S believes that p in the face of (what I will call) systematic
p-relevant peer disagreement , there are (undefeated doxastic or normative) defeaters
with respect to S’s belief that p.
2. If there are (undefeated doxastic or normative) defeaters with respect to S’s belief
that p, then S neither knows, nor is doxastically justifi ed in believing, that p.
3. If S neither knows, nor is doxastically justifi ed in believing, that p, then S is not
warranted in asserting that p.
4. Some cases of philosophical disagreement regarding whether p are cases of sys-
tematic p-relevant peer disagreement.
5. (Therefore) In such cases, S is not warranted in asserting that p.
I submit that (5)—the conclusion of the MASTER ARGUMENT —is intolerable. In
eff ect, it states that in any case in which philosophical (peer) disagreement regarding
7
Disagreement, Defeat, and
Assertion 1
Sanford Goldberg
1 With thanks to audiences at Oxford University and a conference at the University of London, where
portions of this paper were given as talks. Thanks also to many people for helpful discussions of the paper
and related topics: Louise Antony, Fabrizio Cariani, Cian Dorr, Sean Ebels Duggan, Lizzie Fricker, Bryan
Frances, Miranda Fricker, Richard Fumerton, Alvin Goldman, Nick Leonard, Christian List, Peter Ludlow,
Matthew Mullins, Baron Reed, Russ Shafer-Landau, Barry Smith, Tim Sundell, Deb Tollefson, and Tim
Williamson. Finally, a very special thanks to David Christensen and Jennifer Lackey, for extremely helpful
comments on earlier drafts of this paper; and to the graduate students in Jennifer Lackey’s 2011 epistemology
seminar at Northwestern, where a draft of this paper was discussed.
2 This topic has recently begun to receive more attention, not only in the literature on disagreement, but
also (on occasion) in the literature on philosophical methodology. To see the literature focused on disagree-
ment and philosophical practice, see Frances 2005 , Goldberg 2008 , Kornblith 2010 , and Fumerton 2010 .
168 sanford goldberg
whether p is systematic, asserting that p is unwarranted. Since it is common practice to
assert claims, including the conclusions of our philosophical arguments, even under
conditions in which there is systematic peer disagreement in philosophy—or so I will
argue—the MASTER ARGUMENT supports a skeptical conclusion: at least some-
times, and arguably often, standard assertoric practice in philosophy results in unwar-
ranted assertions.
In this paper I develop this problem. 3 Since (2) and (3) are relatively uncontroversial,
my focus will be on (1) and (4). After defending them, I go on to defend the idea that
ordinary philosophical practice does involve assertions under conditions of systematic
relevant peer disagreement. The result is that the skeptical conclusion of the MASTER
ARGUMENT is an unhappy one.
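The deductive skeleton of the MASTER ARGUMENT can be checked mechanically. The following Lean sketch uses hypothetical propositional names standing in for the argument's claims about a fixed subject S and proposition p (the philosophical content of the premises is, of course, not captured); it verifies only that (5) follows from (1)–(4) by a chain of modus ponens:

```lean
-- Propositional placeholders (hypothetical names) for the claims of the
-- MASTER ARGUMENT, relativized to a fixed subject S and proposition p.
variable (SystematicDisagreement Defeated Knows Justified Warranted : Prop)

-- (1) systematic p-relevant peer disagreement yields an undefeated defeater;
-- (2) such a defeater excludes both knowledge and doxastic justification;
-- (3) lacking both, S is not warranted in asserting that p;
-- (4) the case at hand is one of systematic p-relevant peer disagreement.
example
    (h1 : SystematicDisagreement → Defeated)
    (h2 : Defeated → ¬Knows ∧ ¬Justified)
    (h3 : ¬Knows ∧ ¬Justified → ¬Warranted)
    (h4 : SystematicDisagreement) :
    ¬Warranted :=        -- (5) follows by three applications of modus ponens
  h3 (h2 (h1 h4))
```

This records validity only, not soundness; the work of the paper lies in defending premises (1) and (4).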
2
I regard (1) as the most controversial of the claims in the MASTER ARGUMENT . To
argue for it, I first need to introduce the relevant key notions of systematic p-relevant peer
disagreement , and of a defeater .
For the purpose of the following discussion, a defeater is a proposition 4 that bears
against the positive epistemic status(es) enjoyed by a given belief. I will take defeaters to
be of one of three types: doxastic defeaters, which are propositions that function as defeat-
ers in virtue of being believed; normative defeaters, which are propositions that function
as defeaters by being such that the subject ought (from the epistemic point of view) to
believe them; and factual defeaters, which are propositions that function as defeaters by
being true. Along with tradition, I will assume that rational belief and doxastically justified belief are susceptible to doxastic and normative defeaters, but not to factual defeaters; 5 whereas knowledge is susceptible to all three kinds of defeaters.
The “no defeaters” condition in epistemology—whether in the theory of rational
belief, doxastic justification, or knowledge—can be motivated in a variety of different ways. In the theory of rational belief and in the theory of doxastic justification, the move to impose a "no defeaters" condition can be motivated by appeal to the normativity of rational and justified belief. Two points are of relevance here. First, S's belief that p is not rationally held or doxastically justified if S should not (from the epistemic point of view)
believe that p. Second, S should not (from the epistemic point of view) believe that p if
any of the following three conditions hold: (i) S’s other beliefs are such that they are
inconsistent with (or render sufficiently improbable the truth of) the proposition that p,
where this sort of inconsistency is the sort that subjects are expected to discern and
3 I propose a response to this problem in Goldberg (unpublished manuscript).
4 There is some disagreement over whether cognitive states can count as defeaters as well. For a very nice
discussion, see Bergmann (2006: ch. 6). Ignoring this possible complication will not affect the treatment of
disagreement, so I will do so.
5 At least not unless the factual defeater is at the same time a normative or doxastic defeater. For a nice
example of a defeater that is at one and the same time factual and normative, see Gibbons (2006).
(subsequently) to avoid; 6 (ii) S’s other beliefs call into question the goodness of S’s
grounds for believing that p; or (iii) there is a proposition that S should (from the epistemic point of view) believe, and were she to believe this proposition at least one of (i) or (ii) would hold. Here conditions (i) and (ii) identify doxastic defeaters (those propositions the subject believes, belief in which makes (i) or (ii) true), whereas condition (iii) identifies normative defeaters (those propositions belief in which would make (i) or (ii)
true). 7 Since the topic I will be interested in concerns the bearing of a certain kind of
disagreement on rational or justifi ed belief, I will be focusing attention only on doxastic
and normative defeaters, not on factual defeaters.
The notion of peer disagreement I have in mind is the notion of a disagreement among
parties who are roughly equivalent in cognitive competence and intelligence (at least
insofar as these bear on the matter at hand), in judgment bearing on the matter at hand,
and in the relevant evidence they have. I will be focusing only on cases in which both of
the parties not only are, but also regard each other as, peers in this sense. Next, we can say
that a peer disagreement is p-relevant when either (a) the disagreement is over whether p,
or else (b) whether p is true turns on the outcome of what is being debated. The notion
of a systematic p-relevant peer disagreement can now be introduced by way of a distinc-
tion between two types of disagreement. (These two “types” are really extreme points
on a multi-dimensional continuum, but it will be easiest to idealize them and so treat
them as two distinct kinds of disagreement.) Let us say that a peer disagreement regard-
ing whether p is one-off when it concerns just the issue regarding whether p itself, and
nothing more. 8 To be sure, disagreements over whether p typically bleed into other
areas: if I disagree with you (whom I regard as my peer) over whether p, I may well also
disagree with you regarding which of us is more likely to have misjudged the relevant
evidence, and so forth. But this bleeding can be more or less localized; one-off cases are
cases in which the issue regards only whether p and those (few) localized matters that
bear on this disagreement. (A paradigmatic case of one-off peer disagreement would be
the check-splitting case from Christensen 2007.) Peer disagreement regarding whether
p is systematic, on the other hand, when three conditions hold. First, the disagreement is
not localized around the question whether p—that is, the disagreement regarding
whether p is part of a much wider disagreement, with lots of other related matters in
6 The part following “where” is meant to distinguish the cases in question from the sort of case in play in
a preface-paradox-type situation, where the subject reasonably regards herself as having inconsistent beliefs,
but where this fact alone does not show her to be irrational. I will not attempt a characterization of when
a discerned inconsistency is the sort that subjects are expected to avoid, and when it is not—I assume that
we have an intuitive grip on the difference. (With thanks to David Christensen.)
7 The status of normative defeaters is somewhat vexed, owing perhaps to the unclarity of the epistemic
“should believe” as well as unclarity about the motivation various theories in epistemology might have for
postulating such epistemic oughts. To the extent that I rely on normative defeaters in the argument to follow,
I will do so only in contexts where I anticipate a broad consensus in the relevant claims of epistemic
oughts.
8 Both Elga (2007) and Kornblith (2010: 33) speak of "isolated disagreement," which is a sort of disagree-
ment that (in Kornblith’s words) does not “threaten to force [participants] to suspend judgment very
widely.”
dispute. (When a peer disagreement has this property I will call it non-localized.) Second,
the disagreement is widespread, in the sense that at least two of the positions endorsed by
the disagreeing parties have attracted, or are capable of attracting, a substantial and dedi-
cated following. Thus it is not just a disagreement between two people, but between two
(or more) groups of people, each of which is to some degree committed to its claims in
the face of the disagreement. Third, and fi nally, the disagreement is entrenched , in the
sense that the disagreement has persisted for at least some time, with both sides continu-
ing to defend and advance their side, in the face of persistent challenges from the other
side, where the defenses in question remain responsive to the relevant evidence and
arguments. 9 Obviously, these three features are a matter of degree—a disagreement can
be more or less non-localized, more or less widespread, and more or less entrenched. For
this reason it would be best to speak of disagreements as more or less systematic (accord-
ing to the degrees of each of these three characteristics they exhibit), but I will continue
to speak of systematic disagreement simpliciter.
Claim (1) states that in cases in which a subject S believes that p in the face of system-
atic p-relevant peer disagreement, there are (undefeated) doxastic and normative defeat-
ers with respect to S’s belief that p. One immediate objection that might be raised against
(1) is this: the proposition that there is systematic p-relevant disagreement is not a
defeater of one’s belief that p unless there is some independence in the way the various
disagreeing parties reached their views. To illustrate, suppose that views in philosophy
are effectively passed down from teacher to student in the course of PhD training. In
that case one might think that the fact that one’s opponent believes as she does (in a
given systematic philosophical disagreement) reflects more on where she trained than it
does on the probative force of the mutually possessed evidence—thereby decreasing
the distinctly epistemic significance of the disagreement. 10 Now this point must be
acknowledged to be of great importance in assessing the probative force of evidence,
and so is crucial to any discussion of disagreement that is formulated in evidentialist
terms. However, it is of less importance in connection with the issue whether systematic
peer disagreement constitutes a defeater . This is for a simple reason. Below I will be argu-
ing that the “mechanism” by which systematic peer disagreement constitutes a defeater
is by way of making salient the possibility that at least one of the disputing parties to the
debate is unreliable on the matter at hand. In light of this, suppose (with the objection)
that the transmission of views in philosophy is not a rational process of sifting through
the evidence, but instead is a matter of one's being influenced (via non-rational mecha-
nisms) by one’s teachers in graduate school. This fact will only make the possibility of
unreliability on these matters even more salient—and so will only enhance the case for
9 The point of the “responsiveness” condition is to exclude cases in which allegiance to one or both of
the positions is based on non-epistemic considerations.
10 This is based on reflections in Kelly (2010: 145–50). Kelly himself does not speak of defeaters, but
instead of the extent to which something like systematic disagreement threatens widespread skepticism. Still,
I can imagine someone taking the spirit of his points and making claims regarding the extent to which
systematic disagreement constitutes a defeater, in the manner suggested above.
thinking that systematic peer disagreement in philosophy generates a relevant defeater.
(After all, what goes for one's opponent equally goes for one oneself!) 11 On the other
hand, if the acquisition of one’s views in philosophy remains a rational process even in
graduate school—the influence of graduate training in philosophy remains a rational influence, in which the student remains sensitive to the probative force of the evi-
dence—then the fact that a person’s views in philosophy are best predicted by where she
went to graduate school, even if true, is irrelevant to the issue whether systematic dis-
agreement in philosophy generates a defeater. For in that case one cannot downgrade
one’s opponent merely because of where she went to graduate school, and so the argu-
ment to be given below, in defense of the claim that systematic peer disagreement in
philosophy generates a defeater, will be untouched.
With this initial objection dismissed, I want to develop the case for (1), that is, for
thinking that systematic peer disagreement generates a defeater. 12 The case I will be pre-
senting for (1) is intended to be independent of one's views on the epistemic significance of peer disagreement more generally: the case aims to be compelling whether one
favors a “Conformist” or a “Non-Conformist” position. 13 What I will be highlighting is
the distinctive epistemic situation that arises in cases in which the disagreement is sys-
tematic. My claim will be that in such cases one has good reason to think that there is a
serious chance that one of the parties to the dispute is unreliable regarding whether p, under conditions in which one does not know that one oneself is not among the unreliable parties to this dispute. 14 What is more, the rational pressure to accept this combination of claims rises in
proportion to the increased systematicity of the disagreement. This point holds, I will
argue, even if we assume, with the Non-Conformist, that it can be rational to preserve one's
pre-disagreement doxastic attitude towards p even in the face of p-relevant peer dis-
agreement. (The thrust of my argument will be that matters are otherwise when the
disagreement is systematic.)
3
I will divide my argument here into two parts. In the first, I argue that, in cases of sys-
tematic disagreement in philosophy, we have reason to prefer an explanation of the dis-
agreement that calls into question the reliability of at least some of the disputing parties
on the matter at hand (3.1). I will then go on to argue that in these cases none of the
disputing parties is in a position to rule out the hypothesis that she herself is among
those who are unreliable (3.2). Finally, I will bring these considerations to bear on our
conclusion, (1), in section 3.3.
11 See below for a development of the reasoning behind this symmetry claim in this context.
12 Interestingly, the literature on disagreement has had very few discussions that raise the question of
defeaters. The only exceptions of which I know are Frances 2005, Goldberg 2008, Bergmann 2009, and
Lackey 2010 .
13 I borrow these terms from Lackey 2010 .
14 With thanks to Jennifer Lackey for this way of putting the point.
3.1
Suppose that one has a compelling reason to doubt one’s reliability in a particular judg-
ment one makes. In that case, one ought to refrain—it is rationally required that one
refrain—from making the judgment (or at least have a lower degree of credence in the
proposition in question). Let us say that a subject who has no reason to question her reli-
ability on a given matter has an “epistemically clean bill of health” in making the judg-
ment (or forming or sustaining the belief ) in question. Now I submit that, when it
comes to peer disagreement, the claim that a given subject has an epistemically clean bill
of health on the matter would appear to be most plausible in a one-off case of peer dis-
agreement, such as the check-splitting case in Christensen 2007. Here the fact that this is
a one-off case of peer disagreement, on a topic on which it is mutually recognized that
all parties have been reliable on related matters in the past, makes it plausible to suppose
that this disagreement is the result of a temporary (one-time) problem with one (or per-
haps both) of the disagreeing parties. This sort of disagreement is not something that
ought to lead either of the disagreeing parties to question whether she is generally reli-
able on the matter at hand. Indeed, Non-Conformist positions on peer disagreement,
according to which the fact of peer disagreement per se does not rationally require one
to modify one’s doxastic attitude in any way, insist on this point. On such a view, the fact
of peer disagreement (taken by itself ) does not give one any reason to question one’s
reliability on the matter at hand. I take this to be uncontroversial.
But now consider cases of systematic peer disagreement. Whereas in the one-off case it
seems likely that we can explain the disagreement without postulating general unreli-
ability in any of the disputing parties, the explanatory challenge is different when we
consider peer disagreements that are systematic. For here we need to explain not only
the present case of disagreement, but also the various other disagreements that go into
making this an instance of a disagreement that is non-local, widespread, and entrenched.
Given these features of the disagreement, it becomes increasingly plausible to suppose
that at least one, and possibly many or all, of the parties to the dispute is/are unreliable
on the topic at hand. Or so I contend.
My argument on this score takes the form of an inference to the best explanation.
When we seek to explain the systematic nature of the disagreement regarding whether
p, there are various hypotheses to consider. For present purposes, there are two key
choice points. One is whether or not an explanation will need to advert to the unreli-
ability of one or more of the parties to the dispute. The other is whether an explanation
will need to treat all of the parties in like manner (or whether there is room to distin-
guish those “in the right” from those not “in the right”). With this in mind, I highlight
the following four candidate explanatory hypotheses: 15
H1: All of the parties are unreliable on the question whether p;
H2: Some but not all of the parties are unreliable on the question whether p;
15 I am indebted to Jennifer Lackey for the formulations in H1–H4.
H3: None of the parties is unreliable on the question whether p; someone is merely
wrong regarding whether p. 16
H4: None of the parties is unreliable on the question whether p; it’s just that no
one is right regarding whether p (that is, everyone is merely wrong).
H1 and H2 both allege unreliability on the part of at least one of the disputing parties;
H3 and H4 chalk the disagreement up to errors that do not call into question the reli-
ability of any of the disputing parties. My contention is this: when disagreement is sys-
tematic, we have evidence that at least one of the parties, and perhaps many or all, is/are
unreliable on the matter at hand; it is simply not credible to suppose that this sort of
disagreement can be explained by mere mistakes. So to the extent that a disagreement
is systematic, we have reason to favor H1 or H2 over both H3 and H4. 17
To begin, the sort of explanation that would leave unchallenged the reliability of the
various disputing parties in a case of systematic disagreement would leave us with a mys-
tery: if all parties are reliable, so that the disagreement is to be chalked up to mere mistake(s)
on the part of one (or more) of the parties, why then is it that the subject matter of phil-
osophy as a whole, or at least portions of certain central subfields of philosophy,
engender(s) disagreements that are systematic? If it really were a matter of one or both
sides making a mere mistake but still remaining reliable on the issue at hand, we might
expect to see far fewer disagreements, and certainly far fewer systematic disagreements.
To see why, return to Christensen’s check-splitting case. Here we do not think to ques-
tion the reliability of the participating parties because (we can imagine) all parties have a
track record of success on a wide variety of questions that call upon their arithmetic
competence. In this mass of cases, they agree, or would agree, with one another. And
furthermore, if we were to consider the parties’ verdicts in future cases of mental math,
we would predict much more agreement than disagreement. It is against this back-
ground that we do not think to question the competence (general reliability) of either
party in the present case. The disagreement in the check-splitting case is thus seen as a
relatively rare exception to the rule—a case of a mere mistake (or perhaps mistakes: maybe
both parties are wrong).
Of course, matters are otherwise in philosophy: we positively expect widespread dis-
agreement, at least regarding a good number of topics (more on which below). Consider
in this context the claim that systematic disagreements in philosophy are to be explained
by the hypothesis of mere errors on the part of one or more parties to the dispute (where
16 Let us say that S is “merely wrong” that p when (i) S judges that p, (ii) it is false that p, and (iii) S’s judg-
ment that p was formed in a way that does not call into question S’s competence or general reliability on the
question whether p. An example would be the check-splitting case: the fact that S got it wrong does not call
into question her general reliability in matters of arithmetic. (Her being wrong is chalked up, rather, to a
momentary condition—one which she herself could address if given enough time, paper, and pencil, etc.)
17 Objection: if one (but not all) of the parties is not reliable on the topic at hand, this jeopardizes the
claim that the dispute is a peer disagreement. Reply: this is correct but does not undermine the point I will
be making about the sort of disagreement we fi nd in philosophy. I address this matter later in this chapter at
length.
the parties are all assumed to be reliable on the topic at hand). Such a claim faces an
obvious challenge. Why is it that the disagreement remains even after full disclosure,
even after all of the parties continue to give the matter thought, and even after we avail
ourselves of all manner of opportunities to discover our errors? Perhaps it will be said:
because philosophy is just very hard. I don’t disagree; but this, I submit, is a reason to
think that at least some party to the dispute, and perhaps many or even all, is/are not reli-
able on the topic at hand. The contrast with the check-splitting case could not be clearer:
in that case we don’t think to question the arithmetic competence (the general reliabil-
ity) of either participant precisely because we imagine that, were the disputing parties to
do the math on a napkin and discuss it with each other, they would reach agreement. If they
didn’t, we would think to question the relevant reliability of one or both of them. This
suggests that hypotheses H3 and H4 become implausible to the extent that a peer dis-
agreement is systematic.
Our point can be reinforced by reflecting on a case in which you are a mere observer
seeking to explain the distribution of doxastic attitudes among a certain group of people
engaged in a discussion. Suppose that you give a series of quizzes on some topic (about
which you know very little) to this population (about whom you are largely ignorant).
In particular, you don’t know the right answer to the questions on the quiz; and you do
not know how well the participants know the topic in question, but you do have some
reason to think that they are roughly “equals” in their degree of competence and reli-
ability. Examining the results of the quiz you administered, you discover that there is
prevalent disagreement among them as to the correct answers. Further, you discover
that this disagreement remains, and in some cases expands, even after full disclosure—
even as neither of the sides is disposed to charge the other side with ignoring evidence,
or with being less intelligent, and so on. I submit that it would be more reasonable—
indeed, much more reasonable—for you to conclude that at least someone’s reliability is
at issue, than it would be to look for explanations that continue to assume that all parties
are reliable on the topic in question. If unreliability isn’t at issue, it would appear just a
strange coincidence that there is such widespread disagreement on the topic, across a
variety of different questions, even after full disclosure. Our conclusion would appear to
be this: to the extent that peer disagreement is systematic, we have reason to favor H1 or
H2—that is, we have reason to question the reliability of at least some of the parties to
the dispute—over both H3 and H4—that is, over assuming that their reliability is intact
(and that the disagreement is a matter of one or both sides’ being merely wrong, occa-
sion after occasion). 18
This analogy is limited. 19 For one thing, there is a difference between being an
observer to and a participant in a discussion. For another, philosophers who are partici-
pants in systematic philosophical disagreements are unlike the observer I have described
18 Later in this chapter I will consider an argument that purports to show that if we must choose between
H1 and H2 on the basis of considerations like the ones I just put forward, then we ought to choose H1.
19 With thanks to Peter Ludlow for a helpful conversation on this point.
in that philosophers have familiarity, both with their interlocutors, and with the subject
matter about which they are disagreeing. Still, the analogy is suggestive, since it makes
clear that there are cases of an observed systematic disagreement in which we will
explain the disagreement by hypothesizing that at least one side is unreliable on the
topic at hand.
What is more, further support can be offered for my contention that we have reason
to favor an explanation of systematic disagreement which calls at least one party’s reli-
ability into question, over an explanation which insists that none of the disputing par-
ties' reliability is at issue. In any domain, or on any topic, on which there is reliability (if perhaps only reliability among the experts), we would expect a high degree of unanimity in the judgments that are made within the domain (or on the
topic at hand). In the case of philosophy, insofar as no one’s reliability is in question, we
would then expect that, once we restrict our attention to the relevant “experts”—PhD-
trained philosophers who have been interacting with the relevant literature for some
time—we will find at least some agreement on foundational matters. Or, at a minimum,
we might expect that as time goes on, there will be some coalescing around the posi-
tions (truth wins out in the end). Yet in cases of disagreement in philosophy, this is not
what we find: the disagreement is systematic, and it persists. If ordinary epistemic expla-
nations of the fact of disagreement are not to the point—we cannot downgrade our
opponents for being ignorant of relevant evidence, or lacking in intelligence—some
other explanation is needed. The ascription of unreliability to one (or more) of the dis-
puting parties begins to seem a more reasonable explanation.
It might be objected: there is some agreement over such matters as proper methods
and which papers (and authors) are the ones to be read on a given topic. For this reason
(the objection continues) things are not as bad as the (would-be) explanation in terms of
unreliability would lead us to believe. But this objection can be met in two ways. First, it
is not clear how substantial the agreement is: philosophical methodology has become a particularly hot topic these days, but it is also one that generates a good deal of
disagreement in its own right. Consider in this light the recent debates over the epistemic
status of intuitions, the utility of conceptual analysis, the role of linguistic analysis in
philosophical disputation, the proper status to accord to common sense (and to “com-
monsense propositions”), and so on. Consider also relevant concerns raised by feminist
philosophers and others regarding implicit bias, which threaten to call into question the
reliability of a whole range of judgments—including some of those regarding proper
method and “the (contemporary) philosophical canon.” 20 Second, even if it is granted for
the sake of argument that there is agreement both over methodology and over the authors
to be read on a given topic, this agreement is swamped by the tremendous disagreement
over which (first-order) views on the topic are correct, or most reasonable, and over how to weigh the various considerations in play. The persistence of this first-order disagree-
ment would seem a bit strange if in fact all parties are reliable on the topic.
20 Of course, the hypothesis of implicit bias is relevant far beyond philosophy.
Still, those unconvinced by the plausibility of the hypothesis of unreliability—those
who think that any particular case of systematic disagreement in philosophy will be best
explained without calling into question the reliability of any of the disputing parties—
can respond with a challenge of their own. How can it be, they will ask, that we philoso-
phers are unreliable in (some or perhaps many of) the philosophical judgments we
make? What is wanted is a concrete proposal regarding how it could come to pass that a
practice like philosophy could flourish, despite its occasional (regular?) lapse into topics
in which its practitioners are unreliable in their judgments.
This is a large question, one that I am in no position to answer in anything like the
detailed way it deserves. Still, I can make a few relevant remarks. One possibility is this:
while there are reliable belief-forming processes or methods that can be brought to bear
on the philosophical topics in question, it turns out that these processes/methods can be
expected to be significantly less reliable in the philosophical domain in question (the
reliability of those processes/methods being established in other contexts where they
are more ordinarily employed). 21 Another, more radical possibility is that no one is reli-
able on the matter at hand because there are no reliable belief-forming processes or
methods that can be brought to bear on these matters. A possibility which is more radi-
cal still is that in many of the areas of systematic peer disagreement in philosophy, there
simply are no facts of the matter, hence no getting things right. And there are other possi-
bilities as well. 22 Here is not the place for me to endorse one or another possibility as the
proper explanation—I off er them only to give a sense of the range of options one has for
deepening one’s explanation, once one comes to think that the proper explanation for
the systematic disagreements in philosophy will involve appeal to the unreliability of
one or more of the disputing parties.
3.2
So far, I have been arguing that in cases of systematic peer disagreement in philosophy,
we have reason to prefer explanatory hypotheses H1 or H2 over both H3 and H4—that
is, that we have reason to think that one or more of the parties to the dispute are unreli-
able on the matter at hand. But for all my argument so far has shown, there can exist parties to the dispute who not only have it right but are in fact reliable on the matter at hand. What is more, if there is a side that is reliable on the topic at hand, it would appear
21 I develop this idea at some length, under the label of “dangerous contexts,” in Goldberg 2008. It is
worth noting that this possibility does not save us from having to acknowledge a defeater; see Goldberg
2008 for a discussion.
22 In conversation, Sean Ebels-Duggan has raised another possible explanation—but one which would
preserve the hypothesis that all sides are reliable. It is this: the various sides are simply talking past one
another, arguing over different things. If so, all sides can be both reliable and right, and are merely confused
in thinking that they are disagreeing when they are not. I agree that this is a possibility; and I even concede
for the sake of argument that this sometimes is the best explanation of what is going on in particular cases
of philosophical disagreement. What I deny is that this will always, or even usually, be the best explanation
of what is going on in cases of philosophical disagreement. (For a discussion of this move in the context of
the internalism/externalism dispute in epistemology, see Goldberg (forthcoming).)
disagreement, defeat, and assertion 177
to be under no rational pressure to regard as “live” the hypothesis of its own unreliabil-
ity. 23 Or so we might think if we endorse something like the “Right Reasons” view of
Kelly 2005, or some other version of a Non-Conformist view. Since I am trying to
make a case for (1) that is strong no matter one’s approach to the epistemic significance
of disagreement, the challenge is this: how to get from the claim that at least some party is
unreliable on the topic at hand, to the claim that all parties to the dispute have reason to
question their own reliability.
Before responding to this challenge, it is important to have in mind the difference
between the present discussion and standard discussions of the epistemology of dis-
agreement. In standard discussions of peer disagreement, including the discussion in
Kelly 2005, the question is framed as one regarding the epistemic effects of the present
case of peer disagreement. Should the fact that you presently disagree with someone you
recognize as a peer lead you to revise your doxastic attitude towards the disputed propo-
sition? The claim that I am making, and which I will continue to defend below, is
addressed to a slightly diff erent sort of case. The sort of case I am addressing is one in
which the parties recognize that the disagreement is systematic. How should your recognition of the existence of systematic peer 24 disagreement on some topic affect your views (if any) on the topic? Better yet, how should such recognition affect your attitude towards
the prospect of having or arriving at reliable judgments on the topic? My claim has been
that once one recognizes the existence of systematic peer disagreement on some topic,
it is reasonable for one to endorse a hypothesis which questions the reliability of at least
some of the disputing parties. In such a context it may well be that the epistemic effects of finding yet another peer who presently disagrees with one will be negligible; but this is
because the recognition of the systematic nature of the peer disagreement should already
have had substantial epistemic effects. This is a point that even the proponent of the “Right Reasons” view itself should accept.
Consider then the sort of evidence one has when one has evidence of a systematic peer
disagreement. It is not merely evidence of a present disagreement. It is much more than
that: it is evidence of a disagreement among peers that is entrenched, non-local, and
persistent. We skew the significance of this evidence if we focus on the question regarding what one ought doxastically to do given only the present disagreement with an
acknowledged peer. I mention this to dispel the idea that, assuming that Kelly’s (2005)
“Right Reasons” view is correct, the Right Reasoner is under no rational obligation
whatsoever to take seriously the hypothesis that she herself is unreliable on the topic at
hand. This may be so in a case of one-off disagreement. But it strains credulity to think
that this is so in cases of systematic disagreement. Simply put, the two cases are not
analogous in a crucial respect: to have evidence that the dispute in which one is engaged
23 I thank Cian Dorr, Jennifer Lackey, and Tim Williamson, each of whom pointed out something in this
vicinity (in conversation).
24 This should really read: “apparently peer.” If one side is relevantly reliable while the other is not, they
are not actually peers. I will return to the (in)significance of this point later in this chapter.
is a systematic peer disagreement is already to have evidence that puts rational pressure on
one to call into question one’s own reliability—even if in point of fact one is reliable on the matter at hand. Or so I want to argue in what follows.
Suppose that two disputing parties, S1 and S2, disagree over whether p, and that this is
a case of systematic peer disagreement: the dispute over whether p is bound up in a
much larger (and long-lasting) controversy that persists even after full disclosure. Sup-
pose further that in point of fact S1 is reliable on the matter, whereas S2 is not. Our
present question is: given the systematic nature of the peer disagreement, is it reasonable
for S1 to regard herself as reliable on the matter?
One consideration suggesting that it would not be reasonable for S1 to maintain her
belief in her own reliability is this. Above I argued that when it comes to systematic peer
disagreements, neither H3 nor H4—the hypotheses that continue to assume that all
parties are reliable on the matter at hand—can provide a plausible explanation of the
facts of the disagreement. That leaves us with H1 and H2. Now H2—the hypothesis that
some but not all of the parties to the dispute are unreliable on the topic at hand—is con-
sistent with S1’s being reliable on the topic at hand. But there would appear to be a rea-
son why S1 should reject this hypothesis in favor of H1 (according to which all parties to
the dispute are unreliable on the topic at hand). For consider: insofar as there is a reliabil-
ity difference between the disputing parties, the disagreement is not a peer disagreement in the sense defined at the outset. So if S1 regards this as a case of peer disagreement, she has reason to think that the disputing parties are roughly equally reliable. So if I am correct
in thinking that systematic peer disagreements call into question the reliability of at least
some of the disputing parties, it would seem that S1 faces a forced choice between (on
the one hand) her belief that her interlocutors are her peers, and (on the other) her
belief in her own reliability on the matter at hand. If it remains reasonable for her to
continue to regard her opponent(s) as her peer(s), she should conclude that (there is a
serious chance that) she herself is unreliable on the matter at hand. 25
My contention, that S1 should call into question her own reliability on the topic at
hand, can be reinforced from another perspective. Let us start with a more general ques-
tion: how should one address queries as to the reliability with which one has arrived at a
belief or a judgment on a given occasion? Take an ordinary case of perceptual belief or
judgment. If subject S relies on her perceptual faculties in an ordinary situation, it is
downright unreasonable to think that S needs anything very substantial in order to rule
out the hypothesis that she has arrived at her belief in an unreliable way. On the contrary,
to rule out this hypothesis in an ordinary case of perceptual belief, it seems sufficient that
circumstances are, or perhaps merely seem to S to be, ordinary. (After all, we typically
25 Does this mean that the fact of systematic peer disagreement gives us reason to suspect that all of us are
unreliable on the topic at hand? In general, I think that the answer is “Yes.” Still, I think that the point I
defend in the following two paragraphs is a better way to put the point: the fact of disagreement gives me
(you) a reason to think that at least one of the disputing parties is unreliable on the matter at hand, and I
(you) don’t know that it’s not me (you). (Again, I thank Jennifer Lackey for suggesting this formulation. I do
not claim she endorses it, though!)
assume that subjects are entitled to rely on their basic belief-forming processes.) Matters
are otherwise, however, if S has evidence of the possible unreliability of her perceptual
faculties on a given occasion on which she is relying on them. Here, it does not seem
unreasonable to think that S would need more in the way of supporting reasons, if she is
to rule out this possibility. (The demands on her are greater even if in point of fact she is
reliable on this occasion.) But this is precisely the situation that, for her part, a subject
participating in a systematic (apparently) peer disagreement is in. After all, I have been
arguing that cases of systematic (apparently) peer disagreement in philosophy are cases
in which all parties have a reason to think that at least some party to the dispute is unreli-
able. In this context, the hypothesis that one oneself is unreliable is a “live” one, if only in
virtue of the fact that one oneself is among those who are party to the dispute. In this
case, it is not unreasonable to think that one needs some reasons to think that it is not
one oneself who is unreliable.
Turning to the case at hand, then, what reasons could S1 offer in defense of the claim
that she is not unreliable on the matter at hand? This is not a one-off case, where S1 might
appeal to her long (and independently confirmable) track record on related matters. There
is no independent access to the truth of the matter, so S1, like S2, has to go on whatever it
is that she goes on in order to reach a judgment on the disputed matter. In light of this,
consider the situation as it strikes (or should strike) S1. Given that this is a case of systematic
philosophical disagreement (and that S1, like S2, recognizes this), S1 (like S2) should con-
clude that chances are good that at least some party to this dispute is unreliable. In addition,
S1 (like S2) should also appreciate that, given the entrenched nature of the disagreement,
the unreliable party (whoever it is) is not in a position to discern her or his own unreliabil-
ity; otherwise she or he would have done so, and the disagreement would have dissipated.
Since both S1 and S2 regard the other as a peer, both S1 and S2 recognize the other as
(roughly) equally smart, (roughly) equally knowledgeable of the arguments and evidence
bearing on the question at hand, and so forth. Suppose now that S1 thinks that it is not
she herself, but S2, who is unreliable. In that case S1 would have to acknowledge that
someone equally smart, equally knowledgeable of the arguments and evidence, equally
attentive and motivated to get things right, and who would be highly motivated to discern her
own unreliability if she could, nevertheless failed to do so, even having been given a good deal
of time in which to do so. (Here I have in mind the length of time S2 has spent arguing and
thinking about these matters.) But more than this: S1 must acknowledge that it is not only
in the present case, but in the entire history of the dispute, that none of those who are among
the unreliable parties have discerned their own unreliability. And this conclusion, in turn,
should tell S1 something about the nature of the unreliability that is at issue here: this unre-
liability is not discernible by very many people as smart as she is, as knowledgeable of the relevant arguments and evidence, who have had a good deal of time thinking about the relevant issues, who work
in a manner that is at least somewhat independent of others, and so on. But if it is in the nature of
the unreliability at issue here that it is not discernible under such conditions, then this
should temper S1’s confidence in her own assessment that it is not she (S1) who is unreliable. (The same reasoning goes for S2, of course.)
Might S1 try to rule out that she herself is unreliable by proposing a psychological
explanation of the intransigence of the opposition? To do so she might appeal to the
self-interest we each have to defend our own views, and to have it seem that one’s own
views are the best. But this candidate explanation faces an obvious difficulty: it threatens
to explain too much. For one thing, we might wonder why it doesn’t apply to oneself as
well, to explain why one is “sticking to one’s guns” in the dispute—hardly a happy epis-
temic situation to be in. For another, if it is the self-interestedness of philosophers, and in
particular our need to have it seem (if only to ourselves) that our views are right, that
explains the fact of systematic peer disagreements in philosophy, then we would expect
that we would be likely to find such disagreements wherever people had a vested interest in having it seem that their views are right. Yet this is not what we find. So, for example, while it is hardly less true of empirical (experimentally minded) scientists that they have a vested interest in defending their views, even so, we find
decidedly less systematic disagreement, and decidedly more agreement on the basic
topics, in the empirical sciences than we do in philosophy. From this I infer that self-
interestedness alone cannot explain the pervasiveness of systematic disagreements in
philosophy; it seems that the fact of such disagreement tells us more about the (epis-
temically inhospitable nature of the) subject matter of philosophy, than about the psy-
chology of the folks who are rendering judgments regarding that subject matter. So S1
would be left with the question of what explains the disagreement.
3.3
The lesson here is generic. In cases of recognized systematic p-relevant peer disagree-
ment, each of the parties has reason to endorse the following proposition:
DEF (There is a serious chance that) at least one of the disputing parties is unreli-
able regarding whether p, and I don’t know that it’s not me.
I submit that the rational pressure to endorse the hypothesis that someone or other is
unreliable, as the best explanation of the disagreement itself, increases in direct propor-
tion to the extent, prevalence, and duration of the peer disagreement itself. 26 This hypoth-
esis becomes a better explanation as the disagreement appears not to be resolvable
(anytime soon, or perhaps ever). Once we have drawn this conclusion, though, symmetry
26 Discussing whether “counterbalanced” attitudes might cancel each other out, and thus support the
conclusion that the original first-order evidence alone determines what it is rational to believe, Kelly 2010 responds that “The addition of counterbalanced psychological evidence does make a difference to what it is
reasonable for us to believe. For, once the counterbalanced evidence is added to our original evidence, a
greater proportion of our total evidence supports an attitude of agnosticism than was previously the
case” (2010: 143). Although I am putting my point in terms of defeaters, rather than evidence, a related point
could be made in terms of evidence: namely, that under conditions of systematic p-relevant peer disagree-
ment, there is an increase in the evidence in favor of the hypothesis that at least some side is (undetectably)
unreliable in the matter whether p, and this evidence can reach the point where agnosticism is the only
doxastically justified attitude. (While Kelly himself appears to come to much the same conclusion in a sche-
matically described case at p. 144, he goes on to argue that this will not have such skeptical consequences as
one might suppose, pp. 145–50. I am not as sanguine as he is on this score.)
considerations of the sort I discussed in 3.2 appear to put each of the disputing parties in
a position in which she cannot rule out the hypothesis that she herself is among the unreli-
able ones. It is this, I submit, that constitutes a defeater regarding each of the parties’
respective beliefs regarding whether p. 27 Simply put, belief in DEF calls into question the
goodness of the basis on which one believes that p, so that if this proposition is believed, it
satisfies defeater condition (ii) above, and in any case it ought to be believed, and so satisfies defeater condition (iii). It satisfies these conditions even in cases in which one’s belief
that p is in point of fact reliably formed.
In sum, I claim that (1) is true whatever one thinks on the question of the epistemic
significance of peer disagreement in one-off cases. This is because the case for (1) proceeds by way of an inference to the best explanation of the fact of disagreement in
cases in which the disagreement is systematic—and the argument for this does not
assume anything that even the most steadfast (Non-Conformist) view in the epistemol-
ogy of disagreement should want to deny. 28
4
Let us now move on to (4) of the MASTER ARGUMENT. This is a claim regarding
peer disagreements in philosophy, to the following effect:
4. Some cases of philosophical disagreement regarding whether p are cases of
systematic p-relevant peer disagreement.
To establish (4), I need to establish that some philosophical disputes are cases of peer
disagreement in which the disagreement itself is non-local, widespread, and entrenched.
Consider in this light such long-standing disputes as that between internalists and exter-
nalists in epistemology, 29 realists and anti-realists, presentists and four-dimensionalists,
27 Compare as well the foregoing treatment of when disagreement gives rise to defeaters with the treat-
ment in Bergmann (2009: 343–50). The view Bergmann defends is that “If in response to recognizing that
S disagrees with you about p (which you believe), you either do or epistemically should disbelieve or seri-
ously question or doubt the claim that you are, on this occasion, more trustworthy than S with respect to p,
then your belief that p is defeated by this recognition; otherwise, not” (343).
28 Here is an objection: insofar as the truth-value of (1) itself is in dispute, and is part of a systematic peer
disagreement, the result would be that my argument is self-defeating—since in that case we would have (from
the very argument I have given for (1)) a defeater for belief in (1). (With thanks to Nick Leonard, Baron
Reed, and Russ Shafer-Landau, each of whom—independently of one another—raised a version of this objec-
tion with me in conversation.) Although this objection merits an extended reply, it is possible to make clear
the contours of my response. If it is true that the disagreement over (1) is part of a systematic peer disagree-
ment, then the proper response, I submit, would be one of adopting a sort of Pyrrhonian skepticism regarding
both (1) and the issue(s) regarding which there is systematic peer disagreement. In that case, the argument I
am offering here would be akin to the ladder of the Tractatus—something that one must kick away once one
ascends to appreciate the point that is being made. I will be returning to develop this idea in a future paper.
29 This is the example discussed in Kornblith 2010. According to the recent PhilPapers survey (carried out in 2009), if we restrict ourselves to philosophers who claim an AOS in Epistemology, almost an equal number of people endorse or lean towards internalism as endorse or lean towards externalism (36.8% to 35%), and a slightly smaller percentage choose “other” (28.1%). See <http://philpapers.org/surveys/results.pl?affil=Target+faculty&areas0=11&areas_max=1&grain=coarse> accessed October 2011.
cognitivists and non-cognitivists in ethics, or proponents of and skeptics regarding the a
priori. Consider as well disputes which, though of a more recent vintage, still appear to
be systematic (as they are becoming entrenched): the debate over the norm of assertion
(both whether there is such a thing, and if so, what its norm is); the debate over the
source(s) of normativity; between proponents and critics of an invariantist semantics for
“knows”; between individualists and anti-individualists regarding attitude individua-
tion; between proponents and critics of intuition as a source of basic knowledge; between
motivational internalists and motivational externalists in ethics; or between those who
endorse, and those who deny, representationalism regarding sensory content. With a lit-
tle more thought, more examples could be found.
The foregoing are topics on which there has been very active debate. But disagree-
ment in philosophy is not limited to those topics that have been actively debated. On
the contrary, there have been disagreements on topics which, though perhaps not always
rising to the level of active debate, nevertheless are areas on which there is nothing close
to consensus on many fundamental issues. Consider recent discussions on the episte-
mology of modality, or in metaethical discussions of the semantics of “ought,” or regard-
ing the legitimacy of John Rawls’ use of the veil of ignorance, or of the conceivability of
zombies, or of the proper understanding of or rationale for secularism. I suspect it will
be uncontroversial that these are controversial matters. Indeed, I suspect that one is
hard-pressed to come up with many examples of substantial philosophical claims about
which there is broad (if not unanimous) agreement. 30
There can be little doubt but that most or all of these debates can be framed so as to
bring out their status as peer disagreements. It is true that there are those philosophers
who are a bit smarter, or more capable, or more widely read in philosophy than the rest
of us. But it is dubious whether very many of the long-standing debates in philosophy
are such that the smarter, more competent, better-read folks are entirely or even dispro-
portionately represented on one side of the disagreement. Most of us would regard our
adversaries as our peers, at least to a rough first approximation. In addition, it would
appear that we have good reason to do so: there is some agreement, at least at a very gen-
eral level, on the sorts of methods we use in philosophy (although there is disagreement
regarding the legitimacy of some methods); we are roughly equally competent in logic
and other more formal philosophical tools; we even agree, at least in a good many cases,
on what pieces constitute “the” pieces to be reckoned with in a given area; and so
30 It is interesting to note that in the 2009 PhilPapers survey mentioned in n. 29 [for which see:
<http://philpapers.org/surveys/results.pl> accessed October 2011], of the thirty questions posed to phi-
losophers in the category of “target faculty,” only two of these questions were such that one of the answers
received at least 75% endorsement. In addition, only thirteen of the thirty questions were such that one of the
answers received even 50% endorsement. This means that for the other seventeen questions, no single answer
even got a majority—a fact that becomes even more impressive when we remember that endorsing an
answer included not only those who were active proponents but also those who were “leaning towards”
endorsing the view! This does paint a picture of widespread disagreement among “target faculty” in philoso-
phy. A more serious examination of this data would see how (avowed) expertise in a subfield does, or does not, increase disagreement in that subfield.
forth. 31 It seems that we regard one another, and have reason to regard one another, as
peers, despite our disagreement. In this way peer disagreement in philosophy would
appear rampant.
Next, consider the claim that peer disagreements in philosophy are non-localized. It is
of the nature of some (and perhaps most or even all) philosophical topics that they inter-
sect with many other topics, creating a vast nexus of interconnections. This is certainly
true of most or all of the topics above. If you are an individualist about attitude individu-
ation, this will likely help shape your views in many of the topics in philosophy of mind,
philosophy of language, and perhaps epistemology; but so too your anti-individualist
opponent will likely disagree with you on those topics as well. If you are a motivational
externalist, this will likely affect your favored account of moral psychology, perhaps your
account of the conceptual content of “ought,” and perhaps your philosophy of mind as
well; though of course your internalist opponents will likely resist your views on most or
all of these points. More generally, if you disagree with a colleague over a given philo-
sophical topic on which you regard her as roughly your peer, you will also disagree about
the weight to assign to a good many, if not all, of the considerations and arguments each
side adduces as it attempts to make the case for its favored view. And these disagreements
in turn may reflect still further disagreements about proper methodology: fans of zombie
arguments typically endorse appeals to intuition and accept a straightforward connection
between conceivability and possibility, whereas foes of such arguments often question
the “data” of intuition and question as well the alleged link between conceivability and
possibility. Thus it would seem that peer disagreements on such topics are non-localized.
Now consider the claim that peer disagreements in philosophy are widespread. I take this
to be more or less obvious: the debates above are not between a few individuals. Indeed, one
might think that there is a positive correlation between those debates that have labels associ-
ated with the various “sides” and those peer disagreements that are widespread.
Finally, the debates are more or less entrenched. In fact, this appears to be a character-
istic of all of the major debates in philosophy: they persist, with no side ever acquiring
the right to say that it has triumphed over the other. I do not deny that there are debates
that can be settled once and for all. Nor do I deny that debates that at one time are
entrenched might nevertheless be settled once and for all. But the track record of phil-
osophy is not particularly encouraging on this score.
In short, it would seem that we have strong reasons to think that at least some, and
arguably many, philosophical disagreements are systematic peer disagreements, and with
this my case for (4) is complete.
5
Before proceeding to conclude by characterizing the nature of the problem that the
MASTER ARGUMENT presents to us, it is worth noting that the argument, even if
31 I thank Matthew Mullins for making this suggestion.
sound, does not pose a problem unless we make one further assumption. This is the
assumption (which I will label “Philosophical Assertions in cases of Systematic Disa-
greement,” or “PASD” for short) to the effect that
PASD It is common (ordinary; part of standard practice) for philosophers who rec-
ognize that they are party to a systematic p-relevant peer disagreement
nevertheless to persist in asserting that p.
Suppose PASD is false. In that case it is not part of ordinary practice for philosophers to
assert anything when in contexts of systematic peer disagreement. Then even if the con-
clusion of the MASTER ARGUMENT is true—assertions made under conditions of
systematic peer disagreement are (all of them) unwarranted—this has no implications
for the practice or assessment of assertions within philosophy. Of course, if PASD is true,
philosophical practice does involve the assertion of propositions which, by the lights of
the MASTER ARGUMENT’s conclusion, are (all of them) unwarranted. This is the
unhappy result of which I spoke at the outset of this paper. In light of the role of PASD
in securing the unhappiness of the MASTER ARGUMENT’s conclusion, I want to
spend some time discussing its plausibility.
Let me begin with what I regard as a misguided reason to think that PASD is false.
One might think that PASD is false on the grounds that in philosophy we typically argue
for our views (rather than assert them). There are two points I’d like to make in reaction
to this misguided objection to PASD.
First, the reaction is based on a contrast which, as stated, is confused. It assumes, con-
fusedly, that one does not assert what one argues for. If this were true, then one would
never count as asserting the conclusion of one’s arguments. But surely one does assert the
conclusion of one’s argument (at least when one makes that conclusion explicit). What
is correct is that in arguing for a conclusion, as we do in philosophy, one’s assertion is not
bald: one is not encouraging one’s interlocutor to accept the claim in question merely on the basis of one’s having said so. Even so, a non-bald (argument-backed) assertion remains
an assertion. Once this is clear, a related point becomes obvious. Take a case in which S
asserts that p, where it is mutually manifest that this assertion was made on the basis of an
argument S just gave. Even so, these facts do not ensure that S’s assertion of p is war-
ranted. On the contrary, the point of S’s argument is precisely to make manifest the basis
on which she asserts that p; her assertion that p is warranted only if this basis (the argu-
ment) is such as to render her assertion normatively acceptable (as satisfying the norm of
assertion). So even though it is true that claims in philosophy are typically backed by
argument, this does not show that such claims are not asserted, nor does it show that
such claims (when asserted) are warranted.
But there is another point to be made on this score. Even if it is often true that in phi-
losophy we argue for our views, it is not universally true. No one can possibly argue for
all of the claims she makes; some claims must be starting points for argumentation. My
point here concerns, not the nature or structure of justification, but a limit on the presentation of one’s views in speech. I mention this if only to forestall the thought that this
disagreement, defeat, and assertion 185
point can be resisted for example by endorsing a coherentist epistemology. According to
coherentism, there are no epistemic (justificatory) starting points. Even if this is true
(which I doubt), it does not bear on my point about the limitations of argumentation in
speech. Indeed, it is not hard to think of the sorts of assertions which at least some of us
philosophers make at the outset of our arguments, and for which we offer little, if anything, in the way of an argument. These are typically the claims we find “intuitive.” To be
sure, some of these claims are no longer particularly controversial: that the Gettiered
subject does not know is not something regarding which one expects to find
disagreement (but see Weatherson 2003). However, some of the claims we make in philosophy
are made despite the facts that (a) we have no argument to back them up and (b) they
remain controversial. Indeed, it is hard to see how it could be otherwise. Suppose you are
trying to convince a philosopher of what you regard as the exorbitant costs of her view,
and after doing so she remains unimpressed. “But don’t you see?” you exclaim, “These
implications of your view are absurd!” But she denies this; she maintains that they are
not absurd, and questions you on this. At a certain point, the dispute is left with the
familiar table-thumping. Alas, this is hardly an uncommon phenomenon in philosophy.
It seems that the two sides are trading assertions about the relative implausibility of each
other’s implications.
Nor are philosophical assertions (made under conditions of systematic peer
disagreement) limited to claims regarding the plausibility of (the implications of) our first-order
views. On the contrary, they include claims of philosophical substance. The following
situation should be familiar (and examples could be multiplied). A proponent of motiva-
tional externalism is at a conference at which she gives a paper. In the paper she advances
a new argument. The argument itself appeals to some familiar claims, and some not-so-
familiar claims. But among the familiar claims are some claims of substance. She is aware
that some of these claims are controversial, and will be rejected by some in her audience.
Still, she asserts them nevertheless. For how else could she advance her argument? Must
she wait until all relevant systematic peer disagreements have been settled before asserting
the premises of her argument? That appears to be a recipe for philosophical paralysis.
I have sometimes heard it said (in conversation) that there really are no straight (first-
order) assertions of controversial matters in philosophy, only speculations and condi-
tional (or otherwise hedged) claims. 32 A characteristic claim in this vicinity is that
philosophers do not flat-out assert their claims, but instead suggest them tentatively, or
with something like a “we have some reason to think that” operator in front of them. 33 I
agree that this is sometimes the case. But I find it dubious in the extreme to think that all
cases of apparent assertions made in philosophy under conditions of systematic peer
disagreement are like this. Surely there are some cases in which a philosopher continues
to assert that p, despite the systematic p-relevant peer disagreement. 34 Here, two points
32 Peter Ludlow (among many others) has suggested this to me (in conversation).
33 This point was suggested to me by Cian Dorr. (I do not know whether he endorses this view.)
34 Indeed, some philosophers have even conceded as much in their own case.
186 sanford goldberg
of support can be made. First, it should be obvious to anyone who has participated in or
observed philosophical practice that there are (some, and arguably many) occasions on
which a claim is advanced under conditions of systematic peer disagreement without
any explicit hedge or “there are reasons to think” operator in play. For this reason, if the
hedging proposal is to work, it must postulate an implicit (linguistically unmarked) hedge
or “there are reasons to think” operator in play in all such cases. But such a global postu-
lation would appear to be fully theory-driven, and so ad hoc. What is more (and this is
my second point), there are independent reasons to think that such a postulation is not
warranted. In particular, the suggestion—that philosophical practice under conditions
of systematic peer disagreement always involves hedged rather than straight assertion—
appears to be belied by aspects of our practice. Why the vehemence with which some
(apparently first-order, categorical) philosophical claims are made, even under
conditions of systematic peer disagreement? Why so much heat, if all we are doing is entering
hedged claims? Why do we go to such great lengths to try to defend our claims in the
face of challenge? Why not shrug off such challenges to our claim that p, with the
remark that, after all, we were merely claiming that there are reasons supporting that p?
(Relatedly: why is it that the typical response to challenges is to try to defend the claim
that p, not the (weaker) claim that there are reasons to believe that p?) Finally, if all we are
doing in philosophy is entering hedged claims, why is talk of our philosophical
“commitments” so prevalent? Reflecting on this practice, I conclude that assertions are made
in philosophy, even in the face of systematic peer disagreement. PASD is true.
6
And so concludes my case for thinking that the MASTER ARGUMENT presents us with
a problem. In a nutshell: we appear to make assertions in philosophy, even in the face of sys-
tematic peer disagreement; yet a valid, apparently sound argument would lead us to con-
clude that these assertions are, each and every one of them, unwarranted. To be sure, this
argument depends on two further assumptions I have not explicitly defended here: first,
that if there are relevant (undefeated) normative or doxastic defeaters bearing on S’s belief
that p, then S neither knows, nor is doxastically justified in believing, that p; and second, that
if S neither knows, nor is doxastically justified in believing, that p, then it is not warranted
for S to assert that p. Both of these are widely endorsed claims. The former is endorsed by
virtually every epistemologist whose theory gives a role to defeaters, and the latter is
endorsed by everyone who thinks that assertion has an epistemic norm of some sort or
other (rationality, justification, knowledge, certainty, or what-have-you). 35 But given these
further assumptions, the MASTER ARGUMENT appears to present us with a problem.
35 Not everyone thinks that assertion is governed by an epistemic norm. Some think that assertion is not
governed by a norm at all; others, that while it is governed by a norm, that norm is not epistemic (the
leading non-epistemic norm is truth; see Weiner 2005). However, the view that assertion is governed by an
epistemic norm remains the view of the vast majority of those working on assertion. (The real debate is over
what that norm is.)
Some might think that we should simply accept the conclusion. Recently, several
people (myself included) have argued that insofar as we believe our philosophical claims
in the face of systematic peer disagreement, these beliefs are by and large doxastically
unjustified. 36 If such a view is correct, one might think that it is not really that much
further to hold that our assertions of these believed claims are unwarranted (one and all). If
assertion is the outer manifestation of belief, perhaps we should expect that our philo-
sophical assertions (made under conditions of relevant peer disagreement) are system-
atically unwarranted.
Since I find the MASTER ARGUMENT’s conclusion unhappy, I find this reaction
unhappy. I might put the matter as follows. Philosophizing as an activity flourishes
(only?) in the context of dialogue and debate. If the practice is to flourish, 37 we should
want it to include a speech act by which one can add to the stock of propositions taken
to be true in the course of conversation. But this gives us a reason to want the practice to
include a speech act in which one presents a proposition as true in such a way as to
implicate one’s own epistemic authority. In short, we have reason to want the practice of
philosophizing to include assertions. At the same time, philosophy is a subject matter in
which epistemically high-quality belief is hard to come by, perhaps (in certain domains)
even practically impossible. The result is that the activity of philosophizing itself is
something of a strange bird: it is a practice whose flourishing requires the performance of a
speech act whose norm we appear systematically to violate in engaging in that practice.
It may well be that one can decrease the strangeness of this practice by noting that, since
we are all aware of the contours of this situation (if perhaps only implicitly), we allow
each other to “get away with” making assertions under conditions when, strictly speak-
ing, such assertions are unwarranted. But to my mind this makes a mockery of the prac-
tice of assertion. It also makes a mockery of the activity of philosophizing. Is philosophy
really such a shady activity that it requires us to allow each other to “get away with” sys-
tematic violations of speech act norms? On the contrary, there is something slightly
paradoxical, even offensive, in the thought that the flourishing of philosophical practice
comes at the cost of systematically unwarranted assertions. This is a conclusion we ought
to avoid if at all possible.
I should add one final word by way of how I think that this problem ought to be
addressed. (Considerations of space prevent me from developing this at length, but I
will give a taste of what I think a solution looks like.) 38 My proposal—and I anticipate
that this will be very controversial!—would be to deny (3), the claim that if S
neither knows, nor is doxastically justified in believing, that p, then S is not warranted
36 This is a view recently advocated in Goldberg 2008 and Kornblith 2010. Frances 2005 appears
to be broadly sympathetic to this view, as a special case of a much more general sort of skepticism that is
generated by disagreement.
37 Might it be objected that insofar as philosophical belief is doxastically unjustified (in the face of
systematic peer disagreement), philosophical practice ought not to flourish? I think not. The value of
philosophy might well lie in its production of some non-epistemic good. This is a theme to which I hope to return
in subsequent work.
38 See Goldberg (unpublished manuscript) for a detailed defense of the position I am about to describe.
in asserting that p. The burden on such a position is to show that this cost is not as dra-
matic as one might think. My strategy for doing so is to defend a context-sensitive norm
of assertion, as follows: while knowledge is the default norm, this default can be
defeated, and the standards provided by the norm can be raised or lowered in a given
context of conversation, according to relevant mutual beliefs in the context of the
conversation. Suppose that in a given context it is mutually believed by all participants that
knowledge and other epistemically high-grade information is hard to come by in the
domain under discussion, and that it is also mutually believed that, even so, there is a
need for information that has a certain mutually acceptable level of epistemic creden-
tials. In that case, given what is mutually believed, the default (knowledge) norm is
defeated, and a less-demanding norm is in play. I would then want to argue that the
practice of philosophy is one such area where there is (broadly) mutual belief (at least
among professional philosophers!) to the effect that none of us have much, if any,
knowledge, or indeed even doxastically justified belief, on controversial philosophical
matters; and the practice of assertion within philosophical contexts then reflects this
mutual awareness. Now assume that this aspect of philosophical practice, whereby the
prospects for justified belief are remote, is itself something regarding which there is
mutual awareness. In that case, even supposing that each of us (recognizes this aspect of
the practice and so) does not flat-out believe the claims we make in the context of
philosophical disagreement, our assertions are not insincere, since given what is
mutually familiar in such contexts no one would expect flat-out belief. 39, 40 This “solution”
to the problem is no mere “cheat,” since it appeals to something that (it is alleged) is a
perfectly general feature of speech exchanges: there are other contexts in which what is
mutually familiar in a given context can affect the standards imposed by the norm of
assertion. (Perhaps one can see this phenomenon in connection with the more theo-
retical areas of the social and physical sciences—at least the more controversial parts of
these.) Obviously, such a picture would need to be independently motivated; 41 and it
would need to address many questions (e.g. what to say about cases where the parties
have mistaken views about what is mutually believed in context? Would the assertion
of the various claims that comprise the account, in contexts of the sort of disagreement
with which such assertions are likely to be met, satisfy the account’s own standards?
And so on.) In this paper I can only claim to have motivated the problem to which
such an account is a proposed solution; whether it succeeds, however, is a matter to be
addressed on another occasion.
39 I thank Cian Dorr for the objection that led me to make this point.
40 What then is the standard imposed by the norm of assertion in contexts of philosophizing about
controversial matters? Perhaps something like this: sufficiently dialectically responsive to objections.
41 In Goldberg (unpublished manuscript) I try to motivate this in terms of Grice’s Cooperative Principle:
part of being cooperative is to say only that for which one has adequate evidence (Quality), where what
counts as adequate evidence is determined by mutual beliefs regarding the needs and expectations of the
various participants in the conversation.
References
Bergmann, M. (2006) Justification Without Awareness (Oxford: Oxford University Press).
—— (2009) “Rational Disagreement after Full Disclosure,” Episteme 6 (3): 336–53.
Christensen, D. (2007) “Epistemology of Disagreement: The Good News,” Philosophical Review
116 (2): 187–217.
—— (2011) “Disagreement, Question-Begging, and Epistemic Self-Criticism,” Philosophers’
Imprint 11 (6): 1–22.
Elga, A. (2007) “Reflection and Disagreement,” Noûs 41 (3): 478–502.
Feldman, R. (2006) “Epistemological Puzzles about Disagreement,” in S. Hetherington (ed.)
Epistemology Futures (Oxford: Oxford University Press), 216–36.
Frances, B. (2005) Scepticism Comes Alive (Oxford: Oxford University Press).
Fumerton, R. (2010) “You Can’t Trust a Philosopher,” in R. Feldman and T. Warfield (eds.)
Disagreement (Oxford: Oxford University Press), 91–110.
Gibbons, J. (2006) “Access Externalism,” Mind 115 (457): 19–39.
Goldberg, S. (2008) “Reliabilism in Philosophy,” Philosophical Studies 124 (1): 105–17.
—— (2011) “Putting the Norm of Assertion to Work,” in J. Brown and H. Cappelen (eds.)
Assertion (Oxford: Oxford University Press), 175–96.
—— (forthcoming) “What is the Subject Matter of the Theory of Justification?” in J. Greco and
D. Henderson (eds.) The Point and Purpose of Epistemic Evaluation (Oxford: Oxford University
Press).
—— (forthcoming) “Mutuality and Assertion,” in M. Brady and M. Fricker (eds.) The Epistemol-
ogy of Groups (Oxford: Oxford University Press).
Goldman, A. (1986) Epistemology and Cognition (Cambridge, MA: Harvard University Press).
Kelly, T. (2005) “The Epistemic Significance of Disagreement,” in T. Gendler and J. Hawthorne
(eds.) Oxford Studies in Epistemology , i (Oxford: Oxford University Press), 167–97.
—— (2010) “Peer Disagreement and Higher-Order Evidence,” in R. Feldman and T. Warfield
(eds.) Disagreement (Oxford: Oxford University Press), 111–74.
Kornblith, H. (2010) “Belief in the Face of Controversy,” in R. Feldman and T. Warfield (eds.)
Disagreement (Oxford: Oxford University Press), 29–52.
Lackey, J. (2010) “A Justificationist’s View of Disagreement’s Epistemic Significance,” in
A. Haddock, A. Millar, and D. Pritchard (eds.) Social Epistemology (Oxford: Oxford University
Press), 298–325.
Weatherson, B. (2003) “What Good Are Counterexamples?” Philosophical Studies 115 (1): 1–31.
Weiner, M. (2005) “Must We Know What We Say?” Philosophical Review 114: 227–51.
8

Can There Be a Discipline of Philosophy?
And Can It Be Founded on Intuitions? 1

Ernest Sosa

Armchair philosophy has come under attack through experimentalist survey results.
These are said to uncover disagreement in people’s responses to thought experiments.
These responses allegedly reflect cultural and socioeconomic backgrounds, not intuitive
perception of some objective order.
When people ostensibly disagree on a thought experiment, however, they respond to
the scenario as it appears to them. Since the same text can be read in different ways, the
surveys may reveal no real intuitive disagreement, based on cultural or socioeconomic
background. Instead they may reveal only people talking past each other, as they vary in
how they read the text. Maybe it’s really these different readings that manifest the
differences in background. What is more, disagreement that pits experts against casual
respondents may pose no real threat to expert intuitions.
That defense of intuitions has been offered in the past. More recently, a natural sequel
to the earlier survey-based attack is philosophically deeper. Here I will respond to this
more sophisticated treatment. In the end our dialectic has a troubling upshot, which
I try to accommodate in defense of the armchair.
1 The dialectic up to now
Experimental philosophers have argued against armchair philosophy based on one main
lemma: that intuitions on philosophical thought experiments disagree extensively. 2
1 This paper was originally published in Mind and Language (2011), 26 (4): 453–67, and is reproduced by
kind permission of Wiley-Blackwell.
2 They have so argued, based on that lemma, and this has been perhaps the main, best known attack on the
armchair. It is this line of attack that we take up in what follows. Other objections have of course been leveled
against the armchair, based for example on order effects. But these problems must still be shown to be serious
enough. The attack must somehow move from the premise that there are sources of unreliability (which there
are for all human sources of knowledge) to the conclusion that these sources of unreliability are problematic
enough to yield whatever practical conclusion one might wish to draw about the armchair.
Since intuitions disagree, they cannot all be perceptions of some objective philosophical
order. Not every disagreeing intuition can be a perception of a fact. Some at least must be
misperceptions. With misperception common enough among intuiters, intuition sinks
into disrepute. In that case, intuitions are explained through the influence of culture,
perhaps, or socioeconomic status, or which graduate program one is in or hails from.
It is this line of attack that is put in doubt because intuition reports respond most
directly to the texts of thought experiments. What a text puts before a subject’s mind
depends on how it is read. Culture and socioeconomic status may influence how the
texts are read and not so much how subjects react to shared contents.
The critique from survey results has recently given way to a philosophically deeper
argument. Intuition is now said to be untestable as an epistemic source for armchair
philosophy. At least, so the argument goes, armchair intuition is insufficiently testable. It
is hopeless that way. 3 It has nothing like the sort of standing enjoyed by scientific
observation. Forms of scientific observation are all highly testable, and indeed test well.
This new critique is a natural sequel to the objection based on survey-revealed clashes.
If intuition is to be saved, we must consider whether a certain sort of intuition can attain
better epistemic standing. Perhaps expert intuition is defensible despite its disagreement
with street-corner opinion. But this will require testing such intuition for epistemic
efficacy. And its failure to be properly testable would block that line of defense.
This further critique does raise important issues. In what follows, I will try to accom-
modate its insights while limiting the damage to the armchair. Eventually these objec-
tions to the armchair—both the survey-based objection, and the charge of hopeless
untestability—lead to a far more troubling critique whose premises are well known to
all philosophers already, with no need of experimental help.
Our topic is the epistemology of philosophy. Here fi rst is some relevant background.
2 The need for foundations
Philosophy wants to know what in general makes our attitudes justified, if and when
they are. Epistemology in particular inquires into how our beliefs are justified. 4 Some
owe their justification to further beliefs on which they are based. A further belief can
give justification only when it has justification of its own, however, which ushers in the
familiar regress/circle/foundations problematic.
Beliefs cluster in rationally structured wholes. How so? One option is the founda-
tionalist pyramid with its asymmetry of support. Coherentists, by contrast, allow mutual
support. For the coherentist, our belief system is a raft that floats free, each belief
deriving all its epistemic justification from its place in the structure. Although it is good for a
3 Weinberg, J. (2007) “How to Challenge Intuitions Empirically Without Risking Skepticism,” Midwest
Studies in Philosophy 31 (1): 318–43.
4 How they are epistemically justified, how they attain the justification that is constitutive of knowledge. The
confidence of a hospital patient that he will recover, or of an athlete that he will prevail, may derive pragmatic
justification by enabling success, without contributing to the epistemic standing of such confidence as knowledge.
body of beliefs to be so well integrated, however, it is not so good for it to be unmoored
from the world beyond. Such beliefs would be lacking, epistemically lacking. Mutual
basing cannot be all there is to justification.
We can of course define a kind of internal justification available even to the brain in a
vat. But epistemic evaluation involves more than such internal status, even apart from
Gettier issues. Intellectual competence requires more than free-floating coherence.
There is a broader competence that more fully constitutes knowledge. 5 What in general
renders a belief competent? Often it is the belief ’s being based rationally on other beliefs
competently formed in their own right. But that can’t go on forever, nor can each of a set
of beliefs be competent by being based rationally on the others, and nothing more.
Some beliefs must be competent through something other than support from other
beliefs (at least in part through something other than that). 6
Perception and introspection are thought to give us what we need. Perceptual and
introspective beliefs need not be based on other beliefs. They can be based rationally
on states beyond justification and unjustification, such as sensory experiences. A
sensory experience can ground either or both of the following: (i) a perceptual belief
about the surroundings; (ii) an introspective belief about the presence of that very
experience. Perceptual and introspective beliefs are thus foundationally rational. They
are conceptual deployments evaluable as justified based on other mental states, which
provide reasons for which they are held, and not just reasons why they are held. These
other states are regress-stoppers, by having only one foot in the space of reasons. They
provide justification without in turn requiring it, or even sensibly allowing it.
3 Beyond the given to competence
Consider a belief whose content is a certain proposition, <p>. 7 What relation must an
experience bear to that belief in order to give it epistemic support? Foundationalists of
the given offer a twofold answer.
Take first an experiential state E with that same propositional content <p>. B(p) is
said to derive justification from being based rationally on E(p), as when B(p) is the belief
that one sees that p, and E(p) is a visual experience as if one sees that p. That is how
perceptual justification is supposed to work.
Introspective justification works differently. E might itself amount to the fact that p, as
an ache suffered by S might amount to the fact that S aches (in a certain way).
Alternatively, it might constitute a truth-maker for that fact.
5 Consider the kindred view that intellectual seemings, or attractions to assent, are ipso facto “justified.”
This falls short in a similar way.
6 Presumably not even internalists would think that any belief whatsoever held in the absence of reasons
(at that time or earlier) is ipso facto prima facie justified, so they face the question of what the further source
of justification might be, beyond rational basing. This will be seen to push them towards a more objective
conception that includes competence and not just blamelessness.
7 A belief with propositional content <p> might have as its conceptual content the thought [t] when [t]
is a mode of presentation of <p> to the subject at the time. Although a full treatment would need to go into
this distinction and its implications, here for simplicity we work with propositional contents only.
Our twofold answer faces the speckled hen problem on both its perceptual and its
introspective side. Someone might believe that his conscious experience is of a certain
sort, or that his surroundings have a certain perceptible feature, while his visual experi-
ence is of that sort, and does have the content attributed to the surroundings, and yet his
belief might fall short nonetheless. Why so? Because the subject’s ability to subitize is
limited to complexity of degree four or so. 8 If one takes the dots to number eight—
whether on the seen surface, or in one’s subjective visual field—one may be right only
by luck. Neither the perceptual nor the introspective belief is then so much as compe-
tently formed: it oversteps one’s subitizing limits.
Even for proper understanding of perceptual and introspective foundations, therefore,
one must invoke the subject’s competence. Here the competence is reason-involving.
Only by subitizing based on concurrent experience can the subject properly form
certain perceptual and/or introspective beliefs, which are thereby justified. The difference
is that beliefs so based are reliably true. The subject is able competently to discern
whether there are three visible dots on the seen surface, or in his visual field.
Must all foundational competence be thus reason-based? That is put in doubt by the
possibility of reliable blindsight. In the actual world blindsight has low reliability. But it
might easily have been more reliable. What are we to say about such easily possible
blindsight judgments? Are they all incompetent? They deploy concepts, as do contentful
seemings and beliefs generally. Are blindsight deployments all necessarily faulty, even
when they are nearly infallible?
Consider, moreover, basic arithmetic, geometry, and logic. How do we gain access to
such facts? Does anything mediate our access to them in the way visual sensory experi-
ence mediates our access to visible facts? Not plausibly. The simplest beliefs of math and
logic have no discernible basis, none beyond one’s inclination or attraction to assent.
Thus are we led to a conception of intuitions as seemings, as inclinations or attractions
to assent. Intuitions are seemings based on nothing beyond sheer understanding of the
question. Such seemings are conceptual: assent requires understanding, and understand-
ing requires concepts. This distinguishes seemings from sensory experiences. One can
experience contents that one could not entertain in thought, for lack of the required
concepts. In contrast, seemings are conceptual deployments, and rationally evaluable as
such.
4 The importance of competence
The foregoing suggests that the importance of experience in epistemology is vastly
overrated. Major categories, and distinctions among them—the a priori/a posteriori
distinction for one—should not turn on something so limited in epistemological
importance, so limited by comparison with competence . Nor of course should major
8 Whether the items are moving, or very big, or qualitatively different from each other, etc., will
presumably matter.
divisions—such as rationalism versus empiricism—be defined by reference to
experience. 9 More plausibly we can first highlight our competent access to particular
contingencies through some combination of introspection, perception, testimony, and
memory; and then distinguish a posteriori knowledge as follows:
A posteriori knowledge is knowledge of particular contingencies or knowledge
through inference from such knowledge. 10
A priori knowledge then involves some source other than inference from particular con-
tingencies. This includes abstract general knowledge (not of particular contingencies).
Given the notion of competent access to particular contingencies, we can ask: is our
general, substantive knowledge always based on such particular data? Is it always ultimately
so-based through inductive reasoning? Advocates of empirical science can then line up on
one side, while on the other side gather rationalists claiming direct access to abstract, gen-
eral substantive truths, not mediated essentially by knowledge of particular contingencies.
We can thus recover the a priori/a posteriori, rationalism/empiricism oppositions
with no dependence on dispensable experience.
What distinguishes foundational seemings that are justified? Shall we adopt a
latitudinarian position for which all intuitive seemings are thereby, automatically justified? This
would include all seemings based on nothing other than presentationally given,
subjective states beyond justification and unjustification. 11 Unfortunately, this would entail
that biases and superstitions are all justified, if they are imbibed from one’s culture with
no rational basis, which is how such beliefs are too often acquired. They are absorbed
through enculturation that need include no explicit verbal instruction. They enter rather
through the osmosis of body language, tone of voice, the perceived behavior first of
elders and later of peer trend-setters, and so forth.
That poses the problem of justified intuition: what distinguishes intuitions that are
epistemically justified from those that are not? Recall that even beliefs based on the
phenomenal given can derive justification only through the subject’s relevant competence.
Accordingly, justified intuitions can now be distinguished by invoking a competence that
does not involve the basing of beliefs on reasons. Rational intuition may instead sub-
personally enable us to discern the true from the false in the relevant abstract subject matter.
What distinguishes rationally justified intuitions from intuitive though irrational biases
and superstitions? What’s distinctive of justified intuitions, I suggest, is that they manifest an
epistemic competence, a rational ability to discern the true from the false, and to do so
reliably. 12
9 Of course, historical divisions might still be defined quite properly in terms of experience if that is how
the historical protagonists conceived of the matters in dispute.
10 With inference understood broadly, as rational basing.
11 Based on nothing but such states except only for the subject’s understanding of the proposition
involved. In what follows this exception will be left implicit.
12 Here and in what follows I assume that the “rational” justification of a belief or a seeming need not
derive from its being based on reasons; rather, it can be present simply because that belief or seeming is
competently acquired, involves the deployment of concepts, and can be properly influenced by reasons.
can there be a discipline of philosophy? 195
5 Competence and hopeful testability
Consider again the faculty of blindsight. Even if we had no independent confirmation of its reliability, blindsight would be an epistemic gift: a way to discover truths concerning, for example, the orientation of facing lines. That is not nothing, epistemically. It is what it is: a reliable mode of access to a certain body of truths. Nevertheless, the deliverances of that source can still fall short. Proper epistemic practice might preclude their acceptance at face value. It might instead require suspension of belief, or at least suspension of endorsed belief. Suppose, again, that the source of these deliverances is not independently testable. Suppose further that we have no theoretical understanding of its modus operandi. How then could we possibly endorse such deliverances? Their source is deplorably untestable, while only testable sources are subject to rational correction. Only such sources are subject to rational calibration, so as to be given proper, reliable scopes of application.13
And the same goes for intuition as for blindsight. How does intuition's low-test standing bear on its use as a source of evidence in philosophy? If intuition is not so much as sufficiently testable, it has little chance of testing well. A pall is thus cast over continued use of such methodology. Even if we should not abandon it forthwith, doubt still clouds continued reliance on such a thin reed.
Why thin? Might not a source be highly reliable while not much amenable to proper testing? A highly reliable source might admit paltry external corroboration, or none. Furthermore, it might yield no deliverances of high-enough quality to be discernibly
13 Jonathan Weinberg has recently invoked such a concept of the hopeful in his renewed critical opposition to the armchair (2007). His concept is close kin to our concept of the sufficiently testable (the closest kinship being the limiting case of identity). Robert Cummins had earlier already launched a similar attack on armchair intuitions, based on the supposed fact that they are not subject to independent calibration. (Cummins, "Reflections on Reflective Equilibrium," in W. Ramsey and M. DePaul (eds.) The Role of Intuition in Philosophy (New York: Rowman & Littlefield, 1999), 113–27.) Our concept of the testable enough is also closely related to the self-correctiveness invoked by methodologists of science for many decades prior to its most famous use by C. S. Peirce, followed eventually by a descendant of that line of argument in the work of Hans Reichenbach and Wesley Salmon: Reichenbach, Experience and Prediction (Chicago: University of Chicago Press, 1938), and "On the Justification of Induction," The Journal of Philosophy 37 (1940): 97–103; Salmon, "Hans Reichenbach's Vindication of Induction," Erkenntnis 35 (1991): 99–122. It has been argued in this line that the methods of science will uncover their own errors, at least in the long run. The attempt to show that this is so has encountered serious objections, however, discussed for example in publications by Larry Laudan and Nicholas Rescher: Laudan, "A Confutation of Convergent Realism," Philosophy of Science 48 (1981): 19–48. Also: Science and Hypothesis: Historical Essays on Scientific Methodology (Amsterdam: Springer, 1981), ch. 14. Rescher, The Limits of Science, 2nd edn. (Pittsburgh: University of Pittsburgh Press, 1999); Nature and Understanding: A Study of the Metaphysics of Science (Oxford: Oxford University Press, 2000); Human Knowledge in Idealistic Perspective (Princeton: Princeton University Press, 1992). Compare, finally, the weaker requirement in Hartry Field's twist on the Benacerraf problem: that we can attain knowledge of a domain only provided we do not believe that there's no way for us to understand how our beliefs about that domain could be formed reliably enough. (Field, Realism, Mathematics, and Modality (Oxford: Blackwell, 1989)). On pp. 232–3, we are told that "we should view with suspicion any claim to know facts about a certain domain if we believe it impossible to explain the reliability of our beliefs about that domain." Also relevant are the last two sections of Field's "Recent Debates about the A-Priori" (in Gendler and Hawthorne's Oxford Studies in Epistemology, i (Oxford: Oxford University Press, 2005), 69–89).
196 ernest sosa
reliable. Its workings might also be quite opaque to our theoretical understanding, finally, while its deliverances neither cohere nor clash, not discernibly.
Say one has the gift of blindsight, again, with no theoretical understanding of its workings. Say one can assess that faculty only by direct reliance on its own deliverances, which seems viciously circular. Acceptance of such deliverances would then be restricted to a first-order animal level. One would lack any perspective from which to endorse them. Only aided by such a (now missing) perspective, however, could one possibly ascend to reflective knowledge. Despite that, one might still attain animal knowledge, with its required reliability, through blindsight access to the relevant domain of facts.14
We have relied on a thought experiment involving a faculty of blindsight that is reliable though untestable. Anyone unpersuaded might compare early astronomy, based as it was on a commonsense perception still poorly understood, one whose deliverances about the night sky enjoyed little external corroboration. I hear the reply: "Well, then those early efforts had little epistemic worth." Little indeed, in those early days, compared to our astronomy today; little, but not zero.15 And the same may be true of us as we peer into dark philosophical issues. Here again proper reliance on a source is compatible, surely, with how uncertain its deliverances may be when we target philosophical issues, now by comparison with their certainty when we target elementary math or logic.
True, in a scientific discipline we would hope to surpass the animal level. We would hope for external support, tight coherence, high-quality deliverances, and theoretical understanding of our sources, such as the instruments whose readings we trust. True, that hope is fulfilled for our sources in the natural sciences. There we do attain high-level reflective knowledge. We go beyond mere animal trust in sources that happen to be reliable. In mature sciences we go well beyond our halting early efforts. Such disciplines do not emerge fully formed from anyone's head, however; they develop gradually and collaboratively through much less testable means.
Mature scientific experimentation and observation, in all their amazing variety and sophistication, rely on highly testable sources, whose reliability is corroborated when they test positive. What of the use of intuition in philosophy, as in our intuitive responses to thought experiments? How amenable is philosophical intuition to proper epistemic testing and assessment? Does it gain enough support through our theoretical understanding of its workings? Low-level sources in empirical science, such as instrumental observation, are of course independently testable, and do receive independent corroboration. Unfortunately, the same is not true of philosophical intuition. It is hard to see what external sources might much confirm or infirm the deliverances of intuition, by
14 It might be thought that such intuition deserves the same dismissal as the ostensible clairvoyance of BonJour's Norman. In what follows I suggest where some relevant differences might be found, by analogy with the use of our bare eyes on the night sky in the early days of astronomy.
15 Even data with slight positive standing may join with enough other data through explanatory coherence to the point where what seemed initially slight can gain a substantial boost. In order to be fully effective the x-phi critique must be that intuition is so unworthy as to add only something quite negligible, more so than, say, the data provided to early astronomy by then untestable night vision of the sky.
giving us good independent access to how reliably or unreliably intuition delivers truth rather than falsity. True, we do have examples where intuition stands correction: as concerns sets, for example, or heaps, or the analysis of knowledge, or even about simultaneity in the light of physical theory. But such corrigibility is much more limited in scope than our ability to calibrate scientific instruments.
That still does not show intuition-based philosophical methodology to be hopeless, not if early astronomy was hopeful enough to enable our eventual development of a science. If astronomy could rely in its beginnings on its main early source of data—namely, bare-eyed perception of the heavens—even without independent corroboration of this source's reliability, it follows that such lack is not definitive. Will it be said that bare-eyed perception was known to be reliable independently of its use on the night sky? Fair enough, but then intuition is also known to be reliable independently of its use on the difficult subject matter of interest to philosophers.16
6 What is the real problem for intuition?
Let’s assume for a moment that we can successfully rebut the argument from the wide-
spread disagreement that surveys allegedly reveal. And let’s assume that we can also repel
the attack based on the supposed untestability of armchair intuition. Can we then relax
into our armchairs, and carry on with analytic business as usual?
Not clearly. Consider again the divide between professional philosophers on one side and street-corner respondents on the other. Disagreement across this divide was supposed to create a problem for intuition generally. If that problem is real, then armchair philosophy faces a much more serious problem, which we all recognize only too well already, at least implicitly.
The disagreement across the expertise divide is not so alarming. We can plausibly downgrade the significance of any disagreement that pits reflective experts against unreflective passersby. Moreover, the disagreement across the expertise divide was meant to expose problems for just one component of philosophical methodology: namely, philosophical intuition.
16 Extrapolation is of course involved in both instances, and that does carry risk. We trust our understanding-based insight not only in logic and math, but also about the plethora of things obviously known intuitively, such as that certain shapes differ, that a given shape is distinct from any color, etc. Extrapolating from these to trust in our judgments about Gettier cases carries risk, no doubt, but so does extrapolation from eyesight used at noon for arm's length perception of one's hand, to eyesight used at midnight for perception of the night sky. More worrisome is the possibility that extrapolation from mathematical or logical intuition to philosophical intuition turns out to be like extrapolation from established good eyesight to trust in our hearing. This is indeed a possibility that we would need to ward against. More worrisome yet is this possibility: that extrapolation from math and logic to philosophy might turn out to be like extrapolation from shape perception to color perception. This suggests that even when two such domains might seem quite closely related—they both involve the eyes, good light, etc.—extrapolation from one to the other might still be fraught. And of course it must be granted that cognition is risky business. We can only do our best to feel our way while taking due precautions. Refusing to move carries its own risks.
Unfortunately, there is a bigger problem of disagreement for armchair philosophy. This problem is much more troubling for two reasons. First of all, it concerns disagreement among the experts themselves, at the highest levels of expertise. And, secondly, it concerns not just one component of armchair methodology, but also the most complete and carefully conducted methodology available in our field, which includes intuitions, but also inference, dialectical discussion—through seminars, conferences, journals, and books—in a collective endeavor of broad scope, over centuries of inquiry.
Sadly, it is not just any narrow, observation-like method of intuitions that has a problem of disagreement. Even the broadest, most complete method available, one applied with care and dedication through broad cooperation, still yields a troubling measure of disagreement. So even this best and most complete method is in danger of sinking into disrepute.
That anyhow is the more troubling argument to which we are led as we move beyond
the surveys. Let us consider what scope remains for philosophy as a discipline, even if its
disciplinary status is still more potential than actual.
Is there such a thing as knowledge within the discipline of philosophy? By that I mean
not just whether particular philosophers have knowledge that counts as philosophical.
I mean to ask rather this: whether anything is accepted as established within the discipline at large.
Let’s leave aside logic and the history of philosophy. Let’s leave aside any purely nega-
tive knowledge, such as the knowledge that justifi ed true belief is not necessarily equiv-
alent to knowledge. Let’s leave aside, fi nally, any essentially disjunctive knowledge, such
as, perhaps, that either libertarianism or hard determinism or compatibilism is true.
Leaving all of that aside, little established knowledge can be discerned in our discipline.
Unfortunately, on one important question after another, a troubling number of us fail
to agree with the rest. No suffi cient agreement or consensus often forms, none of the
sort required for a fact to be established in the discipline at large.
Pre-scientific stages are similarly problematic even in fields that reach maturity in our sciences of today. Pre-scientific ages fail extensively to attain consensus. Such dissensus is overcome only through determined scientific inquiry, with its distinctive methodology. It remains to be seen whether cultivation of philosophical subfields will produce the consensus required for established results and respective sciences.
Widespread disagreement in a subject area could take either of two forms: first, disagreement on answers to agreed-upon questions; second, lack of known agreement on the questions. We need to consider the extent to which progress in philosophy must overcome the second rather than the first sort of disagreement. If disagreement concerns mostly the questions, then our lack of testability is of a quite distinctive sort.
Compare a simple domain of phenomena, that of the temperature of a liquid in a container. We gain some limited access to that domain by inserting a hand in the liquid. This source of seemings and judgments might become independently testable with the development of thermometers. Locking up the thermometers would then push us back to a stage where our hand-insertion source was less testable. But such diminished testability is superficial, and resolvable through renewed access to the thermometers.
Analogously, lack of agreement on questions in a philosophical subfield is a practical matter of what attracts attention. If people fail to coincide clearly enough on the questions, this denies us the ability to properly test our intuitions. Such practical lack might be superficial, and remediable with due diligence. By attaining clear enough coincidence on the same questions, on the same thought experiments, we would be able to test our intuitions. Sustained dialectic, for example, might eventually yield the required coincidence.
It seems an open question how much of our ostensible philosophical disagreement is real and how much is based on an illusion of shared questions, an illusion that hides the divergence of our questions. Either way, low testability is of limited importance for assessing philosophical sources, such as philosophical intuition. This may be seen as follows.
Take first the case where the disagreement is in the questions. Lack of testability is in that case superficial, and should be remediable in practice. It is like the lack of testability for our hand-insertion source when the thermometers are locked up, while remaining available in principle.
Take second the case where the disagreement is in the answers. Now the problem is
not that philosophical sources are untestable. They are testable all right, but they test
negative. Disagreement in the deliverances of a source tends to reveal the unreliability of
that source.
The problem for armchair philosophy is, therefore, not so much that intuition is insufficiently testable. Evident coincidence on the questions brings hopeful testability, making philosophical sources increasingly subject to the test of agreement. It is just unfortunate that they have yet to pass this test, which must be passed for a discipline to count as scientific.
The real present or looming danger is the actual or potential disagreement that pervades our field. This is not disagreement that pits experienced philosophers against street-corner respondents. It is rather the longstanding, well-known disagreement among the "experts" themselves. Let us turn next to this.
7 Is low-test philosophy entirely hopeless?
In considering whether intuition can be a source of evidence in philosophy, let's focus on philosophical methodology generally. Let's compare intuition with the way of forming opinions used by our best practitioners through the centuries. Philosophical methodology would include not only bare individual intuition, but also argumentation, public dialectic, and whatever forms of explanatory inference might be of use in our discipline. We shift the focus thus to "philosophical method," the best that can be found on offer either historically or on the contemporary scene, from across the domain of philosophy.
How "hopeful" is such broader methodology? Has it been appropriately sensitive to its errors, and capable of correction? Has it manifested these virtues to a sufficiently high degree? In fact our global philosophical method seems little more testable than its component intuition. What is it, more specifically, that drains hope from the method of intuition, and from philosophical methodology more broadly? Is it not largely an inability to overcome disagreement among apparent peers? Disagreement does, I think, deserve much of the blame, if it is really substantive.
The same is true, moreover, not only of philosophy, but also of art, morality, and politics; and even of how we judge the character and motivation of our fellow humans. All of these fall well short of the standards of objective agreement proper to scientific inquiry. If our threshold of proper hope is set by the standards of scientific objectivity, then in none of these domains do our judgments deserve trust.
The epistemic problems in such domains, I am suggesting, are pervasive, and characteristic of the domains themselves, not of any particular method that one might single out. Indeed, "philosophical method" seems not distinctive of philosophy. It amounts to little more than thinking carefully, in medias res, through the use of deductive and inductive inference, and with the help of imagination, counterfactual thinking, and public discussion.17
We thus arrive at a general question of attitude. What can we reasonably hope for when we face vital unscientific questions? Can we hope to develop scientific modes of belief formation, and in effect scientific disciplines? Consider artistic criticism, morality, and politics. Consider how we know about our friends and loved ones, and the judgments required for our life-guiding choices, big and small. Could there possibly be sciences to replace our views concerning such subject matter? And even if there possibly could be such sciences, what are we to do while we await their consummation? Should we just hold ourselves generally aloof?18 That would be to check out of life. And if the judgments required for living well admit a distinction between the good and the bad, we can properly reflect on this distinction, we can try to understand it, even if it is not quite the distinction between the scientific and the unscientific.
Accordingly, the fact that philosophical methodology is less amenable to independent test than is scientific methodology does not show it to be hopeless for its proper domain. Philosophical intuition might after all enjoy a role analogous to scientific observation even while substantially less hopeful. For it is in service of methods used where the requirements for proper hope are substantially lower.19
17 It might be thought that modal intuitions are never used outside philosophy. But whenever we face alternative outcomes as we decide what to do, we surely rule out a plethora of them (automatically and implicitly) simply because they are obviously impossible. Their obvious impossibility seems accessible to us just through what we are calling "intuition." Some such intuitions even prove questionable eventually, as did simultaneity intuitions after Einstein.
It might be questioned whether we do any such ruling out. That in some sense we do so may be appreciated, however, by comparison with the fact that we rely on the solidity of a floor in a room as we stride confidently into it, even if we give no conscious thought whatever to that relied-upon fact. It is in this sense that we rule out walking through the wall and opt for the door.
18 Might we just sprinkle probability qualifiers freely enough to escape our problems? I don't see that this would help much. Plenty of disagreement would still remain, even once it was clear what meaning was imported by the qualification.
19 A further thought deserves more sustained attention than we can give it here. Recall that "hope" includes the degree to which the method assessed is free of self-undermining, since it includes a requirement of coherence. And suppose we include the following among the ways in which a method can undermine itself:
We have staved off one kind of pessimism about our prospects for a discipline of philosophy. We have deflected the objection that the methods of philosophy, such as intuition, are insufficiently testable. And we have resisted the inference that philosophy is hopeless from the premise that it is polluted by disagreement. Even supposing we have succeeded in that endeavor, that is not enough. We might still fall deplorably short. Thus, we might agree with evident unanimity both on a set of questions that define a subfield, and on the right doxastic attitude to take to those questions. But the right doxastic attitude might just be that of suspension. And we might still be in the dark on how to answer our questions, even if we agree completely on the testable sources available to us. Our sources might simply fail to deliver what we need in order to reach the desired answers.
"Mysterians" have drawn that pessimistic conclusion about important sectors of our field.20 Their stance is equally dispiriting, though in a quite different way from the stance targeted above. The stance we have examined in this paper is not a mysterian claim that the agreed upon, testable methods of our discipline fail (and will fail) to deliver on certain questions. We have instead examined a prior doubt concerning our methods: namely, that they are hopelessly untestable, or produce too much disagreement. It is this prior doubt and its alleged implications that we have here found reasons to resist.
Most recently the experimentalist critique has turned even more sophisticated but also more concessive.21 It is now granted that philosophy has made progress, not only through its development of formal logics, but also through helpful distinctions broadly recognized and used: the distinction between use and mention, for example, or between semantics and pragmatics, or between epistemic and metaphysical possibility. These developments, note well, are all accomplished by thinkers comfortably seated in their armchairs.
In addition, it is now recognized that the earlier critique of armchair intuition relies essentially on presuppositions that derive from the armchair. Thus, in concluding that
A method M undermines itself in proportion to how the deliverances of M are repeatedly shown to have been false by later deliverances of M itself.
In that case, we are all aware of a reason why scientific method now seems surprisingly less hopeful than it might have seemed.
We have seen how the use of intuition and of philosophical method must face troubling considerations that apparently drain them of hope. And now we find a similarly troubling concern about the use of scientific method. If we wish to defend scientific method, we must find some way to defuse the problem raised initially by the pessimistic induction. This would seem to require philosophical reflection at a high level of generality. And something similar would be required in order to overcome the hope-draining considerations adduced against the use of intuition in philosophy.
Both problems are of course familiar; yet, so far as I know, neither one has been laid to rest decisively: neither the problem posed by peer-disagreement for philosophical method, nor the problem posed by the pessimistic induction for scientific method.
20 Colin McGinn, The Problem of Consciousness: Essays Toward a Resolution (Cambridge, MA: Blackwell, 1991) and Problems in Philosophy: The Limits of Inquiry (Cambridge, MA: Blackwell, 1993). Noam Chomsky, "The Mysteries of Nature: How Deeply Hidden?" Journal of Philosophy 106 (2009) and also Language and Problems of Knowledge (Cambridge, MA: MIT Press, 1987), 152.
21 Once again Jonathan Weinberg has led the way, in his talk at a conference on experimental philosophy
at the University of Calgary in November of 2009.
intuitions are distorted by cultural or socioeconomic bias, we presuppose a metaphysical view of the subject matter, one that makes the distortion immediately plausible.
The substance of the critique is now that experimental inquiry has uncovered unhealthy influences on our intuitive responses. We must therefore redouble our efforts to discern the true extent of such influences, so as to be able to protect against them in further philosophical inquiry.
This seems to me a more reasonable critique, one deserving of more serious consideration. However, it faces a dilemma.
Either experimental inquiry will uncover serious divergence in subject responses or it
will not.
If it does not, then the effects of the operative factors are not thereby problematic. Suppose, in particular, that respondents are nearly all in agreement. Their agreement will then be problematic only on the premise that they are getting it wrong anyhow, despite the substantial agreement. And this will require that we have access to the truth on the philosophical subject matter in question. This way of arguing presupposes, therefore, that we already have philosophical access to the facts in that domain.
True, the source that delivers the seemings or beliefs in the target domain might have been shown to be distorting in other domains, where disagreement is much more extensive. But how would we know that the distortion carries over to the target domain? This would seem to require independent access to the target domain after all.
That's all, again, on the assumption that the experiments uncover no serious divergence in responses within the relevant domain.
If serious divergence is revealed experimentally, on the other hand, how then can we be sure that the respondents are interpreting the relevant texts in the same way? Note that we will then start out with a substantial reason for suspecting that people are talking past each other. Unless we can spot reasons to suspect relevantly different positioning, or divergence of competence, we should suspect divergence of meaning, as this will quite possibly best explain the persistent ostensible disagreement.
True, this last thought will not apply so plausibly in the case where the ostensible disagreement is with one's past or counterfactual self. This may be due, for example, to order effects. But why think that these effects cause distortions serious enough to require special attention? We would still need an answer to this obvious question.
It is not at all evident, therefore, that or how the extent of experimentally revealed divergence in responses would create a serious problem for the continued use of armchair methods in philosophy.22
22 My grateful thanks to Blake Roeber for various research assistance, and to Jennifer Lackey for also
pressing me on the comparison between intuition and clairvoyance.
PART III
New Concepts and New Problems in the Epistemology of Disagreement
9

Cognitive Disparities

Dimensions of Intellectual Diversity and the Resolution of Disagreements

Robert Audi

Disagreement comes in many forms. It may divide people irreconcilably, but it may also occur against the background of extensive agreement. It may be about what all concerned consider trivial or about matters each regards as vital to a good life. It arises within disciplines as well as among families and friends. Philosophers are not alone in exploring how we should resolve disagreements—and live with them when they prove recalcitrant. Some disagreements, of course, can be settled by discussion or by bringing to bear new information. Others persist even when the parties are rational and share what they consider all the available relevant evidence. These "rational disagreements," as they are sometimes called, are of concern in much recent work and will be in this paper.1 But these and other disagreements are a special case of a wider phenomenon that is also important: cognitive disparity. I begin with a sketch of some of its main varieties. With those in view, I distinguish two important kinds of disparity that need more philosophical attention than they have so far received. The concluding part of the paper will bring the results of the first two parts to bear on disagreements concerning the self-evident.

1 For recent discussion of the nature and implications of disagreement, see Richard Feldman and Ted A. Warfield (eds.) Disagreement (Oxford: Oxford University Press, 2010).

1 Cognitive disparity and disagreement

Cognitive disparity, as I conceive it here, is a kind of difference—usually also yielding a tension—between cognitive elements. Paradigms of these elements are beliefs, but other truth-valued attitudes, as well as dispositions to form beliefs, are also cognitive elements and may figure in cognitive disparities. Cognitive disparity may be intrapersonal, constituted by internal cognitive differences or tensions or by potential differences or tensions; it may also be interpersonal, constituted by the existence of differing or
conflicting cognitive elements in two or more persons. My interest here is mainly in the interpersonal case.
One kind of cognitive disparity is a difference between two people in the strength of their conviction regarding some proposition, p. Disparity, then, does not entail disagreement, partly because the parties may both believe p. But such non-doxastic (non-belief-entailing) disparity is raw material, perhaps the main raw material, for disagreement, which is doxastic and a paradigm of cognitive disparity. It would be natural, for instance, for the parties in question to form conflicting beliefs about the probability of p if they should reflect on the question of its likelihood. In many cases of disagreement, to be sure, mutual discovery of the convictional disparity by rational persons will lead each to come closer to the other in degree of conviction.2
Convictional disparity, then—the kind just described—does not entail what I call doxastic disparity, difference in belief content.3 Another kind of disparity is indicated by such locutions as “He would disagree with you about that.” There are at least two cases here. One is that of doxastic disparity which exists at the time in question and would (other things remaining equal) be expressed if the parties should discuss the matter of whether p is true. Here we might speak simply of unexpressed disagreement. But the same locution may be used to ascribe a cognitive disparity we might call implicit disagreement. I have in mind cases in which the parties are disposed, on the basis of certain elements in their psychological make-up—most importantly their “evidence base”—to believe incompatible propositions.
As these cases suggest, there are many kinds of disparity, and not all can be detailed here.4 In another kind of case, each party believes sets of propositions obviously entailing (respectively) p and not-p, but at least one has not formed a belief of the entailed conclusion. Thus, the moment p comes before their mind, they will (other things equal) have conflicting beliefs. Other cases involve differences in people’s sensory or memorial evidence. Suppose that I see a white area of a painting in red light that I do not realize is focused on it and that you see the same area through a compensating lens. We will tend to disagree about its color if asked what it is (I assume that in viewing paintings we do not typically form beliefs identifying the colors of all the shapes we see).5 Similarly, if I
happen to see a corner restaurant table occupied and you (a bit later) see it unoccupied, neither of us need form beliefs about it; but if asked, after we leave the scene, whether it was occupied during our dinner, we may each have memorially retained traces of the respective sense impressions and then form the correspondingly conflicting beliefs. Some implicit disagreements, of course, represent unimportant cognitive disparities; but others are significant, and a disparity insignificant at one time can become significant at another, as where suddenly we must try to agree on an emergency plan to fill a vacancy.

2 For a treatment of the epistemology of disagreement that takes account of the degree of conviction in question as well as doxastic content differences, see Jennifer Lackey, “What Should We Do When We Disagree?” Oxford Studies in Epistemology, iii (2010).
3 Where one person believes p and the other withholds it but does not believe not-p, I prefer not to speak of a difference in content, since withholding is a case of non-belief regarding p. Granted, withholding is a cognitive attitude toward a proposition, but it is not doxastic.
4 Some of the kinds of cases in question I have described in “Structural Justification,” Journal of Philosophical Research 16 (1991), 473–92, reprinted in John Heil (ed.), Rationality, Morality, and Self-Interest (Lanham, MD: Rowman and Littlefield, 1993), 29–48.
5 I am distinguishing dispositions to believe, which are not beliefs, from dispositional beliefs, which are. The distinction is developed in my “Dispositional Beliefs and Dispositions to Believe,” Noûs 28 (4) (1994), 419–34.
My description of convictional disparity does not entail that the parties differ in probability ascriptions. If, however, we think of belief as coming in degrees, as Bayesians do, these two cognitive dimensions—believing and attributing a probability—may seem equivalent. But clearly one could have great conviction that p and not attribute any probability to it (one might not, and some people apparently do not, think in terms of probabilities). One could also attribute a high probability to p but (perhaps to one’s surprise) not have very strong conviction that p. I propose, then, to distinguish probabilistic disparity from convictional disparity. Doxastic disagreement about probabilities is the most prominent case of probabilistic disparity; but we must also note the implicit cases in which the parties are only disposed, on the basis of their overall cognitive constitution, to attribute different probabilities. These cases are especially important when one person attributes (or is disposed to attribute) to p a probability greater than 1/2, and the other attributes (or is disposed to attribute) a probability lower than 1/2. They then will tend to disagree categorically about whether p and, perhaps, to differ markedly in action tendencies connected with p.
The connection between cognition and action suggests that we consider disparities at the level of acceptance, conceived as different from belief. To be sure, “accepting p” can be used equivalently with “believing p.” But there is a sense of “accepting” in which it contrasts with rejecting. Here it has a behavioral aspect. One case is accepting for the sake of argument. But this is not the only case in which accepting does not entail believing, even if, in certain cases, it entails a disposition to believe.6 We should, then, recognize disparities in acceptances of p. As with beliefs, disparities in acceptances may be expressed or unexpressed and implicit or explicit.
Even more difficult to explicate than acceptance is presupposition, but an adequate account of disparity must include it. Two people can differ markedly in their cognitive constitution in virtue of what they presuppose. Clearly we cannot presuppose what we believe false or have considered and (at or up to the time in question) withheld. But what is presupposed need not be believed. On approaching a normal-looking masonry staircase, I presuppose that it will hold me. If I should think of the matter, I would then believe that such staircases are solid; but even if I had believed that generality on the occasion, it does not follow that I had the specific belief, as opposed to the disposition to form the belief, that this staircase will hold me.

6 For treatments of acceptance see L. J. Cohen, “Belief and Acceptance,” Mind 98 (1989), 367–89; William P. Alston, “Belief, Acceptance, and Rational Faith,” in Jeff Jordan and Daniel Howard-Snyder (eds.), Faith, Freedom, and Rationality (Lanham, MD: Rowman & Littlefield, 1993), and “Audi on Nondoxastic Faith,” in Mark C. Timmons, John Greco, and Alfred R. Mele (eds.), Rationality and the Good: Critical Essays on the Ethics and Epistemology of Robert Audi (Oxford: Oxford University Press, 2007), 123–39; and my “Doxastic Voluntarism and the Ethics of Belief,” Facta Philosophica 1 (1) (1999), 87–109, repr. in Matthias Steup (ed.), Knowledge, Truth, and Duty (Oxford: Oxford University Press, 2001), 93–111, which contains other treatments of the notion. Cf. the more restricted notion described by Jeremy Fantl and Matthew McGrath in Knowledge in an Uncertain World (Oxford: Oxford University Press, 2009), 149–51. It should be noted that acceptances are also like beliefs in admitting of degrees of convictional strength, and there is no sharp distinction between a somewhat confident acceptance that p and a belief that p.
There are also cases in which we presuppose propositions that (other things equal) we would disbelieve on considering them. Teaching graduate classes in ethics long ago, I presupposed that the participants would not hold the stereotypical but mistaken view that utilitarianism is the claim that right acts promote “the greatest good for the greatest number.” There came a point at which, when I considered this presupposition, I judged it false and ceased to make it. If what is presupposed is in some sense accepted, it is not necessarily accepted in a sense implying consideration of the proposition; nor does every case of acceptance imply presupposition, as evidenced by accepting for the sake of argument. Consider also the case of listening to a female radio announcer with a virtually normal-sounding tenor voice. One might presuppose, without forming the belief, that the speaker is male and be surprised to hear her referred to as “she.”7
Like “acceptance” and “presupposition,” “intuition” has both doxastic and non-doxastic uses.8 Both uses concern me, since I am identifying both doxastic disparities and major kinds of non-doxastic disparities, and I take intuitional disparities to be among the most important kinds. Consider, then, its seeming to one that p, where one does not believe p, as with plausible hypotheses one entertains but cannot believe without reflection. It may, for instance, seem to me that accepting a proposition is never an action unless it represents, not a positive cognitive attitude, but rather a kind of decision to use p as a basis of reasoning or exploration of a topic. This phenomenal seeming does not entail, though it is compatible with, believing the proposition. Given the existence of non-doxastic seemings, we should countenance intuitional cognitive disparities of two kinds: phenomenal and doxastic. Either can be merely implicit, in that the person has a disposition to have them but does not.
A further point here is that non-doxastic intuitive seemings, though cognitive in having propositional objects, are apparently different from all the cases so far considered in relation to disparities: the former, but not the latter, do not admit of justification. Granted, we may properly say such things as “This argument shouldn’t seem valid to you—it’s plainly not.” But this is like saying “You shouldn’t be seeing double—you must need a new prescription.” Even with cognitive phenomena, defectiveness does not imply admitting of justification. That intuitive seemings (conceived as phenomenal and non-doxastic) do not admit of justification is important in that, if they do not, and hence do not stand in need of it, they can be candidates to confer justification (which they appear to do at least in certain cases) as opposed to simply transmitting it. That, in turn, is important in relation to disagreement in general. Some disagreements cannot be settled so long as the parties have disparate intuitions; and if these do not admit of justification, resolving the disagreement requires that they be changed in a way that, like presentation of new examples and alternative scenarios, does not depend on defeating their justification.

7 Granted, one might now say “I thought it was a male.” I think this shows not that one believed this (and I, at least, would not say “believed” rather than “thought” here), but that “thought that” can be used to mean roughly “presupposed that.” Compare “I thought you were my brother” where there has been mistaken identity regarding a look-alike. Must I have had any similar (de dicto) belief or thought, as opposed to just temporarily taking (perhaps presupposing) him to be my brother? Our cognitive vocabulary is rich and subtle, and here I am opening up options rather than attempting a full-scale sorting.
8 Until quite recently, perhaps under the influence of major ethical intuitionists, “intuition” was most commonly used either for a kind of non-inferential belief or applied noncommittally as to whether the proposition in question is believed by the subject. See, e.g., G. E. Moore’s Principia Ethica (Cambridge: Cambridge University Press, 1903); W. D. Ross, The Right and the Good (Oxford: Oxford University Press, 1930); and John Rawls, A Theory of Justice (Cambridge, MA: Harvard University Press, 1971), in which intuitions are often identified with “considered judgments.” For discussions of seemings see, e.g., George Bealer, “Intuitions and the Autonomy of Philosophy,” in Michael R. DePaul and William Ramsey (eds.), Rethinking Intuition (Lanham, MD: Rowman and Littlefield, 1998), 201–39, and Ernest Sosa, “Minimal Intuition,” also in DePaul and Ramsey, 257–69.
A not uncommon kind of cognitive disparity is one person’s believing p while another withholds it. I take withholding to be a matter of considering p and—for a reason such as p’s seeming uncertain—not believing it despite its having some plausibility for the person considering it. Withholding may or may not be accompanied by an epistemic belief, for instance a belief that there is insufficient evidence for p. Where one person believes p and another withholds it, we might speak of partial doxastic disparity. The disparity may be “implicit,” as where (other things equal) there would be this pattern if the parties should consider p. It may also be explicit, in which case the relevant attitudes exist at the time in question. As with other cases of disparity we have considered, where just one has the attitudes in question (namely, believing and withholding), the disparity may be either unilaterally implicit or explicit, or, where both have it, bilaterally implicit or explicit. Moreover, withholding can be, say, positively skeptical or just cautionary. In either case, to bring out the element of disparity, we could say that withholding is a kind of rejection. I distinguish two cases here (there are others). In my terminology, soft rejection is, in the main, cautionary withholding or withholding with a sense either of insufficient evidence or of the plausibility of a proposition one considers incompatible with p; and hard rejection is either disbelief in the light of considering p or, roughly, withholding it on a skeptical basis leading one, or inclining one, to believe something to the effect that p is unknowable or definitely not credible.
One further case to be considered here concerns inference. Inference is commonly a belief-forming episode in which, on the basis of thinking in some way of something one believes, one comes to believe something else.9 But inferences need not be belief-forming. Witness logic book exercises. Moreover, people differ in what they infer or tend to infer from various propositions. One person might, for instance, tend to draw categorical conclusions from premises that imply them with only high probability; another might in such cases tend to draw only conclusions to the effect that the proposition is highly probable. Where people differ in either of these ways, I propose to speak of inferential disparity. The disparity may be implicit, on either side or both. Neither need actually draw the inference(s) in question; but they may differ in that one, but not the other, would infer q from p under certain conditions. This is an implicit disparity.

9 Detailed discussion of the nature of inference is provided in my Practical Reasoning and Ethical Decision (London and New York: Routledge, 2006), e.g., 166–8.
Even where inferential disparity is explicit, neither party need believe p (since the inference need not be belief-forming), or one of them might believe p and the other withhold it. It appears, however, that we tend to believe what we would (at least reflectively) infer from something we believe. This is of course only a tendency; modus tollens is a possible response to modus ponens that leads to one’s inferring something one then (or later) comes to disbelieve. If, from premises I believe, I infer something I see is clearly false, it should not take me long to cease believing at least one of my premises. Whether, in such cases, there is momentary irrationality owing to one’s having mutually incompatible beliefs or, instead, immediate doxastic change—and both seem possible—is an empirical question we need not answer here. What we can say is that a rational person confronted with a toxic mixture neutralizes it as fast as possible. Such explicitly recognized inconsistency among our beliefs, if possible at all, is a toxic disparity.
Another cognitive tendency important for understanding disparities concerns what may be broadly called support relations between propositions. Important cases include entailment, explaining, and being explained by. Where rational persons tend to infer q from p, they also tend to believe (and may believe) something to the effect that p supports q (say, entails or renders it highly probable). Inferential disparity, then (relative to a proposition or set of propositions that are or might be objects of cognition on the part of two or more people), may indicate either doxastic differences, or simply tendencies to form conflicting beliefs that realize those differences, regarding any of at least three things: (1) connecting propositions—the kind expressing some logical or support relation between premise(s) and conclusion; (2) conclusionary propositions—those that are or would be inferred; and (3) inferential premises—propositions one believes or takes (or, other things equal, would take) as a basis for drawing one or more inferences. Ordinarily, premises are believed; but we can speak of taking p as a premise for indirect disproof, in which case it is likely disbelieved. We may also speak of tentatively premising that p, and of finding p plausible because of what it implies, as where it seems to best explain certain plausible propositions we infer from it and already believe. In the former case, we tend to draw inferences from p; in the latter, we tend to infer p from its apparently best explaining what we infer from it. People may differ in what they tend to take as premises for inference even apart from whether they believe the propositions in question. Pragmatic factors making propositions salient are important in this matter, as well as antecedent beliefs and dispositions to believe. In the next section, inferential disparities—and inferential agreements—will be important. It is enough here to indicate how inferences can be confirmatory and thereby anchors of belief, or disconfirmatory and thereby agents of doxastic change.
2 Two types of rational disagreement
With the kinds of cognitive disparities sketched in section 1 in view, I want to consider an important example of disagreement. The focus will be ethical principles, but the issues are quite general, and other examples—including epistemic principles—will be implicitly addressed. Let us begin with some of the famous list of ethical principles formulated by W. D. Ross. It includes these: if one promises to do something, one has a prima facie obligation to do it (the promissory principle); and if one affirms something to one or more others, one has a prima facie obligation not to lie (the veracity principle).10 Ross, like other moral philosophers, realized that a prima facie obligation might be overridden, as where a stronger prima facie obligation to render emergency aid to a child makes giving that aid one’s overall obligation at the time and justifies breaking a promise. But we can distinguish overriding of an obligation from eliminating it: the promisee may release one, or the child may die, before one’s promise of aid can be fulfilled. These elements eliminate rather than override. To support the idea that a promise has normative force even when the resulting obligation of fidelity is overridden, Ross noted that there is then a residual (prima facie) obligation to explain why one failed to fulfill it.
The term “prima facie” is often used epistemically, but here its use is different. It will be clarifying to speak instead of promising, of affirming things to others, and of similar obligation-grounding phenomena, as entailing a defeasible reason for action. We can think of an ethical theory as giving us an account of moral reasons for action—the kind that entail obligations of the sort that justify moral judgments of right and wrong. It is difficult to distinguish moral judgment from other normative kinds of judgment, but that is not necessary here. My concern extends to other sorts of normativity in any case, including the epistemic kind in question, of which the following is a representative special case: if you have a clear and steadfast memory impression that you locked your car, you have (defeasible) reason to believe that you did. Now Ross took the principles he formulated to be self-evident—not in a sense implying obviousness, but in a sense implying non-inferential knowability given “a certain mental maturity.”11
This view of the self-evident raises an important issue: the status of disagreements and other disparities involving principles of the kind in question. Let us call them conceptually constitutive principles, since they are the kind that license classifications and inferences whose mastery indicates understanding of at least one of their constituent concepts, such as (in ethics) those of moral obligation and (in epistemology) of reason to believe. If the principles are self-evident and partly constitutive in this sense, one might think that disagreement about them is not common and that rejection of them betrays some lack of conceptual understanding. In fact, however, there is disagreement on them by people who seem to understand them quite well enough to disbelieve (or withhold) the proposition at issue.12 I do not mean disagreement on their status, say on whether they are self-evident—this is higher-order disagreement. I mention it only to set it aside. I also set aside cases in which it is far from clear whether the principle is satisfied. If, under torture, I say “I promise to deceive everyone at the meeting,” it is at best doubtful that I have made a promise at all. It may be better to consider the veracity principle and see how there might be disagreement on it as applied to central cases.

10 The wording is not exactly Ross’s; I have simplified and, I hope, clarified his view. But see The Right and the Good, ch. 2 for evidence that he had in mind something similar.
11 Ross’s conception of self-evidence is indicated in The Right and the Good, ch. 2. My account of the notion is provided in “Self-Evidence,” Philosophical Perspectives 13 (1999), 205–28.
Is it true that in affirming something to one or more others, we have a (defeasible) moral reason not to lie? Anyone seriously appraising this will be concerned with what counts as an affirmation, as defeasibility, and as a lie. If I’m being jocular, are all my declarative sentences affirmations, or might some be simply hyperbolic embellishments around a true narrative? And if one person tells another a falsehood that the speaker knows will not be believed, do we have lying? Suppose, moreover, that lying is plainly required to save an innocent person’s life—“Where is she?” asks the irate husband with pistol in hand, mistakenly thinking himself cuckolded. There may certainly be a good moral reason to lie here. Is there any reason, however minor, not to? It is perhaps natural for some philosophers to think not and to reject the veracity principle. It might, after all, be inappropriate to tell someone in such a position that there is reason not to lie, but that point is pragmatic. One could also reject the veracity principle on the basis of a theory, say one centering on the idea that moral reasons concern probable hedonically significant consequences, coupled with the view that lying simply does not always imply a probability of hedonically negative implications.
Let us say that Rossians and their opponents disagree on reasons (hence about reasons as such)—as well as on moral principles and other abstract items. This is a direct cognitive disparity. It is consistent, however, with much cognitive agreement in the territory where such reasons and principles apply. Consider determining what one is obligated to do (where this is understood in terms of ascertaining moral reasons for action). One would certainly tend to take account of what one has promised to do if it bears on the situation in question. And what of a case in which one is going to negotiate a major contract or resolve a dispute? In considering negotiation strategy, one assumes one is obligated not to lie and would consider as wrongs lies told by the other party. Classification of cases—and its counterpart, exclusion of cases—goes hand in hand with inference. Suppose we describe a normal case of promising, say to take a child to a park. On hearing the description, one is likely to believe, or at least to be disposed to believe, that keeping the promise is obligatory; and, if told that the person promised and failed to show up, one is likely to infer that an obligation was violated or a wrong done, where drawing the inference presupposes this in a way that implies one’s being at least disposed to believe it. There are certainly exceptional circumstances, but a great deal of our classificatory and inferential behavior in moral matters is apparently norm-guided by Rossian principles, much as our counterpart behavior in cognitive matters is apparently norm-guided by epistemic principles of the kind illustrated in relation to memory.

12 This is how I view the disbelief of Rossian principles exhibited by Jonathan Dancy in, e.g., Moral Reasons (Oxford: Blackwell, 1993), esp. chs. 4–6.
These reflections bring out that we need a distinction between disagreement on reasons and disagreement in reasons. The latter is, most prominently, disagreement regarding whether a particular factor is a reason, consideration, or explanation, or regarding whether a relevant inference—say that one person did wrong—is appropriate (valid or adequately probable) given the facts of the case.13 Disagreement on reasons does not entail disagreement in reasons even where the elements in question are the very factors that figure in the abstract formulations—such as Rossian principles—regarding which there is direct disagreement manifested in the parties holding mutually inconsistent beliefs, as where one affirms the principles and the other denies them. Disagreement on reasons is common in philosophy; disagreement in reasons, though also common in philosophy, is less common and often less recalcitrant. The former is often doxastic and direct. The latter may be, but is often not, doxastic or direct; and appeals to factors as reasons often attract little attention.
Let me illustrate. Suppose I go out of my way, spending an entire Saturday helping a new student find an apartment. Knowing that I rarely do such things, someone asks why I did this. If my reply is “I promised to do it when we were recruiting him,” I am very unlikely to get a response like “What reason is that to do it?” Promising to A is widely accepted as a reason to A. The term “reason” is not crucial here; it is no more likely that I would encounter a response like “Fine, but why should that make you think you ought to do it?” I am unlikely to get any such response if I simply say to someone wondering where I was that day that I was helping a student find an apartment because, at recruiting time, “I promised to.” Granted, I could be told, “You could have found an excuse,” but this recognizes a reason and appeals to the possibility of my obligation’s being overridden.
Reason-giving is something like using a tool: often one can know how to do it but not how to describe what one does with it. One may even formulate a description inadequate to one’s own practice or reject a correct description devised by someone else. Reasons do not have to be given or recognized using “reason” or any synonym; much explanatory and justificatory discourse appeals to them in other terms, and I intend what is said about reasons here to apply to various other normative notions, including those of justification, rationality, and (in some uses) evidence. This makes room for both disagreement on reasons and agreement in reasons to be manifested in much that is said in a dispute using vocabulary quite different from that of reasons. Here we find much agreement that is implicit and commonly inferential, a matter, for instance, of what classifications one is disposed to make and of what inferences one would draw under certain conditions. We appeal to moral reasons in myriad ways. Just saying, for instance, “I can’t tell her that; it would be a lie” implicitly recognizes the status of lying as providing a (negative) reason.

13 In ch. 2 of The Good in the Right: A Theory of Intuition and Intrinsic Value (Princeton: Princeton University Press, 2004), I called agreement in reasons operating agreement, to suggest coincidence in what one does (intellectually) with the kind of factor in question. That discussion considers cases not discussed here and supports the view I am proposing.
It appears, then, that disagreement in reasons, as it might be manifested in rejecting the presupposed rationalizing relationship embodied in my acknowledgment of making a promise, is much less common than disagreement on reasons, which is often manifested in philosophical disagreements. Whatever the difference in practice, the distinction stands, and cognitive disparities at the level of giving and accepting reasons—the crucial level of norm-guidedness—are not implied by disparities, and especially by philosophical disagreements, at the level of appraising reasons as such and principles in which their force is expressed.
One implication of the distinction I am stressing is that disagreements on reasons, like other kinds of theoretical disagreement, should not generally be taken at face value. Claims about what is or is not a reason—even claims about the strengths of reasons agreed to be such—should be, and commonly are, based on a sense of one’s own practices in using the relevant concepts. But these practices may be complex, many-faceted, and learned by example rather than description. It takes great skill to formulate a view of reasons adequate to one’s own practice in giving them. Much the same applies to giving an analysis of a concept. Disagreement on reasons (or in definitions or analyses), then, should not be readily taken to imply disagreement in reasons, and it certainly does not imply the kind of systematic disagreement in reasons (or in application of terms under analysis) that might seem to go with the sharpness of the disagreement. A practical implication of this is that neither deep nor pervasive cognitive disparity should be inferred from a disagreement on reasons. We may use a tool quite similarly in practice despite sharp differences in describing it.
The distinction between disagreement on, versus in, reasons suggests a kind of cognitive disparity I have so far not described: internal disparity. Suppose I do not accept the Rossian promissory principle, or even disbelieve it. I may still virtually always (1) assume that I am morally obligated to do a thing given that I have promised to; (2) consider wrong—or at least wrong if not excused—broken promises I learn of; and (3) infer (or presuppose)—when people tell me that they have promised a third party to do something—that they ought to do it. I may certainly be disposed to exhibit this cognitive behavior even if I in fact do not: my agreement with Rossians on the force of promissory reasons may be in this way implicit. (1) indicates a classificatory response to a reason for action (one grounded in promising); (2) illustrates both that and a judgmental response; and (3) indicates an inferential (or at least presuppositional) response to promising as constituting a reason for action. These responses are in tension with rejection of the promissory principle. Now I submit that such cognitive disparities between our—in my view—basic cognitive behavior and our high-level general beliefs and dispositions to believe are not uncommon, and that part of the task of philosophy is to eliminate such disparities where possible. Here intuitions about the classification of cases and the appropriateness (validity or plausibility) of inferences are important elements in reaching a reflective equilibrium.
To be sure, intuitions have some normative authority regarding principles and concerning the overall status of reasons viewed in a general way; but their authority seems greatest where their focus is concrete and particular. Whatever the epistemic priority in these two kinds of cases, my point is that, both within and between persons, cognitive disparities may occur, on the one hand, between cognitions about reasons or principles and, on the other, between cognitions regarding the cases and inferences that are crucial for appraising those abstract elements. It takes great skill to formulate principles, or to specify what types of elements constitute reasons, in ways that do justice to one’s own careful usage and capture one’s own (non-formal) inferential practices. This is one reason why cognitive disparities that are manifested in disagreement on reasons may be resolvable by clarification, often through presentation of cases or through the search for reflective equilibrium. Disagreements, then, may not indicate as deep or as sharp cognitive differences as they appear to show. This may be a reason not to give up or even reduce confidence in a view one holds upon initially discovering disagreement from a person one respects. But much will depend on the subject, the kind of view, one’s previous experience in relevantly similar cases, and the breadth and depth of one’s reflection concerning the view. It is quite possible, moreover, to be reasonable in retaining conviction only together with a resolution to explore the grounds for it.
There is a subtlety I have so far left out of the account. We have considered agreement and disagreement on, as opposed to in, reasons. But within the category of agreement (or disagreement) in reasons is agreement (or disagreement) on their overridingness. It is often very difficult to get concurrence here. We should expect disagreement, even in reasons, concerning such matters as whether, in mitigating a minor injury, one might override a promissory obligation to take a child to a park. To be sure, the more detail we bring to the case, the greater the cognitive convergence we might expect; but there is no good reason to think that convergence will always occur or will be complete when it does. A practical implication of these points is that, where there is cognitive disparity about overridingness, the need for humility tends to be greater. This need not require suspending judgment, but it often makes appropriate lesser conviction or forming a higher-order belief that one could be mistaken, or both. 14
The matter of overridingness is still more complicated when two or more kinds of reasons exist on each side, say where both beneficence and fidelity pull against veracity and gratitude. The determination of relative weights may be very difficult even after the fact, but we are especially likely to find it difficult prospectively, as we often must given the need to make moral judgments to guide action. But compare this case with estimating which side of a balance scale will be weightier when we put on each side bags of potatoes, lettuce, lemons, peppers, mushrooms, and cereal (say six such bags for each of two people). These have different densities, and the estimation of overall weight is difficult even if we take each bag in hand. It is less difficult to compare the matching bags pairwise. That might help, at least in that, if all are heavier in one set, this settles the matter. With physical weights, of course, we have a ratio scale of measurement; with normative force, we do not. Still, with reasons, recognizability is more important than measurability, and possible convergence on relative weight is more important than possible quantitative measurement. We cannot negotiate well if we do not each recognize the relevant values; when we do recognize them, we can in some kinds of cases negotiate well even if we weight them differently and cannot quantify them exactly.

14 There may also be an analog of “splitting the difference,” as where we must quickly decide which of two figures, say cost estimates, to work with and can rationally choose one between those arrived at by the disputing parties. The split in a non-quantitative policy matter may be a different kind of compromise, say agreeing to allow revision of a submitted paper rather than accept it, as one party would, or reject it, as the other would.

216 robert audi
A related and perhaps more important point is that if we may take normative properties to be consequential (and so strongly supervenient) on non-normative ones, we can reasonably seek agreement by identifying and comparing the properties that are basic for any normative attribution at issue. Rossian principles, like their epistemic counterparts, may be conceived as indicating a “naturalistic” ground on which prima facie moral reasons are consequential, say promising or lying. It is also plausible to take overall reason to be a consequential property. If so, then, by identifying as clearly as possible the grounding facts for an obligation, we can reasonably try to move from agreement in prima facie reasons to agreement in overall reasons. This may be difficult, but its impossibility does not follow from disagreement, even irreconcilable disagreement, on reasons.
3 Disagreement on the self-evident
The self-evident has often been taken to be a realm in which one can “just see” the truths deserving that designation. The prevalence of this view of self-evident propositions in part explains why “It’s self-evident” has been regarded as a “conversation-stopper” or at least as implying that support of the proposition by argument is either unnecessary or impossible. We have seen, however, how disagreement on presumptively self-evident propositions is possible. Moreover, since many philosophical theses may be conceived as self-evident if true, it is not just Rossian moral principles and counterpart epistemic principles that we should take into account.
To focus the issue, let me first indicate how I conceive the self-evident. On my account, self-evident propositions are truths such that (a) adequately understanding them is sufficient for justification for believing them (which does not entail that all who adequately understand them do believe them), and (b) believing them on the basis of adequately understanding them entails knowing them. 15 This account of self-evidence makes it easy to see how one can understand a self-evident proposition and still consider it without believing it. A central point here is that adequacy of understanding goes beyond basic semantic comprehension. A bilingual person, for instance, could understand a self-evident proposition well enough to translate a sentence expressing it into another language, yet still fail to believe it. Take, for example, “The mother-in-law of the spouse of a person’s youngest sibling is that person’s mother”; a bilingual person who could quickly translate this may still need some reflection to see that this is true. Mere semantic comprehension of it, then, need not suffice for justification of the proposition it expresses. But when the truth of that proposition is seen through an adequate understanding of it, one can believe it non-inferentially, presumably on the basis of grasping the concepts figuring in it and apprehending their relations.

15 My “Self-Evidence” contains a more detailed account of self-evidence than The Good in the Right, and I have extended both treatments in “Intuitions, Intuitionism, and Moral Judgment,” in Jill Hernandez (ed.), The New Intuitionism (London and New York: Continuum, 2011), 171–98. I should add here that we might also speak of full understanding to avoid the suggestion that adequacy implies sufficiency only for some specific purpose. Neither term is ideal, but “full” may suggest maximality, which is also inappropriate.
To say, however, that adequately understanding a self-evident proposition does not entail believing it does not preclude acknowledging an important connection between such understanding and self-evidence. An adequate understanding of a self-evident proposition, p, does imply (at least in a rational person) a disposition to believe it, indeed, one strong enough so that there should be an explanation for non-belief given comprehending consideration of p. Two possible explanations—if we set aside inadequate understanding—are constitutional skepticism and commitment to a theory clearly implying not-p. In the case of disagreement on a complex proposition such as a Rossian principle, finding such an explanation is often possible; but it may not be easy. Moreover, in some cases it may not be clear that the person lacks adequate understanding. Understanding comes in degrees, and (as teachers of philosophy know) it is possible to discuss a proposition (such as one concerning utilitarianism) even with a measure of intelligence despite significant inadequacy of understanding. Let us explore how these points about the self-evident bear on rational disagreement.
The most difficult problem is how to deal with disagreements one has with someone who is, in the relevant matter, apparently an epistemic peer: roughly, a person who (a) is as rational and as thoughtful as oneself (in the relevant matter, including the assessment of whether p is true), (b) has considered the same relevant evidence, and (c) has done so equally conscientiously. Much could be said about the notion of epistemic parity (which, with various complications, can also be characterized for topics rather than for individual propositions and even for persons). Here I use a rough account that will suffice for our purposes. 16 By contrast with most descriptions of epistemic parity with respect to a proposition, this one explicitly requires that the relevant parties consider the proposition and do so equally conscientiously. If parity requires only sharing the same relevant evidence and having the same epistemic virtues (or being equally rational in the matter, which is a similar condition), nothing follows about how fully these virtues are expressed, and there is room for the possibility that, for instance, despite equal epistemic ability and equal possession of evidence, the parties have devoted very different amounts of time or effort or both to appraising the proposition. 17 In that case—one of epistemic asymmetry—disagreement may be readily resolved by an equally conscientious consideration of the relevant evidence.

16 This characterization is discussed and applied in my paper “The Ethics of Belief and the Morality of Action: Intellectual Responsibility and Rational Disagreement,” Philosophy 86, 335 (2011), 3–29.
One gap in the theory of disagreement may be filled by exploring possible differences or possible agreements in reasons, as opposed to differences or agreement on them. Moreover, particularly where we are concerned not just with disagreement but also with the whole range of significant cognitive disparities described above, it is important that parity be understood to have a non-dispositional element, such as conscientious consideration of evidence, which provides for manifestation of important disparities that might not be evident in a peer disagreement in which no consideration, or only differentially conscientious consideration, of the proposition occurs.
Since having an adequate understanding of a self-evident proposition implies having a justification for believing it but does not entail actually having this belief, one could be committed to holding that a disputant has adequate understanding, the accompanying justification, and the disposition to believe the proposition, yet does not believe it. Take the strong particularist’s case against Rossian intuitionism (a counterpart case may of course be brought against the view that certain epistemic principles, such as the visual principle formulated above, are self-evident). On the basis of certain plausible examples, some have denied that promising to do something entails a prima facie moral reason to do it. One might hold that only in particular cases can one tell whether promising yields any obligation. Suppose that, quite reasonably, intuitionists do not allow that one can be justified both in believing p and in believing not-p. They must then deny either that the particularist adequately understands the promissory principle (where adequate understanding implies justification for believing the principle) or that the particularist’s arguments in question are sufficient to justify rejecting the principle. I have denied the latter for the plausible arguments I am aware of that apparently support strong particularism. 18 But I have by no means suggested that the arguments have no plausibility, nor do I deny that they provide some reason to believe their conclusion(s). Let me explain.
Lack of (objective) justification for believing p does not imply that believing it is irrational or even reprehensible. We can be unjustified when we make a natural mistake that all but rigorous reasoners would make; irrationality, by contrast, is roughly a matter of flying in the face of reason and is not entailed by failure to use it adequately. This is an important point in appraising disagreement. Even if one is confident that a disputant is unjustified, one should be cautious in attributing irrationality. This point bears on appropriate reactions to apparent peer disagreements, or disagreements approaching them, as well: an unjustified but not irrational disputant (or one holding an unjustified but not irrational position) can give one better reason to doubt the proposition(s) at issue than an irrational one.

17 Consider, e.g., a not atypical characterization by Feldman and Warfield, Disagreement, meant to capture (as it surely does) a notion common in the literature: “[P]eers literally share all evidence and are equal relative to their abilities and dispositions relevant to interpreting that evidence” (2). Cf. Lackey’s characterization (Oxford Studies in Epistemology, iii, 274). She presupposes in the context (as do many studies of peer disagreement) that consideration of the proposition by both parties has occurred and, often, has occurred over time and in a way that requires some thought regarding the relevant evidence.

18 A detailed response to the particularism of Dancy is provided in my “Ethical Generality and Moral Judgment,” in James Dreier (ed.), Contemporary Debates in Ethical Theory (Oxford: Blackwell, 2006), 285–304, reprinted in Matjaž Potrč, Mark Norris Lance, and Vojko Strahovnik (eds.), Challenging Moral Particularism (London and New York: Routledge, 2007), 31–52.
Beyond the possibility of being unjustified owing to unsound reasoning that one should, on adequate reflection, see is defective, there is the possibility of rational excusability. This is similar to moral excusability (and can also trace to brain manipulation rather than unsound reasoning that one could not be reasonably expected to detect). In both cases, there is an error; but, in terms of the very framework of reasons in virtue of which it is an error, there is either something to be said for the erroneous element or some account of why the disputant should not be criticized for missing the error.
The previous three paragraphs indicate how it is possible for a rational person to understand a self-evident proposition adequately, acquire justification for believing it, yet find plausible arguments against it and thereby excusably or, in some at least minimally rational way, deny it. If the self-evident had to be obvious, this would not hold. But plainly the self-evident need not be obvious. The question remains, however, whether one can justifiably believe that a dissenting colleague is in the epistemically undesirable position just described. That question is one I have addressed elsewhere. 19 My concern here is to note the variety of cognitive disparities, to indicate how they can extend even to the self-evident, and to make room for their reduction by comparing certain disagreements on reasons with considerable agreement, between the disputants, in them. The latter is often a basis for resolving the former.

The results of our inquiry bear on understanding philosophical and conceptual disagreements. Even apart from the rationalist view that many philosophical theses and many conceptual claims are self-evident, 20 the distinction between doxastic disagreement and other kinds of cognitive disparity bears both on how wide, deep, and stable an intellectual difference is and on how best to resolve it. Even where there is explicit and direct doxastic disagreement, the best resolution may be indirect: not, for instance, a matter of general arguments favoring one side or the other—such as views about reasons—but of consideration of the concrete cases in which reasons operate.
4 The fluidity of cognition
Beliefs are paradigms of cognitions, but dispositions to believe, intuitive (propositional) seemings, and belief-entailing elements such as judgments, are also cognitive. Certain episodes, such as inferences, and certain processes, such as reflecting on the nature of reasons, are also cognitive. Epistemology has focused on belief more than on any other kind of cognition. This is natural both because belief is an element in knowledge and because beliefs seem essential for action (at least for intentional non-basic action). It is correspondingly natural to take the main case of cognitive disparity to be doxastic—paradigmatically involving one party’s believing p and another party’s disbelieving it. But we have seen many other cases, and it should be clear that how “far apart” people are intellectually is not just a matter of what they believe. It is not even just a matter of what they would believe on reflection. Differences in degree of conviction may still persist and can be important. The point is not just the pragmatic one that our tendency to act on, and indeed draw inferences from, p is (other things equal) stronger in proportion to the strength of our conviction toward p. It is also possible to be (epistemically) unjustified, and even irrational, in one’s degree of conviction that p.

19 In “The Ethics of Belief and the Morality of Action,” n. 16.

20 Laurence BonJour and George Bealer are among the philosophers who would consider many philosophical theses a priori and—perhaps—self-evident in the broad sense I indicate below. See BonJour’s In Defense of Pure Reason (Cambridge: Cambridge University Press, 1998) and Bealer, “Intuitions and the Autonomy of Philosophy.”
It should also be stressed that there can be great difficulty in determining what someone believes, particularly in a complex matter. Rejection of Rossian principles, or of counterpart epistemic principles, may be based on the influence of an inadequate understanding of them or on the pull of a competing theory. To be sure, even rejection of them requires a minimal understanding of them. But suppose a rejection is accompanied by a great deal of agreement with Rossians in the use of reasons whose moral authority the principles affirm. What, then, must the person in question believe about the subject matter in order to count as rejecting the Rossian proposition? This may be unclear until much investigation is done. But there’s the rub: the very effort to clarify what someone believes in such a matter will likely evoke many new beliefs and may result in the rejected proposition’s being supplanted by one that accords with the person’s classificatory and inferential practices. Beliefs arise, and are strengthened, weakened, or abandoned, with reflection, perception, imaginings, even daydreaming. Their justification, moreover, is not always proportional to the seriousness of the effort to achieve clarity or knowledge of the subject matter in question. Careful study may enhance one’s justification in believing something one holds; but if, in considering the cases for and against one’s view—as in arguing with a peer—one rationalizes away counter-evidence, one may emerge with less justification than before.
A further difficulty in determining what people believe and how much cognitive disparity separates them is that intuitions may appear to be doxastic when in fact they are seemings and embody only a disposition to believe. Indeed, it is possible to consider a proposition, say that there is an obligation to obey the law, and, in a single episode of reflection, to pass from non-doxastic seeming to belief and back again. A natural basis of this is considering differentially favorable cases. Each can exercise an influence on cognition. A cognition may or may not be stable across such reflections. Correspondingly, cognitive disparities, including disagreements, whether interpersonal or intrapersonal, may be more or less stable.
Social factors are also important in effecting or explaining change. Testimony is well known to affect those to whom the testifiers in question are credible. But sheer exposure to people we identify with can not only produce new beliefs but—both through that and through the way new beliefs influence standing ones—present us with inferences, implicit classifications, and associations that may change our beliefs. They may, of course, also reinforce certain of our beliefs. Social factors commonly increase our exposure to the use of reasons; that in turn may produce agreement in them even where we disagree on them.
Both the fluidity of cognition and its changeability under the influence of social factors indicate a need for rational persons to find a mean between weakness and rigidity. Too great cognitive fluidity creates an unstable or crumbling structure; too little yields stagnation or rigidity or both. Exposure to evidence and intuitive grounds should affect cognition; discussion with others should often do so as well. Both interpersonally and in our own thinking, we need a mean between rigidity and laxity. Retention of a cognition in the face of justifiedly assumed peer disagreement may bespeak rigidity; its ready abandonment in such a case may reveal insufficient intellectual independence. The former is sometimes avoidable by forming higher-order beliefs about the status of our belief or of our evidence for it; the latter is sometimes avoidable by reducing our degree of conviction or substituting for the abandoned belief one that preserves its defensible content. There are many variables here, and there is no simple formula for optimal resolution of such disagreements. 21
Where disagreement occurs between rational persons who have the same relevant evidence and consider it equally conscientiously—between epistemic peers—philosophers differ regarding how, when the parties justifiedly believe this is their situation, they should proceed. One response for the parties to such a disagreement is skepticism about all the disputed views; another is resolve to stick to their views; still another is to seek a kind of compromise view; and there are variants of each. This paper indicates that our intellectual differences on a given subject—how far apart we are regarding it—cannot be taken to be a matter just of our agreements and disagreements, and, especially in philosophical and conceptual cases, certainly not of agreements and disagreements on reasons or, closely related to this, disagreements regarding the kinds of important analyses or general principles that guide us in thought and action. Cognitive disparity is a much wider phenomenon than disagreement, conceived as a matter of mutually incompatible beliefs. Particularly in normative matters, moreover, disagreement, and particularly disagreement on reasons and principles, may lead to exaggerating the overall cognitive disparity between the disputants. Recognition of the breadth of cognitive disparity opens up more space for discerning both differences and similarities between individuals. Skeptics may emphasize the differences; but anyone can appreciate the value of widening the territory for discussion and for potential convergence in important matters. Mutual understanding at its best requires ascertaining not just one another’s beliefs, but also our presuppositions,
dispositions to believe, inferential tendencies, and other cognitive elements. Mutual understanding is consistent with great cognitive disparity, but that disparity can also undermine it. We can explicitly agree on a great deal when we are actually quite far apart because of cognitive disparities we have not realized. Ascertaining cognitive disparities between us can focus—and sometimes enhance—disagreement; but, on the unilateral side, it can also be a means of self-improvement in individuals and, on the multilateral side, it can lead to reconciliation among them. 22

21 “The Ethics of Belief and the Morality of Action” indicates many options; the literature on disagreement, e.g., as represented in Feldman and Warfield, Disagreement, and Nathan Ballantyne and E. J. Coffman’s subtle “Conciliationism and Uniqueness,” Australasian Journal of Philosophy 90 (2012), 657–70, indicates many others.
22 Earlier drafts of this paper were presented in a seminar at the University of Notre Dame and at the Royal Institute of Philosophy in London. I benefited from those discussions and also want to thank Jennifer Lackey and Lisa Warenski for helpful comments.
10

Perspectivalism and Reflective Ascent

Jonathan L. Kvanvig

1 Introduction

It is widely agreed that rationality is perspectival in some way or other, that the degree to which a given attitude or behavior is rational 1 depends on the egocentric point of view of the individual in question. There is no wide agreement, or precise statement, however, concerning the nature of perspectives and what elements are included in them. To some, for example, subjective Bayesians, a perspective is defined in terms of the totality of degrees of belief. To others, a perspective must also include experiential inputs, and there is dispute among experientialists whether only experiences with content get included or whether pure sensations themselves have to be taken into account even if the pure sensations lack propositional or qualitative content. Anti-subjectivists insist that there is too much being included here. Irrational beliefs and unjustified degrees of belief shouldn’t be counted, some say, and others insist that beliefs formed on the basis of something other than an interest in getting to the truth and avoiding error should not be counted. In a word, controversy reigns.

There is an apparently unrelated issue in recent epistemology, one concerning the general nature of normativity and the related concept of excusability (or some other term for a secondary notion of epistemic propriety). In the glory days of epistemology, infallibilist assumptions reigned to such an extent that we didn’t have to consider the implications for the theory of rationality of, for example, someone not knowing the rules to follow, or getting confused about which rule applies in a given condition, or about a person conscientiously attempting to follow a rule and just not being up to the task. In these ways, and others, there is a common viewpoint in contemporary epistemology that whatever theory of normative status one adopts for cognition and practice, the theory will have to be supplemented with some notion of excusability that is logically distinct from the normative notion in question. 2 In a word, ambiguity of evaluation is unavoidable, once one appreciates the implications of fallibility.

1 Throughout, I will use variants of “rational” and “justified” interchangeably, not intending to convey any commitment to the view that they are identical but only for stylistic variety. I doubt there is a substantive difference derivable from ordinary language for a distinction between the two, though there are philosophical distinctions which one might choose to label by distinguishing these terms.
This latter idea is captured nicely in terms of Williamson’s anti-luminosity argument (Williamson 2000). The insight behind Williamson’s argument is that there aren’t
infallible points from which to begin explaining the nature of epistemic normativity,
not even for clearly detectable sensation states such as feeling cold. The generalized
argument, which Williamson does not give, but which is equally forceful, extends to
phenomenal states more generally, such as being appeared to redly. If the argument is
granted, the conclusion is that there are no luminous starting points from which we can
begin to construct an account of whatever normative status we are investigating. And
the claim, current in the literature, is that once one grants the anti-luminosity argument,
ambiguity of evaluation is unavoidable, for it will always be possible for those trapped in
the non-luminous egocentric predicament to violate the normative rules excusably. Put
cryptically, anti-luminosity threatens unity in normative theory.
Of course, unity was already ruled out by the fact that there is a multitude of ways to assess cognition and its products relative to alethic goals. We can investigate whether a particular belief is good from the point of view of maximizing true beliefs and avoiding false beliefs, getting to the truth here and now, or getting to the truth in the long run. We can assess beliefs by whether one’s evidence is sufficient merely for high probability, or for epistemic support sufficient to warrant closure of inquiry on the question at hand. But the kind of disunity here that is claimed to be needed is one that arises from within a given particular way of assessing cognition and its products relative to a specific, assumed alethic goal. The claim is that, for any particular given way of assessing cognition that yields an adequate theory of justification or rationality, an additional notion of excusability will be needed relative to that specific notion of justification or rationality. It is not merely that there is some other way of assessing cognition and its products, but that internal to the very theory in question is a demand for multiplicity, a demand for some secondary normative notion in addition to the primary one (assumed here to be teleologically related to the specific alethic goal already specified). Thus, the idea is that the theory of excusability itself can’t be a proper account of justification or rationality, and that once we come up with a proper account of the latter, we’ll have no recourse but to insist on a distinction between either justification or rationality on the one hand and excusability or blamelessness on the other. The disunity is thus internal to the theory of fallibilistic justification or rationality rather than a mere example of the blooming of a thousand flowers in the open spaces of epistemological theory.
One further and apparently unrelated issue in contemporary epistemology concerns
the possibility of rational disagreement (as well as the related issues of discontent with
the opinions of cognitive superiors and perhaps occasional deference for the ideas of
2 See, e.g., DeRose ( 2002 ), Weiner ( 2005 ), Hawthorne and Stanley ( 2008 ). I discuss these views in
Kvanvig ( 2009 , 2011 ). For resistance on the ambiguity approach, see Thomson ( 2008 ).
perspectivalism and reflective ascent 225
cognitive inferiors). Some say that such aren’t possible once one has controlled for
differences in total evidence and for known differences in competency. 3 Others say that
there is no need to insist that all parties to such disputes are irrational, but only that at
least one is. 4 If disagreement remains even after controlling for the differences just
mentioned, then at most only one of the two is satisfying the rules of rationality, but so long
as one is satisfying these rules, one is rational, even if it is equally obvious to each that
they are satisfying the demands of rationality. This position gives fodder to the earlier
position about multiplicity in the theory of rationality, for if it is equally clear to both
that they are satisfying the demands of rationality, then we might want to blunt the force
of the charge of irrationality against one of the parties to the dispute by noting that the
irrationality in question is excusable.
The point of this essay is to draw these three threads into a common cord to show
what a fully fallibilistic approach to rationality ought to look like. The result will tell us
something about what an adequate account of a perspective is, why rational disagreement
can always arise even when controls are in place for total evidence and competency,
and why fallibility does not fly in the face of a strong preference for full unity in an
account of normativity. The goal then is to provide an account of normativity that has
no need of an independent notion of excusability (though of course there will still be
excusable actions and beliefs) and no tendency to sniff the air for the scent of irrationality
when people disagree.
2 Grounds for excusability
A natural starting point when thinking about epistemic normativity is to consider analo-
gies with moral and legal responsibility, and if we do so, grounds for introducing a sec-
ondary notion of excusability can arise quite easily. Most especially, such grounds arise
when we think of epistemic rationality as being governed by principles that resemble
strict liability laws. A strict liability law is one that turns only on questions of causal
responsibility, holding a person liable for such consequences with no regard whatsoever
for whether the person was at fault, or morally responsible, for the damages. In criminal
cases, a strict liability law is one for which proof of a violation involves no mens rea
requirement, that is, the prosecution bears no burden of proving that the person had a
“guilty mind” with respect to the action in question. Thus, a conviction can be obtained
even if the defendant was understandably ignorant of the factors that made their behavior
criminal. In this way, a person may fail to be culpable, or blameworthy, and yet be found to
have violated the law. Even the weakest mens rea requirement, that of criminal negligence,
need not have obtained in cases where the law in question is a strict liability law.
Here all the elements needed for disunity in a normative theory are present. One
normative notion is defined in terms which leave open the possibility of failing to live up to
3 Representatives of this view include Christensen (2007), Elga (2007).
4 A view of this sort is defended in Kelly (2005).
226 jonathan l. kvanvig
the demands of that normative notion in a way that is perfectly understandable, or
excusable, or blameless. In the usual criminal case, there must be a concurrence of actus
reus and mens rea, between the external elements of a crime and the mental elements as
well. 5 The usual common law standard for criminality is expressed in the Latin phrase
actus non facit reum nisi mens sit rea, meaning "the act does not make a person guilty unless
the mind is also guilty.” If we have a strict liability law, however, this standard does not
apply, leaving open a judgment of guilt even though the action in question might not be
blameworthy.
One type of ground for positing epistemic excusability, then, is when epistemic
principles are conceived on the model of strict liability laws. One way to put this
point is to think of epistemic principles as completely perspective-free. For example,
one might think of the epistemic principles involving logical or mathematical theo-
rems in this way, insisting that anything provable from no premises whatsoever can
never be rationally disbelieved. Conceived in this way, it is easy to see why we would
want some notion of excusability in addition to that of rationality, since we want to be
able to say something nice and comforting about poor Frege’s mistakes in logical
metatheory.
There is a better response, however, and it is to find fault with the perspective-free
conception of epistemic principles involved in this argument for such disunity in
normative theory. When we think of epistemic principles in terms of principles of logic, we
become tempted to the strict liability model and the subsequent demand for a notion of
excusability that is different from that of rationality itself. The lesson to learn here isn't
that we need an independent theory of excusability but rather that we need to think of
epistemic principles in terms other than that of strict liability laws.
In one sense, there is no question that disunity will be required in any theory attuned
to ordinary language. As is well known, there is, at least, the "ought" of practical
deliberation in ordinary language as well as the "ought" of general desirability (see
Wedgwood (2007)), an epistemically constrained notion of what is required of us and one
insensitive to this factor (see Sher (2009), Thomson (2008)). More generally, there is the
possibility that first-person assessment might come apart from third-person assessment,
and perhaps second-person assessment as well (see Darwall (2009)), and there are the
arguments for relativism and contextualism about normative language that generate
disunity in a theory as well. 6 Our present concern, however, bypasses most of these
concerns and can accommodate the remainder. The present issue is about what kind of
epistemic appraisal is fundamental, if any, from the perspective of some particular
5 Here the Latin term actus reus has the unfortunate implication that the act can be guilty independent of
whether the mind is also guilty. It is instructive that in Australia now, these Latin terms have been replaced
with the language of Fault Elements (corresponding to the mens rea requirement) and Physical Elements
(corresponding to the actus reus requirement), keeping the proper perspective that the external, physical ele-
ments involved in action need imply no guilt of any sort.
6 For excellent discussion of these various grounds for disunity in theory, and strong resistance to such,
see Thomson (2008, especially chs. X and XI).
understanding of the teleological dimension of cognition. 7 The challenge of disunity is
that, once we find a fundamental notion, we will be forced to posit a further, secondary
notion of propriety relative to the purportedly fundamental notion. The idea behind
disunity is thus that, once we specify a given norm for epistemic appraisal, we will have
no choice but to posit another norm to handle cases in which a person lacks reason to
believe that the norm has been satisfied or has reason to think that the norm is incorrect.
So, for present purposes, the threat of disunity arises when we first assume that
there is a fundamental notion and then endorse a particular account of this
fundamental notion, thereby generating some governing norm for fundamental epistemic
appraisal. Many of the issues involved here stand as challenges to the idea that there
could be a fundamental notion of normative appraisal. For example, relativism and
contextualism invite such a conclusion, and the idea that there is an ambiguity between
first-, second-, and third-person appraisal suggests it as well, as does the idea that there
are two generic “oughts,” one of general desirability and one regarding practical delib-
eration. The concern of this essay presumes away such threats in order to focus on a
more specifi c argument for disunity, one arising even after one has concluded that
there is a fundamental notion of epistemic appraisal.
In considering what an alternative approach might look like, it is worth noting here
the varieties of mens rea possibilities in the law. The weakest type of mens rea requirement
is that of negligence, with recklessness being a slightly stronger requirement, followed by
knowledge, and finally intention or purpose. In crude and oversimplified terms, the
difference between a law with only a negligence requirement and one with a recklessness
requirement is that recklessness requires knowing the risks in question, though not
desiring that they be realized, while negligence requires neither such knowing nor
desiring. The distinction between laws in which negligence plays a role and strict liability
laws is typically explained in terms of whether a reasonable person so situated would
have recognized the risks involved.
These characterizations fall well short of the degree of precision needed
for full understanding, but we can bypass these issues here, since our
interest is more in where epistemic principles or norms fall than in the category scheme
itself. What we have seen to this point is that if we think of the relevant principles in
terms of strict liability laws, we confuse the rationality of ideal agents with the rational-
ity of ordinary agents, yielding a need to introduce an additional notion of excusability
to account for legitimate confusion about the ideal principles of logic, explanation, and
evidence. The question I wish to pursue here is what happens when we move past strict
liability conceptions of epistemic principles, introducing some type of mens rea require-
ment, somewhere on the continuum from weak to strong requirements encompassing
the categories beginning with negligence and ending with full intention and purpose.
7 This teleological dimension is usually understood in terms of getting to the truth now and avoiding error
now, but for the present, I’ll rest content with the vaguer characterization in the text, since it may be that the
goal of cognition is something other than truth. Perhaps the goal is knowledge itself, or understanding. We
need not pursue these issues at this point. For discussion of these issues, see David (2005), Kvanvig (2005).
In order to address this issue, it will be helpful to focus our understanding of the
nature of such requirements in the law. As noted already, the distinction in the law is
between physical and mental features of wrongdoing, between actus reus and mens rea.
Thinking of the former in terms of the physical elements of wrongdoing helps us avoid
the temptation to think of the behavior in question as itself prohibited by the law, except
in the case in which the law is a strict liability law. A better approach is to think of the
physical elements—the behavior in question—as something that the law has an interest
in extinguishing. In short, it is behavior that is disvaluable in legal terms. We thus sepa-
rate the theory of legal value from the theory of legal obligation, allowing an account of
legal wrongdoing that adverts both to the theory of value and the continuum of mens rea
options. In the simplest case in which a law is a strict liability law, there is a convergence
between what is disvaluable and wrongdoing; in other cases, wrongdoing is a function
of what is disvaluable and the continuum in question.
An alternative, and perhaps more common, way to think of the relation between actus
reus and mens rea aspects is to view the behavior as itself prohibited, with the mens rea
clause interpreted in terms of an alternative normative notion of excusability. In our
context, however, such an interpretation is problematic, since it makes it too easy to
show that no unified account of normativity (within a specific domain) is possible. That
is the position we are evaluating, so when there is an alternative account that leaves this
question open, we should adopt it rather than endorse an account that settles the issue
by fiat. On the alternative account, we begin with a theory of legal (dis)value, and
interpret what is prohibited in terms of doing something disvaluable while satisfying the relevant
mens rea standard. It is not important here whether the laws themselves are formulated
in terms of such language or whether they are formulated in other terms. What matters
is whether this way of regimenting the various factors involved in legal liability is possi-
ble and theoretically fruitful, or whether the only defensible picture is one that immedi-
ately and self-evidently entails a need for multiple normative notions.
Our question, then, is what the implications of this way of thinking of the continuum
of mens rea involvements in wrongdoing are within the context of fundamental
epistemic normativity. In particular, I want to address the issue of how these various
involvements lend credibility, or not, to the multiplicity claim regarding epistemic nor-
mativity, according to which, whatever primary notion of appraisal is used, there will be
a need for an additional, secondary notion such as excusability or blamelessness in order
to provide a complete account of epistemic appraisal.
3 Mens rea and epistemic appraisal
Begin with the strongest mens rea requirement, one that says no belief is irrational
except when people both intend and know that they are flouting the relevant truth-related
considerations, or even merely know that they are doing so. With respect to such
conceptions of epistemic appraisal, two implications arise. Such a requirement can easily
be seen to undermine any need for theoretical disunity, since it leaves no room for a
need for an independent notion of excusable behavior. After all, the person in question is
aware of what’s wrong with their cognitive behavior. Yet, if such requirements are
needed for irrationality, it will be rare that we find irrational beliefs. But from the point
of view of fundamental normativity regarding what we should do or think, it shouldn't
be this easy to pass epistemic muster. One needn't intend to violate the norms, or know
that one has done so, in order to have an irrational belief.
Something similar can be said about mens rea involvements for anything stronger than
negligence, though some explanation is needed to make this obvious regarding the
recklessness standard. In legal contexts, recklessness requires a prior awareness of what counts as
legally disvaluable behavior. In the epistemic context, however, the usual approach to
epistemic value involves a double goal, perhaps the goal of getting to the truth as well as
the goal of avoiding error. In such a context, the risks involved in recklessness would
have to be cast, not only in terms of what is disvaluable, but also in terms of what is
valuable. So a recklessness standard would first have to specify an appropriate balance
between the two goals, and then specify what level of risk is tolerable in failing to
achieve an appropriate balance. The problem here is that, on such a requirement, any person
capable of rationality of this sort would have to know in advance what it takes to
balance appropriately the competing goals in question, and such knowledge is rare and,
perhaps, non-existent. Epistemologists sometimes presume a particular balance between
these goals, perhaps that they are equally important, while others weight one as more
important than the other (see Chisholm (1991)), but it is too much to claim that they
have knowledge here. So, arguably, nobody knows what an appropriate balance is, even
if there are some who have rational opinions on the matter. Moreover, even if some
sophisticated epistemologists have such knowledge, it isn’t broadly shared enough to
undergird an account of irrationality in terms of recklessness of the sort requiring such
prior knowledge. Given such a requirement, the recklessness standard leaves little room
for excusable but irrational beliefs, since nearly all beliefs would be judged rational on
such an approach.
We are left, then, with approaches that take the notion of epistemic rationality to be
appropriately modeled after laws with weak mens rea requirements. It is easy to see here
why it is tempting to think that an independent notion of excusability may be needed.
Suppose, through no fault of your own, you simply don’t see certain logical conse-
quences that a normal, reasonable person would see in your circumstances. Maybe, for
example, you were just given by your captors that horror to all clear thinkers every-
where, the blue logical-confusion-inducing pill, and modus ponens no longer seems valid
to you but affirming the consequent, at least sometimes, does. So cognition operates in
ways that yield irrational results, because they involve affirming the consequent, even
though these results are excusable.
But once again, just as before, the need here is chimerical. The problem with such a
conception is that it doesn’t adequately honor the perspectival character of rationality,
and thus provides a poor argument for disunity in the theory of rationality. We only need
excusability because we are substituting the perspective of a normal, reasonable person
for that of your own drug-induced perspective. After all, the appropriately perspectival
notion here takes a change in how things seem to you to be central to the explanation of
why certain attitudes are rational and others are not. It may be that for certain purposes,
we want a different normative theory that is not perspectivally sensitive in this way, and
such a theory might be appropriately modeled on laws whose only mens rea require-
ment is that of negligence. But for the fundamental normativity that addresses the pri-
mary concern of what to do and what to think, we need a perspectival theory that
doesn’t evaluate one person’s rationality from the perspective of another person.
As we have seen, however, stronger mens rea requirements make it too easy to pass the
test for rational belief, and thus cannot be embraced. The question is how to find a
position without the flaw of substituting a different perspective for the relevant one, and
which doesn't require the kind of awareness involved in stronger requirements such as
recklessness or full intentionality to violate the norms. The appeal of such a theory
should be obvious by now, since an approach that makes rationality distinct from
excusability tends to be a view that is insufficiently attentive to the perspectival character of
rationality. It is this point that lies at the heart of the New Evil Demon Problem (Cohen
and Lehrer 1983), where inhabitants of evil demon worlds would thereby have to have
irrational but excusable beliefs (since both the known risk of error and what a reasonable
person would judge in each situation are the same in the demon world and in the
actual world), in contrast to our own situation, in which the same beliefs by the same
individuals are both rational and excusable. The central lesson of this problem is that
such approaches to rationality are insufficiently perspectival.
Such a criticism provokes the philosophical tendency to solve a problem by drawing
a distinction. Here, one might say that there is a subjective notion of rationality that
treats denizens of demon worlds just as actual world inhabitants, and there is another,
objective notion of rationality that distinguishes the two in terms of rationality or
justification (see, e.g., Goldman (1988)). In our context, however, such a maneuver must be
rejected. If we want to understand the fundamental notion of epistemic appraisal, one
that honors the perspectival character of rationality, we shouldn’t say that denizens of
demon worlds are any more irrational than we are, since the perspectives are the same. In
this context, drawing a distinction of the sort in question doesn't undermine the
probative value of a counterexample, as can easily be seen by considering the device of
preserving the idea that the earth used to be flat by distinguishing objective from subjective
truth. At most, such a drawing of a distinction only shows how to preserve an alternative
approach to rationality while maintaining logical consistency among one's commitments.
It doesn't show that the example isn't evidence strong enough to warrant rejecting
the approach. And here, the example retains this power even in the face of the distinction,
since the distinction leaves untouched the obvious point that the best analogy with fun-
damental moral and legal responsibility for an account of epistemic appraisal is one that
makes rationality perspectival enough to require more mens rea involvement than any
such objective notion allows.
So what we need in order to defend unity regarding fundamental epistemic norma-
tivity is something weaker than the recklessness and full mens rea requirements charac-
teristic of some types of legal responsibility, but strong enough that mere ignorance of
the standards isn’t suffi cient for rational belief. One way to avoid this result appeals to the
negligence standard incorporating what a rational person would recognize in the situa-
tion in question, but we have seen that this approach is insuffi ciently perspectival. So
what is needed is an approach that is more attuned to the perspectival character of
rationality than a negligence standard is, but not in such a way that stronger require-
ments such as recklessness or full mens rea are required.
This conclusion, however, proves fodder for a very strong argument that unity in the
theory of rationality is simply not possible, and thus that we must choose between
adequate perspectivality in our theory and unity. In the next section, I'll first explain how
our conclusion about where to locate appropriate perspectivality in our theory in terms
of the levels of mens rea requirements in the legal context leads to the dilemma and then
how the dilemma can be avoided.
4 Between recklessness and negligence
The conclusion of the last section commits us straightforwardly to an implication of our
own fallibility, to the effect that whatever the norms of rationality are, they are not
automatically known to us and we are not immune from error regarding them. This fact
forces on us the conclusion that unknown violations of such norms are possible, and
unknown violations of norms appear to leave us with the possibility of an irrational
belief that is nonetheless excusable in some sense, thereby undermining any possibility
of the unity we seek in the theory of fundamental epistemic normativity. Hawthorne
and Stanley give voice to the fundamental argument for such multiplicity:
In general, it should be noted that intuitions go a little hazy in any situation that some candidate
normative theory says is sufficient to make it that one ought to F but where, in the case described, one
does not know that situation obtains. As Tim Williamson has emphasized, cases of this sort will arise
whatever one’s normative theory, given that no conditions are luminous. . . . In general, luminosity
failure makes for confusion or at least hesitancy in our normative theorizing. . . . After all . . . excusable
lapses from a norm are no counterexample to that norm. (Hawthorne and Stanley 2008: 585–6)
Hawthorne and Stanley rightly note the hesitancy that arises when norms are
unknowingly violated, and this hesitancy can incline one to accept the following type of
argument: irrationality is displayed by violating a norm, but when the violation is not a
known violation, there is something excusable about the violation, so there needs to be
a secondary notion of propriety for any theory, since no theory can deny the possibility
of unknown violations of norms (on pain of having to endorse the idea that we have
infallible recognition of the governing norms).
This argument, however, can be resisted without abandoning fallibilism. Begin with
the following characterization of the relevant epistemic principles: each principle involves
a conferrer of rationality which establishes an upper limit on the degree of rationality a
belief in the target claim may have, a limit encoded in the epistemic operator that governs the
target in question, and an enabler of rationality, which is a function of diminishers and
defeaters of rationality and requires that no diminisher that is present passes the threshold
sufficient for defeat. Consider, for example, a classic Chisholmian principle:
If I’m appeared to F-ly and have no grounds for doubt that something is F, then it is reasonable
for me to believe that something is F.
Here the conferrer of rationality is the appearance state, the enabler is the absence of
grounds for doubt, the target is a proposition of the form something is F, and the level of
rational support provided by the conferrer for the target is at the level sufficient for
rational belief.
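The structure just described can be schematized as follows; the notation is introduced here purely for illustration and is not the text's own:

```latex
% Schematic form of an epistemic principle (notation illustrative only):
%   C(S,p) -- the conferrer obtains for subject S and target p
%   E(S,p) -- the enabler obtains: no diminisher that is present passes
%             the threshold sufficient for defeat
%   O_d    -- the epistemic operator, encoding the upper limit d on the
%             degree of rationality a belief in p may have
\[
  \bigl( C(S,p) \wedge E(S,p) \bigr) \;\rightarrow\; O_d(S,p)
\]
```

In the Chisholmian instance, C(S,p) is being appeared to F-ly, E(S,p) is having no grounds for doubt, the target p is the proposition that something is F, and d is the level sufficient for rational belief.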
It is important to note here that the conferrers in these principles are potential proper
bases for doxastically rational belief, and thus must be conceived atomistically rather than
holistically, since it is psychologically unrealistic to expect belief to be based on an entire
system of beliefs. This restriction allows the norms to perform their guidance function,
playing an explanatory role in the transitions from one doxastic state to another. In brief,
the norms plus the context allow the agent to be guided by the epistemic conditionals
which link the conferrer with the target (typically in the form of an ordinary indicative
conditional). Without this restriction on norms, we could not explain the difference
between conforming to the norms and following the norms, and the lesson of the
restriction is that both rationality and irrationality come in many forms. There is no
single norm to which all such gradations answer, though of course there is the general
theory of rationality that implies the particular shade of rationality present.
The conferrers of rationality (and the corresponding explainers of irrationality)
establish an upper limit to the degree of rationality (or irrationality) for a given target.
These limits set the prima facie level of rationality or irrationality for a given belief. To
determine the ultima facie status of a belief, we turn to the enabling condition, which is a
function of two ideas. The first is that of a diminisher of the level of rationality or
irrationality, and the second involves a threshold on the level of diminution needed for defeat of
the support offered to the target (or defeat of the degree of irrationality that is generated
by a given explanation of such). 8
8 A note about irrationality and explanations of such is in order, though it would detract from the main
line of discussion in the text. We can say this much, however. Irrationality involves either the violation of a
specific norm (as when the conferrer and enabler conditions are satisfied, and one believes the denial of the
target) or failure to conform to a specific norm (as when one withholds or takes no attitude at all when the
conferrer and enabler conditions are satisfied) or when one's circumstances fail to provide a ground for any
attitude whatsoever and one takes one anyway.
Degree of irrationality is, at first pass, a function of two things. The first involves the distance between
one's attitude and the level of epistemic support for p. The second involves the distance away from a
withholding that involves pure indifference between p and ~p. There's a complication that I will pass over here:
the degree of irrationality involved in taking the attitude of pure indifference when there are no grounds
for taking any attitude. A full account of degrees of irrationality will need to determine what level to assign
here, but the issues take us too far afield to pursue in the present context.
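The first-pass account in the preceding note might be rendered schematically as follows; the function F and the symbols are introduced here purely for illustration:

```latex
% Degree of irrationality as a function of two distances
% (notation illustrative only):
%   a    -- the attitude actually taken toward p
%   s(p) -- the level of epistemic support for p
%   w    -- the withholding attitude of pure indifference between p and ~p
\[
  \mathrm{Irr}(a,p) \;=\; F\bigl( \lvert a - s(p) \rvert,\; \lvert a - w \rvert \bigr)
\]
```

The complication the note passes over, the degree of irrationality of adopting pure indifference when there are no grounds for any attitude, would require a further clause beyond F.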
This conception of epistemic norms already allows an initial notion of excusability
without sacrificing unity. First, note that some unknown violations of norms provide
no basis for positing an independent notion of excusability. It is not unknown violations
as such that are excusable, but rather unknown violations when persons have been
especially careful and reflective about what to make of their evidence. It may be worse
to knowingly violate a norm than to unknowingly do so, but that fact can be accom-
modated by noting that the level of irrationality is diminished in the case of unwitting
violations. To get full excusability, we need something stronger, such as when people
carefully and deliberately consider the evidence and what to make of it, coming to a
conclusion that violates a norm of the sort described above.
In such cases, the resources available given the above conception of norms allow the
following explanation of the excusability of such beliefs. In the case of unknown
violations of norms, the fact that the violation is unknown can diminish but not defeat the
irrationality involved. We might say, of such cases, that the person in question has an
excuse but that the excuse isn't strong enough to make the belief excusable. In the case
of careful and meticulous efforts to follow the evidence where it leads, we might have a
case where the belief is prima facie irrational and yet fully excusable. In such a case, the
belief is only prima facie irrational because ultima facie not irrational.
In this way, we have some ability to incorporate a notion of excusability into a standard
approach to epistemic norms. This notion of excusability, however, may not quite be
what Hawthorne and Stanley discuss, since they are concerned with a notion of
excusability that comes into play in a situation which a theory identifies as "sufficient to make
it the case that one ought to F." In the situation just discussed, the situation is sufficient to
make it the case that one prima facie ought not hold a particular belief, but perhaps
Hawthorne and Stanley intend the stronger idea that the situation is supposed to make
it the case that one ultima facie ought not hold the belief.
Of course, the objection can't quite be as just described. If we stipulate that the situation
is sufficient to make it the case that one prima facie ought not hold a particular belief,
then the discussion has to end at this point. The stipulation, however, is dialectically
untoward. What we really have is a case where a given theory claims that the situation is
sufficient to make it the case that one prima facie ought not hold a particular belief, and faces
apparent counterexamples involving certain kinds of ignorance of the conditions the
theory claims are sufficient. The question is what to make of the apparent
counterexamples. Pointing out that the obtaining of the conditions doesn't entail knowing that the
conditions obtain may be sufficient to keep the examples from generating an outright
refutation of the theoretical proposal in question, but pointing this out doesn't do
anything to rid the apparent counterexamples of their probative value to challenge the
adequacy of the theory. It is, after all, a compelling counterexample to the theory that you
should always do what Grandma says to note how reflective and competent people can
come to see it as obvious that certain things Grandma says are wrong are, in fact, not
wrong; and merely pointing out that every theory has to allow the possibility of unknown
and unrecognized violations of norms doesn’t rescue the Grandma theory.
These points dovetail nicely with a better way of formulating a worry along the lines of that raised by Hawthorne and Stanley without triggering this Grandma theory response. The worry is that the notion of excusability delineated above isn't strong enough to do the work needed of such a notion, since it merely functions to block a certain negative characterization of the rationality of the belief in question. When people carefully and meticulously follow the light of reason as best they see it, the objection goes, we need a positive epistemic notion to describe the second case, not one that is merely equivalent to lessening or denying ultima facie irrationality.
Once we recognize the perspectival character of rationality, however, the best way to honor the motive behind this objection is to let the positive epistemic characterization of the careful and meticulous be given, when deserved, in terms of rationality itself rather than in terms of some other normative notion. The challenge of the Hawthorne/Stanley objection is that this result can't be achieved on pain of embracing luminosity, but that is a mistake. Luminosity can be avoided by conceiving of the theory of rationality as involving a hierarchy of epistemic principles. At the base level are the kinds of principles cited above, the default explainers of rational status for non-reflective individuals. Once reflection on these principles occurs, however, different epistemic principles will be involved in the explanation of the rational status of the beliefs in question. Once reflection has occurred, the relevant conferrers and enablers of rational status with respect to a given target proposition may be different from what they would have been had the reflection not occurred. Moreover, should reflection occur concerning this reflective perspective as well, higher-order epistemic principles would be involved in explaining the rational status of the target proposition. Thus, in addition to the conferrers, targets, epistemic operators, and enablers used in the characterization of epistemic principles above, there would also be in the antecedents of such principles a reference to a level of reflective ascent achieved. An example of such a principle would be:

If S's senses report that p and reflective level n provides a link for S between p and q, where n is the highest level of reflective ascent achieved, and there are no defeaters present to undermine the link between p and q, then it is rational for S to believe that q.
The theory of rationality itself would thus be constituted by this hierarchy of principles, including a closure clause to rule out any ways of achieving rationality other than through these principles. It is worth noting that such a theory can be fully fallibilist concerning our access to these principles: we have infallible access to none of them. Even so, when reflection occurs concerning the principles, those very principles need not be the explainers of rational status, since the reflection in question puts one at a metalevel removed from the principle in question. The relevant metalevel principle attends to the difference that such reflection makes to the total perspective of the individual in question on the target proposition, refusing to enforce a Procrustean approach that relies on exactly the same epistemic principles to account for the rational status of both pre-reflective and post-reflective belief.
perspectivalism and reflective ascent 235
Such a conception is also free to embrace anti-luminosity about the principles in question. When one reflects on the principles that guide rational reflection, one is reflecting on principles at a level lower than the ones that explain the rationality of the reflection itself. For any particular epistemic principle, there is no more reason to think rational false beliefs aren't possible for that principle than for more ordinary claims about the world. Moreover, even the most careful and diligent attempt to get to the truth need not be thought sufficient for rational belief, on this hierarchical picture. Irrationality can arise here in virtue of unnoticed incoherence among one's beliefs and in virtue of defeaters that are present and which are not incorporated into a reflective perspective that rids them of their power to defeat.
Irrationality among the careful and diligent might also arise when no epistemic norm is available to undergird the cognizing in question. For example, the types of connections drawn by people with mental diseases, such as paranoia and schizophrenia, may be fully careful and diligent and yet irrational. I should note, however, that this conclusion may overreach. There is a predilection among epistemologists to identify too many regrettable features of cognitive systems in terms of one favorite term, "irrational," much as vocabulary-challenged (and morally insensitive) teenagers have only the term "gay" to describe anything they don't like. The point is that not every defect of a belief needs to be thought of as a kind of irrationality, and it is quite easy to see how to describe the system of beliefs of highly intelligent paranoid schizophrenics both in terms of a high level of rationality and in terms of some other defect, such as being nearly totally disconnected from reality and delusional.
Nothing in our present project requires a verdict on this issue, however, and so we can leave it open. What is relevant for present purposes is that this reply in terms of such a hierarchy of principles is a possible one to the Hawthorne/Stanley concern. Moreover, it is the right response, since it refuses to compromise on the platitude that rationality is perspectival in character. Alternatives to it impose a limitation on this perspectivality, in a way that causes things, as Hawthorne and Stanley describe it, "to go hazy." Such haziness is the product of failing to honor the full perspectivality of rationality, and the hierarchical conception of epistemic principles that explains rational status shows how to eliminate the haze.
The point to note here is that there is thus room between a recklessness standard and a negligence standard as ordinarily understood for adopting a fully perspectival and unified approach to epistemic appraisal, one in no need of a notion of secondary propriety when such a theory judges a given belief or attitude irrational or unjustified. On such an approach, one can exhibit such irrationality even if one isn't aware that one is doing so and isn't aware that the risks one is taking with respect to the relevant epistemic goal are too high to be justified.
The result is an approach to epistemic appraisal that honors its perspectival character, and provides theoretical underpinning for allowing our normative epistemic theory to speak with a single voice rather than often resorting to hedging to avoid refutation by examples that incline us to think the theory mistaken. This approach also provides a useful background for addressing the controversial issue of where and when disagreement rationally compels a change in attitude. To this issue we now turn.
5 Disagreement and deference
The lessons for disagreement, whether by intellectual peers, superiors, or inferiors, are straightforward. One tempting mistake is to address these questions while assuming the epistemic insignificance of reflective ascent, arguing that if deference is warranted it will have to be so on the basis of information unrelated to the issue of disagreement itself. No such universal rejection of the epistemic significance of reflective ascent is warranted, however, and once we grant this point, it is easy to see why a refusal to adjust one's opinion in the face of disagreement need not be justified in terms of information unrelated to the issue of disagreement itself. What remains to be addressed is the deep and important question about when and where and to what degree reflective ascent makes an epistemic difference.
One primary reason for an aversion to the epistemic significance of reflective ascent is William Alston's influential paper on levels confusion in epistemology (Alston 1981). Alston recommends not confusing, for example, the question of whether one knows with the question of whether one knows that the conditions for knowledge have been met. Confusing these different questions, which occupy different levels in the hierarchy of epistemic issues, risks global skepticism, since the skeptic can always raise a further question at a higher metalevel, regardless of the propriety of belief at the original level. The solution, Alston argues, is to keep the issues separate, acknowledging that one can know that one has hands even if one doesn't know that the information available confirms that one has hands (and perhaps even if one isn't in a position to have this meta-knowledge).
I endorse this recommendation of Alston's, but want to note as well that using this recommendation to reject universally the epistemic significance of reflective ascent is to aim to secure a bridge too far. Alston's point is grounded in the idea that absences at the metalevel need not undermine the epistemic standing of beliefs at the object level. Endorsing the idea of the epistemic significance of reflective ascent doesn't conflict with this point, but instead involves recognizing that presences at the metalevel can, in certain circumstances, make enough of a difference to the total perspective of the individual in question to change what would have been rational without such ascent.
The recent history of the theory of defeasible reasoning supports this assessment. The theory of defeat distinguishes rebutters from undercutters. Where E is one's evidence for p, the former is a piece of additional information that is evidence against p, sufficient to defeat the power of E to make fully rational a belief in p. Undercutters are not evidence against p, but are rather evidence against the idea that E has sufficient evidential power to make fully rational a belief in p: in short, an undercutter is evidence that E isn't adequate evidence for p.
Early discussion of the distinction between rebutters and undercutters masked the metalevel character of undercutters. Note, for example, Pollock's early account of undercutters in Pollock (1986), where he claimed that an undercutter, for the context we are using, is evidence that it is false that if E were true, p would be true (in symbols, it is evidence that ~(E→p)). One way to interpret this claim about undercutters is to see Pollock as endorsing a connection between the notion of evidential support and the truth of the counterfactual in question, a view buttressed by the fact that he describes undercutters as defeaters that attack "the connection between the evidence and the conclusion, rather than attacking the conclusion itself" (p. 39). Such a view is implausible, however, since the counterfactual in question is false whenever one's evidence consists of truths and one's belief is justified and yet false. Moreover, suppose the connection between having evidence and Pollock's counterfactual is contingent, so that one's evidence E is adequate evidence for p and yet it is false that if E were true, p would be true as well. If so, then acquiring evidence for the falsity of the counterfactual in question should not count as acquiring a defeater for p, since the evidential support relation fails to be threatened by that information. For these reasons, it is best not to think of the connection between the generic account of undercutters that I use in the text and the more specific version Pollock uses as a defining connection of undercutters.
It is worth pointing out that Pollock never claimed that the connection was definitive, but rather only held that evidence undermining the connection between evidence and conclusion can be understood as evidence for the falsity of the counterfactual in question. Here, there is no reason to inquire whether this account can be sustained, since the only point at issue is whether undercutters count as metalevel evidence, and Pollock's account sustains that point rather than threatening it.
This distinction between two kinds of defeaters thus confirms that metalevel information sometimes undermines object-level justification, since an undercutter is by definition a metalevel feature of the total perspective of an individual relative to which object-level justification is present or absent. A universal restriction against allowing information from one level to affect epistemic standing at some lower level is thus inconsistent with the obvious point that sometimes justification is lost because of undercutting defeaters.
One might hope to grant this point and yet resist the epistemic significance of reflective ascent by noting that undercutters are not metalevel factors that involve reflection. So one might hope to avoid the epistemic significance of reflective ascent by pointing out the difference between reflective ascent and higher-order evidence: to have evidence that there is evidence is one thing, and to engage in reflective ascent is something different.9 The point noted is correct—undercutters are not metalevel features arising from reflection. Yet, the argument for the epistemic significance of reflective ascent isn't affected by this point. The argument depends on the perspectival character of justification, noting the obvious point that, once reflection has occurred, a change in total perspective results. Once this change occurs, a commitment to the perspectival character of justification should leave open whether the same beliefs are justified relative to the new perspective that were justified relative to the old, pre-reflective, perspective. This argument could be avoided if there were some legitimate universal ban on metalevel intrusion into the story of object-level justification, but there is no such legitimate ban. So even though undercutters are not a product of reflection (or, at least, not always), that point doesn't undermine the argument for the epistemic significance of reflective ascent.

9 For approaches to disagreement that focus on the issue of higher-order evidence, see Kelly (2010) and Feldman (2009).
Once this point has been granted, one loses grounds for any straightforward or general conclusion to the effect that rational disagreement among equally competent and informed peers is impossible.10 One's ability to reflect on the situation and assess whether the general competence and informedness of one's interlocutor is decisive in the present circumstances can't be ruled out as epistemically significant for what is reasonable to believe, and if it can't, there will be no general conclusion available that equal competence and informedness by someone who disagrees requires revision of belief by either or both parties to the dispute. Equally true is that disagreement with cognitive superiors and the better informed can't by itself rationally force revision of belief, for the same reasons. Finally, there is also no justification for a dismissive attitude toward all disagreement by those less informed or less intellectually competent, on the same grounds. It is perhaps true that it is a common vice among intellectuals to resist the pressures of disagreement more than is wise, given an interest in getting to the truth and avoiding error, but such resistance, even if unwise, need not be irrational or unjustified, and a fully perspectival approach to rationality and justification can explain why.
It is worth noting here that embracing the epistemic significance of reflective ascent doesn't require holding that the reflective perspective is always definitive of rational status. An example will help illustrate this point. Suppose you have a rational belief and then reflect on your situation, coming to the conclusion that your belief isn't rational. Nothing about perspectivalism or the epistemic significance of reflective ascent forces us to say that a rational base-level belief is incompatible with a rational metabelief that the base-level belief isn't rational. One might develop a particular version of this approach that has this implication.
It is worth comparing this idea of a fully perspectival approach to the use of metalevel information with a related view recently endorsed by David Christensen (Christensen 2010). Christensen defends a "bracketing" approach to metalevel information, according to which one must fail to give one's evidence its proper "due" in order to be rational (Christensen 2010: 198). For example, if one has a proof of p, but learns that one has just been given a logical-confusion-inducing drug, the proof can't by itself make one's belief in p rational if one continues to believe it, ignoring the new information. Or, again, if one has formed a belief on the basis of a compelling inference to the best explanation (IBE), and learns that one has been slipped the dreaded IBE-confusion drug, the evidence involved in the explanation, in spite of its virtues, can't be given its proper due any longer.
10 Feldman (2009) and Kelly (2010) offer defenses of such a view as well.
Christensen claims that cases that require bracketing are different in kind from ordinary cases in which undercutting defeaters are present. He says:

Nevertheless, it seems to me that this second case is very different from the one in which I learn I've been drugged. In the second case, my reason for giving up my former belief does not flow from any evidence that my former belief was rationally defective. (Christensen 2010: 198)
Christensen thus claims that some undercutters show that a belief previously held was "rationally defective," thereby requiring bracketing of present information in a way that isn't present in cases of ordinary undercutting, as when one learns that there is a black light shining on an object regarding which one has previously formed a color belief.
The notion of not giving evidence its proper due, and the related claim concerning rational defects in beliefs, are best clarified by an example Christensen uses, where E is adequate evidence for H, and D is a claim that requires bracketing, such as that an evidence-assessing-confusion drug will be slipped into one's coffee tomorrow. Christensen says:

I can now see that, should I learn (E & D), I'll have to bracket E, and not become highly confident in H. But I can also see that in not becoming highly confident of H, I'll be failing to give E its due, and I can see that in that situation, H is actually very likely to be true! This accounts for the sense in which the beliefs it would be rational for me to form, should I learn (E & D), are not beliefs I can presently endorse, even on the supposition that I will learn (E & D). (Christensen 2010: 202)
This bracketing picture of higher-order evidence is, we might put it, a position slouching toward a full perspectivalism. One can know in advance that if one learns E & D, H will be likely to be true, just as it would be if one learned only E. But upon learning E & D, things change. It is no longer rational to endorse either of these conditionals, and it is not rational to believe H either. But the explanation here is not that one is failing to give E its epistemic due, after learning E & D. It is, rather, that learning E & D has harmed one epistemically by destroying the evidentiary basis for the conditionals in question. One thus learns, in this situation, not only E & D, but also that the grounds for the conditionals (E→H and E&D→H) are not adequate for rational belief.
Of course, a fully perspectival approach will need to note that an adequate basis for these conditionals could be restored by further reflection. For example, one might remember one's prior epistemic condition of yesterday, and recall one's rational belief then that these conditionals are true, and that one's knowledge today of E & D is thus misleading evidence regarding H.
Christensen recognizes these points, including the point of how close he is to a full perspectivalism, noting that he has no argument against such a position (Christensen 2010: 203). He resists such a description in favor of the bracketing picture for three reasons. First, he wishes to highlight the way in which the first-order evidence involves epistemic ideals that won't be highlighted apart from the bracketing picture; second, the bracketing picture allows a special focus on the role that higher-order evidence plays in the story of rationality. These first two grounds are easy to accommodate within a full perspectivalism. The ideals of being logically and explanatorily omniscient are fully preserved even if we grant that fallible cognizers can sometimes rationally contravene such principles, and it is central to a fully perspectival account of rationality to highlight the epistemic significance of reflective ascent.
Christensen has a third reason as well for preferring the bracketing picture. He says:

Finally, it seems to me that we should continue to recognize a sense in which there is often something epistemically wrong with the agent's beliefs after she takes correct account of HOE [higher-order evidence]. There's something epistemically regrettable about the agent's being prevented, due to her giving HOE its proper respect, from following simple logic, or from believing in the hypothesis that's far and away the best explanation for her evidence. (Christensen 2010: 204)
On this point, full perspectivalism is in complete agreement. Inferences that preserve truth are, in some sense, failsafe practices to follow in belief formation, and acknowledging the relativity of rationality to full perspectives has the regrettable consequence that rationality is not as intimately connected to truth as it would be if rationality were a function of logical or (objectively) probabilistic inference patterns. Moreover, when we note that, in some sense, the point of epistemic assessment has to do with the goal of getting to the truth and avoiding error, it is easy to see why there would be, in the air, a scent of epistemic failure when the implications of a fully perspectival approach to rationality are acknowledged. The proper response to these points, however, is not to abandon full perspectivalism or to reject the idea of bracketing central to Christensen's view, but rather to note that the appearance of epistemic wrong arises because of imprecision in specifying how rational assessment is related to the goal of getting to the truth and avoiding error. In particular, as Foley has shown, it is crucial to distinguish the goal of truth over error from the goal of truth over error now (Foley 1986). Careful precision on this point will not eliminate epistemic regrets about having to endorse forms of cognitive assessment that fall short of ideal connections to the truth in order to be rational.
In short, the considerations that Christensen raises for adopting a bracketing picture that falls short of full perspectivalism give us no reason to resist the latter view. Instead, they highlight many of the considerations that augur well on behalf of a move from more restricted admissions of the perspectival character of rationality to a fully perspectival approach to it. Rationality is related in important ways to ideal arguments that are either truth-preserving or probability-preserving, but is not controlled by them, precisely for the reason of the rational significance of reflective ascent. The bracketing of arguments and evidence that Christensen rightly notes is, to change the metaphor, full perspectivalism that has not yet come out of the closet.
6 Conclusion
A fully perspectival approach thus has much to recommend it. It can avoid Procrustean approaches to the rational significance of disagreement, and it opens a path on which one can find a unity in one's theory that is immune from the demand for a distinction between primary and secondary propriety or between acceptability and excusability. By acknowledging the epistemic significance of reflective ascent, we can preserve the perspectival character of justification in a way that avoids these common discontents in the theory of epistemic rationality or justification. There remains, of course, the large and difficult task of saying precisely when and to what extent these metalevel features affect object-level epistemic status. But even in the absence of a full investigation of these issues, the benefits noted can be sustained by the mere recognition of what is involved in a total perspective, relative to which rationality or justification is understood.
References
Alston, W. (1981) "Level Confusions in Epistemology," Midwest Studies in Philosophy, 5: 135–50.
Chisholm, R. (1991) Theory of Knowledge, 3rd edn. (Englewood Cliffs, NJ: Prentice-Hall).
Christensen, D. (2007) "Epistemology of Disagreement: The Good News," The Philosophical Review, 116: 187–217.
—— (2010) "Higher-Order Evidence," Philosophy and Phenomenological Research, 81 (1): 185–215.
Cohen, S. and K. Lehrer (1983) "Justification, Truth, and Knowledge," Synthese, 55: 191–207.
Darwall, S. (2009) The Second-Person Standpoint: Morality, Respect, and Accountability (Cambridge, MA: Harvard University Press).
David, M. (2005) "Truth as the Primary Epistemic Goal: A Working Hypothesis," in M. Steup and E. Sosa (eds.) Contemporary Debates in Epistemology (Oxford: Blackwell), 296–312.
DeRose, K. (2002) "Assertion, Knowledge, and Context," The Philosophical Review, 111: 167–203.
Elga, A. (2007) "Reflection and Disagreement," Noûs, 41 (3): 478–502.
Feldman, R. (2009) "Evidentialism, Higher-Order Evidence, and Disagreement," Episteme, 6 (3): 294–313.
Firth, R. (1978) "Are Epistemic Concepts Reducible to Ethical Concepts?" in A. Goldman and J. Kim (eds.) Values and Morals: Essays in Honor of William Frankena, Charles Stevenson, and Richard Brandt (Dordrecht: Kluwer), 215–29.
Foley, R. (1986) The Theory of Epistemic Rationality (Cambridge, MA: Harvard University Press).
Goldman, A. (1988) "Strong and Weak Justification," in J. E. Tomberlin (ed.) Philosophical Perspectives, ii (Atascadero, CA: Ridgeview Publishing), 51–71.
Hawthorne, J. and J. Stanley (2008) "Knowledge and Action," Journal of Philosophy, 105 (10): 571–90.
Kelly, T. (2005) "The Epistemic Significance of Disagreement," in J. Hawthorne and T. Gendler (eds.) Oxford Studies in Epistemology, i (Oxford: Oxford University Press), 167–96.
—— (2010) "Peer Disagreement and Higher-Order Evidence," in R. Feldman and T. A. Warfield (eds.) Disagreement (Oxford: Oxford University Press), 111–75.
Kvanvig, J. L. (2005) "Truth and the Epistemic Goal," in M. Steup and E. Sosa (eds.) Contemporary Debates in Epistemology (Malden, MA: Blackwell), 285–95.
—— (2009) "Knowledge, Assertion, and Lotteries," in D. Pritchard and P. Greenough (eds.) Williamson on Knowledge (Oxford: Oxford University Press), 140–60.
—— (2011) "The Rational Significance of Reflective Ascent," in T. Dougherty (ed.) Evidentialism and Its Critics (Oxford: Oxford University Press).
—— and C. P. Menzel (1990) "The Basic Notion of Justification," Philosophical Studies, 59: 235–61.
Pollock, J. (1986) Contemporary Theories of Knowledge (Totowa, NJ: Rowman and Littlefield).
Sher, G. (2009) Who Knew? Responsibility Without Awareness (Oxford: Oxford University Press).
Shope, R. (1979) "The Conditional Fallacy in Contemporary Philosophy," Journal of Philosophy, 75: 397–413.
Smith, M. (1994) The Moral Problem (Oxford: Blackwell).
Thomson, J. J. (2008) Normativity (Chicago: Open Court).
Wedgwood, R. (2007) The Nature of Normativity (New York: Oxford University Press).
Weiner, M. (2005) "Must We Know What We Say?" The Philosophical Review, 114 (2): 227–51.
Williamson, T. (2000) Knowledge and Its Limits (Oxford: Oxford University Press).
11

Disagreement and Belief Dependence

Why Numbers Matter

Jennifer Lackey

At the center of work in the epistemology of disagreement is a debate regarding what is rationally required when one disagrees with someone whom one takes to be an epistemic peer about a given question. A and B are epistemic peers1 relative to the question whether p when A and B are evidential and cognitive equals with respect to this question—that is, A and B are equally familiar with the evidence and arguments that bear on the question whether p, and they are equally competent, intelligent, and fair-minded in their assessment of the evidence and arguments that are relevant to this question.2 Two central views have emerged in the literature on this issue, which I shall call nonconformism and conformism. Nonconformists are those who hold that disagreement itself can be wholly without epistemic significance; thus, one can continue to rationally believe that p despite the fact that one's epistemic peer explicitly believes that not-p, even when one does not have a reason independent of the disagreement in question to prefer one's own belief. According to nonconformists, then, there can be reasonable disagreement among epistemic peers.3 In contrast, conformists are those who hold that disagreement itself possesses enormous epistemic significance; thus, unless one has a reason that is independent of the disagreement itself to prefer one's own belief, one cannot continue to rationally believe that p when one is faced with an epistemic peer who explicitly believes that not-p. According to conformists, then, there cannot be reasonable disagreement among epistemic peers.4

1 This term is found in Kelly (2005), who borrows it from Gutting (1982).
2 More accurately, since strict evidential and cognitive equality characterizes epistemic clones rather than peers, A and B are epistemic peers with respect to the question whether p when A and B are roughly evidential and cognitive equals regarding this question.
3 Versions of nonconformism are endorsed by van Inwagen (1996 and 2010), Rosen (2001), Kelly (2005), Moffett (2007), Wedgwood (2007), and Bergmann (2009). It is possible that Kelly (2005) is an exception here in not allowing for reasonable disagreement among epistemic peers, but nothing in the arguments that follow turns on this.
4 Proponents of conformism include Feldman (2006 and 2007), Christensen (2007 and 2011), and Elga (2007 and 2010).
244 jennifer lackey
While there is considerable dissent regarding the appropriate response to peer
disagreement, there is nonetheless widespread consensus among both nonconformists
and conformists regarding a certain class of disagreements that rationally require no
doxastic revision whatsoever. In particular, it is taken for granted that if a peer’s belief in
an instance of disagreement is not independent of other beliefs that require one to
engage in doxastic revision, then it does not itself necessitate any further doxastic revi-
sion on one’s part. So, for instance, Adam Elga—who espouses conformism—claims
that “an additional outside opinion should move one only to the extent that one counts
it as independent from opinions one has already taken into account” (Elga 2010: 177).
Such a thesis, Elga claims, is “completely uncontroversial” and “every sensible view on
disagreement should accommodate it” (Elga 2010: 178). In a similar spirit, Thomas
Kelly—who is a proponent of nonconformism—argues that “even in cases in which
opinion is sharply divided among a large number of generally reliable individuals, it
would be a mistake to be impressed by the sheer number of such individuals on both
sides of the issue. For numbers mean little in the absence of independence" (Kelly 2010: 148). Let us call the thesis suggested by these passages Belief Independence and
formulate it as follows: 5
Belief Independence: When A disagrees with peers B, C, and so on, with respect to a given
question and A has already rationally taken into account the disagreement with B, A's disagreement with C, and so on, requires doxastic revision for A only if the beliefs of C, and so on, are
independent of B’s belief. 6
Belief Independence not only cuts across the debate between nonconformism and conformism, it also has a great deal of intuitive appeal. For instance, suppose that Harry,
Wally, and I are all epistemic peers when it comes to directions in our neighborhood and
we disagree regarding the location of the Indian restaurant at which we are meeting
friends: I say it is on Church Street, and Harry and Wally say that it is on Davis. Suppose
further, however, that Wally’s belief is dependent on Harry’s testimony about the
whereabouts of the restaurant. Surely, it is argued, once I revise my belief in the face of
disagreement with Harry, I am not rationally required to also revise my belief in light of
my disagreement with Wally—to do so would be to “double count” Harry’s testimony.
For given that Wally’s belief itself depends on Harry’s, these two beliefs collapse into one
relevant instance of disagreement. Similar considerations apply when there is dependence on a common source. For instance, suppose that Harry and Wally both believe that
the Indian restaurant in question is on Davis, not because one of them depends for this
belief on the other, but because they both depend on the testimony of Marty. Once
again, it is argued, if I doxastically revise my belief about the whereabouts of the restaurant in the face of disagreement with Harry, I do not need to revise my belief any further
5 I should mention at the outset that this is simply one version of Belief Independence that will be discussed in this paper. Several different versions of varying strength will also be considered.
6 For other explicit endorsements of Belief Independence, see Goldman (2001) and McGrath (2008). I will discuss the former's view in some detail later in this paper.
disagreement and belief dependence 245
in light of disagreeing with Wally (or vice versa). For both beliefs find their origin in the
common source of Marty’s testimony, and so all three of these beliefs—that is, that of
Harry, Wally, and Marty—reduce to a single instance of peer disagreement.
Despite both the widespread acceptance and intuitive plausibility of Belief Independence, I shall argue in what follows that there is no interpretation of this thesis that turns out to be true. Otherwise put, I shall argue that, contrary to Kelly's claim in the quotation above, numbers do matter in cases of disagreement, even in the absence of independence. I shall proceed by distinguishing several notions of belief dependence that may be at issue here—likemindedness, source dependence, testimonial dependence, non-independence, and collaborative dependence—and show that none of them provides a plausible way of understanding Belief Independence. In particular, I shall show that where one disagrees with two (or more) epistemic peers, the beliefs of those peers can be dependent in the relevant sense and yet one cannot rationally regard this as a single instance of disagreement when engaging in doxastic revision. I shall then suggest that the considerations put forth in this paper provide further reason to accept a broadly justificationist approach to the epistemology of disagreement that I have developed elsewhere, 7 and reveal why such a view is preferable to both nonconformism and conformism.
1 Likemindedness
The most obvious question to be answered at this point is what notion of belief dependence is at issue with respect to the satisfaction of Belief Independence. By way of answering this question, let us take a closer look at what Elga says just prior to introducing the
thesis:
Imagine a cluster of advisors who you know exhibit an extreme form of groupthink: they always end up agreeing with one another. Now, you may well respect the opinions of that group. So you may well be moved if you find out that one of them disagrees with you about a particular issue. But suppose that you then find out that another member of the group also disagrees with you about that issue. That news does not call for any additional change in your view. For you knew in advance that the group members all think alike. So hearing the second dissenting opinion gives you no real new information.
In contrast, suppose that you receive an additional dissenting opinion from an advisor who formed her opinions completely independently from your first advisor. In that case, the second dissenting opinion does call for additional caution. The difference is that in this case you did not know in advance what conclusion the second advisor would reach. (Elga 2010: 177)
The first point to notice about this passage is that Elga seems to be conflating, on the one hand, knowing in advance what a peer will believe on a given question with, on the
7 See my (2010a and 2010b). I should note that the considerations adduced in this paper support a broadly justificationist approach to the epistemology of disagreement rather than the specific justificationist view I develop in my (2010a and 2010b). This distinction will become clearer at the end of this paper.
other hand, such a belief being independent of other views that require one to revise
one’s belief. For instance, I may know in advance of asking my two friends that, given
their general political beliefs, they both believe that we should withdraw US troops from
Iraq. But these two friends may have formed their respective beliefs entirely independently of one another and, indeed, from wholly different sources. So, knowing in advance
what my friends believe about the war in Iraq does not have anything to do with their
beliefs being independent, either from one another or from a common source.
Moreover, my knowing the views of my epistemic peers in advance of actually
disagreeing with them does not have any bearing on whether I am rationally required to
revise my belief in the face of disagreement with them. For if they are two different epistemic peers who independently hold a given belief, then even Elga would say that there are two different instances of disagreement, both of which should require doxastic revision. Of course, if my knowing what my friends believe about a particular issue before asking them involves my having already rationally revised my belief in light of both instances of disagreement, then surely I do not need to revise again upon hearing them actually disagree with me. But this simply means that I do not need to "double count" any single instance of disagreement—it has nothing to do specifically with either Belief
Independence or knowing in advance of the disagreement what my epistemic peers
believe.
There is, however, a related suggestion in the above passage from Elga that is worth
considering in relation to Belief Independence. Recall that he talks about “[knowing] in
advance that the group members all think alike.” In the previous paragraph, I focused on
the "knowing in advance" part of this claim, arguing that it is not relevant to Belief Independence. But there is also the part of this claim that emphasizes members of a group
“all think[ing] alike.” For instance, I was recently talking to a student who put forth the
following argument: with respect to certain topics, once he takes into account the
testimony of one Democrat disagreeing with him about the question whether p, the
testimony of many more Democrats so disagreeing with him does not rationally require
any further doxastic revision on his part regarding this question. Why not? Because, goes
the argument, with respect to a certain class of issues, there is tremendous likemindedness
among members of certain political parties. It is extremely likely that most Democrats
believe, say, that we shouldn’t have tax cuts for the very wealthy, and so once this is
learned, additional dissenting opinions provide, as Elga says, “no real new information.”
But as I argued above, the fact that an instance of disagreement is not providing one
with new information does not have any direct bearing on Belief Independence. In
order for likemindedness to have a connection with Belief Independence, the crucial
question that needs to be asked here is why there is the likemindedness in question. And
by way of answering this question, notice that there is no necessary connection between
a group of peers being likeminded and the members of this group having formed their
beliefs dependently in any way. For instance, each Democrat relevant to a given instance
of disagreement may have arrived at his or her belief about tax cuts for the wealthy
entirely on his or her own. In such a case, there may be tremendous likemindedness with
no dependence whatsoever. Similar considerations apply to just about any other area
where there is shared opinion. Unless there are unusual circumstances—such as certain information being available through only a specific means—it is clearly possible
for even a large group of likeminded individuals to have formed their beliefs entirely
independently of one another and thus for there to be no common source grounding
all of them.
We have seen, then, that likemindedness, by itself, has no necessary connection with
belief dependence. So let us look elsewhere for a notion of belief dependence that may
succeed in rendering Belief Independence true.
2 Source dependence and testimonial dependence
One thought that may explain the apparent relevance of likemindedness to Belief Independence is the fact that people who are likeminded tend to rely on, and avoid, the same sources of information. For instance, it may be argued that Democrats typically read The New York Times, and avoid Fox News, Rush Limbaugh, and Ann Coulter. Given this,
likemindedness may often give rise to, and arise from, source dependence , which can be
characterized as follows:
Source Dependence: A's belief that p is source dependent on X if and only if A's belief that p is grounded in X.
It should be noted that the “grounded in” relation between the relevant beliefs and the
common source should not be understood in merely causal terms. To see this, suppose
that Fred and George are witnesses to the same event, say a car accident, in the following
way: Fred is physically present and observes the car accident visually. Fred is then wired
to electrodes so that, once he has formed the relevant belief about the car accident’s
occurrence on the basis of his visual experience, George is then permitted to see a
recorded video of it. In this way, George’s evidence with respect to the car accident is
causally dependent on Fred’s evidence. Nevertheless, it intuitively seems that Fred and
George are independent witnesses to this event. So, the causal dependence of George’s
evidence on Fred's evidence is not sufficient for George's evidence to be epistemically
dependent on Fred’s evidence in the relevant sense. A third subject, Ron, should treat
testimony from Fred and George as having independent epistemic significance. That is
to say, Ron needs to take into account the testimony of both Fred and George.
Even when we understand source dependence in epistemic rather than causal terms,
we still do not have an acceptable version of Belief Independence. In particular, where
one disagrees with two (or more) epistemic peers, the beliefs of those peers can be
source dependent in this epistemic sense and yet one cannot simply treat this as a single
instance of disagreement when engaging in doxastic revision. To see this, suppose that
Abby and Betsy are both bloodstain pattern analysts called to testify on behalf of the
prosecution at a murder trial. Suppose further that this branch of forensic science is relatively new and thus there is only one main textbook on which their relevant knowledge
is based. After reviewing the evidence at the murder trial, both Abby and Betsy maintain
independently of one another’s testimony that the amount of medium velocity impact
spatter at the scene of the crime reveals that the victim died from blunt force trauma,
rather than from a simple fall, as the defendant claimed. Now given that the beliefs of
both Abby and Betsy that the defendant is guilty depend heavily on the textbook in which their knowledge of bloodstain pattern analysis is grounded, their beliefs are
source dependent on this textbook. Yet surely hearing both experts testify to this conclusion at the murder trial has more epistemic force than hearing merely one of them so
testify. 8 Accordingly, if one believed that the defendant were innocent of the murder in
question, the testimony provided by both Abby and Betsy requires, at the very least,
more revision of one’s belief than if one were to take into account the testimony given
by only one of these experts.
What this reveals is that there are at least two broad notions of source dependence,
one where the dependence is partial and another where the dependence is complete.
Let us characterize these notions as follows:
Partial Source Dependence: A's belief that p is partially source dependent on X if and only if A's belief that p is partially grounded in X.
Complete Source Dependence: A's belief that p is completely source dependent on X if and only if A's belief that p is completely grounded in X.
The case involving Abby and Betsy is one of partial source dependence—their respective
beliefs about the guilt of the defendant are grounded partially in the shared source of the
textbook on bloodstain pattern analysis and partially in their general expertise in this area of
forensic science. And as we saw above, partial source dependence reveals that Belief Independence, as stated, is too strong. In particular, Abby's belief that the defendant is guilty is not
independent of other instances of disagreement that require doxastic revision—namely,
Betsy’s testimony—yet disagreement with Abby clearly requires doxastic revision beyond
that required by Betsy’s testimony alone. Thus, let us modify Belief Independence as follows:
Belief Independence2: When A disagrees with peers B, C, and so on, with respect to a given
question and A has already taken into account disagreement with B, A’s disagreement with C,
and so on, requires doxastic revision for A only if the beliefs of C, and so on, are not wholly
dependent on B’s belief.
Rather than requiring belief independence of every sort, this modified version of Belief Independence focuses on only complete belief dependence, that is, on those beliefs that are
8 Given that both Abby and Betsy form their beliefs on the basis of the same body of evidence presented
at the murder trial, there is a sense in which their beliefs are source dependent even without factoring in
their reliance on the textbook on bloodstain pattern analysis. However, if making use of the same body of
evidence suffices for source dependence, then every instance of peer disagreement will necessarily involve
source dependence. For, it is part of peer disagreement, as it is generally understood, that peers disagree in
the relevant sense only when they both have access to the same evidence. Since it is absurd to conclude that
every instance of peer disagreement involves source dependence in a problematic sense, I take it that more
is needed for this notion than mere reliance on the same body of evidence.
wholly dependent on other relevant beliefs. Accordingly, let us modify the case of Abby
and Betsy so that their beliefs that the medium velocity impact spatter shows that the
defendant in question is guilty are grounded entirely in the testimony of another bloodstain pattern analyst, Cathy. Now, if complete source dependence is to render Belief
Independence2 true, then the testimony of Abby, Betsy, and Cathy should be counted as
merely one relevant instance of disagreement in the face of an opponent’s belief that the
defendant is innocent. In other words, there is no evidential difference whatsoever
between, on the one hand, Cathy disagreeing with an epistemic peer and, on the other
hand, Cathy, Abby, and Betsy disagreeing with this same epistemic peer. Indeed, if Belief
Independence2 is correct, then the same amount of doxastic revision is required if Cathy
disagrees with an epistemic peer and if one million of those wholly dependent on her
testimony disagree with this peer. But is this correct?
By way of answering this question, notice that there is a distinction between what
we might call autonomous and non-autonomous complete source dependence. The
autonomous version of this dependence involves a subject exercising agency in her
reliance on a source of information, critically assessing its reliability, monitoring for
defeaters, and comparing the content of the belief that she forms with her background
beliefs. This, I take it, is the minimum required for rational belief formation. But in such
cases, it is not at all clear that Belief Independence2 is true, for there are a number of
ways in which the autonomy of the dependence, even when it is complete, can render
the epistemic significance of a peer's belief in an instance of disagreement not wholly
reducible to that of the source on which it depends. For instance, when Abby depends
on Cathy’s testimony for her belief that the medium velocity impact spatter at the
scene of the crime shows that the defendant in question is guilty, her background
beliefs about, say, impact spatter, bloodstain pattern analysis, and forensic science in
general will all play a filtering role in her acceptance of this testimony. If Cathy had
instead said that the high velocity impact spatter at the crime scene shows that the victim was bludgeoned, then Abby's belief that such spatter indicates an event like a shooting rather than a bludgeoning would provide her with a defeater for Cathy's testimony.
Similarly, if Cathy had reported that though a great deal of medium velocity impact
spatter was found at the crime scene, the defendant is nonetheless innocent, Abby’s
belief that this kind of blood spatter is not consistent with the simple fall explanation
told by the defense would once again provide her with a defeater for Cathy’s testimony.
Indeed, Abby may be so reliable at discriminating among blood spatter testimony in
general that she would accept Cathy’s testimony that the defendant is guilty only if this
were in fact the case. The upshot of these considerations is that by virtue of Abby
autonomously forming the belief that the defendant is guilty on the basis of Cathy’s
testimony, her belief has additional support provided by the monitoring of her background beliefs. In this way, if Cathy and Abby were both to disagree with, say, Dan—
another bloodstain pattern analyst who has reviewed the relevant evidence—over the
defendant’s guilt, the epistemic force of both instances of disagreement goes beyond
that provided by Cathy’s testimony alone.
It is worth noting that Abby's background information does not provide positive support for her belief in the defendant's guilt but, rather, functions as a filtering device for
negative evidence. In particular, her knowledge about bloodstain pattern analysis and
forensic science in general does not ground her belief in the defendant’s guilt—Cathy’s
testimony does. There is, then, no worry here that the source dependence in question is
merely partial, rather than complete. But this is compatible with Abby’s background
beliefs enabling her to critically monitor the testimony that she receives from Cathy,
thereby providing additional epistemic force to their disagreement with Dan.
Of course, we can imagine a case in which no explanation is given on behalf of the
testimony in question, and yet the hearer nonetheless forms the corresponding belief.
So, for instance, suppose that Cathy says nothing at all about the impact spatter at the
scene of the crime, but simply asserts to Abby that the defendant is guilty. Here, even if
Abby autonomously forms the belief in question, she does not have background information specifically relevant to the defendant's guilt or innocence to add support to it
beyond that provided by Cathy’s testimony. Given this, should the epistemic force of
Abby's disagreement with Dan over the defendant's guilt reduce to that of Cathy's disagreement? In other words, how many instances of disagreement must Dan take into
account here, one or more than one?
Before answering these questions, the first point to notice is that it may be argued that it is not even clear that such a scenario falls within the scope of the debate. For recall that Belief Independence2 is a thesis that is defended with respect to disagreements among epistemic peers. And recall further that A and B are epistemic peers relative to the question whether p when A and B are evidential and cognitive equals with
respect to this question. Now even if Abby, Cathy, and Dan are all bloodstain pattern
analysts with extensive experience in forensic science, if Cathy and Dan form their
beliefs on the basis of impact spatter at the scene of the crime, and Abby forms her
belief entirely on the basis of Cathy’s testimony, then it may be argued that it is not clear
that Abby is in fact an epistemic peer with Cathy and Dan relative to the question of
the defendant’s guilt. This would then render these sorts of cases involving complete
source dependence irrelevant to the debate at hand. However, if the mere fact that one
individual’s belief is independent and another’s belief is dependent prevents them from
being epistemic peers, then it is difficult to make sense out of the inclusion of a thesis like Belief Independence in the first place. For proponents of Belief Independence
wouldn’t have to endorse such a principle since there simply wouldn’t be instances of
belief dependence relevant to disagreements between epistemic peers. Given this, being an
epistemic peer seems best understood a bit more loosely, such that direct evidence on
behalf of p and testimonial evidence on behalf of p can turn out to be roughly equal in
the sense required for epistemic peers.
With this point in mind, there are reasons to think that Belief Independence2 does
not find support from such cases of complete source dependence. To see this, notice that
even if Abby does not have any relevant beliefs about the defendant’s guilt, she may still
have background information about Cathy as a source of information. For instance, she
may know that Cathy is a bloodstain pattern analyst with extensive training in forensic
science and ample experience analyzing impact spatter at crime scenes. She may also
have evidence about Cathy's specific credentials, her particular track record of analyzing impact spatter, and her epistemic or moral character. And, finally, she may have the general ability to distinguish competent forensic scientists from incompetent ones. All of this enables Abby to function as an "epistemic filter" for the testimony that she receives
from Cathy, blocking some of the epistemically bad and allowing the epistemically
good to pass through. Otherwise put, Abby may not be able to independently assess the
proposition in question—whether the defendant is guilty or not—but she may be able
to independently assess whether Cathy is a reliable testifier, a trustworthy source of
information, a competent forensic scientist, and so on. All of this evidence goes beyond
that provided by Cathy’s assertion alone that the defendant is guilty. Compare two
scenarios: one where an expert, A, asserts that p and another where A asserts that p,
followed by another expert in the same area, B, asserting, “A is highly reliable and should
be trusted.” Surely the testimony in the second scenario has more epistemic force than
that found in the first one, not only because of the assertion itself, but also because of the
fact that B regards A as a reliable and trustworthy source of information. But then similar
considerations should apply in the case of disagreement—though Abby believes that the
defendant is guilty entirely on the basis of Cathy’s testimony, the information that she
possesses about Cathy as a reliable and trustworthy source of information adds epistemic
force to the disagreement that they both have with Dan.
Indeed, this point can be made even stronger by supposing that Abby is in a better epistemic position than Cathy when it comes to discriminating among forensic
scientists, both in general and in the particular case at hand. Abby may know that Cathy
is an extremely competent bloodstain pattern analyst, while Cathy herself is plagued
with self-doubt about her scientific abilities. And Abby may be very reliable at distinguishing between reliable forensic scientists and unreliable ones, while Cathy is quite
unreliable at so distinguishing. In such a case, it is even clearer that the overall epistemic
situation is stronger relative to the question of the defendant’s guilt when Cathy and
Abby disagree with Dan than it is when merely Cathy disagrees with him. For Abby
has evidence both about Cathy’s abilities as a bloodstain pattern analyst and about
forensic scientists in general that Cathy herself simply lacks. Moreover, this additional
information that Abby possesses could function as relevant defeaters or defeater defeaters that would simply be unavailable to Cathy. If, for instance, Cathy followed her assertion about the defendant's guilt with a typical self-undermining comment, Abby
would have a defeater defeater for the proposition in question, while Cathy would be
left with only a defeater. 9
9 It may be argued that the fact that Abby is in a better position than Cathy with respect to discriminating among forensic scientists prevents them from being epistemic peers in the relevant sense. However, the standard conception of being an epistemic peer is characterized relative to a particular question. Thus, Kelly writes: "Let us say that two individuals are epistemic peers with respect to some question if and only if they satisfy the following two conditions:
(i) they are equals with respect to their familiarity with the evidence and arguments which bear on that question, and
(ii) they are equals with respect to general epistemic virtues such as intelligence, thoughtfulness, and freedom from bias" (Kelly 2005: 174–5).
Similarly, Elga says that "you count your friend as an epistemic peer" when "you think that she is about as good as you at judging the claim. In other words, you think that, conditional on a disagreement arising, the two of you are equally likely to be mistaken" (Elga 2007: 487). Given this, I am assuming that Abby and Cathy can be epistemic peers by virtue of their being equals relative to the question of the defendant's guilt, even if there are other epistemic asymmetries between them, such as in their ability to discriminate among forensic scientists. If this ability is subsumed by Kelly's requirement that the parties to the dispute be equals with respect to their familiarity with the evidence and arguments that bear on the disputed question, then my other arguments will be sufficient to show that autonomous complete source dependence fails to provide a true interpretation of Independence2.
Still further, the mere fact that Abby flat-out asserts her belief adds epistemic force to the disagreement with Dan beyond Cathy's testimony alone. Consider the following: if Cathy asserted that the defendant is guilty in the face of disagreement with Dan, and Dan knew that Abby didn't believe Cathy, it is fairly clear that this would count as evidence against Cathy's testimony. The flip side of this, then, is that when Abby accepts Cathy's testimony and believes that the defendant is guilty on this basis, her asserting that this is the case provides additional support on behalf of this proposition. That is, when Abby asserts to Dan that the defendant is guilty, she is vouching for the truth of the claim in ways that differ from Cathy's merely asserting this same proposition. Abby is inviting recipients of her testimony to trust her, not Cathy, and she bears responsibility for the truth of this claim. 10 Of course, if challenged, Abby may support this assertion by saying, "Cathy told me so." But if in the first instance she flat-out asserted, "The defendant is guilty" rather than the qualified "Cathy believes that the defendant is guilty," or "According to Cathy, the defendant is guilty," or "Cathy told me that the defendant is guilty," then she is taking part of the epistemic burden of this claim onto her own shoulders, and this cannot be entirely passed off to Cathy. 11 The epistemic force of Abby's disagreeing with Dan, then, goes beyond that of Cathy's testimony alone. Thus, even if Abby, Cathy, and Dan can be regarded as epistemic peers, autonomous complete source dependence fails to provide a true interpretation of Independence2.
It is worth noting that I am not arguing that Dan's disagreeing with Cathy and Abby, where the latter autonomously and completely depends on the former, is evidentially equivalent to Dan's disagreeing with Cathy and Abby, where their beliefs are formed entirely independently. My claim is, rather, that Dan's disagreeing with Cathy and Abby in such a way is not evidentially equivalent to Dan's disagreeing with Cathy alone. Accordingly, I am not arguing that Dan needs to revise his beliefs in the face of two full instances of disagreement in such a case, but only that he needs to revise his beliefs in the face of more than merely one instance of disagreement.
10 This echoes some language found in Hinchman (2005), though I do not endorse his epistemology of testimony. For objections to his view, see my (2008).
11 It should be noted that I am not claiming that none of the epistemic burden can be passed off to Cathy—I am simply denying that all of the burden can be so passed. This claim is, then, compatible with the view found in Goldberg (2006), in which he argues that the ability to engage in partial "epistemic buck passing" is characteristic of testimonial knowledge.
These considerations reveal that when source dependence is complete but autonomous, there can be an epistemic difference between, on the one hand, the original
instance of disagreement and, on the other hand, this instance and those dependent on
it. This can happen in at least three different ways: the dependent party in question can
(1) monitor the incoming testimony for defeaters, (2) possess beliefs about the reliability
and trustworthiness of the testimonial source, either in particular or in general, and
(3) bear the responsibility of offering a flat-out assertion in the first place. For instance,
less doxastic revision is required when Dan disagrees with only Cathy than when he
disagrees with Cathy and Abby, not only because Abby may be herself monitoring for
defeaters and possess additional information relevant to Cathy as a testimonial source,
but also because she vouches for the truth of this claim through the flat-out assertion that she offers to Dan. To my mind, this conclusion accords better with intuition than
does that of endorsing Belief Independence2. For recall that if this thesis is correct, the
same amount of doxastic revision is required on Dan’s part if Cathy disagrees with him
and if Cathy and one million peers wholly dependent on her testimony disagree with
him. But this seems absurd. If one million epistemic agents with even slightly diff erent
backgrounds dependently though autonomously believe that p , then so many peers
monitoring and assessing the incoming testimony surely provides more powerful evi-
dence for believing that p than simply one such agent believing that p . Given this, Kelly’s
claim that “numbers mean little in the absence of independence” is false.
Let us now turn to a consideration of Belief Independence2 where non-autonomous complete source dependence is concerned. This type of source dependence involves a
subject blindly relying on a given source of information, much like a very young infant
accepts whatever her parents tell her. There is no critical assessment of the source or the
information in question, and no rational agency involved in the uptake of the belief. In
other words, the dependence is complete and blind all the way down. For instance, if in
the above case Abby’s belief about the defendant’s guilt is non-autonomously source
dependent on Cathy’s testimony, then Abby reporting this belief is analogous to a parrot
repeating what Cathy asserts. A parrot assesses neither the source of its report, nor the content of the proffered testimony. Here it seems clearly correct to say that the two cases
of testimony collapse into one relevant instance of disagreement with Dan. Otherwise
put, for Dan to doxastically revise in the face of both the testimony of Abby and Cathy is
to double count a single case of disagreement. But recall, once again, that the entire
debate is framed around disagreement amongst epistemic peers. If Abby is merely parroting what Cathy told her, then she is hardly evidentially and cognitively equal with either Cathy or Dan. Thus, she is not an epistemic peer with either of them. So while non-autonomous complete source dependence may be a category where double counting
applies, the fact that the parties to the disagreement fail to be peers makes it irrelevant to
Belief Independence2.
But what, it may be asked, if Abby does not depend on Cathy but, rather, they each
completely and non-autonomously depend on a third source in their shared beliefs? For
instance, suppose that both Cathy and Abby depend on another bloodstain pattern
254 jennifer lackey
analyst—Edward—in their respective beliefs that the defendant is guilty. Given that their
beliefs have the same grounding, they are clearly epistemic peers with one another; yet,
intuitively, Abby’s belief that the defendant is guilty does not add epistemic weight to a
disagreement with Dan beyond that contributed by Cathy’s belief with the same content
(and vice versa). Doesn’t this sort of case, then, reveal that non-autonomous complete
source dependence can provide a true interpretation of Belief Independence2? 12
Notice, however, that in order for this sort of case to be relevant to Belief Independ-
ence2, Cathy and Abby must be epistemic peers not only with one another, but also
with Dan. This, in turn, requires that Dan’s dissenting belief itself be completely and
non-autonomously dependent. Now there are two diff erent options for this depend-
ence: on the one hand, the source of Dan’s belief may be someone other than Edward. In
this case, Dan’s grounding for his dissenting belief will diff er from that of the beliefs held
by Cathy and Abby, thereby preventing him from being evidential equals—and, there-
with, epistemic peers—with them. On other hand, the source of Dan’s belief may be
Edward as well. In this case, Dan’s grounding for his dissenting belief will once again
diff er from that of the beliefs held by Cathy and Abby, since Edward will have testifi ed
that the defendant is guilty to Cathy and Abby and that the defendant is innocent to
Dan. 13 Not only is such a scenario extremely odd, it also prevents Dan from being evi-
dential equals and, therewith, epistemic peers with Cathy and Abby. Thus, neither inter-
pretation of this scenario is one that provides a true reading of Belief Independence2.
12 I am grateful to David Christensen for this question.

13 In order for two subjects to be epistemic peers, Richard Feldman also requires the satisfaction of a condition that he calls "full disclosure," which can be formulated as follows:

Full disclosure: A and B are in a situation of full disclosure relative to the question whether p when A and B have knowingly shared with one another all of their relevant evidence and arguments that bear on the question whether p.

Given this, Cathy, Abby, and Dan will realize after full disclosure in such a case that Edward provided conflicting testimony to them, which will thereby give all three of them a reason to regard Edward as an unreliable source of information. This raises an additional problem for cases of disagreement involving non-autonomous complete source dependence.

It may be argued, however, that we can envisage a situation in which the relevant evidence is available to all of the parties to the disagreement, thereby rendering them epistemic peers in the sense in question. For instance, suppose that while Cathy and Abby completely and non-autonomously depend on Edward in their respective beliefs that the defendant is guilty, Dan completely and non-autonomously depends on yet another bloodstain pattern analyst, Fran, in his belief that the defendant is innocent. Suppose further that Cathy and Abby are fully aware of Fran's beliefs on the matter—including that she disagrees with Edward—and Dan is fully aware of Edward's beliefs on the matter—including that he disagrees with Fran. Given that the parties to the disagreement have access to all of the relevant evidence on the matter, it may be argued that they are clearly epistemic peers with one another. Of course, the basis for the beliefs held by Cathy and Abby (i.e. Edward's testimony) is different from the basis of Dan's belief (i.e. Fran's testimony), but this often happens when people disagree. So this difference in basis shouldn't prevent them from being evidential equals and epistemic peers. If this is correct, then perhaps a very modest version of Belief Independence2 is defensible, which can be expressed as follows:
Belief Independence2*: When A disagrees with peers B, C, and so on, with respect to a given
question and A has already taken into account disagreement with B, A’s disagreement with C,
and so on, requires doxastic revision for A only if the beliefs of C, and so on, are not
non-autonomously wholly dependent on B’s belief.
According to this thesis, then, since the beliefs of Cathy and Abby are non-autonomously and wholly source-dependent on Edward's testimony, their disagreement with Dan reduces to a single instance of disagreement, thereby providing a true interpretation of Belief Independence2. 14
By way of response to this objection, notice that the central question motivating the theses about belief independence is this: given a case of peer disagreement involving belief dependence, does a dependent belief have epistemic force beyond that of the source on which it depends? In other words, how many instances of disagreement must one take into account when there is belief dependence, one or more than one? In the example involving complete non-autonomous dependence above, however, it seems clear that the answer to this question is neither one nor more than one but none. In particular, none of the parties to the disagreement should take into account any of the others' beliefs in such a case. For recall that complete, non-autonomous dependence involves blind acceptance, both of the source of the belief in question and of the belief itself. Thus, a completely non-autonomous, dependent subject would trust a given source regardless of its truth-conduciveness and would believe a certain proposition without any heed to its epistemic status. When we have a disagreement between Dan, on the one hand, and Abby and Cathy, on the other, then, they are all such radically poor epistemic agents that none of them should be engaging in doxastic revision in the face of disagreement with the others. So this is not a case where A needs to doxastically revise in the face of disagreement with epistemic peer B, but not with dependent epistemic peer C. Rather, this is a case where A needs to doxastically revise in the face of disagreement with neither B nor C. Of course, given what poor epistemic agents they all are, Dan, Abby, and Cathy need to engage in doxastic revision for general epistemic reasons, but not for reasons having to do specifically with peer disagreement. Otherwise put, in order for there to be a question about the rational response to peer disagreement, there has to be some minimal receptivity to evidence from the relevant parties in the first place. If there isn't, then the epistemic problems with such agents go far beyond what falls under the specific domain of the epistemology of disagreement. So, this sort of case turns out to be irrelevant to the debate at hand. 15
14 I am grateful to Dan Korman for this point.
15 It is also worth noting that the radically limited scope of Belief Independence2* hardly captures what
proponents of this thesis initially had in mind.
Thus far we have focused generally on source dependence. But similar remarks apply to testimonial dependence, more specifically. In particular, there are at least two different notions of testimonial dependence relevant here, paralleling the concepts of source dependence discussed above. First, there is partial testimonial dependence, which can be characterized as follows:

Partial Testimonial Dependence: A1's belief that p is partially dependent on A2's testimony if and only if A1's belief that p is at least in part grounded in A2's testimony.
As was the case with partial source dependence, partial testimonial dependence also falsifies the original Belief Independence thesis. To see this, suppose, for instance, that you and I disagree with our friend, Camille, over whether Armenia borders Georgia—you and I say no, and Camille says yes. While my belief about Armenia's bordering country is partially based on my general geographical knowledge, it also in part depends on your testimony. Now, surely our both disagreeing with Camille requires more doxastic revision on her part than if you were simply disagreeing with her. That is, despite the fact that my belief that Armenia does not border Georgia is not independent of other instances of disagreement that require Camille to doxastically revise her belief since it is partially dependent on your testimony, the extent to which my belief is grounded in my general geographical knowledge provides evidence over and above that provided by your disagreement alone.
Hence, partial testimonial dependence similarly reveals that Belief Independence2 and, accordingly, complete testimonial dependence, ought to be the focus of the discussion. This type of dependence can be characterized as follows:

Complete Testimonial Dependence: A1's belief that p is completely dependent on A2's testimony if and only if A1's belief that p is entirely grounded in A2's testimony.
Now, as was the case with its source counterpart, complete testimonial dependence—whether it is autonomous or non-autonomous—threatens the possibility of the parties to the disagreement being epistemic peers. Suppose, for instance, that my belief that Armenia does not border Georgia is completely dependent on your testimony—I have absolutely no background information about this geographical region. In such a case, Camille's belief that Armenia does border Georgia is either also entirely dependent on your testimony or it is not. If it is, then you would have testified that Armenia does not border Georgia to me and that it does to Camille, thereby preventing Camille and me from being evidential equals. If it is not, then our respective beliefs have different testimonial sources, which once again rules out Camille and me being evidential equals. Either way, then, Camille and I fail to be epistemic peers. To the extent that this problem can be finessed, the other considerations discussed in relation to source dependence kick in; namely, that autonomous dependence, even when complete, brings with it additional epistemic force in the form of the hearer monitoring the incoming testimony for defeaters, possessing beliefs about the reliability and trustworthiness of the testimonial source, either in particular or in general, and bearing the responsibility of offering a flat-out assertion in the first place.
The upshot of these considerations is that the proponent of Belief Independence2 faces the following dilemma:

Dilemma: On the one hand, the dependence at issue in Belief Independence2 may be complete and autonomous. But then the epistemic significance of a peer's dependent belief in an instance of disagreement is not wholly reducible to that of the source on which it depends. On the other hand, the dependence at issue in Belief Independence2 may be complete and non-autonomous. But then the dependent party to the disagreement is not an epistemic peer, and thus the case is irrelevant to the satisfaction of the thesis in question. Either way, Belief Independence2 turns out to be false.
We have, then, yet to find a form of belief dependence between epistemic peers that allows for a true interpretation of Independence2.
3 Goldman’s concept of non-independence
In an influential paper on expert disagreement, Alvin Goldman focuses on the question of how a novice ought to weigh the competing testimony of two experts, and his considerations bear directly on the issues relevant here. Specifically, he provides a very detailed account of belief dependence that is a potential candidate for underwriting Belief Independence2. His view, then, is worth considering in some detail.
To begin, consider the following: "If two or more opinion-holders are totally non-independent of one another, and if the subject knows or is justified in believing this, then the subject's opinion should not be swayed—even a little—by more than one of these opinion-holders" (Goldman 2001: 99). Goldman characterizes the notion of belief dependence, or, as he calls it, non-independence, that is operative here in terms of conditional probability. In particular, where H is a hypothesis, X(H) is X's believing H, and Y(H) is Y's believing H, Y's belief being totally non-independent of X's belief can be expressed in the following way:

[NI]: P(Y(H)/X(H)&H) = P(Y(H)/X(H)&~H)
According to NI, Y's probability for H conditional on X's believing H and H's being true is equal to Y's probability for H conditional on X's believing H and H's being false. In other words, Y is just as likely to follow X's opinion whether H is true or false. In such a case, Y is a non-discriminating reflector of X with respect to H. 16 "When Y is a non-discriminating reflector of X, Y's opinion has no extra evidential worth for the agent above and beyond X's opinion" (Goldman 2001: 101). For instance, in the case of a guru and his blind followers, Goldman writes: "a follower's opinion does not provide any additional grounds for accepting the guru's view (and a second follower does not provide additional grounds for accepting a first follower's view) even if all followers are precisely as reliable as the guru himself (or as one another)—which followers must be, of course, if they believe exactly the same things as the guru (and one another) on the topics in question" (Goldman 2001: 99). The blind follower is, then, a non-discriminating reflector of the guru with respect to the question at hand, and thus Goldman claims that disagreement with the follower does not call for doxastic revision beyond that required by the guru's belief.

16 I will follow Goldman and talk about Y being just as likely to follow X's opinion whether H is true or false. However, if Goldman wants this likelihood to be specifically tied to the subject's ability to discriminate the true from the false, it may be more accurate to talk about Y being just as inclined to follow X's opinion whether H is true or false. Otherwise, the likelihood of Y's following X's opinion could be affected by factors totally disconnected from Y's discriminatory abilities, such as features in the environment, and so on.
In order for Y’s opinion to have additional worth for the agent above and beyond X’s
opinion, Goldman argues that Y’s belief needs to be at least partially conditionally inde-
pendent (CI) of X’s belief, which can be expressed as follows:
[CI]: P(Y(H)/X(H)&H) > P(Y(H)/X(H)&~H)
According to CI, Y’s probability for H conditional on X’s believing H and H’s being true
is greater than Y’s probability for H conditional on X’s believing H and H’s being false.
In other words, Y is more likely to follow X’s opinion when H is true than when H
is false. Goldman claims that Y’s agreement with X regarding H provides evidence in
favor of H for a third party, N, only if N has reason to think that Y used a “more-or-less
autonomous casual [route] to belief, rather than a causal route that guarantees agreement
with X” ( Goldman 2001 : 102). Such an autonomous causal route is exemplifi ed in cases
where (1) “X and Y are causally independent eyewitnesses of the occurrence or non-
occurrence of H,” or (2) “X and Y base their respective beliefs on independent experi-
ments that bear on H,” or (3) Y’s belief in H goes partly through X but does not involve
uncritical refl ection of X’s belief ( Goldman 2001 : 102). In light of these considerations,
Belief Independence2 can be modifi ed as follows:
Belief Independence3: When A disagrees with peers B, C, and so on, with respect to a given
question and A has already taken into account disagreement with B, A’s disagreement with C,
and so on, requires doxastic revision for A only if the beliefs of C, and so on, are not non-
independent, or are at least partially conditionally independent, of B’s belief.
Independence3 can be read as making explicit what is meant in Independence2 by B's belief whether p not being wholly dependent on beliefs that figure in other instances of disagreement that require doxastic revision for A. More specifically, there is such "whole dependence" when there is non-independence in Goldman's sense; there is the absence of such "whole dependence" when there is partial conditional independence in Goldman's sense.
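The contrast between NI and CI can be made vivid with a small simulation. The sketch below is purely illustrative and rests on made-up numbers (the 0.8/0.3 reliability profile for the source X and the 0.9/0.2 autonomous check for the follower Y are my assumptions, not anything from Goldman's paper): a follower who blindly copies X satisfies NI exactly, while a follower who also runs her own partly autonomous check satisfies CI.

```python
import random

def reflector_probs(own_check=None, trials=200_000, seed=1):
    """Estimate P(Y(H)/X(H)&H) and P(Y(H)/X(H)&~H) by simulation.

    X believes H with probability 0.8 when H is true and 0.3 when it is
    false (illustrative numbers only). If own_check is None, Y blindly
    copies X -- Goldman's non-discriminating reflector. Otherwise
    own_check maps the truth-value of H to the probability that Y's own,
    partly autonomous assessment endorses H."""
    rng = random.Random(seed)
    counts = {True: [0, 0], False: [0, 0]}  # H-value -> [Y(H)&X(H), X(H)]
    for _ in range(trials):
        h = rng.random() < 0.5                    # H is true half the time
        if rng.random() >= (0.8 if h else 0.3):
            continue                              # condition on X(H)
        y = True if own_check is None else rng.random() < own_check[h]
        counts[h][1] += 1
        counts[h][0] += y
    return {h: counts[h][0] / counts[h][1] for h in (True, False)}

blind = reflector_probs()                                        # NI holds
autonomous = reflector_probs(own_check={True: 0.9, False: 0.2})  # CI holds
print(blind)        # both conditional probabilities equal 1.0
print(autonomous)   # roughly 0.9 vs 0.2: higher when H is true
```

For the blind copier the two conditional probabilities coincide, so her opinion adds nothing beyond X's; for the partly autonomous follower the first probability exceeds the second, which is exactly the inequality CI demands.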
While there is much that is insightful about Goldman's account of non-independence, it is subject to problems of its own. To see this, notice that Goldman's account of non-independence is characterized in terms of Y being a non-discriminating reflector of X with respect to H, where this is understood in terms of Y being as likely to follow X's opinion whether H is true or false. But to adapt a point made earlier, Y may not be a discriminating reflector of H, but Y may nonetheless be discriminating when it comes
to X or to sources like X. 17 To see this, let us return to a case from the previous section: Abby may be a non-discriminating reflector of Cathy's testimony with respect to the question of whether the defendant in question is guilty—that is, she is such that she would accept Cathy's opinion on this matter whether it is true or false—but she may be supremely discriminating when it comes to the kind of testimony that she generally accepts. She may, for instance, know a great deal about Cathy's testimonial habits, her competence and sincerity, and her experience as a forensic scientist. Or Abby may be highly discriminating with respect to forensic scientists in general or bloodstain pattern analysts in particular. The fact that she does not have independent evidence about the specific proposition in question does not thereby entail that she does not possess a great deal of other relevant evidence that enables her to function as an epistemic filter, thereby making her disagreement with a third party have force beyond that provided by Cathy's belief alone.
To put this point another way, suppose that there are two non-discriminating reflectors of Cathy's testimony with respect to the defendant's guilt: Abby and Annie. Both would share Cathy's opinion on this question whether it is true or false, but only Abby is discriminating when it comes to the source of her information. In particular, Abby would be in such a non-discriminating relationship with a testifier only if she had good evidence of that source's general reliability and trustworthiness. Thus, Abby would be a non-discriminating reflector of Cathy's belief about the defendant's guilt only if she had good reason to think that Cathy is a competent forensic scientist who is appropriately trained to analyze bloodstain patterns. Moreover, Abby is excellent at discriminating among bloodstain pattern analysts in general. Annie, on the other hand, is non-discriminating "all the way down," that is, she would be in such a non-discriminating relationship with a testifier regardless of the evidence that she possessed about the source's general reliability and trustworthiness. So Annie would share Cathy's opinion about the defendant's guilt even if she had absolutely no reason to think that Cathy is a competent forensic scientist or bloodstain pattern analyst. Moreover, Annie is very poor at discriminating among bloodstain pattern analysts in general. Now, compare Situation A, where Dan disagrees with Cathy and Abby over the defendant's guilt, with Situation B, where Dan disagrees with Cathy and Annie over this question. Surely the beliefs in Situation A provide more evidential worth on behalf of the defendant's guilt than those found in Situation B. For even though both Abby and Annie are non-discriminating reflectors of Cathy's opinion, and neither's belief about the defendant's guilt is even partially conditionally independent of Cathy's belief in Goldman's sense, Abby's trust of Cathy's testimony is itself well-grounded while Annie's is not. That is, while Abby may be blindly trusting of Cathy's testimony with respect to the question of the defendant's guilt, she is neither blindly trusting of Cathy's testimony across the board nor of forensic scientists in general. This contrasts with Annie, whose blind trust extends to Cathy's testimony on all other matters and to other forensic scientists. Thus, given that the evidential worth on behalf of the defendant's guilt is greater in Situation A than it is in Situation B, Belief Independence3 is false.

17 I recently discovered that Coady makes a similar point in his (2006).
These considerations reveal why it may be misleading to focus on a case such as the guru and his blind followers, as Goldman does. Typically, blind followers are not only non-discriminating when it comes to a particular opinion of their guru's, they are also non-discriminating of their guru's opinions across the board and perhaps even of gurus in general. So, while such a case may lend intuitive support to Belief Independence3, there are many other instances of non-independence that fail to do so.
Of course, Goldman can simply characterize non-independence in such a way that it captures being non-discriminating "all the way down." In particular, he can characterize Y as being a non-discriminating reflector in general rather than of X with respect to H, where this is understood in terms of Y being as likely to follow anyone's opinion on any proposition whether it is true or false. But then the second horn of Dilemma above kicks in: Y, who is a non-discriminating reflector of X all the way down, is not an epistemic peer of the original parties to the disagreement, and thus this sort of case is irrelevant to the satisfaction of the thesis under consideration. Goldman's concept of non-independence, then, fails to provide support for Belief Independence3.
It is worth noting that I do not regard the acceptance or rejection of a version of Belief Independence as simply a minor quibble over a principle that does not have much general epistemic importance. For to fail to recognize the additional epistemic value of a belief formed dependently, though autonomously, is to ignore the significance of epistemic agency in the acquisition and retention of beliefs. Even when we are dependent on the testimony of others, we often exercise our agency in the ways discussed above—monitoring the incoming testimony for defeaters, possessing beliefs about the reliability and trustworthiness of the testimonial source, and bearing the responsibility of offering a flat-out assertion. 18 This results in beliefs that have epistemic force that those formed blindly or non-autonomously simply do not possess, and reveals the extent to which we are actively involved in our epistemic lives.

18 I should mention that I take this exercise of agency to be insufficient to render us deserving of credit for many of the true beliefs that we acquire via testimony. For my arguments against the so-called Credit View of Knowledge, see Lackey (2007 and 2009).

4 The Correlation thesis and collaborative dependence

There is another thesis worth considering that is related to Belief Independence3 and is suggested by the following passage from Kelly:

Whatever evidence is afforded for a given claim by the fact that several billion people confidently believe that that claim is true, that evidence is less impressive to the extent that the individuals in question have not arrived at that belief independently. That is, the evidence provided by the fact that a large number of individuals hold a belief in common is weaker to the extent that the individuals who share that belief do so because they have influenced one another, or because they have been influenced by common sources. (Kelly 2010: 147)
Here, Kelly is not saying that doxastic revision is required in the face of peer disagreement only when there is belief independence or the absence of complete belief dependence. Rather, he is saying that doxastic revision is required in the face of such disagreement to the extent that there is belief independence. Let us call this Correlation, and characterize it as follows:

Correlation: With respect to the question whether p, A's disagreement with epistemic peer B requires doxastic revision for A to the extent that B's belief whether p is independent of the beliefs that figure in other instances of disagreement that require doxastic revision for A.
According to Correlation, the amount of doxastic revision required in the face of peer disagreement correlates with the amount of belief independence there is; the more independent the belief in question is, the more doxastic revision is required by those peers who oppose it. Conversely, the more dependent the belief in question is, the less doxastic revision is required by those peers who hold dissenting beliefs. Otherwise put, the evidential worth of, say, ten epistemic peers who independently arrive at the belief that p will always be greater than the evidential worth of these same peers dependently arriving at the belief that p. For instance, suppose that you and I both disagree with Bernie over which bird has the largest wingspan—you and I say it is the wandering albatross and Bernie claims it is the California condor. If you and I both base our beliefs partially in the same bird guidebook and partially in our own ornithological knowledge, then, according to Correlation, Bernie needs to revise his belief in the face of our disagreement less than he would have to were our beliefs entirely independent of one another. Accordingly, if our beliefs about the largest wingspan are grounded primarily in our own independently formed background knowledge, then more doxastic revision is required on Bernie's part than when there is partial source dependence.
Now while Correlation appears weaker and more plausible than any version of Independence thus far considered, it is worth pointing out that it is unclear what such a thesis requires at far ends of the spectrum. For instance, it is natural to think that complete belief independence lies at one end of the spectrum, while complete belief dependence lies at the other. It is also intuitive to think that, in keeping with the spirit of Correlation, the former requires the maximum amount of doxastic revision necessary in the face of disagreement, while no revision whatsoever is required in the latter. But if this is the case, then the problems afflicting Independence1, Independence2, and Independence3 arise with respect to Correlation at these far ends. For instance, if autonomous dependence, even when it is complete, brings with it additional epistemic force in the form of the hearer critically assessing the incoming testimony for defeaters, possessing background information about the reliability and trustworthiness of the testimonial source, and bearing the responsibility of offering a flat-out assertion, then some doxastic revision is required even at the far end of the dependence side. Moreover, as we saw above, to the extent that we go any further down on the dependence side, the worry arises that the parties to the disagreement are no longer epistemic peers.
Perhaps this problem could be avoided by simply stipulating that the ends of the independence spectrum require maximal and minimal doxastic revision in the face of peer disagreement, rather than maximum and no revision. This may avoid the worries afflicting the various versions of Independence, but there are also questions regarding how to understand Correlation itself. In the passage quoted above, Kelly argues that the evidence provided by a group of disagreeing peers is weaker to the extent that the members of such a group have been influenced by one another or by common sources. Now obviously independence is just one factor relevant to the evidential worth of a group of disagreeing peers. Suppose, for instance, that there are two groups of epistemic peers. Group 1 consists of fifteen research scientists, each of whom independently witnessed a study showing that Lipitor successfully lowers cholesterol, while Group 2 consists of fifteen research scientists, five of whom independently witnessed this study and then shared this information with the remaining ten. If the five witnesses in Group 2 are the best in the scientific field while the fifteen in Group 1 are simply respectable scientists, or if the studies in the former are more competently performed than in the latter, it is not at all clear that Group 1 provides better evidence on behalf of the success of Lipitor in lowering cholesterol. Now, these considerations are not intended as arguments against Correlation; rather, they are meant to indicate that in order for this thesis to have any plausibility, the competence of the members of the groups in question and the relevant first-order evidence need to remain as equal as possible across the two groups being compared. Of course, keeping the evidence strictly equal is not possible, since the very fact that we are comparing independently acquired evidence with dependently acquired evidence makes the evidence different. But comparing, not two different groups of epistemic peers, but rather one group of the same peers in different states of belief independence, will avoid some of the obvious asymmetries mentioned above.
To this end, suppose that fifteen pediatricians gather at a daylong meeting of the
American Academy of Pediatrics to discuss the effects of immunizations, particularly
the issue of whether the Measles, Mumps, Rubella vaccine (MMR) is causally linked to
autism. All fifteen doctors have excellent medical training, all have years of practicing
medicine under their belts, and all have done research specifically on immunizations.
Despite this, there is widespread disagreement among them at the start of the meeting—
about one third of those present thinks there is no such connection whatsoever between
the MMR and autism, another third thinks there is certainly such a connection, and the
last third is undecided. After hours of discussion and debate, there is far more consensus
among the pediatricians than there was at the start of the day—twelve of those present
now agree that there is not a direct causal link between the MMR and autism, but that
the MMR may be a contributing factor to those children already disposed to have
autism. This change in view was brought about, not through acquiring new pieces of
evidence on the question at issue—for all of the doctors present at the meeting were up
to date on the latest information and studies on the matter—but rather through seeing
disagreement and belief dependence 263
the evidence that they already possessed in a different light. The objections and arguments
offered by their colleagues and the sustained and sophisticated discussion that
took place at the meeting enabled the “converted doctors” to process all of the relevant
data in a more coherent way than they were able to do previously. This is not an uncommon
phenomenon. We read something over and over again without quite seeing the
force of the argument, and then another person comes along and makes the same point
in a slightly different way and everything falls into place. The transition in such a case is
brought about, not through the addition of new evidence or information, but through
various forms of interpersonal influence, such as manner of presentation, clarity of
speech or thought, organization of ideas, and so on.
Let us call the influence that the other pediatricians have on the converted doctors in
this kind of scenario—the sort of dependence found when a group of epistemic peers
works together on a common project—collaborative dependence. Let us also call the twelve
converted pediatricians in the above scenario Group A, and let us compare it with Group
A', which consists of the same pediatricians holding the same beliefs about the relationship
between the MMR and autism, but whose beliefs were formed entirely independently of
one another. Now, according to Correlation, since the doctors in Group A were influenced
by one another, the evidential worth of their disagreeing with another epistemic
peer is weaker than that of the members of Group A' when they disagree with this same
peer. But this strikes me as clearly wrong. For why would the fact that the pediatricians in
Group A heard objections from their colleagues to their views and engaged in sustained
and sophisticated discussion about the connection between the MMR and autism necessarily
make their beliefs evidentially weaker in the face of disagreement with another peer?
Couldn't this sort of scrutiny and debate make their beliefs evidentially stronger? Indeed,
isn't one of the central reasons why researchers discuss their proposals with colleagues,
present their work at conferences, and collaborate with peers that they think this
process at least sometimes results in better epistemic support for their views?19
It may be argued, however, that Correlation, properly understood, is subject to a ceteris
paribus clause and, moreover, that “other things are not equal” in the above case. In particular,
the pediatricians in Group A have engaged in the sort of intellectual discussion
that provides additional support for their beliefs, while the doctors in Group A' have not.
To make things equal, we should consider a scenario in which a group of doctors—
which we can call Group A''—has engaged in collaborative thinking with other pediatricians,
but not with one another. The justification that each doctor has can then be
equalized, and compared with the beliefs possessed by the doctors in Group A. In such a
case, the opinions of the pediatricians in Group A'' intuitively should weigh more than
the opinions of the doctors in Group A, which is precisely what Correlation predicts.20
19 Even if it is denied that the pediatricians in Group A engaging in sustained and sophisticated discussion
about the connection between the MMR and autism renders their beliefs evidentially stronger in the face
of disagreement with their peers, all that is needed to challenge Correlation is that their beliefs are not
clearly evidentially weaker than those of Group A'.
20 I am grateful to David Christensen for this objection.
264 jennifer lackey
By way of response to this objection, notice that the only way in which the addition
of such a ceteris paribus clause vindicates Correlation is if the kind of collaboration
found in the original scenario changes the evidence that the doctors in Group A
have, thereby preventing the doctors in Groups A and A' from being evidential
equals. Recall, however, that, ex hypothesi, the pediatricians in Group A do not acquire
any new evidence through discussing the connection between the MMR and autism
with their colleagues. Rather, their collaboration at the meeting enables them to
process or see the evidence that they already possess in a different light. Indeed, there
is no particular piece of information that bears on the question at issue that the
pediatricians in Group A possess that the doctors in Group A' do not also possess.21
This certainly seems sufficient for the relevant notion of evidential equality to exist
between the two groups. Moreover, given that the colleagues' influence on one
another concerns only how the evidence is processed, this surely is quite minimal
belief dependence. If this level of dependence prevents the parties to the disagreement
from being epistemic peers, then a worry expressed earlier again becomes relevant:
Correlation is an empty principle, because there is never disagreement among
epistemic peers when one of the parties to the debate has a belief that is either source
or testimonial dependent.
What these considerations reveal is that belief dependence, particularly when it is of
the collaborative sort such as that found in the case above, can strengthen rather than
weaken the overall evidential or epistemic situation of the group members' beliefs.
Indeed, we can imagine that were it not for the influence of their fellow researchers at the
American Academy of Pediatrics meeting, the converted pediatricians would never have
arrived at the conclusion that there is not a direct causal link between the MMR and
autism, though the MMR may be a contributing factor to those children already disposed
to have autism. We can further suppose that were it not for the particular objections
and arguments offered by their colleagues, the converted doctors would not have the
grounding for their beliefs that they in fact have. Given this, contrary to Kelly's claim, it is
not the case that the “evidence provided by the fact that a large number of individuals
hold a belief in common is weaker to the extent that the individuals who share that belief
do so because they have influenced one another, or because they have been influenced by
21 A possible exception here is the knowledge that the pediatricians in Group A possess that there are
eleven other competent pediatricians who share their view about the relationship between the MMR and
autism. Specifically, the doctors in Group A are aware that a significant number of experts in their field
reached the same conclusion about the issue under consideration, and this information may provide additional
support for their beliefs. In contrast, for all the pediatricians in Group A' know, each one of them may
be the only expert who holds the view about the connection between the MMR and autism, and while this
does not undermine their conclusion, it certainly puts them in a worse epistemic situation than their colleagues
in Group A are in. Of course, the proponent of Correlation may argue that this knowledge possessed
by the doctors in Group A is additional evidence that those pediatricians in Group A' lack, thereby preventing
them from being evidential equals and, therewith, epistemic peers. If this is the case, then the scenario
can certainly be modified so that the doctors in Group A' are also aware that their colleagues share their view
about the connection between the MMR and autism.
common sources” (Kelly 2010: 147). For here is a case where the evidence provided by
the fact that twelve doctors share a belief about the connection between the MMR and
autism is stronger to the extent that they have been influenced by one another. This point
can be further supported by supposing that working in a group makes the researchers in
Group A more conscientious, careful, and thorough than they would have been were
they each working independently. Perhaps the presence of their colleagues serves as a
“check” or “monitoring device” on their work that leads to better overall performance
with respect to their research. In such a case, peer influence would clearly lead, not to
evidential or epistemic inferiority, as Kelly claims, but to evidential or epistemic superiority.
5 Concluding remarks
We have seen, then, that there is no interpretation of the Belief Independence thesis that
turns out to be true.22 When the dependence in question is only partial in a case of peer
disagreement, additional evidence relevant to the belief in question can be possessed by
the dependent party, thereby necessitating doxastic revision that goes beyond that
required by the original source's belief. When the dependence is complete but autonomous,
additional epistemic force can be brought to the disagreement in various forms,
such as through the hearer monitoring the incoming information for defeaters, possessing
beliefs about the reliability and trustworthiness of the source, and bearing the
responsibility of offering a flat-out assertion in the first place. These considerations
apply, not only to source and testimonial dependence, but also to Goldman's concept of
non-independence. Furthermore, when the dependence in question is complete but
non-autonomous, then the parties to the disagreement are not epistemic peers, and are
thus not subsumed by the thesis in the first place. Finally, when the dependence is collaborative,
even the significantly weaker Correlation thesis is undermined, since the
influence of peers can often lead to better evidential or epistemic situations, rather than
worse ones. Thus, the amount of doxastic revision required in the face of disagreement
does not track the amount of independence possessed by the target belief. So what does
such belief revision track?
In closing, let me briefly suggest an answer to this question. In so doing, I shall claim
that the considerations here adduced provide additional support for an approach to the
epistemology of disagreement that I have developed elsewhere. According to my justificationist
account, the epistemic power of an instance of peer disagreement, or lack
thereof, depends on the degree of justified confidence with which the belief in question
is held. So, for instance, if a belief enjoys a very high degree of justified confidence, then
no doxastic revision may be required in the face of peer disagreement, while substantial
22 I should emphasize that, with respect to peer disagreement, while I am arguing that there is no notion
of belief dependence that is sufficient for no doxastic revision being required beyond that needed by the
original belief, belief independence may nonetheless be sufficient for such additional doxastic revision being
necessary.
doxastic revision may be required if such a belief enjoys a very low degree of justified
confidence. And, of course, there may be many cases that fall on the spectrum between
no doxastic revision required and substantial doxastic revision being necessary, depending
on the amount of justified confidence possessed by the target belief.
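As a rough schematic only—my own illustration with an arbitrary scale, not Lackey's formalism—the justificationist idea can be pictured as a function from the justified confidence of the target belief to the degree of doxastic revision peer disagreement demands:

```python
def required_revision(justified_confidence):
    """Toy mapping for the justificationist picture: the better justified
    the belief, the less revision peer disagreement requires. The 0..1
    scale and the linear shape are purely illustrative assumptions."""
    if not 0.0 <= justified_confidence <= 1.0:
        raise ValueError("justified confidence must lie in [0, 1]")
    return 1.0 - justified_confidence

# A highly justified belief demands less revision than a poorly justified one,
# with intermediate cases falling on the spectrum between them.
assert required_revision(0.9) < required_revision(0.5) < required_revision(0.1)
```

The only feature of the sketch that matters is monotonicity: revision tracks justification, not (as the Belief Independence thesis would have it) the independence of the belief's formation.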
Now, one moral of the arguments in this paper is that the focus on belief dependence
and independence in determining the amount of doxastic revision required in the face of
peer disagreement is misguided. What I want to suggest here is that this focus ought to be
replaced with a focus on the overall justificatory status of the beliefs involved in peer disagreement,
thereby providing additional support for a broadly justificationist approach to
the epistemology of disagreement. Consider, for instance, the case of the pediatricians
discussing the connection between the MMR and autism. We saw that, despite there
being a fair amount of collaborative dependence among the converted pediatricians'
beliefs, more doxastic revision can be required in the face of disagreeing with them—
whom I called Group A—than with the members of a group—which I called Group
A'—whose beliefs were each formed independently. The difference between these two
groups lies precisely in the quality of the justification possessed by their respective beliefs.
In particular, subjecting one's beliefs to discussion and scrutiny by one's colleagues can
result in epistemically better-supported beliefs than arriving at these same beliefs
completely on one's own. Such instances of collaborative dependence are ones where the
extent to which the target beliefs are justified deviates significantly from the extent to
which these beliefs are independent, and our intuitions about the amount of doxastic
revision required in such cases track the former, not the latter.
Similar results apply in the other scenarios. In the case involving Goldman's notion of
non-independence, Abby is a non-discriminating reflector of Cathy's testimony with
respect to the question of whether the defendant in question is guilty, but she is very
discriminating when it comes to the kind of testimony that she generally accepts. This
grounds her belief about the defendant's guilt better than, say, the belief of another peer,
who is non-discriminating with respect to both the question at issue and the reliability
of the source on which she depends. Once again, a justificationist approach corresponds
with intuition, while the principles of independence endorsed by the nonconformist
and conformist alike do not: more doxastic revision is required in the face of disagreement
with both Cathy and Abby than with Cathy alone, and there is less independence
and more justification in the former than there is in the latter. The upshot of these considerations,
then, is that rather than formulating principles involving independence that
govern doxastic revision, as nonconformists and conformists do, we should focus on the
overall justificatory credentials of the beliefs involved in peer disagreement.23
23 I am grateful to Rachael Briggs, David Christensen, Robert Cummins, Sandy Goldberg, Alvin Goldman,
Matthew Mullins, audience members at the University of Tennessee, Knoxville, the University of
Notre Dame, the University of Illinois at Urbana-Champaign, the University of Illinois at Chicago, the
Collective Knowledge and Epistemic Trust Conference at the Alfried Krupp Wissenschaftskolleg in Greifswald,
Germany, the University of Copenhagen, the Social Epistemology Conference at the Technische
Universität Berlin in Berlin, Germany, the Grupo de Acción Filosófica in Buenos Aires, and, especially,
Baron Reed, for helpful discussions and/or comments on the ideas in this paper.
References
Bergmann, Michael (2009) “Rational Disagreement after Full Disclosure,” Episteme 6: 336–53.
Christensen, David (2007) “Epistemology of Disagreement: The Good News,” The Philosophical
Review 116: 187–217.
—— (2011) “Disagreement, Question-Begging and Epistemic Self-Criticism,” Philosophers’
Imprint 11: 1–22.
Coady, David (2006) “When Experts Disagree,” Episteme 3: 68–79.
Elga, Adam (2007) “Refl ection and Disagreement,” Noûs 41: 478–502.
—— (2010) “How to Disagree About How to Disagree,” in Richard Feldman and Ted A. Warfield
(eds.) Disagreement (Oxford: Oxford University Press).
Feldman, Richard (2006) “Epistemological Puzzles about Disagreement,” in Stephen
Hetherington (ed.) Epistemology Futures (Oxford: Oxford University Press), 216–36.
—— (2007) “Reasonable Religious Disagreements,” in Louise Antony (ed.) Philosophers without
Gods: Meditations on Atheism and the Secular Life (Oxford: Oxford University Press).
Fumerton, Richard (2010) “You Can’t Trust a Philosopher,” in Richard Feldman and Ted A.
Warfield (eds.) Disagreement (Oxford: Oxford University Press).
Goldberg, Sanford (2006) “Reductionism and the Distinctiveness of Testimonial Knowledge,” in
Jennifer Lackey and Ernest Sosa (eds.) The Epistemology of Testimony (Oxford: Oxford University
Press).
Goldman, Alvin (2001) “Experts: Which Ones Should You Trust?” Philosophy and Phenomenological
Research 63: 85–110.
—— (2010) “Epistemic Relativism and Reasonable Disagreement,” in Richard Feldman and
Ted A. Warfield (eds.) Disagreement (Oxford: Oxford University Press).
Gutting, Gary (1982) Religious Belief and Religious Skepticism (Notre Dame: University of Notre
Dame Press).
Hinchman, Edward S. (2005) “Telling as Inviting to Trust,” Philosophy and Phenomenological
Research 70: 562–87.
Kelly, Thomas (2005) “The Epistemic Significance of Disagreement,” in John Hawthorne and
Tamar Szabo Gendler (eds.) Oxford Studies in Epistemology, i (Oxford: Oxford University
Press), 167–96.
—— (2010) “Peer Disagreement and Higher-Order Evidence,” in Richard Feldman and Ted
A. Warfield (eds.) Disagreement (Oxford: Oxford University Press).
Lackey, Jennifer (2007) “Why We Don’t Deserve Credit for Everything We Know,” Synthese 158:
345–61.
—— (2008) Learning from Words (Oxford: Oxford University Press).
—— (2009) “Knowledge and Credit,” Philosophical Studies 142: 27–42.
—— (2010a) “A Justificationist View of Disagreement’s Epistemic Significance,” in Adrian Haddock,
Alan Millar, and Duncan Pritchard (eds.) Social Epistemology (Oxford: Oxford University
Press).
—— (2010b) “What Should We Do When We Disagree,” in Tamar Szabó Gendler and John
Hawthorne (eds.) Oxford Studies in Epistemology (Oxford: Oxford University Press).
McGrath, Sarah (2008) “Moral Disagreement and Moral Expertise,” in Russ Shafer-Landau
(ed.) Oxford Studies in Metaethics 3: 87–107.
Moffett, Marc (2007) “Reasonable Disagreement and Rational Group Inquiry,” Episteme 4: 352–67.
Rosen, Gideon (2001) “Nominalism, Naturalism, Epistemic Relativism,” Philosophical Perspectives
15: 69–91.
Sosa, Ernest (2010) “The Epistemology of Disagreement,” in Adrian Haddock, Alan Millar, and
Duncan Pritchard (eds.) Social Epistemology (Oxford: Oxford University Press).
van Inwagen, Peter (1996) “It is Wrong, Everywhere, Always, and for Anyone, to Believe Anything
on Insufficient Evidence,” in Jeff Jordan and Daniel Howard-Snyder (eds.) Faith, Freedom,
and Rationality: Philosophy of Religion Today (London: Rowman and Littlefield), 137–53.
—— (2010) “We’re Right, They’re Wrong,” in Richard Feldman and Ted A. Warfi eld (eds.)
Disagreement (Oxford: Oxford University Press).
Wedgwood, Ralph (2007) The Nature of Normativity (Oxford: Oxford University Press).
White, Roger (2005) “Epistemic Permissiveness,” in Philosophical Perspectives, xix (Oxford:
Blackwell), 445–59.
Index
a posteriori 193–4
a priori 106, 127 n. 5, 145, 193–4, 219 n. 20
Aarnio, Maria Lasonen 25 n. 40, 93 n. 24, 94 n. 25, 95 n. 27
acceptance 207–8
accuracy 102–4
actus reus 226, 228
Alston, William P. 207 n. 6, 236
Antony, Louise 167 n. 1
Apollo (example) 57–64
armchair philosophy 127, 190–1, 195 n. 13, 197–9, 201–2
assertion 44, 126, 128, 147, 168, 182, 184–8, 251–3, 256, 260–1, 265
asymmetry 103–7, 113, 218; see also symmetry
Audi, Robert 206 nn. 3, 5, 207 n. 6, 209 n. 9, 213 n. 13, 216 n. 15, 217 n. 16, 218 n. 18, 219 n. 19
Balaguer, Mark 126
Ballantyne, Nathan 103 n. 11, 116 n. 27, 221 n. 21
Bealer, George 208 n. 8, 219 n. 20
belief independence 170, 244–5, 248, 255, 261–2, 265 n. 22
Belief Independence (principle) 244–8, 250, 256, 260, 265
Belief Independence2 (principle) 248–50, 252–8, 261
Belief Independence2* (principle) 255
Belief Independence3 (principle) 258, 260–1
Benacerraf, Paul 195 n. 13
Bergmann, Michael 168 n. 4, 171 n. 12, 181 n. 27, 243 n. 3
blame (blameworthiness, blamelessness) 17–30, 122, 132, 147, 154 n. 16, 155–62, 165, 192 n. 6, 224–6, 228
blindsight 193, 195–6
Bogardus, Tomas 34
BonJour, Laurence 18 n. 26, 196 n. 14, 219 n. 20
Braun, David 126
Bridge Builder case 19–20, 22–3, 27
Briggs, Rachael 266 n. 23
Broome, John 109 n. 17
Brown, Jessica 116 n. 27
Budolfson, Mark 116 n. 27
Cameron, Ross 139
Cariani, Fabrizio 167 n. 1
Chalmers, David 29 n. 46, 138
Chisholm, Roderick 229, 232
Chomsky, Noam 201 n. 20
Christensen, David 2, 34–40, 41 n. 13, 43–4, 46, 47, 49, 52 n. 22, 54–5, 58 n. 60, 65, 82, 86 n. 15, 95 n. 26, 100, 101 n. 8, 106 n. 13, 109–13, 116, 165 n. 23, 167 n. 1, 169, 172–3, 225 n. 3, 238–40, 243 n. 4, 254 n. 12, 263 n. 20, 266 n. 23
Coady, David 259 n. 17
Coffman, E. J. 221 n. 21
cognitive disparity 205–12, 214–15, 218–22
Cohen, L. J. 207 n. 6
Cohen, Stewart 2, 3, 34, 36 n. 5, 37 n. 6, 77 n. 1, 230
coherentism 185, 191
collaborative dependence 245, 263, 266
Comesaña, Juan 115 n. 23, 116 n. 27
competence 35, 157, 169, 173–4, 192–5, 202, 238, 259, 262
Condition 2 → No Defeater (2→~D) (principle) 136–7, 143, 146–9
Consequentialist Norm (CN) 20
conflicting-ideals view 91–6
Correlation (principle) 261–5
Cummins, Robert 195 n. 13, 266 n. 23
Daly, Chris 147
Dancy, Jonathan 212 n. 12, 218 n. 18
Darwall, Stephen 226
David, Marian 227 n. 7
DEF (principle) 180–1
defeat (defeaters) 12 n. 12, 13–14, 18–19, 21–5, 58 n. 3, 66–8, 107 n. 14, 114–15, 135–6, 167–71, 176 n. 21, 180 n. 26, 181, 186, 188, 209, 232–7, 239, 249, 251, 253, 256, 260–1, 265
defect, epistemic 95, 122–3, 132–3, 135–7, 144, 148–9, 154–64, 209, 235, 239
Dennett, Daniel 138
DePaul, Michael 208 n. 8
DeRose, Keith 224 n. 2
Dever, Josh 116 n. 27
Devlin, John 116 n. 27
dogmatism 12, 14, 18–19, 22, 26, 28, 43–9, 88–9
Dorr, Cian 9 n. 1, 11 n. 9, 126, 167 n. 1, 177 n. 23, 185 n. 33, 188 n. 39
Doxastic Uniqueness (principle) 101–3
Dreier, James 218 n. 18
Ebels-Duggan, Sean 167 n. 1, 176 n. 22
Elga, Adam 34, 36 n. 5, 37 n. 6, 47 n. 17, 49–51, 54–6, 59, 62, 64, 69, 70, 77 n. 1, 78 n. 3, 82 n. 9, 83, 84, 85 n. 13, 86 n. 17, 88, 92 n. 22, 100, 169 n. 8, 225 n. 3, 243 n. 4, 244–7, 252 n. 9
Enoch, David 116 n. 27
error theories 122–6, 131, 137–42, 145–58, 162–5
Evans, Gareth 140
Evans, Ian 116 n. 27
Evidence & ~Skepticism → Defeater (Ev&~Sk→D) (principle) 135–6, 144–8
Evidence-Bridge Principle (EBP) 23–5
Evidence of Evidence (Ev-of-Ev) (principle) 133–6, 141–4
excusability 219, 223–31, 233–4, 241
expected epistemic utility 25–30
experimental philosophy 190–1, 201–2
experts (expertise) 13, 19, 37, 44, 46, 49, 55–6, 58, 62, 82, 84, 94, 98, 100, 123–32, 137–8, 140, 142–8, 164–5, 175, 182 n. 30, 190–1, 197–9, 248, 251, 257, 264 n. 21
Extra Weight View 50–1
fallibilism 89, 94, 224–5, 231, 234
Fantl, Jeremy 207 n. 6
Feldman, Richard 34, 35 n. 3, 54–5, 98, 99 n. 4, 101, 125 n. 2, 205 n. 1, 218 n. 17, 221 n. 21, 237 n. 9, 238 n. 10, 243 n. 4, 254 n. 13
Field, Hartry 83 n. 12, 126, 195 n. 13
Fine, Kit 142
Fitelson, Branden 35 n. 4, 67 n. 11
foundationalism 191–5
Frances, Bryan 3, 86–7, 167 nn. 1, 2, 171 n. 12, 187 n. 36
Frances, Margaret 165 n. 23
Frege, Gottlob 226
Fricker, Lizzie 167 n. 1
Fricker, Miranda 167 n. 1
Fumerton, Richard 167 nn. 1, 2
Gendler, Tamar 195 n. 13
Gettier, Edmund 192, 197 n. 16
Gibbons, John 168 n. 5
Goldberg, Sanford 167 n. 2, 168 n. 3, 171 n. 12, 176 nn. 21, 22, 187 nn. 36, 38, 188 n. 41, 252 n. 11, 266 n. 23
Goldman, Alvin 167 n. 1, 230, 244 n. 6, 257 n. 16, 257–60, 266 n. 23
Greco, John 207 n. 6
Grenade (case) 20
Grice, Paul 188 n. 41
Gutting, Gary 243 n. 1
habit 18–23, 25, 27–9
Harman, Gilbert 45
Hawthorne, John 195 n. 13, 224 n. 2, 231, 233–5
Hegel, G. W. F. 139 n. 13
Heil, John 206 n. 4
Hernandez, Jill 216 n. 15
Hinchman, Edward 252 n. 10
Holocaust denier 40–4, 46
Horgan, Terence 139
Horowitz, Sophie 77 n. 1
Horwich, Paul 140, 142, 147
Howard-Snyder, Daniel 207 n. 6
Hudson, Hud 140
incoherence 14, 21, 23, 55, 58, 60, 81–4, 96, 110–11, 139 n. 11, 235
inconsistency 31, 43, 55, 60, 78, 82–94, 121, 126–7, 134, 139–42, 150–1, 168–9, 210
Independence (principle) 2–3, 36–48, 78, 100, 105
instability 27 n. 44, 78–82, 84–5
introspection 192, 194
intuitionism 208 n. 8, 218
intuitions 9, 15, 21, 23, 26, 28, 30, 35, 38, 60, 84, 88, 175, 182–3, 190–1, 193–202, 208–9, 214–15, 220, 231, 253, 266
Jackson, Frank 147
Jeffrey, Richard 70
Jehle, David 35 n. 4
Johnston, Mark 147
Jordan, Jeff 207 n. 6
Joyce, Jim 102
Judgment Screens Evidence (JSE) (principle) 67–72
King, Nate 52 n. 22
Kelly, Thomas 2, 3, 35, 36 n. 5, 45, 48 n. 18, 50 n. 19, 55–6, 64, 66, 85 n. 14, 99, 101–9, 111–14, 116 n. 27, 147, 152 n. 15, 170 n. 10, 177, 180 n. 26, 225 n. 4, 237 n. 9, 238 n. 10, 243 nn. 1, 3, 244–5, 251–2 n. 9, 260–5
Knowledge Disagreement Norm (KDN) 9, 11–30
Korman, Dan 255 n. 14
Kornblith, Hilary 34, 37 n. 6, 167 n. 2, 169 n. 8, 181 n. 29, 187 n. 36
Kratzer, Angelika 10 n. 5
Kripke, Saul 45
Kvanvig, Jonathan 224 n. 2, 227 n. 7
Lackey, Jennifer 3, 40 n. 10, 43, 77 n. 1, 85 n. 14, 100 n. 7, 116, 167 n. 1, 171 nn. 12, 13, 14, 172 n. 15, 177 n. 23, 178 n. 25, 202 n. 22, 206 n. 2, 218 n. 17, 222 n. 22, 245 n. 7, 252 n. 10, 260 n. 18
Lance, Mark Norris 218 n. 18
Laudan, Larry 195 n. 13
Lee, Matthew 103 n. 11, 116 n. 27
Lehrer, Keith 230
Leonard, Nick 167 n. 1, 181 n. 28
level-connecting 84–94
Lewis, David 83 n. 12, 114–15, 128, 139
Liggins, David 147
likemindedness 245–7
List, Christian 167 n. 1
Ludlow, Peter 167 n. 1, 174 n. 19, 185 n. 32
luminosity 15 n. 21, 19, 22, 24, 26–7, 29, 59, 224, 231, 234–5
McGinn, Colin 201 n. 20
McGrath, Matthew 207 n. 6
McGrath, Sarah 244 n. 6
Mackie, J. L. 126
Master Argument 167–8, 181, 183–4, 186–7
Mastroianni, Ariella 165 n. 23
Matheson, Jonathan 34
Maud (clairvoyant) 18, 20–1
Mele, Alfred R. 207 n. 6
mens rea 225–31
Merricks, Trenton 126
Might be Justified (MJ) (principle) 59–60
Miller, Alex 147
Milne, Peter 116 n. 24
Moffett, Marc 243 n. 3
Mogensen, Andreas 16 n. 24
Moore, G. E. 208 n. 8
Moorean reasoning 48, 141–2, 149–52
Mullins, Matthew 167 n. 1, 183 n. 31, 266 n. 23
norms, epistemic 9–10, 13, 15–17, 28–30, 49, 51–2, 187, 227, 229–33; see also Knowledge Disagreement Norm
operationalizability 15–17
Ortiz, David 57
ought-claims 9–21, 25–30
particularism 218
PASD (assumption) 184, 186
peer, epistemic 14, 23 n. 33, 34–7, 46, 49, 51–2, 55–8, 60–9, 80, 85, 98–116, 125 n. 2, 156, 161, 169, 177–9, 182–3, 200, 217–18, 220–1, 236, 238, 243–58, 260–6
Peirce, C. S. 195 n. 13
perception 149, 152, 192, 194, 196–7
Pittard, John 77 n. 1
Plantinga, Alvin 99 n. 5
Pollock, John 236–7
Potrč, Matjaž 139, 218 n. 18
praise (praiseworthiness) 17, 20–1, 25–30; see also blame (blameworthiness)
presupposition 129, 201, 207–8, 214, 221
Ramsey, William 208 n. 8
ratification 60 n. 4, 70–2
Rational Reflection (principle) 82 n. 9, 85 n. 13
Rawls, John 182, 208 n. 8
Reed, Baron 167 n. 1, 181 n. 28, 266 n. 23
regress 61–4, 68–72, 191–2
Reichenbach, Hans 195 n. 13
reliability (unreliability) 25 n. 39, 26, 44, 64, 135–6, 146, 170–9, 190 n. 2, 193, 195–7, 199, 249, 253, 256, 259–61, 265–6
renegade, philosophical or epistemic 122, 126, 131–3, 136–8, 141–51, 155–65
Rescher, Nicholas 195 n. 13
Restaurant (case) 37–9, 46, 172–3
Right Reasons View 55, 57–8, 61–6, 177
Robinson, Brooks 158–9
Roeber, Blake 202 n. 22
Rosen, Gideon 27 n. 44, 99 n. 5, 126, 243 n. 3
Ross, W. D. 208 n. 8, 211–20
Salmon, Wesley 195 n. 13
Schechter, Joshua 77 n. 1, 82 n. 10, 86 n. 18, 92 n. 22
scope, narrow 109–10
scope, wide 109–11
self-evidence 205, 211–12, 216–19
Shafer-Landau, Russ 167 n. 1, 181 n. 28
Sher, George 226
Sider, Ted 126, 142
skepticism 217, 221, 236
  metaphilosophical 122–3, 125–6, 130–65, 170 n. 10, 181 n. 28, 187, 198–202
Smith, Barry 167 n. 1
Sorensen, Roy 45
Sosa, Ernest 3, 43, 85 n. 14, 208 n. 8
source dependence 245, 247–50, 252–4, 256, 265
Spectre, Levi 116 n. 27
Stanley, Jason 224 n. 2, 231, 233–5
Steup, Matthias 207 n. 6
Strahovnik, Vojko 218 n. 18
Sundell, Tim 167 n. 1
superior, epistemic 85 n. 14, 89, 98, 100, 121–43, 146, 149, 155–6, 159–61, 163, 224, 236, 238
symmetry 34, 99, 108, 171 n. 11, 180; see also asymmetry
testability (untestability) 191, 195–9, 201
testimonial dependence 245, 256, 265
testimony 40, 44–6, 194, 220, 244–57, 259–61, 266
Thomson, Judith Jarvis 224 n. 2, 226
Timmons, Mark C. 207 n. 6
Tollefsen, Deb 167 n. 1
Total Evidence View 35–6, 45–6, 49–51, 99, 104, 106, 111, 180 n. 26
transparency 9, 15–17, 19, 25–30
two-state solution 21, 25–8
Über-rule 92–4
Uniqueness (principle) 101, 103 n. 11, 104
van Inwagen, Peter 99 n. 5, 114–15, 126, 243 n. 3
vice, formal 47–9
Vogel, Jonathan 77 n. 1
Warenski, Lisa 222 n. 22
Warfield, Ted (Fritz) 52 n. 22, 205 n. 1, 218 n. 17, 221 n. 21
Weatherson, Brian 2, 67 n. 10, 80 nn. 4, 5, 82 n. 11, 84, 116 n. 27, 185
Wedgwood, Ralph 55–6, 85 n. 14, 243 n. 3, 226
Weinberg, Jonathan 191 n. 3, 195 n. 13, 201 n. 21
Weiner, Matthew 186 n. 35, 224 n. 2
Weintraub, Ruth 116 n. 27
White, Roger 101 n. 9
Williamson, Timothy 14 n. 16, 15 n. 21, 24–5, 26 n. 43, 32, 59, 94 n. 25, 140, 142, 154, 167 n. 1, 177 n. 23, 224, 231
Wright, Crispin 45 n. 15, 147
Zimmerman, Michael J. 27 n. 44